Second Law Analysis of Nanofluid Flow within a Circular Minichannel Considering Nanoparticle Migration
*Entropy* **2016**, *18*(10), 378; doi:10.3390/e18100378 - 21 October 2016

**Abstract**


In the current research, entropy generation for water–alumina nanofluid flow in a circular minichannel is studied in the laminar regime under constant wall heat flux, in order to evaluate the irreversibilities arising from friction and heat transfer. To this end, simulations are carried out considering particle migration effects. Due to particle migration, the nanoparticles take on a non-uniform distribution over the pipe cross-section, with the concentration being larger in the central region. The concentration non-uniformity increases with the mean concentration, the particle size, and the Reynolds number. The rates of entropy generation are evaluated both locally and globally (integrated). The obtained results show that particle migration changes the thermal and frictional entropy generation rates significantly, particularly at high Reynolds numbers, high concentrations, and for coarser particles. Hence, this phenomenon should be considered in energy-related analyses of nanofluids.
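For orientation, the thermal and frictional contributions can be illustrated with the classical Bejan expression for entropy generation per unit length in fully developed laminar pipe flow under constant wall heat flux. This is a generic single-phase sketch with uniform properties, not the authors' particle-migration model, and all numerical values are illustrative.

```python
import numpy as np

def entropy_generation_per_length(q_prime, m_dot, D, T, k, rho, mu, Nu=4.36):
    """Classical Bejan expression for entropy generation per unit length
    [W/(m*K)] in fully developed laminar pipe flow with constant wall heat flux:
      thermal part:    q'^2 / (pi * k * T^2 * Nu)
      frictional part: 32 * m_dot^3 * f / (pi^2 * rho^2 * T * D^5)
    Nu = 4.36 is the laminar constant-heat-flux value and f = 64/Re."""
    Re = 4.0 * m_dot / (np.pi * D * mu)   # Reynolds number
    f = 64.0 / Re                         # laminar Darcy friction factor
    S_thermal = q_prime**2 / (np.pi * k * T**2 * Nu)
    S_friction = 32.0 * m_dot**3 * f / (np.pi**2 * rho**2 * T * D**5)
    return S_thermal, S_friction

# Illustrative water-like properties in a 1 mm minichannel.
S_t, S_f = entropy_generation_per_length(q_prime=50.0, m_dot=1e-4, D=1e-3,
                                         T=300.0, k=0.6, rho=1000.0, mu=1e-3)
print(f"thermal: {S_t:.3e} W/(m*K), frictional: {S_f:.3e} W/(m*K)")
```

In the paper, particle migration makes the effective properties vary over the cross-section, which is what shifts both contributions relative to this uniform baseline.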

Correction: Jacobsen, C.S., et al. Continuous Variable Quantum Key Distribution with a Noisy Laser. *Entropy* 2015, *17*, 4654–4663
*Entropy* **2016**, *18*(10), 373; doi:10.3390/e18100373 - 20 October 2016

**Abstract**
n/a

On the Virtual Cell Transmission in Ultra Dense Networks
*Entropy* **2016**, *18*(10), 374; doi:10.3390/e18100374 - 20 October 2016

**Abstract**


Ultra dense networks (UDN) are identified as one of the key enablers for 5G, since they can provide an ultra high spectral reuse factor by exploiting proximal transmissions. By densifying the network infrastructure equipment, it is highly likely that each user will have one or more dedicated serving base station antennas, introducing the user-centric virtual cell paradigm. However, due to the irregular deployment of a large number of base station antennas, the interference environment becomes rather complex, introducing severe interference among different virtual cells. This paper focuses on the downlink transmission scheme in UDN where a large number of users and base station antennas are uniformly distributed over a certain area. An interference graph is first created based on the large-scale fadings to describe the potential interference relationships among the virtual cells. Then, base station antennas and users in the virtual cells within the same maximally-connected component are grouped together and merged into one new virtual cell cluster, where users are jointly served via zero-forcing (ZF) beamforming. A multi-virtual-cell minimum mean square error precoding scheme is further proposed to mitigate the inter-cluster interference. Additionally, an interference alignment framework is proposed based on the low-complexity virtual cell merging to eliminate the strong interference between different virtual cells. Simulation results show that the proposed interference graph-based virtual cell merging approach can attain the average user spectral efficiency of the grouping scheme based on virtual cell overlapping, with a smaller virtual cell size and reduced signal processing complexity. Moreover, the proposed user-centric transmission scheme greatly outperforms the BS-centric transmission scheme (maximum ratio transmission (MRT)) in terms of both average user spectral efficiency and edge user spectral efficiency. Furthermore, interference alignment based on the low-complexity virtual cell merging achieves much better average user spectral efficiency than ZF and MRT precoding.
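A minimal sketch of the merging step described above: build an interference graph from large-scale fading gains, group virtual cells belonging to the same maximally-connected component, and serve each cluster's users jointly with zero-forcing (ZF) beamforming. The channel model, the top-2 serving-antenna rule, and the threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
n_users, n_antennas = 8, 32

# Illustrative large-scale fading gains (users x antennas); in the paper these
# come from the random deployment of users and base station antennas.
gain = rng.exponential(scale=1.0, size=(n_users, n_antennas))

# User-centric virtual cell: each user's strongest antennas (top-2, an assumption).
serving = np.argsort(gain, axis=1)[:, -2:]

# Interference graph on users: an edge when one user's large-scale gain toward
# another user's serving antennas exceeds a threshold (illustrative value).
threshold = 1.5
adj = np.zeros((n_users, n_users), dtype=bool)
for i in range(n_users):
    for j in range(i + 1, n_users):
        if gain[i, serving[j]].max() > threshold or gain[j, serving[i]].max() > threshold:
            adj[i, j] = adj[j, i] = True

# Virtual cells in the same maximally-connected component form one cluster.
n_clusters, label = connected_components(csr_matrix(adj), directed=False)

# ZF beamforming inside the first cluster: W = H^H (H H^H)^{-1} nulls
# intra-cluster inter-user interference.
users = np.flatnonzero(label == 0)
antennas = np.unique(serving[users].ravel())
if len(antennas) < len(users):                    # keep ZF feasible in this sketch
    extra = np.setdiff1d(np.arange(n_antennas), antennas)
    antennas = np.concatenate([antennas, extra[:len(users) - len(antennas)]])
H = (rng.normal(size=(len(users), len(antennas)))
     + 1j * rng.normal(size=(len(users), len(antennas)))) / np.sqrt(2)
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
print("clusters:", n_clusters,
      "| max inter-user leakage:", np.abs(H @ W - np.diag(np.diag(H @ W))).max())
```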

Non-Asymptotic Confidence Sets for Circular Means
*Entropy* **2016**, *18*(10), 375; doi:10.3390/e18100375 - 20 October 2016

**Abstract**


The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set.
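A sketch of the two ingredients, under simplifying assumptions: the circular mean as the direction of the Euclidean mean (the minimizer described above), and a Hoeffding-type half-width for the coordinates of the Euclidean mean, which are bounded in [-1, 1]. The paper's confidence sets are constructed more carefully; this only illustrates the mechanism.

```python
import numpy as np

def circular_mean(angles):
    """Direction of the Euclidean mean of the points on the unit circle,
    i.e., the minimizer of the average squared Euclidean distance."""
    m = np.exp(1j * np.asarray(angles)).mean()
    if np.abs(m) == 0:
        raise ValueError("Euclidean mean at the origin: circular mean undefined")
    return np.angle(m)

def hoeffding_halfwidth(n, alpha):
    """Half-width t with P(|sample mean - expectation| >= t) <= alpha/2 per
    coordinate, from Hoeffding's inequality for variables in [-1, 1] (cos and
    sin); a union bound over the two coordinates spends alpha in total."""
    return np.sqrt(2.0 * np.log(4.0 / alpha) / n)

angles = np.random.default_rng(1).vonmises(mu=0.5, kappa=4.0, size=200)
print(f"circular mean: {circular_mean(angles):.4f} rad")
print(f"95% Hoeffding box half-width: {hoeffding_halfwidth(len(angles), 0.05):.4f}")
```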

Isothermal Oxidation of Aluminized Coatings on High-Entropy Alloys
*Entropy* **2016**, *18*(10), 376; doi:10.3390/e18100376 - 20 October 2016

**Abstract**


The isothermal oxidation resistance of the Al_{0.2}Co_{1.5}CrFeNi_{1.5}Ti_{0.3} high-entropy alloy is analyzed and the microstructural evolution of the oxide layer is studied. The limited aluminum content, about 3.6 at %, leads to a non-continuous alumina layer. The bare alloy is therefore insufficient for severe environments, being protected only by a chromium oxide layer that grows to 10 μm after 360 h at 1173 K. Thus, aluminized high-entropy alloys (HEAs) are further prepared by the industrial pack cementation process at 1273 K and 1323 K. The aluminized coating reaches a thickness of 50 μm after 5 h at 1273 K, and the coating growth is controlled by the diffusion of aluminum. The interdiffusion zone reveals two regions: a Ti-, Co-, Ni-rich area and an Fe-, Cr-rich area. The oxidation resistance of the aluminized HEA improves markedly: it withstands 441 h at 1173 K and 1273 K without any spallation. The alumina at the surface and the stable interface contribute to the performance of this Al_{0.2}Co_{1.5}CrFeNi_{1.5}Ti_{0.3} alloy.

Point Information Gain and Multidimensional Data Analysis
*Entropy* **2016**, *18*(10), 372; doi:10.3390/e18100372 - 19 October 2016

**Abstract**


We generalize the point information gain (PIG) and derived quantities, i.e., the point information gain entropy (PIE) and the point information gain entropy density (PIED), to the case of the Rényi entropy, and simulate the behavior of PIG for typical distributions. We also use these methods for the analysis of multidimensional datasets. We demonstrate the main properties of PIE/PIED spectra for real data using several example images, and discuss further possible uses in other fields of data processing.
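A sketch of the two ingredients named above: the Rényi entropy of a binned distribution, and the point information gain of a single occurrence, computed as the entropy change when that one point is removed. The histogram estimator, the binning, and the sign convention are illustrative assumptions of this sketch.

```python
import numpy as np

def renyi_entropy(counts, alpha):
    """Rényi entropy H_alpha = log(sum_i p_i^alpha) / (1 - alpha), alpha != 1."""
    p = counts[counts > 0] / counts.sum()
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def point_information_gain(data, index, alpha, bins=64):
    """PIG of the occurrence data[index]: the Rényi entropy difference between
    the histogram without that single point and the full histogram. The number
    of bins and the sign convention are assumptions of this sketch."""
    counts, edges = np.histogram(data, bins=bins)
    counts = counts.astype(float)
    h_full = renyi_entropy(counts, alpha)
    k = min(np.searchsorted(edges, data[index], side="right") - 1, bins - 1)
    counts[k] -= 1.0
    return renyi_entropy(counts, alpha) - h_full

data = np.random.default_rng(5).normal(size=10_000)
# Rare (tail) points change the entropy differently than typical points near
# the mode; the PIE is the sum of PIGs over all points.
print("PIG near mode:", point_information_gain(data, int(np.argmin(np.abs(data))), alpha=2.0))
print("PIG in tail:  ", point_information_gain(data, int(np.argmax(data)), alpha=2.0))
```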

Study on the Stability and Entropy Complexity of an Energy-Saving and Emission-Reduction Model with Two Delays
*Entropy* **2016**, *18*(10), 371; doi:10.3390/e18100371 - 19 October 2016

**Abstract**


In this paper, we build a model of energy saving and emission reduction with two delays. In this model, it is assumed that the interaction between energy saving and emission reduction, and that between carbon emissions and economic growth, are delayed. We examine the local stability and the existence of a Hopf bifurcation at the equilibrium point of the system. Employing system complexity theory, we analyze the impact of the delays and of feedback control on the stability and entropy of the system from two aspects: single delay and double delays. In the numerical simulation section, we test the theoretical analysis using bifurcation diagrams, largest Lyapunov exponent diagrams, attractors, time-domain plots, Poincaré section plots, power spectra, entropy diagrams, 3-D surface charts and 4-D graphs; the simulation results demonstrate that inappropriate changes of the delays and of the feedback control will result in instability and fluctuation of carbon emissions. Finally, bifurcation control is achieved by the method of variable feedback control, and we conclude that the greater the value of the control parameter, the better the effect of the bifurcation control. The results provide a reference for the development of energy-saving and emission-reduction policies.
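The abstract does not reproduce the model equations, so the sketch below only shows the generic numerical machinery for a system with two delays: fixed-step integration with history buffers, of the kind used to produce time-domain plots and bifurcation diagrams. The toy dynamics `f` and the delay values are hypothetical placeholders, not the paper's model.

```python
import numpy as np

def integrate_two_delay(f, x0, tau1, tau2, t_end, dt=0.001):
    """Fixed-step Euler integration of x'(t) = f(x(t), x(t - tau1), x(t - tau2))
    with constant history x(t) = x0 for t <= 0."""
    n = int(t_end / dt)
    d1, d2 = int(tau1 / dt), int(tau2 / dt)
    x = np.empty((n + 1, len(x0)))
    x[0] = x0
    for k in range(n):
        x_tau1 = x[k - d1] if k >= d1 else x[0]   # delayed states from history
        x_tau2 = x[k - d2] if k >= d2 else x[0]
        x[k + 1] = x[k] + dt * f(x[k], x_tau1, x_tau2)
    return x

# Hypothetical toy dynamics (NOT the paper's model): two coupled variables
# whose interactions act with delays tau1 and tau2.
def f(x, x_tau1, x_tau2):
    return np.array([0.5 * x[0] * (1.0 - x_tau1[1]) - 0.1 * x[0],
                     0.4 * x_tau2[0] - 0.2 * x[1]])

traj = integrate_two_delay(f, x0=np.array([0.5, 0.5]), tau1=0.8, tau2=1.5, t_end=50.0)
print("final state:", traj[-1])
```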

From Tools in Symplectic and Poisson Geometry to J.-M. Souriau's Theories of Statistical Mechanics and Thermodynamics
*Entropy* **2016**, *18*(10), 370; doi:10.3390/e18100370 - 19 October 2016

**Abstract**


I present in this paper some tools in symplectic and Poisson geometry in view of their applications in geometric mechanics and mathematical physics. After a short discussion of the Lagrangian and Hamiltonian formalisms, including the use of symmetry groups, and a presentation of Tulczyjew's isomorphisms (which explain some aspects of the relations between these formalisms), I explain the concept of the manifold of motions of a mechanical system and its use, due to J.-M. Souriau, in statistical mechanics and thermodynamics. The generalization of the notion of thermodynamic equilibrium, in which the one-dimensional group of time translations is replaced by a multi-dimensional, possibly non-commutative Lie group, is fully discussed, and examples of applications in physics are given.
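For orientation, the generalized equilibrium referred to above can be stated compactly. For a Hamiltonian action of a Lie group $G$ on the manifold of motions $M$, with momentum map $J$ taking values in the dual of the Lie algebra $\mathfrak{g}$, Souriau's generalized Gibbs state indexed by a "generalized temperature" $\beta \in \mathfrak{g}$ is

\[
  \rho_\beta(m) = \frac{1}{Z(\beta)}\, e^{-\langle J(m),\, \beta \rangle},
  \qquad
  Z(\beta) = \int_M e^{-\langle J(m),\, \beta \rangle}\, \mathrm{d}\lambda(m),
\]

which reduces to the usual Gibbs state $\rho \propto e^{-\beta H}$ when $G$ is the one-dimensional group of time translations and the momentum map is the energy $H$.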

A Hydrodynamic Model for Silicon Nanowires Based on the Maximum Entropy Principle
*Entropy* **2016**, *18*(10), 368; doi:10.3390/e18100368 - 19 October 2016

**Abstract**


Silicon nanowires (SiNW) are quasi-one-dimensional structures in which the electrons are spatially confined in two directions, and they are free to move along the axis of the wire. The spatial confinement is governed by the Schrödinger–Poisson system, which must be coupled to the transport in the free motion direction. For devices with the characteristic length of a few tens of nanometers, the transport of the electrons along the axis of the wire can be considered semiclassical, and it can be dealt with by the multi-sub-band Boltzmann transport equations (MBTE). By taking the moments of the MBTE, a hydrodynamic model has been formulated, where explicit closure relations for the fluxes and production terms (i.e., the moments on the collisional operator) are obtained by means of the maximum entropy principle of extended thermodynamics, including the scattering of electrons with phonons, impurities and surface roughness scattering. Numerical results are shown for a SiNW transistor.

Chemical Reactions Using a Non-Equilibrium Wigner Function Approach
*Entropy* **2016**, *18*(10), 369; doi:10.3390/e18100369 - 19 October 2016

**Abstract**


A three-dimensional model of binary chemical reactions is studied. We consider an ab initio quantum two-particle system subjected to an attractive interaction potential and to a heat bath at thermal equilibrium at absolute temperature $T>0$. Under the sole action of the attraction potential, the two particles can either be bound or unbound to each other. While at $T=0$, there is no transition between both states, such a transition is possible when $T>0$ (due to the heat bath) and plays a key role as $k_{\mathrm{B}}T$ approaches the magnitude of the attractive potential. We focus on a quantum regime, typical of chemical reactions, such that: (a) the thermal wavelength is shorter than the range of the attractive potential (lower limit on *T*) and (b) $(3/2)k_{\mathrm{B}}T$ does not exceed the magnitude of the attractive potential (upper limit on *T*). In this regime, we extend several methods previously applied to analyze the time duration of DNA thermal denaturation. The two-particle system is then described by a non-equilibrium Wigner function. Under Assumptions (a) and (b), and for sufficiently long times, defined by a characteristic time scale *D* that is subsequently estimated, the general dissipationless non-equilibrium equation for the Wigner function is approximated by a Smoluchowski-like equation displaying dissipation and quantum effects. A comparison with the standard chemical kinetic equations is made. The time *τ* required for the two particles to transition from the bound state to unbound configurations is studied by means of the mean first passage time formalism. An approximate formula for *τ*, in terms of *D* and exhibiting the Arrhenius exponential factor, is obtained. Recombination processes are also briefly studied within our framework and compared with previous well-known methods.
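As background for the last steps, the mean first passage time formalism for an overdamped (Smoluchowski) dynamics gives a closed formula whose saddle-point evaluation produces the Arrhenius factor mentioned above. The generic one-dimensional version, with diffusion coefficient $D_0$ (not to be confused with the characteristic time scale *D* of the abstract), a reflecting boundary at $a$ and an absorbing boundary at $b$, reads

\[
  \tau(x_0) = \frac{1}{D_0} \int_{x_0}^{b} \mathrm{d}x\, e^{V(x)/k_{\mathrm{B}}T}
  \int_{a}^{x} \mathrm{d}y\, e^{-V(y)/k_{\mathrm{B}}T},
\]

and for a barrier of height $\Delta V \gg k_{\mathrm{B}}T$ it yields the Arrhenius scaling $\tau \propto e^{\Delta V/k_{\mathrm{B}}T}$.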

Methodology for Simulation and Analysis of Complex Adaptive Supply Network Structure and Dynamics Using Information Theory
*Entropy* **2016**, *18*(10), 367; doi:10.3390/e18100367 - 18 October 2016

**Abstract**


Supply networks existing today in many industries can behave as complex adaptive systems, making them more difficult to analyze and assess. Being able to fully understand both the complex static and dynamic structures of a complex adaptive supply network (CASN) is key to making more informed management decisions and prioritizing resources and production throughout the network. Previous efforts to model and analyze CASNs have been impeded by the complex, dynamic nature of the systems. However, drawing from other complex adaptive systems sciences, information theory provides a model-free methodology that removes many of those barriers, especially concerning complex network structure and dynamics. With minimal information about the network nodes, transfer entropy can be used to reverse engineer the network structure, while local transfer entropy can be used to analyze the dynamics of that structure. Both simulated and real-world networks were analyzed using this methodology. Applying the methodology to CASNs allows the practitioner to capitalize on observations from the highly multidisciplinary field of information theory, which provides insights into a CASN's self-organization, emergence, stability/instability, and distributed computation. This not only provides managers with a more thorough understanding of a system's structure and dynamics for management purposes, but also opens up research opportunities into eventual strategies to monitor and manage emergence and adaptation within the environment.
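A compact sketch of the reverse-engineering step described above, under illustrative assumptions (quantile binning, history length 1, a plug-in estimator, and a synthetic three-node network): estimate pairwise transfer entropy from time series and declare an edge where it stands out.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=4):
    """Plug-in transfer entropy T_{X->Y} with history length 1 (bits):
    T = sum p(y_t, y_{t-1}, x_{t-1}) * log2[ p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1}) ].
    Quantile binning and history length 1 are illustrative choices."""
    qs = np.linspace(0, 1, bins + 1)[1:-1]
    x = np.digitize(source, np.quantile(source, qs))
    y = np.digitize(target, np.quantile(target, qs))
    yt, ym, xm = y[1:], y[:-1], x[:-1]
    n = len(yt)
    c_tmx = Counter(zip(yt, ym, xm))   # (y_t, y_{t-1}, x_{t-1})
    c_mx = Counter(zip(ym, xm))        # (y_{t-1}, x_{t-1})
    c_tm = Counter(zip(yt, ym))        # (y_t, y_{t-1})
    c_m = Counter(ym)                  # y_{t-1}
    te = 0.0
    for (a, b, c), k in c_tmx.items():
        te += (k / n) * np.log2((k / c_mx[(b, c)]) / (c_tm[(a, b)] / c_m[b]))
    return te

# Reverse-engineer a toy network: add edge i -> j when TE stands out.
rng = np.random.default_rng(2)
n_nodes, T = 3, 5000
series = rng.normal(size=(n_nodes, T))
series[1, 1:] += 0.8 * series[0, :-1]      # node 0 drives node 1 with lag 1
te = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in range(n_nodes):
        if i != j:
            te[i, j] = transfer_entropy(series[i], series[j])
print(np.round(te, 3))                      # expect the (0, 1) entry to stand out
```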

Intelligent Security IT System for Detecting Intruders Based on Received Signal Strength Indicators
*Entropy* **2016**, *18*(10), 366; doi:10.3390/e18100366 - 16 October 2016

**Abstract**


Given that entropy-based IT technology has been applied in homes, office buildings and elsewhere for IT security systems, diverse kinds of intelligent services are currently provided. In particular, IT security systems have become more robust and varied. However, access control systems still depend on tags held by building entrants. Since tags can be obtained by intruders, an approach that counters the disadvantages of tags is required. For example, it is possible to track the movement of tags in intelligent buildings in order to detect intruders, so that each tag holder can be judged by analyzing the movements of their tag. This paper proposes a security approach based on the received signal strength indicators (RSSIs) of beacon-based tags to detect intruders. The normal RSSI patterns of moving entrants are obtained and analyzed, and intruders are detected when abnormal RSSIs are measured in comparison to the normal RSSI patterns. In the experiments, one normal and one abnormal scenario are defined for collecting the RSSIs of a Bluetooth-based beacon in order to validate the proposed method. When the RSSIs of both scenarios are compared to pre-collected RSSIs, the RSSIs of the abnormal scenario deviate about 61% more than those of the normal scenario. Therefore, intruders in buildings can be detected by considering RSSI differences.
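A toy version of the detection logic described above, with synthetic RSSI traces: score a trace by its deviation from the averaged pre-collected normal pattern and flag it when the score exceeds a threshold. The trace shapes, the score, and the threshold are all illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def deviation_score(trace, baseline_traces):
    """Mean absolute deviation of an RSSI trace from the average baseline
    pattern; a deliberately simple stand-in for the paper's comparison of
    measured RSSIs against pre-collected normal patterns."""
    return np.mean(np.abs(np.asarray(trace) - np.mean(baseline_traces, axis=0)))

rng = np.random.default_rng(3)
t = np.arange(60)
# Synthetic "normal" patterns: an entrant walking past the beacon produces a
# characteristic rise and fall in RSSI (dBm).
normal = [-70 + 15 * np.exp(-((t - 30) / 8.0) ** 2) + rng.normal(0, 1.5, t.size)
          for _ in range(20)]
# Synthetic "abnormal" trace: e.g., a tag lingering near the entrance.
abnormal = -60 + rng.normal(0, 1.5, t.size)

scores = [deviation_score(tr, normal) for tr in normal]
threshold = np.mean(scores) + 3.0 * np.std(scores)   # illustrative 3-sigma rule
score_a = deviation_score(abnormal, normal)
print(f"abnormal score {score_a:.2f} vs threshold {threshold:.2f} "
      f"-> intruder flagged: {score_a > threshold}")
```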

Boltzmann Sampling by Degenerate Optical Parametric Oscillator Network for Structure-Based Virtual Screening
*Entropy* **2016**, *18*(10), 365; doi:10.3390/e18100365 - 13 October 2016

**Abstract**


A structure-based lead optimization procedure is an essential step in finding appropriate ligand molecules binding to a target protein structure in order to identify drug candidates. This procedure takes a known structure of a protein–ligand complex as input, and compounds structurally similar to the query ligand are designed taking into consideration all possible combinations of atomic species. This task is, however, computationally hard, since such combinatorial optimization problems belong to the non-deterministic polynomial-time hard (NP-hard) class. In this paper, we propose structure-based lead generation and optimization procedures based on a degenerate optical parametric oscillator (DOPO) network. Results of numerical simulation demonstrate that the DOPO network efficiently identifies a set of appropriate ligand molecules according to the Boltzmann sampling law.
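As a purely classical point of reference for the Boltzmann sampling that the DOPO network performs physically, Metropolis dynamics samples spin configurations s in {-1, +1}^n from p(s) proportional to exp(-beta*E(s)) with an Ising energy. The random couplings below are placeholders, not an encoding of a ligand-design instance.

```python
import numpy as np

def metropolis_ising(J, beta, steps, rng):
    """Metropolis sampling of p(s) ~ exp(-beta * E(s)), E(s) = -s^T J s / 2,
    over spins s_i in {-1, +1}; a classical reference for the Boltzmann
    sampling that the DOPO network implements physically."""
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        dE = 2.0 * s[i] * (J[i] @ s)      # energy change of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]
    return s

rng = np.random.default_rng(4)
n = 20
J = rng.normal(size=(n, n))
J = (J + J.T) / 2.0                        # symmetric couplings
np.fill_diagonal(J, 0.0)
s = metropolis_ising(J, beta=2.0, steps=20_000, rng=rng)
print("energy of sampled configuration:", -0.5 * s @ J @ s)
```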

The Shell Collapsar—A Possible Alternative to Black Holes
*Entropy* **2016**, *18*(10), 363; doi:10.3390/e18100363 - 12 October 2016

**Abstract**


This article argues that a consistent description is possible for gravitationally collapsed bodies, in which collapse stops before the object reaches its gravitational radius, the density reaching a maximum close to the surface and then decreasing towards the centre. The way towards such a description was indicated in the classic Oppenheimer-Snyder (OS) 1939 analysis of a dust star. The title of that article implied support for a black-hole solution, but the present article shows that the final OS density distribution accords with gravastar and other shell models. The parallel Oppenheimer-Volkoff (OV) study of 1939 used the equation of state for a neutron gas, but could consider only stationary solutions of the field equations. Recently we found that the OV equation of state permits solutions with minimal rather than maximal central density, and here we find a similar topology for the OS dust collapsar: a uniform dust-ball which starts with a large radius, and correspondingly small density, and collapses to a shell at the gravitational radius with density decreasing monotonically towards the centre. Though no longer considered central in black-hole theory, the OS dust model gave the first exact, time-dependent solution of the field equations. Regarded as a limiting case of OV, it indicates the possibility of neutron stars of unlimited mass with a similar shell topology. Progress in observational astronomy will distinguish this class of collapsars from black holes.

Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora
*Entropy* **2016**, *18*(10), 364; doi:10.3390/e18100364 - 12 October 2016

**Abstract**


One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubts regarding the correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. Thus, we conclude that the entropy rates of human languages are positive but approximately 20% smaller than without extrapolation. Although the entropy rate estimates depend on the kind of script, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg's hypothesis.
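The extrapolation step can be sketched with a least-squares fit. The specific functional form below, f(n) = h * exp(A * n^(beta-1)), is my assumption of a stretched-exponential ansatz consistent with the abstract (it tends to the entropy rate h as the data length n grows), not necessarily the authors' exact formula; the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def ansatz(n, h, A, beta):
    """Assumed stretched-exponential ansatz: decays toward the entropy rate h
    as the data length n tends to infinity (beta < 1)."""
    return h * np.exp(A * n ** (beta - 1.0))

# Synthetic "compression rate vs. data length" measurements (bits/character);
# real inputs would come from compressing prefixes of a large corpus.
n = np.logspace(3, 9, 20)
rng = np.random.default_rng(5)
rate = ansatz(n, h=1.2, A=3.0, beta=0.9) + rng.normal(0, 0.01, n.size)

popt, _ = curve_fit(ansatz, n, rate, p0=(1.0, 1.0, 0.8),
                    bounds=([0, 0, 0], [10, 10, 1.0]))
print(f"extrapolated entropy rate h = {popt[0]:.3f} bits/char, beta = {popt[2]:.3f}")
```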

Measures of Difference and Significance in the Era of Computer Simulations, Meta-Analysis, and Big Data
*Entropy* **2016**, *18*(10), 361; doi:10.3390/e18100361 - 9 October 2016

**Abstract**


In traditional research, repeated measurements lead to a sample of results, and inferential statistics can be used to not only estimate parameters, but also to test statistical hypotheses concerning these parameters. In many cases, the standard error of the estimates decreases (asymptotically) with the square root of the sample size, which provides a stimulus to probe large samples. In simulation models, the situation is entirely different. When probability distribution functions for model features are specified, the probability distribution function of the model output can be approached using numerical techniques, such as bootstrapping or Monte Carlo sampling. Given the computational power of most PCs today, the sample size can be increased almost without bounds. The result is that standard errors of parameters are vanishingly small, and that almost all significance tests will lead to a rejected null hypothesis. Clearly, another approach to statistical significance is needed. This paper analyzes the situation and connects the discussion to other domains in which the null hypothesis significance test (NHST) paradigm is challenged. In particular, the notions of effect size and Cohen’s *d* provide promising alternatives for the establishment of a new indicator of statistical significance. This indicator attempts to cover significance (precision) and effect size (relevance) in one measure. Although in the end more fundamental changes are called for, our approach has the attractiveness of requiring only a minimal change to the practice of statistics. The analysis is not only relevant for artificial samples, but also for present-day huge samples, associated with the availability of big data.
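As a minimal illustration of the effect-size ingredient discussed above (not the paper's combined indicator), Cohen's d on two huge simulated samples shows how a difference can be statistically "significant" yet practically negligible; all numbers are synthetic.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: standardized mean difference with pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# With samples this large, a t-test all but always rejects the null hypothesis,
# while the effect size stays honest about practical relevance.
rng = np.random.default_rng(6)
a = rng.normal(0.00, 1.0, 1_000_000)
b = rng.normal(0.01, 1.0, 1_000_000)   # tiny true difference
print(f"Cohen's d = {cohens_d(a, b):.4f}  (negligible effect despite 'significance')")
```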

Tolerance Redistributing of the Reassembly Dimensional Chain on Measure of Uncertainty
*Entropy* **2016**, *18*(10), 348; doi:10.3390/e18100348 - 9 October 2016

**Abstract**


How to use the limited precision of remanufactured parts to assemble higher-quality remanufactured products is a challenge for remanufacturing engineering under uncertainty. On the basis of analyzing the uncertainty of remanufactured parts, this paper takes tolerance redistribution for the reassembly (remanufactured assembly) dimension chain as its research object. An entropy model to measure the uncertainty of the assembly dimension chain is built, and we quantify the degree of the uncertainty gap between reassembly and assembly. Then, in order to make sure the uncertainty of reassembly is not lower than that of assembly, a tolerance redistribution optimization model for the reassembly dimension chain is proposed, based on the grading allocation method for tolerances. Finally, this paper takes the dimension chain of a remanufactured gearbox assembly as an example: the redistribution optimization model saves 19.11% of the cost while maintaining the assembly precision of the remanufactured product. It provides new technical and theoretical support for increasing the utilization rate of remanufactured parts and improving reassembly precision.
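One simple way to attach an entropy to a dimension chain, assuming independent, normally distributed link deviations whose tolerance bands span six standard deviations, is the differential entropy of the closing link. This sketch illustrates that measurement idea only; the paper's model and its grading allocation step are not reproduced, and all tolerances are illustrative.

```python
import numpy as np

def chain_entropy(tolerances, coverage=6.0):
    """Differential entropy (nats) of the closing link of a linear dimension
    chain, assuming independent normal links whose tolerance band T_i spans
    `coverage` standard deviations (T_i = 6*sigma_i by default):
    H = 0.5 * ln(2*pi*e*sigma^2), sigma^2 = sum of link variances."""
    sigmas = np.asarray(tolerances, float) / coverage
    var_total = np.sum(sigmas ** 2)
    return 0.5 * np.log(2.0 * np.pi * np.e * var_total)

tight = [0.02, 0.03, 0.02, 0.04]   # mm, illustrative link tolerances
loose = [0.05, 0.06, 0.05, 0.08]
print(f"entropy, tight chain: {chain_entropy(tight):.3f} nats")
print(f"entropy, loose chain: {chain_entropy(loose):.3f} nats")
```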

Metric for Estimating Congruity between Quantum Images
*Entropy* **2016**, *18*(10), 360; doi:10.3390/e18100360 - 9 October 2016

**Abstract**


An enhanced quantum-based image fidelity metric, the QIFM metric, is proposed as a tool to assess the “congruity” between two or more quantum images. The often confounding contrariety that distinguishes between classical and quantum information processing makes the widely accepted peak-signal-to-noise-ratio (PSNR) ill-suited for use in the quantum computing framework, whereas the prohibitive cost of the probability-based similarity score makes it imprudent for use as an effective image quality metric. Unlike the aforementioned image quality measures, the proposed QIFM metric is calibrated as a pixel difference-based image quality measure that is sensitive to the intricacies inherent to quantum image processing (QIP). As proposed, the QIFM is configured with in-built non-destructive measurement units that preserve the coherence necessary for quantum computation. This design moderates the cost of executing the QIFM in order to estimate congruity between two or more quantum images. A statistical analysis also shows that our proposed QIFM metric has a better correlation with digital expectation of likeness between images than other available quantum image quality measures. Therefore, the QIFM offers a competent substitute for the PSNR as an image quality measure in the quantum computing framework thereby providing a tool to effectively assess fidelity between images in quantum watermarking, quantum movie aggregation and other applications in QIP.
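For reference, the classical PSNR that the abstract argues is ill-suited to the quantum framework can be computed as follows; this is the standard definition for 8-bit images, included only as the classical baseline, not as part of the QIFM itself.

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Classical peak signal-to-noise ratio (dB) between two 8-bit images;
    the baseline metric that the proposed QIFM is designed to replace in the
    quantum setting."""
    mse = np.mean((np.asarray(img_a, float) - np.asarray(img_b, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.2f} dB")
```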

Ordering Quantiles through Confidence Statements
*Entropy* **2016**, *18*(10), 357; doi:10.3390/e18100357 - 8 October 2016

**Abstract**


Ranking variables according to their relevance to predict an outcome is an important task in biomedicine. For instance, such a ranking can be used for selecting a smaller number of genes on which to then apply other sophisticated experiments, restricted to the genes identified as important. A nonparametric method called Quor is designed to provide a confidence value for the ordering of arbitrary quantiles of different populations using independent samples. This confidence may provide insights about possible differences among groups and yields a ranking of importance for the variables. Computations are efficient and use exact distributions, with no need for asymptotic considerations. Experiments with simulated data and with multiple real -omics data sets are performed, and they show the advantages and disadvantages of the method. Quor makes no assumptions other than the independence of samples, so it might be a better option when the assumptions of other methods cannot be asserted. The software is publicly available on CRAN.
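The kind of exact, assumption-light confidence statement described above can be illustrated with order statistics: for continuous data, the number of observations below the p-quantile is Binomial(n, p), which yields exact coverage probabilities for sample order statistics. The construction below, which combines one-sided statements from two independent samples, is a sketch in the spirit of Quor, not necessarily its exact procedure.

```python
import numpy as np
from scipy.stats import binom

def quantile_order_confidence(a, b, p=0.5):
    """Lower bound on the confidence that the p-quantile of population A lies
    below the p-quantile of population B (continuous data, independent
    samples), using exact order-statistic coverage:
      P(xi_p(A) <= a_(k)) = binom.cdf(k - 1, n_a, p)     [1-indexed k]
      P(xi_p(B) >= b_(m)) = 1 - binom.cdf(m - 1, n_b, p)
    combined for pairs with the observed requirement a_(k) < b_(m)."""
    a, b = np.sort(a), np.sort(b)
    n_a, n_b = len(a), len(b)
    cdf_a = binom.cdf(np.arange(n_a), n_a, p)          # cdf_a[k-1], k = 1..n_a
    sf_b = 1.0 - binom.cdf(np.arange(n_b), n_b, p)     # sf_b[m-1] = P(Bin >= m)
    best = 0.0
    for k in range(1, n_a + 1):
        # smallest 1-indexed m with b_(m) > a_(k); larger m only weakens sf_b
        m = np.searchsorted(b, a[k - 1], side="right") + 1
        if m <= n_b:
            best = max(best, cdf_a[k - 1] * sf_b[m - 1])
    return best

rng = np.random.default_rng(8)
a = rng.normal(0.0, 1.0, 80)
b = rng.normal(1.0, 1.0, 80)
print(f"confidence that median(A) < median(B): {quantile_order_confidence(a, b):.3f}")
```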

Realistic Many-Body Quantum Systems vs. Full Random Matrices: Static and Dynamical Properties
*Entropy* **2016**, *18*(10), 359; doi:10.3390/e18100359 - 8 October 2016

**Abstract**


We study the static and dynamical properties of isolated many-body quantum systems and compare them with the results for full random matrices. In doing so, we link concepts from quantum information theory with those from quantum chaos. In particular, we relate the von Neumann entanglement entropy with the Shannon information entropy and discuss their relevance for the analysis of the degree of complexity of the eigenstates, the behavior of the system at different time scales and the conditions for thermalization. A main advantage of full random matrices is that they enable the derivation of analytical expressions that agree extremely well with the numerics and provide bounds for realistic many-body quantum systems.
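A quick numerical check of the full-random-matrix benchmark mentioned above: for the Gaussian orthogonal ensemble (GOE), the Shannon information entropy of the eigenstates approaches ln(0.48 N). The matrix-element normalization here is a common convention assumed for the sketch.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 500

# Full random matrix from the Gaussian orthogonal ensemble (GOE).
M = rng.normal(size=(N, N))
H = (M + M.T) / 2.0
_, vecs = np.linalg.eigh(H)

# Shannon information entropy of each eigenstate in the computational basis:
# S = -sum_k |c_k|^2 ln |c_k|^2. For GOE it approaches ln(0.48 N), the
# benchmark against which realistic many-body eigenstates are compared.
p = vecs ** 2
S = -np.sum(p * np.log(p), axis=0)
print(f"mean eigenstate entropy: {S.mean():.3f}, "
      f"GOE prediction ln(0.48 N) = {np.log(0.48 * N):.3f}")
```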