Entropy and Mixing Entropy for Weakly Nonlinear Mechanical Vibrating Systems

In this paper, we examine Khinchin's entropy for two weakly nonlinear systems of oscillators: a system of coupled Duffing oscillators and a set of Henon-Heiles oscillators. It is shown that the general method of deriving Khinchin's entropy for linear systems can be modified to account for weak nonlinearities, which are modeled as nonlinear springs. To calculate Khinchin's entropy, one needs an analytical expression for the system's phase volume. We obtain this expression with a perturbation method and verify the results against numerical calculations of the phase volume. It is shown that such an approach is valid for weakly nonlinear systems. Extending the author's previous work on linear systems, a mixing entropy is defined for these two sets of oscillators. The mixing entropy is the entropy generated when two systems are combined to create a complex system. It is illustrated that the mixing entropy is always nonnegative, and that it provides insight into the energy behavior of each system. This study is motivated by the limitations of statistical energy analysis. Using the thermodynamic relationship between temperature and entropy, together with Khinchin's entropy, one can define a vibrational temperature, which in turn can be used to derive the power flow proportionality that is the backbone of statistical energy analysis. Although this paper is motivated by statistical energy analysis, it is not devoted to the statistical energy analysis of nonlinear systems.


Introduction
Statistical energy analysis (SEA) is a method of predicting energy transmission in systems that are undergoing high-frequency vibration. SEA is a computationally inexpensive reduced order model that serves as an alternative to numerical approaches such as finite element analysis [1]. In classical SEA, a power flow proportionality law is used to relate the ensemble average energies of individual structural or acoustic subsystems to the ensemble average energy flow rate between subsystems [1][2][3][4][5][6]. These methods are based on the first law of thermodynamics, i.e., the principle of conservation of energy. This principle is used to obtain the energy balance equations of the systems [7]. In SEA, energy is assumed to flow from systems with higher average modal energies to systems with lower average modal energies [8].
The classical SEA approach is valid under a number of limiting assumptions [1,8-11]. Efforts to examine SEA for nonlinear systems have been limited in scope [12]; to the best of our knowledge, few papers are devoted to nonlinear SEA, and it is generally accepted that SEA applies to linear systems. Ref. [13] studies the coupling between a linear acoustic fluid and a nonlinear structure, and shows that although the steady-state response of the structure depends strongly on the structural nonlinearities, the basic power flow proportionality law of SEA is preserved. Ref. [12] studies the energy cascade phenomenon due to large deformations in coupled vibrating plates. Ref. [12] suggests

Motivation
The foundational equation of SEA is the power flow proportionality law, which states that the average power flow (energy exchange rate) between two subsystems is proportional to the difference between the average modal energies of the two subsystems:

P_12 ∝ E_1/N_1 − E_2/N_2, (1)

where P_12 is the power flow between the subsystems, and E_1/N_1 and E_2/N_2 are the average modal energies of the subsystems. Equation (1) is a consequence of the first law of thermodynamics, and is a special case of a fundamental energy relationship [14]:

P_12 ∝ T_1 − T_2, (2)

where T_1 and T_2 denote the vibroacoustic temperatures of the first and second subsystems, respectively. Equation (2) states that the power flow between mechanically vibrating systems is proportional to the difference between their vibroacoustic temperatures. It is understood that the entropy concept can be used to obtain relationships between energy and vibroacoustic temperature. Carcaterra has used entropy to show that T = E/N gives the vibroacoustic temperature for linear systems [15].
Since entropy approaches make no assumptions about system linearity, they can be applied to nonlinear systems. It is therefore expected that entropy analyses can be used to obtain SEA power flow relationships for nonlinear vibroacoustic systems. This expectation is the main motivation for the work presented in this paper. We show that, although the calculation is more complex, a perturbation method can be used to obtain the temperatures of weakly nonlinear vibrating systems.
With the goal of weakening the constraints of SEA, entropy has also been examined in the context of strongly coupled oscillators [25] and nonconservative oscillators [26]. This paper is an extension of the author's previous work to nonlinear systems. By examining entropy for nonlinear systems, vibroacoustic temperatures can be obtained which will enable the development of SEA power flow relationships for nonlinear vibroacoustic systems.

Overview of Khinchin's Entropy
Consider a mechanical system, S, described by a set of N generalized positions, q_i, and N generalized velocities, q̇_i, where i = 1, 2, ..., N. The micro-state of S is defined as the set {q, q̇}. The macro-state of S is given by its total energy, which is a global function of the micro-state [7]. We assume the system can be described by Hamilton's equations of motion. There are infinitely many possible micro-states for a given macro-state, so knowledge of the exact system configuration is lost when only the macro-state of the system is known. The main goal of SEA is to estimate the energies of systems and subsystems without knowing the exact configuration, {q, q̇}. The concepts of micro- and macro-states play an important role when defining entropy [7,15].
Khinchin's definition of entropy [18] has been introduced for vibrating mechanical systems [7,15,17]. A measure of the number of possible micro-states of a system should be defined to obtain Khinchin's entropy. Consider a system, S, described by N position and velocity pairs. In 2N-dimensional phase space, the trajectory of this system sweeps out a closed equi-energy surface over time. There are infinite possible micro-states that fall on this trajectory. The volume, V, enclosed by a system's closed-surface trajectory in phase space is called phase volume. The derivative of the phase volume, V, with respect to energy is called a structure function, Ω. Both phase volume, V, and the structure function, Ω are measure numbers of possible micro-states of the system.
Ω(E) = dV(E)/dE, (4)

where the energy, E, is the total mechanical energy of the system, i.e., the sum of the potential and kinetic energies. The structure function is useful because it is an invariant measure [7,21]. Taking the Laplace transform of the structure function yields the generating function of the system,

Φ(s) = ∫_0^∞ Ω(E) e^{−sE} dE. (5)

The generating function is central to defining Khinchin's entropy. The generating function corresponding to S is given by the product of the generating functions of its subsystems, assuming the total energy is the sum of the subsystems' energies. Such subsystems are referred to as non-overlapping subsystems [7,17]. Denoting the generating function of each subsystem as Φ_i, the generating function of the entire system is

Φ(s) = ∏_{i=1}^{N} Φ_i(s). (6)

Equation (6) is the law of composition of the generating function for a system composed of N non-overlapping subsystems [7,17]. The law of composition is a useful tool for determining the generating functions of weakly coupled systems. This law is proved in Ref. [7].
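In the energy domain, the law of composition corresponds to a convolution: the structure function of two non-overlapping subsystems is the convolution of the subsystem structure functions, whose Laplace transform is the product Φ_1 Φ_2. The sketch below checks this numerically for two 1-DOF linear oscillators with illustrative unit constants (B_1 = B_2 = 1, which are assumptions, not values from the paper):

```python
# Numerical check of the law of composition (Equation (6)): the structure
# function of a composite system is the convolution of the subsystem
# structure functions, so the generating functions multiply.
import math

def omega_linear(E):
    # Structure function of a 1-DOF linear oscillator: V = B*E, so Omega = B = 1.
    return 1.0

def convolve(om1, om2, E, n=2000):
    # Omega(E) = integral_0^E om1(u) * om2(E - u) du, by the trapezoidal rule.
    h = E / n
    s = 0.5 * (om1(0.0) * om2(E) + om1(E) * om2(0.0))
    for i in range(1, n):
        u = i * h
        s += om1(u) * om2(E - u)
    return s * h

E = 0.3
omega_composite = convolve(omega_linear, omega_linear, E)
# For two linear oscillators, V = B*E^2 with N = 2, so Omega = 2*B*E = E (B = 1/2).
omega_expected = E
```

The composite structure function grows linearly in E, exactly the N = 2 linear result of the next section.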
Khinchin's entropy is defined as [18]

H = C + log[Φ(σ) e^{σE}], (7)

subject to

E = −(d/ds) log Φ(s) |_{s=σ}, (8)

where E is the energy of the system, C is an arbitrary scaling constant, and Φ(σ) is the generating function evaluated at s = σ. The condition in Equation (8) ensures that H is minimized at s = σ. The entropy of a system must equal the sum of the entropies of its constituent subsystems.
Khinchin's entropy satisfies this additive entropy property [7]. One can derive an expression for Khinchin's entropy of a system by following these steps [7]:
1. Calculate the equi-energy volume, V, enclosed by the system's closed-surface trajectory in phase space.
2. Differentiate V with respect to energy to obtain the structure function, Ω (Equation (4)).
3. Take the Laplace transform of Ω to obtain the generating function, Φ (Equation (5)).
4. Solve the constraint in Equation (8) for σ, and substitute the result into Equation (7).
This procedure does not make assumptions about the linearity of the system, and is not limited in this respect [7]. As long as the equi-energy volume can be determined, the remaining steps of the procedure are standard. For linear systems, the equi-energy volume is an ellipsoid, and can be determined analytically [7,25,27]. However, this is not generally the case for nonlinear systems.

Linear Systems
For an N-degree-of-freedom (N-DOF) linear conservative system, the energy of the system can be expressed in terms of its modal coordinates, q_i and q̇_i:

E = Σ_{i=1}^{N} (a_i q_i^2 + b_i q̇_i^2), (10)

where a_i and b_i are constants that depend on the modal stiffnesses and masses. Dividing Equation (10) by E yields

Σ_{i=1}^{N} (λ_i q_i^2 + μ_i q̇_i^2) = 1, (11)

where λ_i = a_i/E and μ_i = b_i/E for i = 1, 2, ..., N. Equation (11) can be expressed in matrix form [25],

q^T Λ q = 1, (12)

where Λ is the symmetric coefficient matrix Λ = diag(λ_1, ..., λ_N, μ_1, ..., μ_N) and the coordinate vector is q = {q_1, ..., q_N, q̇_1, ..., q̇_N}^T. Equation (12) describes an ellipsoid in phase space whose i-th semi-axis is the reciprocal square root of the i-th diagonal entry of Λ. The volume of an ellipsoid is the product of its semi-axes, multiplied by a constant. Therefore, the volume enclosed by the phase-space trajectory of an N-DOF conservative linear system is

V = B E^N, (16)

where B is a constant that depends on the physical parameters of the system, such as the stiffnesses and modal masses. The structure function, defined in Equation (4), is

Ω(E) = N B E^{N−1}, (17)

which yields the generating function

Φ(s) = B N!/s^N. (18)

Substituting Equation (18) into Equation (8) and solving for s = σ returns σ = N/E. Substituting σ into Equation (18) yields the generating function corresponding to the volume in Equation (16):

Φ(σ) = B N! (E/N)^N. (19)

Then, substituting Equation (19) into Equation (7) and expanding the result yields the entropy, H_L, of an N-DOF linear system. Since the constant C is arbitrary, it can be defined such that [22]

H_L = N log(E/N). (21)

According to the thermodynamic definition of entropy, the temperature of the system is given by T = (dH/dE)^{−1} = 1/σ, where σ is the value that satisfies Equation (8). For the entropy given in Equation (21), this yields T = E/N. The parameter T = E/N gives the average modal energy of the system [15], which is consistent with the vibroacoustic temperatures used in the power flow proportionality of SEA [14].
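The ellipsoidal phase volume can be checked directly in the simplest case, N = 1, where the equi-energy trajectory in the (x, ẋ) plane is an ellipse of area 2πE/√(km). The sketch below (with illustrative parameter values, not a prescription from the paper) integrates the trajectory numerically and compares against the closed form:

```python
# Numerical verification that the phase volume (area) enclosed by a 1-DOF
# linear oscillator's trajectory is V = 2*pi*E/sqrt(k*m), i.e. V = B*E
# with N = 1, consistent with Equation (16).
import math

m, k, E = 0.1, 1.0, 0.1
x_max = math.sqrt(2.0 * E / k)          # maximum displacement on the trajectory

def v_exact(n=4000):
    # V = 4 * integral_0^{x_max} sqrt((2/m)(E - k*x^2/2)) dx, evaluated with
    # the substitution x = x_max*sin(t) so the integrand is smooth.
    h = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h               # midpoint rule
        x = x_max * math.sin(t)
        xdot = math.sqrt((2.0 / m) * (E - 0.5 * k * x * x))
        total += xdot * x_max * math.cos(t) * h
    return 4.0 * total

V_numeric = v_exact()
V_closed = 2.0 * math.pi * E / math.sqrt(k * m)   # ellipse area pi*a*b
# With V = B*E and N = 1, sigma = N/E, so the vibrational temperature is
T = E / 1.0
```

The numerically integrated area matches the closed form, and the resulting temperature is the average modal energy T = E/N.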

Nonlinear Systems
For nonlinear systems, the equi-energy volume enclosed by the system's trajectory in phase space is not an ellipsoid. Moreover, this volume does not generally have a closed-form solution. It is possible to determine the volume either numerically or through the use of elliptic integrals [21]. However, the inability to obtain the volume in closed form is problematic, since such a result is required for determining the generating function. For systems with weak nonlinearity, the equi-energy volume in phase space can be approximated by taking a polynomial series expansion about a parameter governing the strength of the nonlinearity [7,28]. Assuming weak nonlinearity and using such an approach, the volume can be obtained in the form of a polynomial:

V(E) = Σ_i B_i E^i, (22)

where the B_i are constants that depend on the physical parameters of the system, such as the spring stiffnesses, modal masses, and parameters associated with the nonlinearity. Accordingly, the structure function is given by

Ω(E) = Σ_i Q_i E^{i−1}, (23)

where Q_i := i B_i. The Laplace transform of Ω(E) from E to s yields the generating function:

Φ(s) = Σ_i Q_i (i−1)!/s^i. (24)

From here, the procedure for determining entropy is the same as for linear systems. Khinchin's entropy can be found using Equations (7) and (8). In general, T ≠ E/N for nonlinear systems.
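The nonlinear procedure can be sketched end to end for a hypothetical two-term phase volume V(E) = B_1 E + B_2 E^2 (the coefficients below are illustrative assumptions, not derived from any particular oscillator). The generating function follows from the Laplace transform term by term, and the constraint of Equation (8) is solved numerically for σ, giving a temperature T = 1/σ that deviates from the linear result E once B_2 ≠ 0:

```python
# Sketch of the entropy procedure for a polynomial phase volume
# V(E) = B1*E + B2*E**2, with Omega(E) = B1 + 2*B2*E.
import math

B1, B2 = 1.0, 0.05
E = 0.1

def phi(s):
    # Laplace transform of Omega: L{B1} = B1/s, L{2*B2*E} = 2*B2/s**2
    return B1 / s + 2.0 * B2 / s**2

def constraint(s, ds=1e-6):
    # Equation (8): E + d(log Phi)/ds = 0 at s = sigma
    dlog = (math.log(phi(s + ds)) - math.log(phi(s - ds))) / (2.0 * ds)
    return E + dlog

# Bisection for sigma on a bracketing interval
lo, hi = 1e-3, 1e3
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if constraint(lo) * constraint(mid) <= 0:
        hi = mid
    else:
        lo = mid
sigma = 0.5 * (lo + hi)
T = 1.0 / sigma         # vibrational temperature; T != E once B2 != 0
```

For B_2 = 0 the same script returns σ = 1/E and T = E, recovering the linear 1-DOF result.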

Mixing Entropy
Khinchin's entropy satisfies the entropy inequality property [7,17]:

H ≥ H*, (25)

where H is the entropy of the system and H* is the hypothetical decoupled entropy of the system [17]. The hypothetical decoupled entropy of each subsystem, H*_i, is defined as the entropy of that subsystem obtained by neglecting coupling effects. In other words, Khinchin's definition of entropy for an isolated system is used to obtain the decoupled entropy of each subsystem. The hypothetical decoupled entropy of a composite system, H*, is given by the sum of the hypothetical decoupled entropies of its subsystems:

H* = Σ_i H*_i. (26)

The mixing entropy of a system is defined as

H_mix = H − H* ≥ 0. (27)

The mixing entropy is the difference between the actual entropy and the hypothetical decoupled entropy of the system; it is therefore the entropy produced by the coupling of systems. When coupling is introduced between oscillators, their behavior becomes more complex. Since entropy is a measure of system disorder, the act of combining oscillators cannot cause a decrease in the total entropy. Moreover, the entropy of the system will increase if the oscillators are interacting with one another. Thus, the mixing entropy must be nonnegative. This has been shown to be true for linear systems [27]. It will be demonstrated that the mixing entropy is nonnegative for the nonlinear systems considered in this paper.
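For two 1-DOF linear subsystems, the inequality can be verified directly: by Equation (21) the composite entropy is H = 2 log(E/2), the decoupled entropies sum to H* = log E_1 + log E_2, and H ≥ H* is the AM-GM inequality. A quick numerical sweep over energy splits (the total energy below is an illustrative value):

```python
# Illustration that the mixing entropy H - H* is nonnegative for two
# coupled 1-DOF linear oscillators, with equality when E1 == E2.
import math

E_total = 0.2
h_mix_values = []
for i in range(1, 100):
    E1 = E_total * i / 100.0              # sweep the energy split E1 + E2 = E
    E2 = E_total - E1
    H = 2.0 * math.log(E_total / 2.0)     # actual entropy of the composite
    H_star = math.log(E1) + math.log(E2)  # hypothetical decoupled entropy
    h_mix_values.append(H - H_star)

min_h_mix = min(h_mix_values)             # zero exactly when E1 == E2
```

The minimum over the sweep is zero, attained at the equal-energy split, mirroring the behavior observed later for the coupled Duffing system.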

Entropy of Duffing Oscillators
In this section, entropy will be examined for Duffing oscillators. Section 4.1 considers a Duffing oscillator of constant energy, and Section 4.2 discusses a pair of Duffing oscillators coupled by a linear spring. Figure 1 shows a Duffing oscillator. This oscillator has position x, velocity ẋ, mass m, linear spring stiffness k, and a nonlinear spring stiffness k_nl associated with an anharmonic potential. The anharmonic potential, U_nl(x), is given by

A Duffing Oscillator
U_nl(x) = (1/4) ν k_nl x^4, (28)

where ν is a small parameter that governs the strength of the nonlinearity. The energy, E, of the Duffing oscillator is

E = (1/2) m ẋ^2 + (1/2) k x^2 + (1/4) ν k_nl x^4. (29)

Therefore, the equation of motion is

m ẍ + k x + ν k_nl x^3 = 0. (30)

Since the oscillator is not externally forced or damped, the total energy of the system is constant. The Duffing oscillator experiences a restoring force, kx + ν k_nl x^3, from the linear and nonlinear springs. For the numerical results presented in this section, the oscillator is given a linear stiffness k = 1, a nonlinear stiffness k_nl = 1, a mass m = 0.1, and an initial velocity excitation v_0 = √2, such that the energy is E = 0.1. All values are given in SI units.
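The constant-energy claim is easy to check numerically. Below is a minimal sketch integrating the unforced, undamped Duffing equation of motion with classical RK4 and monitoring the total energy; the parameter values follow this section (k = k_nl = 1, m = 0.1, v_0 = √2), while ν = 0.2 and the step size are illustrative choices:

```python
# Energy conservation check for m*x'' + k*x + nu*k_nl*x**3 = 0 (Duffing).
import math

m, k, k_nl, nu = 0.1, 1.0, 1.0, 0.2

def accel(x):
    return -(k * x + nu * k_nl * x**3) / m

def energy(x, v):
    # Total mechanical energy, including the quartic anharmonic potential.
    return 0.5 * m * v * v + 0.5 * k * x * x + 0.25 * nu * k_nl * x**4

def rk4_step(x, v, h):
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5 * h * k1v, accel(x + 0.5 * h * k1x)
    k3x, k3v = v + 0.5 * h * k2v, accel(x + 0.5 * h * k2x)
    k4x, k4v = v + h * k3v, accel(x + h * k3x)
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

x, v = 0.0, math.sqrt(2.0)           # initial velocity excitation v0
E0 = energy(x, v)                    # = 0.1
h = 1e-3
for _ in range(20000):               # 20 time units
    x, v = rk4_step(x, v, h)
drift = abs(energy(x, v) - E0) / E0  # relative energy drift
```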

The Phase Volume
The equi-energy volume enclosed by the trajectory of the system cannot be determined in closed form. Therefore, a method based on series expansion and perturbation is used to calculate the volume enclosed by the equi-energy surface. Rearranging Equation (29), the velocity of the oscillator is

ẋ = √((2/m)(E − (1/2) k x^2 − (1/4) ν k_nl x^4)). (31)

The volume enclosed by the Duffing oscillator's trajectory in phase space is

V = 4 ∫_0^{x_M} ẋ dx. (32)

In Equation (32), x_M is the abscissa of the intersection between Σ and the x axis [15]; this is the maximum position of the oscillator. Equation (32) exploits the symmetry of the trajectory in phase space to integrate over only the section of the surface in the first quadrant. The closed-surface trajectory, Σ, of the oscillator is shown in Figure 2 for several values of ν. It is shown that increasing the strength of the nonlinearity causes the trajectory to compress symmetrically about the ẋ axis. The system is stable since all forces acting on the oscillator are restoring forces. Therefore, increasing the strength of the nonlinearity, ν, will not cause the system to deviate from its oscillatory behavior. The integration in Equation (32) can only be carried out numerically or with the use of elliptic integrals [21]. This is problematic, because in order to determine entropy, it is necessary to have a phase-space volume, V, that is in closed form with respect to the system energy, i.e., V(E). In other words, it must be possible to obtain a generating function, and thereby an entropy, from V. Thus, a series expansion of the velocity is used to determine the phase-space volume, V, for weak nonlinearity, i.e., small values of ν. In this section, a fourth order approximation is shown; the results for lower order expansions can be obtained by eliminating higher order terms from the fourth order solution. Taking a fourth order series expansion of the velocity, ẋ, about ν = 0 and denoting the i-th order term of the expansion as ẋ^[i],

ẋ ≈ ẋ^[0] + ẋ^[1] ν + ẋ^[2] ν^2 + ẋ^[3] ν^3 + ẋ^[4] ν^4, (33)

where the right-hand side is the approximate velocity obtained via series expansion. The coefficients, {ẋ^[0], ẋ^[1], ẋ^[2], ẋ^[3], ẋ^[4]}, are obtained using a Taylor series approximation. The maximum velocity, ẋ_M, is found by setting x = 0 in Equation (33). In order to integrate the velocity approximated by Equation (33), the limits of integration must be determined. These are given, respectively, by zero and the maximum oscillator position, x_M. Applying a fourth order series expansion to x_M,

x_M ≈ x_M^[0] + x_M^[1] ν + x_M^[2] ν^2 + x_M^[3] ν^3 + x_M^[4] ν^4. (36)

The perturbation method can be used to find the coefficients of the expansion of x_M about ν = 0. First, ẋ is set to zero in Equation (29):

E = (1/2) k x_M^2 + (1/4) ν k_nl x_M^4. (37)

Then, Equation (36) is substituted into Equation (37). Retaining only the zeroth order terms with respect to ν,

(1/2) k (x_M^[0])^2 = E, (38)

and solving for x_M^[0] yields

x_M^[0] = √(2E/k). (39)

Next, Equation (36) is substituted into Equation (37), this time retaining terms up through the first order in ν (Equation (40)). Substituting Equation (39) into Equation (40) and solving for x_M^[1] gives the first order coefficient (Equation (41)). This procedure is applied recursively to find the subsequent terms of higher order, through the fourth order in ν. Having approximated x_M, the volume enclosed by the oscillator's trajectory in phase space is given by substituting Equation (36) into Equation (32) (Equation (43)). Assuming that ν is sufficiently small due to the weak nonlinearity, so that the phase volume can be represented as a polynomial (cf. Equation (16)), the trapezoidal rule can be used to approximate the integration in Equation (43). However, it can be verified that the higher order terms of the maximum position, x_M, have a negligible effect on the phase-space volume.
See Appendix A. Neglecting these terms, Equation (43) reduces to Equation (44). Using first through fourth order expansions in the velocity, the phase-space volume approximations V^(1) through V^(4) are obtained from Equation (44) (Equation (45)). The phase-space volume of a linear oscillator with mass m and spring stiffness k is

V_L = 2πE/√(km). (46)

Comparing Equations (45) and (46), it is clear that the zeroth order terms of the nonlinear approximations are equivalent to the linear result, as expected. Figure 3 compares the volume approximations from Equation (45) to the exact volume from Equation (32) as functions of the strength of nonlinearity, ν. Figure 3a compares these volumes for ν = 0 through ν = 1, and Figure 3b compares them for ν = 0 through ν = 6. The exact volume is obtained by numerically integrating Equation (32). In each subplot, a horizontal dashed line denotes the phase-space volume of a linear oscillator, V_L, corresponding to ν = 0. Each nonlinear phase volume converges to the linear volume as ν → 0. For the small values of ν examined in Figure 3a, accuracy improves with the order of the expansion in ν; the fourth order approximation, V^(4), is the most accurate for weak nonlinearity. However, Figure 3b shows that the higher order approximations do not improve accuracy as ν becomes large. In fact, for large ν the higher order approximations deviate more from the exact volume than those of lower order. This is a consequence of expanding Equation (32) about ν = 0 to find the phase volume, which assumes weak nonlinearity. Figure 4 shows the percent error between each of the phase volume approximations and the exact phase volume. The percent error in V^(i) is defined as

err^(i) = 100 |V − V^(i)|/V. (47)

Figure 4a shows that each approximation has an error of less than 0.5 percent for nonlinearity smaller than ν = 1. As shown in Figure 4b, the approximation error grows more rapidly for the higher order expansions as ν becomes large.
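The convergence of the perturbative volume to the exact one can be reproduced with a short script. The first-order formula below is rederived here by expanding the integrand of Equation (32) about ν = 0; it is a sketch consistent with, but not copied from, the paper's Equation (45). The exact volume is computed by numerical quadrature:

```python
# Exact vs. first-order phase volume of the Duffing oscillator, for the
# section's parameters (k = k_nl = 1, m = 0.1, E = 0.1) and a small nu.
import math

m, k, k_nl, E = 0.1, 1.0, 1.0, 0.1
nu = 0.05

def x_max(nu):
    # Positive root of (k/2)x^2 + (nu*k_nl/4)x^4 = E (quadratic in x^2).
    if nu == 0.0:
        return math.sqrt(2.0 * E / k)
    a = nu * k_nl / 4.0
    u = (-k / 2.0 + math.sqrt(k * k / 4.0 + nu * k_nl * E)) / (2.0 * a)
    return math.sqrt(u)

def volume(nu, n=20000):
    # V = 4 * integral_0^{x_M} xdot dx, with x = x_M*sin(t) (midpoint rule).
    xm = x_max(nu)
    h = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x = xm * math.sin(t)
        inside = (2.0 / m) * (E - 0.5 * k * x * x - 0.25 * nu * k_nl * x**4)
        total += math.sqrt(max(inside, 0.0)) * xm * math.cos(t) * h
    return 4.0 * total

V_numeric = volume(nu)
V_linear = 2.0 * math.pi * E / math.sqrt(k * m)
# First-order correction from expanding the integrand about nu = 0
# (an assumption rederived here, negative because the trajectory compresses):
V_first = V_linear - (3.0 * math.pi / 4.0) * nu * k_nl * E**2 / (k**2 * math.sqrt(m * k))
```

For small ν the first-order result tracks the quadrature closely, and the correction is negative, consistent with the compression of the trajectory seen in Figure 2.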
The higher order expansions demonstrate improved results for small ν, as shown in Figure 4a. Sufficiently accurate results are obtained even when ν = 1, the case in which the nonlinear spring stiffness equals the linear spring stiffness (k_nl = k). In this case, the nonlinearity is not weak, at least in comparison to the linear spring stiffness. Therefore, with appropriate care, this method for obtaining the phase volume can be applied to a wider range of nonlinearities than what is suggested by the relative stiffness of the nonlinear spring. Useful insight can be gained by fixing the strength of nonlinearity to ν = 1 and varying the linear natural frequency of the system, ω = √(k/m), while maintaining k_nl = k. The phase volume approximations are compared to the exact volume in Figure 5a, and their percent errors are shown in Figure 5b. Figure 5a contains three regimes. At low frequencies, the effect of the nonlinearity is greater, so the nonlinearity is strong and the volume approximations fail. As the natural frequency increases, ν = 1 becomes a weakly nonlinear case; this regime (approximately 1.5 < ω < 2 for ν = 1) is where the volume approximations become viable. Figure 5b confirms that the errors in the phase volume approximations become small at ω ≈ 2. For ω > 2, the nonlinear volume, V, is effectively indistinguishable from the linear system volume, V_L. These observations suggest that this approach is especially viable for systems undergoing high frequency vibration, which are the focus of SEA.

Figure 5. Comparison of (a) the first through fourth order approximate volumes, V^(1) through V^(4), and the exact volume, V, and (b) the error in the first through fourth order approximate volumes, for ω ≈ 0.09 through ω = 3.0.

Khinchin's Entropy
Khinchin's entropy will be derived using only the first and second order expansions of the phase volume about ν = 0. This approach can be extended to higher order if more accuracy is desired. Defining the structure function to the first and second order with respect to ν, the generating function is given by taking the Laplace transform of the structure function (Equations (49a) and (49b)). The zeroth order term of the generating function is equivalent to the generating function of a linear oscillator [7,17]. Khinchin's entropy must satisfy both Equation (7) and the constraint equation, Equation (8). The parameter σ is obtained to the first and second order with respect to ν by substituting Equations (49a) and (49b) into Equation (8) and solving for σ^(1) and σ^(2), respectively. The series expansions of the temperatures, defined as T = 1/σ, can then be taken to the first and second order about ν = 0 (Equations (50a) and (50b)). The presence of weak nonlinearity results in an increase in the temperature of the Duffing oscillator relative to that of a linear oscillator, for which the temperature is T_L = E. Substituting Equations (50a) and (50b) into Equation (7) and setting C = 0 yields the entropy as a function of energy (Equation (51)), in which higher order terms are neglected. The entropies from Equation (51) are compared to the entropy of a linear oscillator [7,17] (ν = 0) in Figure 6. Figure 6a shows that H^(1) and H^(2) are indistinguishable from one another for ν < 0.2. This is expected, since both results should be accurate for a weakly nonlinear oscillator. Figure 6b shows that H^(1) and H^(2) diverge rapidly as the strength of nonlinearity grows beyond ν ≈ 1. For ν > 1, the assumption of weak nonlinearity fails, and this approach becomes unreliable. Figure 7 shows a two-degree-of-freedom system of coupled Duffing oscillators [15,21]. Each oscillator has a spring with a linear restoring force as well as a nonlinear spring with a cubic restoring force.
The oscillators are coupled by a linear spring with stiffness k_c. The total energy of the system is

Coupled Duffing Oscillators
E = E_1 + E_2 + E_c, (52)

where the oscillator energies are given by

E_1 = (1/2) m_1 ẋ_1^2 + (1/2) k_1 x_1^2 + (1/4) ν k_3 x_1^4,
E_2 = (1/2) m_2 ẋ_2^2 + (1/2) k_2 x_2^2 + (1/4) ν k_4 x_2^4, (53)

and the energy associated with the coupling of the oscillators is

E_c = (1/2) k_c (x_1 − x_2)^2. (54)

In Equations (52) through (54), k_1 and k_2 are the linear spring stiffnesses of oscillators 1 and 2, respectively; k_3 and k_4 are the corresponding nonlinear spring stiffnesses; m_1 and m_2 are the oscillator masses; and ν remains a dimensionless scaling parameter that governs the strength of the nonlinearity of the oscillators. In this analysis, it is assumed that the oscillators are weakly coupled, i.e., the energy associated with the coupling is small compared to the subsystem energies. That is to say,

E_c ≪ E_1, E_2. (55)

For the numerical results presented in this section, the following values are selected: linear spring stiffnesses k_1 = k_2 = 1, nonlinear spring stiffnesses k_3 = k_4 = 1, oscillator masses m_1 = m_2 = 0.1, and coupling spring stiffness k_c = 0.01. All values are given in SI units. Values of ν will be selected such that the system satisfies the assumption of weak nonlinearity. Similarly, the coupling stiffness k_c is sufficiently small for the system to satisfy the weak coupling assumption, but not so small that the oscillators do not interact with one another.
The weak coupling assumption, Equation (55), enables the use of the law of composition of the generating function from Equation (6). The energies of the oscillators are compared to the total system energy for ν = 0.2 in Figure 8. Figure 8 verifies that the total energy of the system is approximately equal to the sum of the oscillator energies. Thus, the weak coupling assumption is valid for the selected strength of nonlinearity.
Using Lagrangian mechanics, the equations of motion for the coupled Duffing system are

m_1 ẍ_1 + k_1 x_1 + ν k_3 x_1^3 + k_c (x_1 − x_2) = 0,
m_2 ẍ_2 + k_2 x_2 + ν k_4 x_2^3 + k_c (x_2 − x_1) = 0. (56)
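The weak coupling assumption can be checked directly by simulation. The sketch below integrates the coupled Duffing equations of motion with RK4 (ν = 0.2 and the initial condition, which excites only the first oscillator, are illustrative choices) and confirms that the coupling energy E_c stays small relative to E_1 + E_2 while the total energy is conserved:

```python
# Weak-coupling check for the coupled Duffing system of this section.
import math

m1 = m2 = 0.1
k1 = k2 = 1.0
k3 = k4 = 1.0
kc, nu = 0.01, 0.2

def energies(x1, v1, x2, v2):
    E1 = 0.5 * m1 * v1 * v1 + 0.5 * k1 * x1 * x1 + 0.25 * nu * k3 * x1**4
    E2 = 0.5 * m2 * v2 * v2 + 0.5 * k2 * x2 * x2 + 0.25 * nu * k4 * x2**4
    Ec = 0.5 * kc * (x1 - x2) ** 2
    return E1, E2, Ec

def deriv(s):
    x1, v1, x2, v2 = s
    a1 = -(k1 * x1 + nu * k3 * x1**3 + kc * (x1 - x2)) / m1
    a2 = -(k2 * x2 + nu * k4 * x2**3 + kc * (x2 - x1)) / m2
    return (v1, a1, v2, a2)

def rk4(state, h):
    k1_ = deriv(state)
    k2_ = deriv(tuple(s + 0.5 * h * d for s, d in zip(state, k1_)))
    k3_ = deriv(tuple(s + 0.5 * h * d for s, d in zip(state, k2_)))
    k4_ = deriv(tuple(s + h * d for s, d in zip(state, k3_)))
    return tuple(s + h * (a + 2 * b + 2 * c + d) / 6.0
                 for s, a, b, c, d in zip(state, k1_, k2_, k3_, k4_))

state = (0.0, math.sqrt(2.0), 0.0, 0.0)   # oscillator 1 excited, E ~ 0.1
h, steps = 1e-3, 50000                    # 50 time units
E1, E2, Ec = energies(*state)
E_total0 = E1 + E2 + Ec
max_coupling_ratio = 0.0
for _ in range(steps):
    state = rk4(state, h)
    E1, E2, Ec = energies(*state)
    max_coupling_ratio = max(max_coupling_ratio, Ec / (E1 + E2))
E_total = E1 + E2 + Ec
```

With k_c = 0.01 the coupling energy remains a few percent of the subsystem energies at most, supporting the use of the law of composition in the next section.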

Entropy of the Coupled Duffing System
The entropy of the coupled Duffing system will be found by utilizing the law of composition, Equation (6), to determine the generating function of the system. The generating function of the entire system, Φ, is taken to be the product of the generating functions of the oscillators, Φ_1 and Φ_2. The generating function of each oscillator is found to the first and second order by substituting the appropriate physical parameters into Equation (49). The generating function of the entire system is then found by taking the product of the first and second order generating functions of the oscillators (Equations (57a) and (57b)). Substituting Equation (49) for each oscillator into Equation (57) and simplifying yields Equation (58). The temperature T^(1) can be found by substituting the generating function from Equation (57a) into the constraint equation, Equation (8), to find σ^(1); then T^(1) = 1/σ^(1). T^(2) can be found similarly by solving for σ^(2) using Equation (57b). When finding σ^(2), the series expansion of the left side of Equation (8) is taken to the second order with respect to ν, and the resulting equation is then solved for σ^(2). Taking the series expansions of T^(1) and T^(2) to the first and second order about ν = 0 yields Equation (59). The temperatures in Equation (59) agree in their zeroth and first order terms, such that T^(2) = T^(1) + T^[2](ν), where T^[2](ν) is the second order term with respect to ν. The zeroth order term is identical to the result for linear oscillators, which is consistent with previous results by Carcaterra [7]. The same approach used here to obtain the temperature can be applied to systems with many degrees of freedom. This would allow for the extension of SEA into the nonlinear regime, which is a priority for future work.
Using Equation (7) and setting C = 0, the entropy of the coupled Duffing system is found by dropping all but the first and second order terms, respectively, from the corresponding full expressions. This yields entropies H^(1) and H^(2), where D^(2) denotes the second order series expansion of E/T^(2) about ν (Equation (62)). While Equation (62) does not have to be obtained by series expansion, this approach produces a mathematically elegant result without a significant loss of accuracy.

Mixing Entropy of the Coupled Duffing System
The entropy inequality property, Equation (27), states that the mixing entropy of a composite system must be nonnegative, where the mixing entropy is defined as the difference between the actual entropy of the system and the sum of the hypothetical decoupled entropies of the subsystems. The hypothetical decoupled entropy of each oscillator is the entropy found by neglecting the entropy generated by coupling. In other words, Equation (51) yields the hypothetical decoupled entropy of each oscillator upon substitution of the appropriate physical parameters.
The hypothetical decoupled entropies of the oscillators are denoted H*^(1)_1(E_1) and H*^(1)_2(E_2) using the first order approximation with respect to ν, and H*^(2)_1(E_1) and H*^(2)_2(E_2) using the second order approximation. The total hypothetical decoupled entropies, using the first and second order approximations with respect to ν, are

H*^(1) = H*^(1)_1(E_1) + H*^(1)_2(E_2),
H*^(2) = H*^(2)_1(E_1) + H*^(2)_2(E_2).

From here, the mixing entropy is the difference between the actual entropy of the system and the total hypothetical decoupled entropy:

H_mix^(1) = H^(1) − H*^(1), H_mix^(2) = H^(2) − H*^(2).

It is important to note that the actual entropy, H, is a function of the total energy of the system. Since the system is conservative, the total energy, and therefore the actual entropy, is constant. However, the hypothetical decoupled entropy is a function of both subsystem energies, and is therefore a function of time. Consequently, the mixing entropy is a function of time as well.
The actual entropy, total hypothetical decoupled entropy, and mixing entropy of the coupled Duffing system are plotted to the first and second order for ν = 0.2 in Figure 9. For ν = 0.2, the first and second order approximations are indistinguishable for the actual, hypothetical decoupled, and mixing entropies. Thus, a first order approximation with respect to ν is sufficient in this case. Figure 9 verifies that the actual entropy of the system is constant, while the hypothetical decoupled entropy is time-dependent. As stated by the entropy inequality property, the mixing entropy is nonnegative. The mixing entropies shown in Figure 9 are similar to the result that one would see for a linear oscillator [17]. When the oscillator energies are equivalent, the coupled Duffing system is indistinguishable from a pair of uncoupled Duffing oscillators. Therefore, the relative minima of the mixing entropy correspond to the points at which E 1 = E 2 . This is because there is no exchange of energy at the instances when the oscillator energies are equal to one another. In other words, there is no coupling contribution to the total entropy at these instances [17]. Thus, the mixing entropy is minimized. Because of its relationship with energy, the mixing entropy is a useful tool for understanding the qualitative behavior of complex systems with many degrees of freedom.

Entropy of Henon-Heiles Oscillators
In this section, Khinchin's entropy will be found for weakly nonlinear Henon-Heiles oscillators [21,24]. Section 5.1 discusses an oscillator with a third order anharmonic potential; this oscillator is one of the nonlinear components of the Henon-Heiles system. While the anharmonic term of a Duffing oscillator produces a restoring force, the nonlinear potential considered in Section 5.1 produces a repulsive force. Section 5.2 discusses a set of Henon-Heiles oscillators.

Single Degree of Freedom Oscillator with Third Order Anharmonic Potential
Consider an oscillator with position x, velocity ẋ, mass m, spring stiffness k, and a nonlinear spring stiffness k_nl associated with a third order anharmonic potential. The anharmonic potential, U_nl(x), is given by

U_nl(x) = −(1/3) ν k_nl x^3, (65)

where ν is a small parameter that governs the strength of the nonlinearity. When this oscillator is nonlinearly coupled to a simple oscillator, the configuration forms a set of Henon-Heiles oscillators [21]. The energy, E, of the nonlinear oscillator is

E = (1/2) m ẋ^2 + (1/2) k x^2 − (1/3) ν k_nl x^3, (66)

such that the equation of motion is

m ẍ + k x − ν k_nl x^2 = 0. (67)

The negative quadratic term in Equation (67) is a repulsive force. Consequently, one of the poles of this system is nonnegative. This introduces the possibility of the system becoming unstable, which is not present for the Duffing oscillator described by Equation (30). The potential for system instability results in certain limitations, which are discussed in Section 5.1.1.
The following values are selected for the numerical results presented in Section 5.1: linear spring stiffness k = 1, oscillator mass m = 1, initial velocity v 0 = √ 2, and nonlinear spring stiffness k nl = 1, with varying strength of nonlinearity, ν. All values are given in SI units.

The Phase Volume
Solving Equation (66) for ẋ, the velocity of the nonlinear oscillator described by Equation (67) is

ẋ(x) = ±√( (2/m) (E − (1/2) k x² + (1/3) ν k_nl x³) ). (68)

The phase-space trajectory of the oscillator is shown in Figure 10 using Equation (68) for several values of ν. The third order anharmonic potential, Equation (65), introduces asymmetry into the phase-space trajectory of the oscillator. It is shown that with increasing nonlinearity, the trajectory of the system expands asymmetrically along the x-axis. Moreover, the instability of the system causes the solution to blow up when ν is greater than a certain value, ν_c. For ν > ν_c, the initial velocity imparts sufficient energy into the system for the oscillator to escape from the origin [21]. When this occurs, the trajectory no longer forms a closed surface in phase space. When ν < ν_c, the behavior is oscillatory, and at ν = ν_c, the oscillator settles into an unstable equilibrium position. The position of the oscillator is shown over time for ν < ν_c, ν = ν_c, and ν > ν_c in Figure 11 [21]. To find the critical value, ν_c, it is necessary to find the unstable equilibrium point. The equilibrium points of the system are where the position of the oscillator is constant, i.e., ẍ = ẋ = 0 [29]. That is, Equation (67) becomes

k x − ν k_nl x² = 0, (69)

which is satisfied when

x = 0 or x = x_c = k/(ν k_nl). (70)

The point {x = 0, ẋ = 0} is a stable equilibrium point, while {x = x_c, ẋ = 0} is an unstable equilibrium point. The critical value of the strength of nonlinearity, ν_c, can be found by substituting the position and velocity at unstable equilibrium, {x_c, 0}, into Equation (66) and solving for ν. This yields

ν_c = √( k³ / (6 E k_nl²) ). (71)

Figure 11. Position of the oscillator described by Equation (67), shown for ν just above, below, and equal to the critical value, ν_c.
For the physical parameters and initial conditions selected in this section, ν_c = 1/√6 ≈ 0.40825. To provide some insight into how the behavior of the oscillator changes with increasing nonlinearity, phase portraits for ν = 0, ν = 0.25, ν = 0.4, and ν = 1/√6 are shown in Figure 12. The point markers on the plot denote equilibrium points, the dashed line denotes the separatrix when ν = ν_c, and the solid line is the system trajectory when E = 1. When ν = 0, the system is stable and linear, and the only equilibrium point is at the origin. The streamlines within the solid line represent trajectories with E < 1, and those outside of the solid line represent trajectories with E > 1. As ν is increased, a saddle point develops at p_2 = {x_c, 0}. Since x_c = k/(ν k_nl), the saddle point approaches the origin as the strength of nonlinearity increases. Figure 12d shows that when ν = ν_c = 1/√6, the trajectory of the oscillator is along the separatrix, and the oscillator reaches unstable equilibrium. It is shown that trajectories within the left loop of the E = 1 trajectory (trajectories with E < 1) are stable, whereas those with E ≥ 1 are unstable.
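These claims admit a quick numerical check. The sketch below (a minimal illustration, not code from the paper) evaluates ν_c from Equation (71) for the parameters of this section (k = m = k_nl = 1, v_0 = √2, so E = 1), then integrates Equation (67) with a classical RK4 scheme for one value of ν below ν_c and one above it, confirming bounded oscillation in the first case and escape in the second.

```python
import math

# Parameters from this section (k = m = k_nl = 1, v0 = sqrt(2), so E = 1).
k, m, k_nl = 1.0, 1.0, 1.0
E = 0.5 * m * 2.0  # E = (1/2) m v0^2 with v0 = sqrt(2)

# Critical strength of nonlinearity, Equation (71): nu_c = sqrt(k^3/(6 E k_nl^2)).
nu_c = math.sqrt(k**3 / (6.0 * E * k_nl**2))
print(nu_c)  # 1/sqrt(6) ~ 0.40825

def simulate_max_x(nu, dt=1e-3, t_end=20.0):
    """Integrate m x'' + k x - nu k_nl x^2 = 0 with classical RK4 from
    x(0) = 0, v(0) = sqrt(2); return the largest |x| reached
    (stopping early once the trajectory has clearly escaped)."""
    def acc(x):
        return (-k * x + nu * k_nl * x**2) / m
    x, v = 0.0, math.sqrt(2.0)
    max_x = 0.0
    for _ in range(int(t_end / dt)):
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
        x += dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
        max_x = max(max_x, abs(x))
        if max_x > 10.0:   # trajectory has escaped from the origin
            break
    return max_x

print(simulate_max_x(0.2))  # nu < nu_c: bounded oscillation
print(simulate_max_x(0.5))  # nu > nu_c: the oscillator escapes
```

The step size dt = 10⁻³ is ample here since the linear natural frequency is of order one; the early-exit threshold of 10 is an arbitrary cutoff well beyond the bounded amplitudes.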
As a result of the instability of the oscillator, ν must be less than ν_c for the trajectory to form a closed surface. In the case of weak nonlinearity, this is generally true, and a phase-space volume can be found. The trajectory is asymmetric, and thus the magnitude of the most positive position, x_M^+, is not the same as the magnitude of the most negative position, x_M^-. Therefore, the equi-energy volume enclosed by the phase-space trajectory is

V = 2 ∫_{x_M^-}^{x_M^+} ẋ(x) dx. (72)

In this section, an approximation of the phase volume is obtained to the fourth order with respect to ν. Taking the series expansion of the integrand of Equation (72) yields

ẋ(x) ≈ ẋ^[0] + ẋ^[1] ν + ẋ^[2] ν² + ẋ^[3] ν³ + ẋ^[4] ν⁴, (73)

where the coefficients ẋ^[i] follow from expanding Equation (68) about ν = 0. A perturbation approach is used to approximate x_M^+ and x_M^- to fourth order: taking Equation (66) and setting ẋ = 0, the perturbation method yields the two turning points x_M^+ and x_M^-. The phase-space volume of the nonlinear oscillator described by Equation (67) is then given from the first through fourth order with respect to ν. V^(1) is equivalent to the linear oscillator volume, V_L, from Equation (46). Thus, it is necessary to approximate the phase-space volume to the second order with respect to ν to see an improvement over the linear result. Additionally, the third order approximation, V^(3), is identical to the second order result. The volume approximations and their percent errors are compared in Figures 13 and 14, respectively. It is shown that the phase volume of this oscillator increases with ν. Figure 13 verifies that the first order approximation, V^(1), is the same as the linear result. The second and fourth order volume approximations, V^(2) and V^(4), are indistinguishable for small ν. Unlike the result for the Duffing oscillator, V^(4) has improved accuracy over V^(2) for larger values of ν, as shown in Figure 14. This does not hold true in general. For weakly nonlinear systems, the difference in accuracy between the two approximations is negligible.
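The perturbation results can also be checked against a direct quadrature of Equation (72). The sketch below (an independent numerical check, assuming the parameters of this section: k = m = k_nl = 1, E = 1) finds the turning points x_M^± as roots of the energy equation and evaluates the volume with the trapezoid rule; for ν = 0 it reproduces the linear volume V_L = 2πE/√(km), and the volume grows with ν, as Figure 13 shows.

```python
import numpy as np

# Parameters from this section: k = m = k_nl = 1, E = 1.
k, m, k_nl, E = 1.0, 1.0, 1.0, 1.0

def phase_volume(nu, n=200_001):
    """Numerically evaluate V = 2 * integral of xdot(x) between the
    turning points x_M^- and x_M^+, with
    xdot(x) = sqrt((2/m)(E - k x^2/2 + nu k_nl x^3/3))  (Equation (68))."""
    if nu == 0.0:
        x_lo, x_hi = -np.sqrt(2*E/k), np.sqrt(2*E/k)
    else:
        # Turning points are real roots of E - k x^2/2 + nu k_nl x^3/3 = 0.
        roots = np.roots([nu*k_nl/3.0, -k/2.0, 0.0, E])
        roots = np.sort(roots[np.abs(roots.imag) < 1e-6].real)
        x_lo = roots[roots < 0].max()     # x_M^-
        x_hi = roots[roots > 0].min()     # x_M^+ (smallest positive root)
    x = np.linspace(x_lo, x_hi, n)
    arg = (2.0/m) * (E - 0.5*k*x**2 + nu*k_nl*x**3/3.0)
    xdot = np.sqrt(np.clip(arg, 0.0, None))   # clip tiny negatives at the ends
    dx = x[1] - x[0]
    return 2.0 * (0.5*(xdot[:-1] + xdot[1:]) * dx).sum()  # trapezoid rule

V_L = 2.0*np.pi*E/np.sqrt(k*m)    # linear volume 2*pi*E/sqrt(km) in the (x, xdot) plane
print(phase_volume(0.0), V_L)     # these agree
print(phase_volume(0.1) > V_L)    # the volume grows with nu
```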
As for the Duffing oscillator, insight can be gained by fixing the strength of nonlinearity to a set value while varying the linear natural frequency, ω = √(k/m), of the system. Here, the strength of nonlinearity is fixed at ν = 0.1, and k_nl = k is enforced. The phase volume approximations are compared to the exact volume in Figure 15a, while their percent errors are shown in Figure 15b. Figure 15a contains three regimes. At low frequencies, the effect of the nonlinearity ν is large enough that the volume approximations fail. The nonlinearity becomes weaker as the natural frequency increases. Figure 15b shows that the errors in the phase volume approximations become small at ω ≈ 0.4. For ω > 0.4, the nonlinear volume, V, is indistinguishable from the linear system volume, V_L.
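The low-frequency failure of the approximations can be tied back to the critical value ν_c from Equation (71). A small sketch, assuming k_nl = k, m = 1, and E = 1 as in this section (so k = ω²): the effective nonlinearity ν/ν_c shrinks as ω grows, and the trajectory is closed only for ω > ν√6 ≈ 0.245, consistent with the approximations becoming reliable by ω ≈ 0.4.

```python
import math

# With k_nl = k, m = 1, and E = 1 (so k = omega^2), Equation (71) gives
# nu_c(omega) = sqrt(k^3/(6 E k_nl^2)) = sqrt(k/6) = omega/sqrt(6).
def nu_c(omega, E=1.0):
    k = omega**2
    return math.sqrt(k**3 / (6.0 * E * k**2))

# The effective nonlinearity nu/nu_c falls as the natural frequency grows,
# which is why the volume approximations improve at higher omega.
for omega in (0.2, 0.4, 1.0, 2.0):
    print(omega, 0.1 / nu_c(omega))

# The trajectory is closed only when nu < nu_c; for nu = 0.1 this requires
# omega > 0.1*sqrt(6), i.e., roughly omega > 0.245.
print(0.1 * math.sqrt(6.0))
```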

Khinchin's Entropy
Since the error in both V^(2) and V^(4) is less than one percent for small ν, entropy can be derived with sufficient accuracy using only the second order phase volume. Despite this, the second and fourth order approximations are compared to one another in this section for demonstrative purposes. The same approach can be applied using higher order approximations if more accuracy is desired. Defining the structure function as the derivative of the phase volume with respect to energy (Equation (78)), the generating function can be obtained to the second and fourth order with respect to ν by taking the Laplace transform of Equation (78), which yields Equations (79a) and (79b). Substituting Equations (79a) and (79b) into Equation (8) to find σ, the temperature is obtained using T = 1/σ. The series expansion of the temperature to the second and fourth order about ν = 0 gives Equation (80). While the presence of nonlinearity causes the temperature to increase for the Duffing oscillator, it results in a decrease in temperature for an oscillator with a repulsive quadratic force. Setting C = 0, the temperatures from Equation (80) can be substituted into Equation (7) to calculate the entropy of the nonlinear oscillator, H^(2) and H^(4), as given by Equation (81). Since the energy of the system is constant, the total entropy of the system is constant over time. Equation (81) gives the entropy of an oscillator with a repulsive quadratic force, νk_nl x², using the second and fourth order approximations of the temperature with respect to ν. These entropies are compared to the entropy of a linear oscillator in Figure 16. Figure 16a shows that for small ν, the approximations are nearly indistinguishable. As ν becomes large, the solutions diverge from one another. When ν = 0.4, H^(2) is approximately 4.2% larger than the linear result, while H^(4) is 8.6% larger. It is expected that the fourth order approximation, H^(4), is the more accurate of the two, since the phase volume used for this approximation has a smaller percent error than the second order solution (Figure 14b). Entropy can be thought of as a measure of chaos or disorder.
Under this frame of thought, the entropy increase associated with an increase in the strength of nonlinearity, ν, can be interpreted as a result of the greater system complexity.

Henon-Heiles Oscillators
Figure 17 shows a set of Henon-Heiles oscillators [21,24]. The Henon-Heiles system consists of two oscillators. The first is a simple linear oscillator, and the second is an oscillator with the nonlinear potential defined in Equation (65). The oscillator masses are m_1 and m_2, respectively. The spring stiffness of the first oscillator is k_1, while the second oscillator has a linear spring stiffness, k_2, and a nonlinear spring stiffness, k_nl. The oscillators are coupled by a nonlinear spring with stiffness k_c.
To obtain the numerical results of this section, the oscillators are given the following properties: spring stiffnesses k_1 = k_2 = k_nl = 0.1, oscillator masses m_1 = m_2 = 1, coupling spring stiffness k_c = 0.01 k_1, strength of nonlinearity ν = 0.1, and initial velocities v_10 = 1/3 and v_20 = 1/(3√2) of the first and second oscillators. These initial velocities are such that the total energy is E = 1/12. All values are given in SI units. This choice of physical parameters ensures that there is sufficient interaction between the oscillators for this analysis to be meaningful, while also satisfying the assumption of weak coupling and preserving the stability of the system. The energy of the Henon-Heiles system is

E = E_1 + E_2 + E_12,

where the oscillator energies are

E_1 = (1/2) m_1 ẋ_1² + (1/2) k_1 x_1²,
E_2 = (1/2) m_2 ẋ_2² + (1/2) k_2 x_2² − (1/3) ν k_nl x_2³,

and E_12 is the potential energy associated with the coupling of the oscillators. It is assumed that the oscillators are weakly coupled, such that

|E_12| ≪ E_1, E_2.

The energies of the Henon-Heiles oscillators are plotted in Figure 18. Figure 18 verifies that the total energy of the system is approximately equal to the sum of the oscillator energies. Thus, the coupling energy, E_12, is small and the oscillators are weakly coupled. It is shown that the oscillator energies experience a beating phenomenon in which low frequency oscillations are interspersed with high frequency noise.
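The weak-coupling claim can be illustrated numerically. In the sketch below the coupling potential is assumed to take the classic Henon-Heiles form E_12 = ν k_c x_1² x_2 (the paper's exact coupling term is not reproduced in this excerpt, so this is an assumption); with the parameters of this section, an RK4 integration confirms that the total energy E = E_1 + E_2 + E_12 is conserved by the resulting dynamics and that the coupling energy stays a small fraction of E.

```python
import math

# Parameters from this section. The coupling potential E12 = nu*k_c*x1^2*x2
# is an ASSUMED Henon-Heiles-type form, not taken from the paper.
k1 = k2 = k_nl = 0.1
m1 = m2 = 1.0
k_c = 0.01 * k1
nu = 0.1
x1, v1 = 0.0, 1.0/3.0
x2, v2 = 0.0, 1.0/(3.0*math.sqrt(2.0))

def energies(x1, v1, x2, v2):
    E1 = 0.5*m1*v1**2 + 0.5*k1*x1**2
    E2 = 0.5*m2*v2**2 + 0.5*k2*x2**2 - nu*k_nl*x2**3/3.0
    E12 = nu*k_c*x1**2*x2
    return E1, E2, E12

def acc(x1, x2):
    # Equations of motion derived from the total potential above.
    a1 = (-k1*x1 - 2.0*nu*k_c*x1*x2)/m1
    a2 = (-k2*x2 + nu*k_nl*x2**2 - nu*k_c*x1**2)/m2
    return a1, a2

E0 = sum(energies(x1, v1, x2, v2))       # total energy, 1/12 here
dt, steps = 0.01, 20_000
max_drift, max_coupling = 0.0, 0.0
for _ in range(steps):                    # classical RK4 on [x1, v1, x2, v2]
    a1, a2 = acc(x1, x2)
    s1 = (v1, a1, v2, a2)
    a1, a2 = acc(x1 + 0.5*dt*s1[0], x2 + 0.5*dt*s1[2])
    s2 = (v1 + 0.5*dt*s1[1], a1, v2 + 0.5*dt*s1[3], a2)
    a1, a2 = acc(x1 + 0.5*dt*s2[0], x2 + 0.5*dt*s2[2])
    s3 = (v1 + 0.5*dt*s2[1], a1, v2 + 0.5*dt*s2[3], a2)
    a1, a2 = acc(x1 + dt*s3[0], x2 + dt*s3[2])
    s4 = (v1 + dt*s3[1], a1, v2 + dt*s3[3], a2)
    x1 += dt/6.0*(s1[0] + 2*s2[0] + 2*s3[0] + s4[0])
    v1 += dt/6.0*(s1[1] + 2*s2[1] + 2*s3[1] + s4[1])
    x2 += dt/6.0*(s1[2] + 2*s2[2] + 2*s3[2] + s4[2])
    v2 += dt/6.0*(s1[3] + 2*s2[3] + 2*s3[3] + s4[3])
    E1, E2, E12 = energies(x1, v1, x2, v2)
    max_drift = max(max_drift, abs(E1 + E2 + E12 - E0))
    max_coupling = max(max_coupling, abs(E12))

print(E0)                 # 1/12 ~ 0.08333
print(max_drift)          # total energy is conserved by the dynamics
print(max_coupling / E0)  # coupling energy is a small fraction: weak coupling
```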

Figure 18. Energies E_1 and E_2 of the Henon-Heiles oscillators compared to their sum, E_1 + E_2, and the total energy, E.

Entropy of the Henon-Heiles Oscillators
The generating function of a linear oscillator is the Laplace transform of the structure function associated with Equation (46); this yields the generating function of the first oscillator. The mixing entropy is the difference between the actual entropy of the system and the hypothetical decoupled entropy, H_mix = H − H*. The actual entropy, hypothetical decoupled entropy, and mixing entropy of the Henon-Heiles system are plotted to the second and fourth order in Figure 19. The second and fourth order approximations are indistinguishable from one another for each entropy. This confirms that it is sufficient to use the second order approximations to find the entropy of the system. The actual entropies are each a constant value. The oscillations of the hypothetical decoupled entropies are interspersed with high frequency noise and do not reach the value of the actual entropy. Consequently, the mixing entropy is positive. Additionally, since neither oscillator experiences a state of zero energy, the mixing entropy has no singularities. These observations reinforce the notion that the mixing entropy can be a useful tool for understanding the qualitative behavior of systems.
Figure 19. Actual entropy, H, total hypothetical decoupled entropy, H*, and mixing entropy, H_mix, of the Henon-Heiles system approximated to the second and fourth order.

Conclusions
In this paper, Khinchin's entropy has been examined for weakly nonlinear oscillators with third and fourth order anharmonic potentials. We have extended the approach for calculating Khinchin's entropy for linear systems to weakly nonlinear systems using a perturbation method. Defining a dimensionless strength of nonlinearity parameter, ν, allows for the development of such an approach in the case of weak nonlinearity. A perturbation method is used to calculate an explicit expression for the phase volume. This calculation is verified against the numerical calculation of the phase volume. For Duffing oscillators, it is shown that approximating entropy to the first order with respect to ν is sufficient in the case of weak nonlinearity. For Henon-Heiles oscillators, it is necessary to approximate entropy to the second order with respect to ν to see an improvement over the linear result. The strength of nonlinearity, ν, is limited by the instability of the nonlinear oscillator for the Henon-Heiles system. Higher order approximations are shown to be possible for both the Duffing and Henon-Heiles systems if more accuracy is desired.
It has been shown that a mixing entropy can be obtained for nonlinear systems. For each of the systems considered in this paper, the mixing entropy is shown to be nonnegative in the case of weak coupling. As for linear systems, the mixing entropy provides insight into system behavior and represents the increase in system entropy caused by the coupling of oscillators.
Additionally, using both the thermodynamic definition of entropy and Khinchin's entropy, a relationship between vibrational temperature and energy can be defined. This relationship can be used to derive the power flow equation of SEA. This paper opens a path toward improving SEA for nonlinear systems in future work using the entropy concept.
where HOT stands for the higher order terms of ν, assuming ν is small. Equation (A6) shows that the right-hand side and left-hand side of Equation (A3) are equal, and therefore,