Article

An Information Entropy-Based Modeling Method for the Measurement System

Li Kong, Hao Pan, Xuewei Li, Shuangbao Ma, Qi Xu and Kaibo Zhou
1 School of Artificial Intelligence and Automation, Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, Huazhong University of Science and Technology, Wuhan 430074, China
2 School of Intelligent Engineering, Henan Institute of Technology, Xinxiang 453003, China
3 School of Mechanical Engineering and Automation, Wuhan Textile University, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Entropy 2019, 21(7), 691; https://doi.org/10.3390/e21070691
Submission received: 15 June 2019 / Revised: 6 July 2019 / Accepted: 12 July 2019 / Published: 15 July 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

Measurement is a key method for obtaining information from the real world and is widely used in human life. A unified model of measurement systems is critical to their design and optimization. However, the existing models of measurement systems are too abstract. To a certain extent, this makes it difficult to form a clear overall understanding of measurement systems and of how information acquisition is implemented, and it also limits the application of these models. Information entropy is a measure of the information or uncertainty of a random variable and has strong representation ability. In this paper, an information entropy-based modeling method for measurement systems is proposed. First, a modeling idea based on the viewpoint of information and uncertainty is described. Second, an entropy balance equation based on the chain rule for entropy is proposed for system modeling. Then, the entropy balance equation is used to establish the information entropy-based model of the measurement system. Finally, three cases of typical measurement units or processes are analyzed using the proposed method. Compared with the existing modeling approaches, the proposed method considers the modeling problem from the perspective of information and uncertainty. It focuses on the information loss of the measurand in the transmission process and on characterizing the specific role of each measurement unit. The proposed model can intuitively describe the processing and changes of information in the measurement system. It does not conflict with the existing models of measurement systems, but complements them, thus further enriching existing measurement theory.

1. Introduction

Measurement has been developed through the physical sciences and plays a very important role in industry, commerce, health and safety, and environmental protection [1,2,3,4,5]. A unified model of measurement systems is critical to their design and optimization. However, the existing measurement theory, reviewed below, is too abstract. To a certain extent, this makes it difficult to gain a clear overall understanding of measurement systems and of how measurement units obtain information during the measurement process. Therefore, measurement science needs a theoretical framework [2] that can intuitively describe, analyze, and evaluate measurement systems and characterize how measurement units work to obtain the information of the measurand.
Numerous works on the modeling of measurements or measurement systems have been published. Helmholtz and Hoelder developed a theory of measurement based on the concepts of the physical sciences [1], which regarded measurement as the set of operations that assign a determinate numerical value to a physical quantity of an object [2]. Subsequently, three main modeling approaches or theories were developed and studied: the representational theory, the object-oriented method, and the probabilistic theory [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. As the main body of these studies, the representational theory [6,7,8,9,10,11,12,13,14,15,16] represents the mapping between the measurand and the measurement result with general symbols, from different standpoints such as semiotics [7,9], set theory [13], or domain theory [15]. The object-oriented method [17,18,19] applies the object-oriented technology of computer programming to construct an object-oriented model of the measurement system, dividing the measurement elements into five classes described by their attributes, operations, or environment. The probabilistic theory [20,21,22,23,24,25] proposes a complete theory of measurement, including probability representations of different measurement scales, probabilistic descriptions of measurement systems, and measurement processes.
The abovementioned studies proposed modeling methods from different perspectives or based on relevant theories. Models of measurement, measurement processes, or measurement systems were established, and even special measurement problems, such as those involving ratio, interval, ordinal, and nominal scales, were adequately considered [13,16,21,22,25]. However, the implementation of a measurement relies on a series of measurement units. These theoretical models have considered measurability, the relationship between the input and output of the system, calibration, restitution, etc., but they cannot describe the role of the measurement unit in the measurement process. Furthermore, one of the cores of all measurement is the problem of uncertainty [26,27], yet a model of measurement systems established directly from uncertainty has not been proposed.
Since absolute zero is impossible to achieve and the measured object always interacts with the outside world, an absolutely standard measurement environment is not possible. This causes the measurand to be essentially a random process or a random sequence [28]. If the measurand is stationary, it can be considered a random variable within a short measurement time. After an effective measurement, the uncertainty of the measurand is reduced compared to its uncertainty before measurement. Therefore, measurement is a process of uncertainty reduction, and its essence is information acquisition. Additionally, with the development of measurement science, the viewpoint that treats measurement as an information process and instruments as information machines is widely recognized [1,2,9,29,30]. Since Shannon proposed the concept of information entropy as a measure of the information and uncertainty of a random variable [31], information theory has been applied to some aspects of measurement [32,33,34]. Therefore, information entropy has high potential as a method for solving measurement problems, and it is feasible to establish a system model from the perspective of uncertainty with information entropy.
In this paper, an information entropy-based modeling method for measurement systems is proposed. The main contributions of this paper are as follows: (1) a modeling idea based on the viewpoint of information and uncertainty is presented; (2) an entropy balance equation based on the chain rule for entropy is proposed for system modeling; and (3) an information entropy-based model of measurement systems is established from the entropy balance equation, taking the perspective of uncertainty and information acquisition.
The rest of this paper is organized as follows. Section 2 presents the preliminaries on relations between different entropies, and proposes the entropy balance equation. The information entropy-based model of the measurement system is proposed in Section 3. Section 4 analyzes three cases of typical measurement units or processes with the proposed method. Finally, conclusions of the proposed model of the measurement system are drawn in Section 5.

2. Methodology

2.1. Information Entropy and Related Concepts

Entropy is a measure of the uncertainty of a random variable. For a discrete random variable $X$ with a finite number of states, the probability of each state $X = x_i$, $i = 1, 2, \ldots, N$, is denoted as $p(x_i) = p(X = x_i)$. For the sake of simplicity, we use $p(x_i)$ to represent the probability instead of $p(X = x_i)$. Similarly, for a discrete random variable $Y$, its probability function is denoted as $p(y_j)$, $j = 1, 2, \ldots, M$. The joint probability function of $X$ and $Y$ is represented by $p(x_i y_j)$.
Definition 1.
The information entropy of the discrete random variable $X$ is defined as:

$$H(X) = -\sum_{i=1}^{N} p(x_i) \log p(x_i). \qquad (1)$$
If the log is to base 2, the unit of information entropy is bits; if the log is to base e (the natural logarithm), the unit is nats; and if the log is to base 10, the unit is hartleys. The related measures introduced later use the same units, as do entropy and related measures for continuous random variables.
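As a concrete illustration of Definition 1 and the unit conventions above, the following minimal Python sketch (our addition, with an arbitrary example distribution) computes the entropy of a discrete distribution in bits, nats, and hartleys:

```python
import numpy as np

def information_entropy(p, base=2):
    """Shannon entropy of a discrete distribution p (Definition 1).
    base=2 -> bits, base=np.e -> nats, base=10 -> hartleys."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # terms with p(x_i) = 0 contribute nothing to the sum
    return -np.sum(p * np.log(p)) / np.log(base)

p = [0.5, 0.25, 0.25]
print(information_entropy(p, base=2))     # 1.5 bits
print(information_entropy(p, base=np.e))  # ~1.0397 nats
print(information_entropy(p, base=10))    # ~0.4515 hartleys
```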
For a continuous random variable $X$ with probability density function $p(x)$, the information entropy is infinite since the number of its states is infinite. In this case, the information entropy is the sum of the differential entropy and a constant that tends to infinity. The definition of differential entropy is given as follows:
Definition 2.
The differential entropy of the continuous random variable $X$ with probability density function $p(x)$, $x \in \mathbb{R}$, is defined as:

$$h(X) = -\int_{\mathbb{R}} p(x) \log p(x)\,dx. \qquad (2)$$
Obviously, differential entropy cannot represent the uncertainty of continuous random variables and does not have the connotation of information. However, when discussing mutual information, since two infinite constant terms will cancel each other, differential entropy has the same information characteristics as information entropy.
In this paper, in order to make each term of the model established in Section 3 carry the connotation of information, the uncertainty of a random variable is characterized by information entropy, whether the random variable is continuous or discrete. In addition, for continuous cases, mutual information is calculated using differential entropy.
Based on the information entropy, the related concepts and their definitions are introduced below:
Definition 3.
The joint information entropy of discrete random variables $X$ and $Y$ is defined as:

$$H(X,Y) = -\sum_{i=1}^{N}\sum_{j=1}^{M} p(x_i y_j) \log p(x_i y_j). \qquad (3)$$
Definition 4.
The conditional entropy of the discrete random variable $X$ given $Y$ is defined as:

$$H(X \mid Y) = -\sum_{i=1}^{N}\sum_{j=1}^{M} p(x_i y_j) \log p(x_i \mid y_j). \qquad (4)$$
Definition 5.
The average mutual information (also referred to as mutual information) between discrete random variables $X$ and $Y$ is defined as:

$$I(X;Y) = \sum_{i=1}^{N}\sum_{j=1}^{M} p(x_i y_j) \log \frac{p(x_i y_j)}{p(x_i)\,p(y_j)}. \qquad (5)$$
The relationship between $H(X)$, $H(Y)$, $H(X \mid Y)$, $H(Y \mid X)$, and $I(X;Y)$ can be expressed by the Venn diagram shown in Figure 1. Two equations governing these relations are:

$$H(X,Y) = H(X) + H(Y \mid X) = H(Y) + H(X \mid Y) = H(X \mid Y) + H(Y \mid X) + I(X;Y), \qquad (6)$$

$$I(X;Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X) = H(X) + H(Y) - H(X,Y). \qquad (7)$$
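Equations (6) and (7) hold for any joint distribution, and it is instructive to verify them numerically. The short Python sketch below (our illustration; the 2 × 2 joint distribution is arbitrary) computes each quantity directly from Definitions 3–5:

```python
import numpy as np

# Arbitrary joint distribution p(x_i, y_j): rows index X, columns index Y.
pxy = np.array([[0.4, 0.1],
                [0.2, 0.3]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals of X and Y

def H(q):
    q = np.asarray(q, dtype=float).ravel()
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

Hx, Hy, Hxy = H(px), H(py), H(pxy)
Hx_given_y = -np.sum(pxy * np.log2(pxy / py))           # Definition 4
Hy_given_x = -np.sum(pxy * np.log2(pxy / px[:, None]))  # Definition 4, roles swapped
Ixy = np.sum(pxy * np.log2(pxy / np.outer(px, py)))     # Definition 5

assert np.isclose(Hxy, Hx + Hy_given_x)                 # Equation (6)
assert np.isclose(Hxy, Hy + Hx_given_y)                 # Equation (6)
assert np.isclose(Hxy, Hx_given_y + Hy_given_x + Ixy)   # Equation (6)
assert np.isclose(Ixy, Hx + Hy - Hxy)                   # Equation (7)
```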

2.2. Entropy Balance Equation

In this part, an extension of the chain rule for joint entropy, called the entropy balance equation (Equation (8)), is developed for system modeling; it is stated and proved below:
Theorem 1.
Given random variables $X_1, X_2, \ldots, X_n$ drawn according to $p(x_1, x_2, \ldots, x_n)$, then:

$$H(X_1) + \sum_{i=1}^{n-1} H(X_{i+1} \mid X_i X_{i-1} \cdots X_1) = H(X_n) + \sum_{i=2}^{n} H(X_{i-1} \mid X_i X_{i+1} \cdots X_n). \qquad (8)$$
Proof. 
By the chain rule for entropy [35], we have:
$$H(X_1, X_2, \ldots, X_n) = \sum_{i=1}^{n} H(X_i \mid X_{i-1}, X_{i-2}, \ldots, X_1) \qquad (9)$$
Equation (9) can be readily proved with $p(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} p(x_i \mid x_{i-1}, x_{i-2}, \ldots, x_1)$ and the definitions of entropy and conditional entropy. By symmetry, one can write:

$$p(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} p(x_i \mid x_{i+1}, x_{i+2}, \ldots, x_n) \qquad (10)$$
Thus:
$$H(X_1, X_2, \ldots, X_n) = \sum_{i=1}^{n} H(X_i \mid X_{i+1}, X_{i+2}, \ldots, X_n) \qquad (11)$$
Based on Equations (9) and (11), one can obtain the following equality:
$$\sum_{i=1}^{n} H(X_i \mid X_{i-1}, X_{i-2}, \ldots, X_1) = \sum_{i=1}^{n} H(X_i \mid X_{i+1}, X_{i+2}, \ldots, X_n) \qquad (12)$$
which is equivalent to Equation (8). □
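Theorem 1 can also be checked numerically: both sides of Equation (8) telescope to the joint entropy. The sketch below (our illustration, with a randomly generated joint distribution over three binary variables) evaluates both sides for n = 3:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()  # joint pmf p(x1, x2, x3)

def H(q):
    q = np.asarray(q, dtype=float).ravel()
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

H123 = H(p)
H1, H3 = H(p.sum(axis=(1, 2))), H(p.sum(axis=(0, 1)))  # marginals of X1, X3
H12, H23 = H(p.sum(axis=2)), H(p.sum(axis=0))          # pairwise joints

lhs = H1 + (H12 - H1) + (H123 - H12)  # H(X1) + H(X2|X1) + H(X3|X2,X1)
rhs = H3 + (H23 - H3) + (H123 - H23)  # H(X3) + H(X2|X3) + H(X1|X2,X3)
assert np.isclose(lhs, rhs)           # Equation (8); both equal H(X1,X2,X3)
```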

3. Modeling of Measurement Systems

The unified description and modeling of most measurement systems across all measurement applications is one of the key problems in measurement theory. This paper focuses on the traditional measurement system that provides information about the physical values of the measurand [36]. The system has three types of components connected in series: a sensor, variable conversion units, and signal processing units. Sometimes the sensor and variable conversion units are combined.

3.1. Model of Measurement Unit

A measurement system [3] consists of a finite number of measurement units, as depicted in Figure 2, and is generally a series system. For any unit $i$ of the system ($i = 1, 2, \ldots, n-1$), there are four random variables $X_i$, $E_i$, $N_i$, and $X_{i+1}$ associated with it, where $X_i$ is the input, $E_i$ denotes the error, $N_i$ is the noise (this model only considers additive noise), and, under the combined effect of $X_i$, $E_i$, and $N_i$, the output is $X_{i+1}$. Therefore, the unit can be described by the information entropy-based model in the form of the Venn diagram shown in Figure 1 (redrawn as Figure 3 for convenience), and the entropies of the four variables satisfy:
$$H(X_i) + H(N_i) = H(X_{i+1}) + H(E_i) \qquad (13)$$
where $H(X_i)$ denotes the entropy of the unit input $X_i$, $H(X_{i+1})$ represents the entropy of the output $X_{i+1}$, $H(N_i) = H(X_{i+1} \mid X_i)$ is the noise entropy, which stands for the entropy increase caused by noise, amplification, and other effects, and $H(E_i) = H(X_i \mid X_{i+1})$ is the error entropy, which denotes the information loss of $X_i$, whether passive or proactive; proactive loss indicates active denoising by the measurement unit.
$H(X_i X_{i+1})$ denotes the joint entropy of $X_i$ and $X_{i+1}$, and $I(X_i; X_{i+1})$ is the average mutual information between $X_i$ and $X_{i+1}$, which denotes the amount of information shared by $X_i$ and $X_{i+1}$. These entropies satisfy:

$$H(X_i X_{i+1}) = H(X_i) + H(X_{i+1} \mid X_i) = H(X_{i+1}) + H(X_i \mid X_{i+1}) = H(X_i \mid X_{i+1}) + I(X_i; X_{i+1}) + H(X_{i+1} \mid X_i) \qquad (14)$$

$$I(X_i; X_{i+1}) = H(X_i) - H(X_i \mid X_{i+1}) = H(X_{i+1}) - H(X_{i+1} \mid X_i) = H(X_i) + H(X_{i+1}) - H(X_i X_{i+1}). \qquad (15)$$
The traditional model only considers the noise in the signal and the error between the measurement result and the true value. In contrast, the proposed model of the measurement unit also considers the information loss during transmission through each measurement unit and can describe the denoising and amplification effects of the measurement unit on its input. These functions of measurement units are represented by the error entropy and the noise entropy, which shows that the model has an excellent ability to describe the measurement unit.
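To make the unit model concrete, consider a single discrete unit modeled as a noisy channel. The sketch below (our addition; the binary channel is a hypothetical stand-in for a real measurement unit) computes the noise entropy $H(N_i)$ and error entropy $H(E_i)$ and verifies the balance in Equation (13):

```python
import numpy as np

def H(q):
    q = np.asarray(q, dtype=float).ravel()
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

px = np.array([0.7, 0.3])            # p(x_i): distribution of the unit input
chan = np.array([[0.9, 0.1],         # p(x_{i+1} | x_i), rows indexed by x_i
                 [0.2, 0.8]])
pxy = px[:, None] * chan             # joint p(x_i, x_{i+1})
py = pxy.sum(axis=0)                 # p(x_{i+1}): distribution of the output

noise_entropy = H(pxy) - H(px)       # H(N_i) = H(X_{i+1} | X_i)
error_entropy = H(pxy) - H(py)       # H(E_i) = H(X_i | X_{i+1})

# Equation (13): H(X_i) + H(N_i) = H(X_{i+1}) + H(E_i)
assert np.isclose(H(px) + noise_entropy, H(py) + error_entropy)
```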

3.2. Information Entropy-Based Model of Measurement System

By repeated application of Equation (13), we have the relations of entropies of every unit in a measurement system:
$$\begin{cases} H(X_1) + H(N_1) = H(X_2) + H(E_1) \\ H(X_2) + H(N_2) = H(X_3) + H(E_2) \\ \quad\vdots \\ H(X_{n-1}) + H(N_{n-1}) = H(X_n) + H(E_{n-1}) \end{cases} \qquad (16)$$
Then, summing both sides of these equations and eliminating the common terms, we have:
$$H(X_1) + \sum_{i=1}^{n-1} H(N_i) = H(X_n) + \sum_{i=1}^{n-1} H(E_i) \qquad (17)$$

where $H(N_i) = H(X_{i+1} \mid X_i)$ and $H(E_i) = H(X_i \mid X_{i+1})$; Equation (17) is equivalent to:

$$H(X_1) + \sum_{i=1}^{n-1} H(X_{i+1} \mid X_i) = H(X_n) + \sum_{i=2}^{n} H(X_{i-1} \mid X_i) \qquad (18)$$
Equation (18) is the information entropy-based model of the measurement system. Notice that Equation (18) is similar to the entropy balance equation, Equation (8). The reason is that the measurement system shown in Figure 2 has a multi-unit serial structure. For the system input and the unit outputs $X_1, X_2, \ldots, X_n$, the random variable $X_{i+1}$ generally depends only on the input $X_i$ of unit $i$ and is not directly related to the earlier random variables $X_1, X_2, \ldots, X_{i-1}$. Therefore, $X_1, X_2, \ldots, X_n$ forms a first-order Markov chain, namely:
$$H(X_i \mid X_{i-1} X_{i-2} \cdots X_1) = H(X_i \mid X_{i-1}) \qquad (19)$$
Since $X_1, X_2, \ldots, X_n$ constitutes a first-order Markov chain, $X_n, X_{n-1}, \ldots, X_1$ is also a first-order Markov chain, that is:
$$H(X_{i-1} \mid X_i X_{i+1} \cdots X_n) = H(X_{i-1} \mid X_i) \qquad (20)$$
Therefore, the measurement system can also be described by a first-order Markov chain. Figure 4 depicts the Venn diagram of the entropy model of a first-order Markov chain; this model has a symmetrical structure.
According to the previous discussion, we have:
Corollary 1.
For a Markov chain $X_n, X_{n-1}, \ldots, X_1$, the entropy balance equation can be further written as Equation (18).
Proof. 
According to Theorem 1 and the Markov property, we have Equations (8), (19), and (20). Substituting Equations (19) and (20) into Equation (8) gives Equation (18). □
From Corollary 1, the entropy balance equation of a Markov chain $X_1 \to X_2 \to \cdots \to X_n$ is the information entropy-based model of the measurement system. It shows that all units of a measurement system can be reduced to a single equivalent unit, as displayed in Figure 5: the sum of all input entropies equals the sum of all output entropies.
The information entropy-based model of the measurement system (Equation (18)) not only describes the relationship between the inputs and outputs of the system, but also represents the intermediate quantities in the system; that is, the model of a subsystem can be expressed as:
$$H(X_i) + \sum_{k=i}^{j} H(X_{k+1} \mid X_k) = H(X_{j+1}) + \sum_{k=i}^{j} H(X_k \mid X_{k+1}), \quad 1 \le i \le j \le n-1 \qquad (21)$$
If the input entropy (or output entropy) and all conditional entropies associated with the subsystem are known, then the subsystem’s output entropy (or input entropy) can be calculated according to Equation (21).
For an ideal source (a system input without noise), the measurement result can be directly evaluated by the mutual information between the system input and output, $I(X_1; X_n)$. The greater the mutual information, the more accurate the measurement result. The information loss of the measurand can be evaluated by the relative information error (RIE), defined as:
$$\varepsilon = \frac{H(X_1) - I(X_1; X_n)}{H(X_1)} \times 100\% \qquad (22)$$
An ideal measurement system satisfies $I(X_1; X_n) = H(X_1) = H(X_n)$, and the condition is:
$$H(X_1 \mid X_n) = H(X_n \mid X_1) = 0 \qquad (23)$$

which means that $X_1$ and $X_n$ have the same probability function and the information of the measurand is completely acquired by the measurement system.
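As an illustration of the system-level evaluation (our addition; the two channel matrices are assumed stand-ins for measurement units), the sketch below cascades two discrete units into a first-order Markov chain, computes $I(X_1; X_n)$ via Equation (7), and evaluates the RIE of Equation (22):

```python
import numpy as np

def H(q):
    q = np.asarray(q, dtype=float).ravel()
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

p1 = np.array([0.5, 0.5])                    # measurand distribution, H(X1) = 1 bit
T1 = np.array([[0.95, 0.05], [0.10, 0.90]])  # unit 1: p(x2 | x1)
T2 = np.array([[0.98, 0.02], [0.03, 0.97]])  # unit 2: p(x3 | x2)

T13 = T1 @ T2                                # end-to-end p(x3 | x1) of the chain
p13 = p1[:, None] * T13                      # joint p(x1, x3)
I13 = H(p1) + H(p13.sum(axis=0)) - H(p13)    # I(X1; X3) by Equation (7)

rie = (H(p1) - I13) / H(p1) * 100            # relative information error, Eq. (22)
print(f"I(X1;X3) = {I13:.4f} bits, RIE = {rie:.2f}%")
```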

4. Application

To better understand the proposed model, three cases of typical measurement units or processes are discussed in this section.

4.1. Case 1: Bandpass Filter

The bandpass filter, a typical unit in measurement systems, is analyzed in this section. As shown in Figure 6, the input of the filter $K(\omega)$ is $Y = X + N$, where $X$ is a Gaussian random variable with power $\sigma_x^2$, $N$ is white Gaussian noise with power $\sigma_n^2$, and $X$ and $N$ are independent of each other. The differential entropy of $X$ can be expressed as:
$$h(X) = \frac{1}{2}\log 2\pi e \sigma_x^2 \qquad (24)$$
and the differential entropy of $N$ is:

$$h(N) = \frac{1}{2}\log 2\pi e \sigma_n^2 \qquad (25)$$
Before passing through the filter, since $X$ and $N$ are independent, the power of $Y$ satisfies $\sigma_y^2 = \sigma_x^2 + \sigma_n^2$. The mutual information between $X$ and $Y$ is:

$$I(X;Y) = h(Y) - h(Y \mid X) = h(Y) - h(N) = \frac{1}{2}\log 2\pi e \sigma_y^2 - \frac{1}{2}\log 2\pi e \sigma_n^2 = \frac{1}{2}\log\left(1 + \frac{\sigma_x^2}{\sigma_n^2}\right) \qquad (26)$$
After passing through the filter, the mutual information between $X$ and $Z$ is:

$$I(X;Z) = \frac{1}{2}\log\left(1 + \frac{\hat{\sigma}_x^2}{\hat{\sigma}_n^2}\right) \qquad (27)$$

where $\hat{\sigma}_x^2$ and $\hat{\sigma}_n^2$ represent the powers of $X$ and $N$ after passing through the filter, respectively.
The increment of mutual information (IMI) is defined by:
$$\Delta I = I(X;Z) - I(X;Y) = \frac{1}{2}\log\left(\frac{\sigma_n^2}{\hat{\sigma}_n^2} \cdot \frac{\hat{\sigma}_n^2 + \hat{\sigma}_x^2}{\sigma_n^2 + \sigma_x^2}\right) \qquad (28)$$
Suppose that the power of the noise $N$ is $\sigma_n^2 = N_0 f / 2$, where $f$ is the bandwidth of the noise and $N_0/2$ denotes the bilateral power spectral density of the noise. The filter is an ideal bandpass filter with a bandwidth of $\Delta f$ and a passband gain of 1. After passing through the filter, the power of the noise is $\hat{\sigma}_n^2 = N_0 \Delta f / 2$, and Equation (28) can be rewritten as:
$$\Delta I = \frac{1}{2}\log\left(\frac{N_0 f / 2}{N_0 \Delta f / 2} \cdot \frac{\hat{\sigma}_n^2 + \hat{\sigma}_x^2}{\sigma_n^2 + \sigma_x^2}\right) = \frac{1}{2}\log\left(\frac{f}{\Delta f} \cdot \frac{\hat{\sigma}_n^2 + \hat{\sigma}_x^2}{\sigma_n^2 + \sigma_x^2}\right) \qquad (29)$$
According to the characteristics of the filter, the passband should coincide with the frequency band of $X$, so that $\hat{\sigma}_x^2 = \sigma_x^2$ and $\sigma_x^2 \gg \hat{\sigma}_n^2$; therefore:
$$\Delta I = \frac{1}{2}\log\left(\frac{f}{\Delta f} \cdot \frac{\sigma_x^2}{\sigma_n^2 + \sigma_x^2}\right) = \frac{1}{2}\log\left(\frac{f}{\Delta f} \cdot \frac{1}{\sigma_n^2/\sigma_x^2 + 1}\right) \qquad (30)$$
Equation (30) shows that the IMI is related to the bandwidth $\Delta f$ and to the signal-to-noise ratio (SNR) of the input signal, $\sigma_x^2/\sigma_n^2$. The narrower the bandwidth of the filter, the larger the increment of mutual information. In general, $f/\Delta f \gg 1$, but the SNR of the input signal is uncertain. For small signals, the SNR is less than 1 ($\sigma_x^2/\sigma_n^2 < 1$), and we have:
$$\Delta I = \frac{1}{2}\log\left(\frac{f}{\Delta f} \cdot \frac{1}{\sigma_n^2/\sigma_x^2 + 1}\right) > 0 \qquad (31)$$
If $\sigma_x^2/\sigma_n^2 = 1$, then:
$$\Delta I = \frac{1}{2}\log\left(\frac{f}{2\Delta f}\right) > 0 \qquad (32)$$
For large signals, the SNR is generally much greater than 1 ($\sigma_x^2/\sigma_n^2 \gg 1$), and then:
$$\Delta I = \frac{1}{2}\log\left(\frac{f}{\Delta f}\right) > 0 \qquad (33)$$
The function of the filter is to filter out the noise contained in the signal. In all three cases above, the IMI is greater than zero, which means that, at the information level, the role of the filter is to increase the amount of information that can be obtained.
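A small numerical illustration of Equation (30) (ours; the bandwidths and SNR values are assumed, and base-2 logarithms are used so the IMI is in bits):

```python
import numpy as np

def filter_imi(f, delta_f, snr):
    """Increment of mutual information, Equation (30). Assumes an ideal
    bandpass filter whose passband matches the signal band, so the signal
    power is preserved and only out-of-band noise is removed."""
    return 0.5 * np.log2((f / delta_f) / (1.0 / snr + 1.0))

f, delta_f = 100e3, 1e3        # noise bandwidth 100 kHz, filter bandwidth 1 kHz
for snr in (0.5, 1.0, 100.0):  # small signal, boundary case, large signal
    print(f"SNR = {snr:6.1f}: IMI = {filter_imi(f, delta_f, snr):.3f} bits")
```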

4.2. Case 2: Quantization Process

The quantization process is an important step in the measurement process. From the perspective of information acquisition, quantization is a process of information loss. In theory, a continuous random variable requires infinitely high precision to be described exactly, and its information entropy is infinite. After quantization, the continuous random variable is transformed into a discrete random variable with limited precision, and its information entropy is finite.
Given a continuous random variable $X$ with probability density function $p(x)$, the range of $X$ is evenly divided into intervals of length $\Delta$. Assume that $p(x)$ is continuous within each interval. According to the mean value theorem, there exists an $x_i$ within each interval such that:

$$p(x_i)\Delta = \int_{i\Delta}^{(i+1)\Delta} p(x)\,dx \qquad (34)$$
After quantization, the discrete random variable $X_Q$ is obtained, defined as:

$$X_Q = x_i, \quad \text{if } i\Delta \le X < (i+1)\Delta \qquad (35)$$
Then, the probability of $X_Q = x_i$ is:

$$P(X_Q = x_i) = \int_{i\Delta}^{(i+1)\Delta} p(x)\,dx = p(x_i)\Delta \qquad (36)$$
Therefore, the information entropy of $X_Q$ is:

$$H(X_Q) = -\sum_i P(X_Q = x_i)\log P(X_Q = x_i) = -\sum_i p(x_i)\Delta\log p(x_i) - \log\Delta \qquad (37)$$
If the function $p(x)\log p(x)$ is Riemann integrable, the first term in Equation (37) approaches $h(X) = -\int p(x)\log p(x)\,dx$ as $\Delta \to 0$, which means:

$$\lim_{\Delta \to 0} H(X_Q) = H(X) \qquad (38)$$
Since $\Delta \to 0$ is not achievable in practice, there is information loss in the quantization process. For an N-bit quantizer, $\Delta = 2^{-N}$, and the information loss $H(X \mid X_Q)$ can be written as:

$$H(X \mid X_Q) = H(X) - H(X_Q) = \lim_{\Delta \to 0}(-\log\Delta) - N\log 2 \qquad (39)$$
The amount of information obtained from $X$ by the quantization process is:

$$I(X; X_Q) = H(X_Q) \qquad (40)$$
Therefore, the quantization process can be illustrated as shown in Figure 7. It can be found from Equations (39) and (40) that the larger N is, the less information is lost and the more information is obtained.
For example, consider a continuous random variable $X$ uniformly distributed on $[0, 1]$. It is quantized by an N-bit quantizer, and the process is simulated with MATLAB R2018b (MathWorks, Inc., Natick, MA, USA). $X$ is generated by the unifrnd function with 1,000,000 data points. The first 5000 data points of $X$ are shown in Figure 8a, and the probability density function of $X$ is shown in Figure 8b. It can be seen that the simulated data of $X$ are not ideal: the probability density is significantly less than 1 when the value is close to 0 or 1. Here, five N-bit quantizers ($N = 8, 9, 10, 11, 12$) are used to quantize $X$, and the corresponding information entropies of $X_Q$ are calculated; the results are shown in Figure 8c. As $h(X) = 0$, according to Equation (37), the information entropy of $X_Q$ equals $N$ bits when the log is to base 2 (and since $I(X; X_Q) = H(X_Q)$, the mutual information is also $N$ bits). Figure 8c shows that the simulation results are consistent with the theoretical values within the allowable error. This also shows that the more bits the quantizer has, the more information can be obtained, which is consistent with the theoretical analysis.
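For reproducibility, the following Python sketch (our port of the MATLAB experiment described above; numpy's uniform generator stands in for unifrnd) quantizes the samples according to Equation (35) and estimates $H(X_Q)$ empirically. Because each of the $2^N$ quantizer cells receives only finitely many samples, the empirical entropy falls slightly below the theoretical $N$ bits:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 1.0, size=1_000_000)    # stand-in for MATLAB's unifrnd

for n_bits in range(8, 13):                  # N = 8, 9, 10, 11, 12
    delta = 2.0 ** (-n_bits)
    cells = np.floor(x / delta).astype(int)  # quantizer cell index, Equation (35)
    counts = np.bincount(cells)
    p = counts[counts > 0] / counts.sum()
    H = -np.sum(p * np.log2(p))              # empirical H(X_Q), Equation (37)
    print(f"N = {n_bits:2d}: H(X_Q) = {H:.4f} bits (theory: {n_bits})")
```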

4.3. Case 3: Cumulative Averaging Procedure

In some practical measurement applications, the noisy signal is sampled at high speed, and then a cumulative averaging procedure is applied to the measured values to filter out the high-frequency part of the noise and obtain higher measurement accuracy.
As shown in Figure 9, consider a Gaussian signal $S$ with zero mean and a Gaussian noise $N$ with zero mean, where $S$ and $N$ are independent of each other and $Y = S + N$. In a very short period of time $\Delta t$, the amplitude of the signal can be considered constant, while the amplitude of the noise varies. Therefore, the correlation coefficient between the signal amplitudes at any two moments within $\Delta t$ is 1, while for the noise it is zero. Assume that the number of cumulative averaging times is $n$ and that the powers of the signal and noise at each sampling moment $t_i$ are $P_{S_i}$ and $P_{N_i}$ ($i = 1, 2, \ldots, n$). After the cumulative averaging procedure, their powers become:
$$\bar{P}_S^* = \frac{1}{n}\left(\sum_{i=1}^{n} A_{S_i}\right)^2 = \frac{1}{n}(A_{S_1} + A_{S_2} + \cdots + A_{S_n})^2 = \frac{1}{n}(n\bar{A}_S)^2 = n\bar{A}_S^2 = n\bar{P}_S \qquad (41)$$
$$\bar{P}_N^* = \frac{1}{n}\sum_{i=1}^{n} P_{N_i} = \frac{1}{n}(A_{N_1}^2 + A_{N_2}^2 + \cdots + A_{N_n}^2) = \frac{1}{n}\cdot n\bar{A}_N^2 = \bar{A}_N^2 = \bar{P}_N \qquad (42)$$
where $A_{S_i}$ and $A_{N_i}$ are the amplitudes of the signal and noise at each sampling moment $t_i$, respectively; $\bar{A}_S$ and $\bar{A}_N$ are the average amplitudes of the signal and noise during $\Delta t$, respectively; and $\bar{P}_S$ and $\bar{P}_N$ are the average powers of the signal and noise during $\Delta t$, respectively.
After the cumulative averaging procedure, the mutual information that can be obtained from the processed data $\bar{Y}$ is:

$$I(S; \bar{Y}) = \frac{1}{2}\log\left(1 + \frac{\bar{P}_S^*}{\bar{P}_N^*}\right) = \frac{1}{2}\log\left(1 + \frac{n\bar{P}_S}{\bar{P}_N}\right) \qquad (43)$$
which is greater than the mutual information before the cumulative averaging procedure, that is:

$$I(S; Y) = \frac{1}{2}\log\left(1 + \frac{\bar{P}_S}{\bar{P}_N}\right) \qquad (44)$$
This shows that the cumulative averaging procedure is equivalent to a digital filter that improves the signal-to-noise ratio and increases the mutual information. It can also be seen from Equation (43) that the mutual information increases as the number of cumulative averaging times $n$ increases.
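A short numerical illustration of Equations (43) and (44) (ours; the power values are assumed) showing how the obtainable mutual information grows with the number of averaging passes $n$:

```python
import numpy as np

def mi_after_averaging(ps, pn, n):
    """Mutual information after n-fold cumulative averaging, Equation (43):
    the coherent signal power grows as n * P_S while the incoherent noise
    power stays at P_N."""
    return 0.5 * np.log2(1.0 + n * ps / pn)

ps, pn = 1.0, 4.0         # assumed average signal and noise powers
for n in (1, 4, 16, 64):  # n = 1 reproduces Equation (44)
    print(f"n = {n:3d}: I(S; Y) = {mi_after_averaging(ps, pn, n):.3f} bits")
```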

5. Conclusions

In this paper, an information entropy-based modeling method for measurement systems is proposed. A modeling idea based on the viewpoints of information acquisition and uncertainty is presented. Building on this idea, an entropy balance equation derived from the chain rule for entropy is proposed for system modeling. Then, information entropy-based models of measurement units and measurement systems are established with the entropy balance equation. Finally, three cases of typical measurement units or processes are analyzed using the proposed model. Compared with the existing modeling methods for measurement systems, the proposed method considers the modeling problem from the perspective of information and uncertainty, and it focuses on the loss of measurand information in the transmission process and on representing the role of each measurement unit, such as filtering, amplification, and introduced noise. From the error entropy, noise entropy, and mutual information between the input and output of each unit, the changes of information can be intuitively tracked. If the system input is noise-free, the mutual information between the input and output of the system directly reflects the amount of information acquired from the measurand and can be used directly as an index for evaluating the performance of the measurement system.
The proposed model can intuitively describe the processing and changes of information in the measurement system. These characteristics make it easy to gain a clear overall understanding of the measurement system and of the specific implementation of measurement with measurement units. Note that, although the proposed model has the above advantages, it is not conceived from the perspective of metrological analysis. Compared with the existing models of the measurement system, the output of the proposed model cannot be directly applied to represent measurement results in the traditional sense, and it loses the time information of the measurement result. The proposed model does not conflict with the existing models of measurement systems, but complements them, thus further enriching existing measurement theory.

Author Contributions

Conceptualization: L.K., H.P., X.L., S.M., and K.Z.; methodology: L.K., K.Z., and H.P.; writing—original draft preparation: H.P., X.L., and S.M.; writing—review and editing: K.Z. and Q.X.

Funding

This research was funded by the National Natural Science Foundation of China (61873101, 61771210).

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61873101, 61771210). We also thank Qiaochu Li for carefully checking the spelling and grammar of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Finkelstein, L. Widely-defined measurement—An analysis of challenges. Measurement 2009, 42, 1270–1277. [Google Scholar] [CrossRef]
  2. Biesen, L.V. Advances of measurement science. In Measurement Science a Discussion; Kariya, K., Finkelstein, L., Eds.; Ohmsha, Ltd.: Tokyo, Japan, 2000; pp. 3–9. [Google Scholar]
  3. Goumopoulos, C. A high precision, wireless temperature measurement system for pervasive computing applications. Sensors 2018, 18, 3445. [Google Scholar] [CrossRef] [PubMed]
  4. Pan, D.; Jiang, Z.; Chen, Z.; Gui, W.; Xie, Y.; Yang, C. Temperature measurement method for blast furnace molten iron based on infrared thermography and temperature reduction model. Sensors 2018, 18, 3792. [Google Scholar] [CrossRef] [PubMed]
  5. Heo, J.; Yoon, H.; Park, K.S. A novel wearable forehead EOG measurement system for human computer interfaces. Sensors 2017, 17, 1485. [Google Scholar] [CrossRef] [PubMed]
  6. Krantz, D.; Luce, R.; Suppes, P.; Tversky, A. Foundations of Measurement; Academic Press: New York, NY, USA, 1971. [Google Scholar]
  7. Hofmann, D. Current state and further development of measurement theory—Report of the IMEKO technical committee on measurement theory (TC7). Measurement 1983, 1, 33–38. [Google Scholar] [CrossRef]
  8. Finkelstein, L.; Leaning, M.S. A review of the fundamental concepts of measurement. Measurement 1984, 2, 25–34. [Google Scholar] [CrossRef]
  9. Finkelstein, L. Measurement and instrumentation science—An analytical review. Measurement 1994, 14, 3–14. [Google Scholar] [CrossRef]
  10. Finkelstein, L. Measurement, information, knowledge—Fundamental concepts, philosophical implications, applications. In Proceedings of the XIII IMEKO World Congress, Turin, Italy, 5–9 September 1994; pp. 11–18. [Google Scholar]
  11. Muravyov, S.V.; Savolainen, V. Representation theory treatment of measurement semantics for ratio, ordinal and nominal scales. Measurement 1997, 22, 37–46. [Google Scholar] [CrossRef]
  12. Finkelstein, L. Foundational problems of measurement. In Measurement Science a Discussion; Kariya, K., Finkelstein, L., Eds.; Ohmsha, Ltd.: Tokyo, Japan, 2000; pp. 13–21. [Google Scholar]
  13. Muravyov, S.V. Approach to mathematical structure concerning measurement science. In Measurement Science a Discussion; Kariya, K., Finkelstein, L., Eds.; Ohmsha, Ltd.: Tokyo, Japan, 2000; pp. 37–49. [Google Scholar]
  14. Luce, R.D.; Suppes, P. Representational measurement theory. In Stevens Handbook of Experimental Psychology; Pashler, H., Wixted, J.Y., Eds.; Wiley: New York, NY, USA, 2002; pp. 1–41. [Google Scholar]
  15. Dimuro, G.P. Modelling Measurement Processes as Timed Information Processes in Simplex Domains. In Proceedings of the 10th IMEKO TC7 International Symposium, St. Petersburg, Russia, 30 June–2 July 2004; Volume 1, pp. 71–76. [Google Scholar]
  16. Rossi, G.B.; Francesco, C. A formal theory of the measurement system. Measurement 2018, 116, 644–651. [Google Scholar] [CrossRef]
  17. Fiok, A.; Bek, J.; Jaworski, J.M. Some problems of measurement of real objects. In Proceedings of the XII IMEKO World Congress, Beijing, China, 5–10 September 1991; Volume 3, pp. 81–87. [Google Scholar]
  18. Ferris, T.L.J. The concept of leap in measurement interpretation. Measurement 1997, 21, 137–146. [Google Scholar] [CrossRef]
  19. Yang, Q.; Butler, C. An object-oriented model of measurement systems. IEEE Trans. Instrum. Meas. 1998, 47, 104–107. [Google Scholar] [CrossRef] [Green Version]
  20. Falmagne, J.C. A probabilistic theory of extensive measurement. Philos. Sci. 1980, 47, 277–296. [Google Scholar] [CrossRef]
  21. Michelini, R.C.; Rossi, G.B. Representational framework for measurement uncertainty. In Proceedings of the European Scientific Metrological Conference, St. Petersburg, Russia, 1–3 September 1992. [Google Scholar]
  22. Michelini, R.C.; Rossi, G.B. Measurement uncertainty: A probabilistic theory for intensive entities. Measurement 1995, 15, 143–157. [Google Scholar] [CrossRef]
  23. Rossi, G.B. A probabilistic model for measurement process. Measurement 2003, 34, 85–99. [Google Scholar] [CrossRef]
  24. Rossi, G.B.; Crenna, F. Probability as a logic for measurement representations. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2013; Volume 459, p. 012005. [Google Scholar]
  25. Rossi, G.B. A probabilistic theory of measurement. Measurement 2006, 39, 34–50. [Google Scholar] [CrossRef]
  26. Finkelstein, L.; Morawski, R.Z. Fundamental concepts of measurement. Measurement 2003, 34, 1–2. [Google Scholar] [CrossRef]
  27. Mari, L.; Morawski, R.Z. The Evolving Science of Measurement. Metro. Meas. Syst. 2007, 14, 3–6. [Google Scholar]
  28. Gray, R.M. Entropy and Information Theory, 2nd ed.; Springer: New York, NY, USA, 2007; pp. 1–5. [Google Scholar]
  29. Finkelstein, L. Analysis of the concepts of measurement, information and knowledge. In Proceedings of the XVII IMEKO World Congress, Metrology in the 3rd Millennium, Dubrovnik, Croatia, 22–27 June 2003; pp. 1043–1047. [Google Scholar]
  30. Finkelstein, L. Measurement science state and trends. In Proceedings of the 16th International Electrotechnical and Computer Science Conference, ERK 2007, Ljubljana, Slovenia, 24–26 September 2007; Slovenia Section IEEE 2007. Zajc, B., Trost, A., Eds.; pp. I–IV, ISSN 1581-4572. [Google Scholar]
  31. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  32. Kavalerov, G.I.; Mandel’shtam, S.M. Introduction to the information theory of measurement. In Vvedeniye v Informatsionnoyu Teoriyu Izmereniyi; Energiya: Moscow, Russia, 1974; pp. 32–55. [Google Scholar]
  33. Woschni, E.G. Some aspects of applying information-theory to measurement. Measurement 1988, 6, 184–186. [Google Scholar] [CrossRef]
  34. Woschni, E.G. Information Theory in Measurement and Instrumentation. In Concise Encyclopedia of Measurement & Instrumentation; Finkelstein, L., Ed.; Pergamon Press: Oxford, UK, 1994; pp. 153–157. [Google Scholar]
  35. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2012; pp. 22–25. [Google Scholar]
  36. Morris, A.S.; Langari, R. Measurement and Instrumentation: Theory and Application; Academic Press: Waltham, MA, USA, 2012; pp. 2–27. ISBN 978-0-12-381960-4. [Google Scholar]
Figure 1. The relationship between various entropies and mutual information.
Figure 2. Structure of the actual measurement system.
Figure 3. Information entropy-based model of the measurement unit.
Figure 4. Venn diagram of the entropy model of the first-order Markov chain.
Figure 5. Information entropy-based equivalent model of the measurement system.
Figure 6. A Gaussian random variable with additive white Gaussian noise passes through a bandpass filter.
Figure 7. The model of the quantization process.
Figure 8. Simulation of the quantization process. (a) The waveform of the first 5000 data points of the continuous random variable $X$. (b) The probability density function of $X$. (c) Information entropies of $X_Q$ quantized by quantizers with different numbers of bits.
Figure 9. A Gaussian random signal with additive Gaussian noise processed by the cumulative averaging procedure.
