The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems should ideally spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to correct major problems such as offset, gain variation and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves the analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods. The comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal. This paper also shows that the proposed method achieved better overall accuracy than the other two methods. Besides the experimental results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). To illustrate the method's capability to build autocalibrating and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies because it impacts the design process of intelligent sensors, autocalibration methodologies and their associated factors, such as time and cost.
To design intelligent sensors for measurement systems with improved features, a simple reconfiguration process for the main hardware is required, so that different variables can be measured by just replacing the sensor element, thus building reconfigurable systems. Reconfigurable systems should ideally spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to correct major problems such as offset, gain variation and lack of linearity, all characteristic of degradation, as accurately as possible.
The linearization of the sensor output signal and the calibration process are the major factors that define the features of an intelligent sensor, for example, its applicability to different variables, its calibration time and its accuracy.
The subject of linearization has been considered in different forms and stages, basically from the designs of circuits with MOS technologies [
The calibration stage is important for two aspects: first, it defines the features of the measurement system; second, it determines the maintenance costs of measurement systems. The costs associated with the maintenance of measurement systems from the 1970s to the present have been discussed [
A wide application area of neural networks is in recognition of patterns in the analysis of signals from sensors [
Most of the available literature on autocalibration methods relies on several experiments to determine performance. This approach impacts the time and cost of a reconfigurable system. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves the analysis of several network topologies and training algorithms [
Besides complete experimental details, results and analysis, the paper describes the implementation of the ANN in a microcontroller unit (MCU). To illustrate the method's capability to build autocalibrating and reconfigurable systems, a temperature measurement system was designed. The system is based on a thermistor, which exhibits one of the worst nonlinear behaviors and is also found in many real-world applications because of its low cost.
One important point that needs clarification before proceeding relates to the meaning of the term calibration. In this paper, calibration is used in the sense of [
The paper is structured as follows: the basic system design considerations are presented in Section 2. The ANN design, training algorithm and simulation are described in Section 3. The implementation of the ANN temperature measurement system on an MCU is shown in Section 4. The tests and results are described in Section 5. The evaluation results are given in Section 6. Finally, the conclusions are presented in Section 7.
Prior to any ANN design some considerations are necessary. The output electrical signal
In most cases the input and output variables have different scales, so they are normalized to the range [0, 1] to simplify their manipulation. This can be obtained with the following equations:
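The normalization equations referenced above were lost in extraction; the standard min-max mapping to [0, 1] and its inverse can be sketched as follows (function and variable names are illustrative, not from the original):

```python
def normalize(x, x_min, x_max):
    """Min-max normalization of a sample into the range [0, 1]."""
    return (x - x_min) / (x_max - x_min)

def denormalize(x_norm, x_min, x_max):
    """Inverse mapping from [0, 1] back to the original physical scale."""
    return x_norm * (x_max - x_min) + x_min
```

Both the sensor input signal and the desired output are passed through this mapping before training, so the ANN always works on the same numeric range regardless of the physical units involved.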
The desired output signal is a straight line with unit slope. This will be the target in the calibration process and will be the reference signal defined by:
Finally, a relative error metric can be used to determine how linear the output signal
Another way to corroborate the method is to use the mean squared error (MSE), expressed by
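Both figures of merit can be sketched in a few lines; the MSE expression above was lost in extraction, so the code below assumes the conventional definitions (names are illustrative):

```python
def mse(y, y_ref):
    """Mean squared error between the system output and the ideal reference line."""
    return sum((a - b) ** 2 for a, b in zip(y, y_ref)) / len(y)

def max_relative_nonlinearity(y, y_ref, full_scale=1.0):
    """Maximum deviation from the ideal straight line, as a percentage of full scale."""
    return max(abs(a - b) for a, b in zip(y, y_ref)) / full_scale * 100.0
```

With signals normalized to [0, 1], `full_scale` is 1.0 and the second function gives the relative nonlinearity percentage directly.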
In the next section, where the ANN design for self-calibration of intelligent sensors is described, the input signal to the ANN will be the
This section describes the ANN design to be used in the self-calibration of intelligent sensors. The training algorithm and simulation are also described.
Several ANN topologies, such as the multilayer perceptron (MLP) and the radial basis function (RBF) network, were evaluated. MLP neural networks with a sufficient number of nonlinear units in a single hidden layer have been established as universal function approximators. Some work has been done on the relationship between RBF networks and MLPs, since both types of networks have universal approximation capabilities [
Although RBF networks are known to be good function approximators, the MLP was selected because it is simpler than the RBF network, which is furthermore computationally demanding [
Consequently, for our proposal the most appropriate ANN to implement was a feedforward MLP with four neurons in the first layer and a log-sigmoid activation function. The second layer is a single neuron with a linear activation function. The architecture of the ANN is shown in
The number of neurons, the number of layers, the activation functions, the training algorithm and the computational requirements were the major characteristics considered during the design. These features were determined under the constraint of achieving the lowest output error with the simplest ANN structure implementable on a small MCU.
The output of the ANN is defined by:
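The output expression was lost in extraction, but for the architecture described (four log-sigmoid hidden neurons, one linear output neuron, scalar input) the forward pass can be sketched as follows; the parameter names are illustrative:

```python
import math

def logsig(x):
    """Log-sigmoid activation: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def ann_output(p, w1, b1, w2, b2):
    """Two-layer MLP: four log-sigmoid hidden neurons, one linear output neuron.

    p  : scalar normalized input sample
    w1 : four hidden-layer weights,  b1 : four hidden-layer biases
    w2 : four output-layer weights,  b2 : output-layer bias
    """
    hidden = [logsig(w1[i] * p + b1[i]) for i in range(4)]
    return sum(w2[i] * hidden[i] for i in range(4)) + b2
```

This small forward pass is what ultimately runs on the MCU after training, so keeping it to a handful of multiplications and four sigmoid evaluations is what makes the eight-bit implementation feasible.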
Many variations of the backpropagation training algorithm are currently available. The Adaptive Linear Neuron (ADALINE) and the Least Mean Square (LMS) algorithm were first presented in the late 1950s and are very frequently used in adaptive filtering applications; echo cancellers using the LMS algorithm are currently employed on many long-distance telephone lines. Backpropagation is an approximate steepest-descent algorithm that minimizes the mean squared error; the difference between LMS and backpropagation lies in the manner in which the derivatives are calculated. Backpropagation is a generalization of the LMS algorithm that can be used for multilayer networks. One of the major problems with backpropagation has been its long training time. Basic backpropagation is too slow for most practical applications, since training a network can take weeks even on a large computer. Since the backpropagation algorithm was first popularized, there has been considerable work on methods to accelerate its convergence. Another important factor in the backpropagation algorithm is the learning rate: depending on its value, the ANN may or may not oscillate. Several experiments were carried out to select an appropriate learning rate [
The Levenberg-Marquardt algorithm is a variation of Newton's method that uses the backpropagation procedure. Levenberg-Marquardt backpropagation (LMBP) was designed for minimizing functions that are sums of squares of other nonlinear functions, which makes it very well suited to neural network training, where the performance index is the mean squared error. LMBP is the fastest algorithm we have tested for training multilayer networks of moderate size, even though it requires a matrix inversion at each iteration. Remarkably, LMBP is always able to reduce the sum of squares at each iteration [
Looking for a training algorithm with the best combination of training time, minimum error and practical applicability, we decided to evaluate backpropagation, backpropagation with momentum and the LMBP algorithm.
Next, the Levenberg-Marquardt algorithm for our particular application is described. Assuming
The iterations of the Levenberg-Marquardt algorithm for the autocalibration of intelligent sensors for
Present all the
Determine the Jacobian matrix by:
Now compute the variation or delta of the ANN parameters, Δ
Recompute the sum of squared errors with the updated parameters,
For this case: if the resulting MSE is smaller than the one computed in step 1,
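The iteration steps above can be sketched as a generic Levenberg-Marquardt loop. The sketch below uses a finite-difference Jacobian for brevity (the paper computes it analytically via backpropagation), and all names are illustrative:

```python
import numpy as np

def levenberg_marquardt(residuals, theta0, mu=0.01, factor=10.0, n_iter=50, eps=1e-6):
    """Generic Levenberg-Marquardt training loop.

    residuals : callable mapping a parameter vector to the error vector e
    theta0    : initial parameters (e.g. the ANN weights and biases)
    Each pass: build the Jacobian J, solve for the parameter delta, then
    accept the step and shrink mu if the squared error decreased, or grow
    mu and retry otherwise.
    """
    theta = np.asarray(theta0, dtype=float)
    e = residuals(theta)
    sse = float(e @ e)
    for _ in range(n_iter):
        # Numerical Jacobian: J[i, j] = d e_i / d theta_j
        J = np.empty((e.size, theta.size))
        for j in range(theta.size):
            t = theta.copy()
            t[j] += eps
            J[:, j] = (residuals(t) - e) / eps
        while True:
            # Solve (J^T J + mu I) delta = -J^T e for the parameter update
            delta = np.linalg.solve(J.T @ J + mu * np.eye(theta.size), -J.T @ e)
            e_new = residuals(theta + delta)
            sse_new = float(e_new @ e_new)
            if sse_new < sse:            # step accepted: decrease mu
                theta, e, sse, mu = theta + delta, e_new, sse_new, mu / factor
                break
            mu *= factor                 # step rejected: increase mu and retry
            if mu > 1e10:
                return theta
    return theta
```

For large mu the update approaches small-step gradient descent; for small mu it approaches Gauss-Newton, which is what gives LMBP its fast convergence near a minimum.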
The ANN training was performed with five to eight calibration points using the function
The performance is given as the maximum percentage nonlinearity output error (MPNOE). In
Temperature measurement systems are widely used in almost any process. A thermistor was selected as the temperature sensor for the construction of a measurement system,
In this case, the major characteristic that will be analyzed is the
In
The major features of the MCU used for the physical implementation are: eight-bit words, an eight-bit analog-to-digital converter (ADC), a 20 MHz clock and 3 Kbytes of RAM.
The iterations of the Levenberg-Marquardt backpropagation algorithm, assuming five calibration points, were programmed as follows:
The input calibration signal was
The initial values for weights and bias were taken from the simulation described in Section 3.3, with the values of:
Using the
The Jacobian matrix is computed. In this example the size of the Jacobian matrix according to
Solve the
Recompute the sum of squared errors with the updated parameters,
If the resulting MSE is smaller than the one computed in step 1,
The MSE was:
Then the ANN model from
The performance of the ANN was compared against a Honeywell UDC3000 temperature meter with a type K thermocouple (range −29 to 538 °C / −20 to 1000 °F, accuracy ±0.02%), taking 50 measurements over a range of 0 to 100 °C in steps of 2 °C, using an oven system to change the temperature. The results are shown in
Important evaluations were made considering the computational work required by the physical implementation of each method described above. First, the number of operations required by each method was counted using a software tool to program the MCU, and the required computation time was predicted. This process was carried out for
A similar process was carried out to evaluate the computational work and the training time. The computation time for one cycle of ANN training is shown in
Besides, it is important to remark on the sensor resolution. The ADC determines the sensor resolution, which is defined by:
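The defining expression was lost in extraction; the usual relation is that the resolution equals the measured span divided by the number of ADC quantization levels, which can be sketched as (names are illustrative):

```python
def sensor_resolution(full_scale_span, n_bits):
    """Smallest resolvable change in the measured variable: the ADC step size.

    full_scale_span : measured range in physical units (e.g. 100.0 for 0-100 degC)
    n_bits          : ADC word length (eight bits in the MCU used here)
    """
    return full_scale_span / (2 ** n_bits)
```

For the eight-bit ADC of this MCU over a 0 to 100 °C range, this gives a step of about 0.39 °C, which bounds the accuracy achievable by any of the calibration methods regardless of how well the ANN linearizes the response.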
The performance of the ANN method can be defined if we determine the deviation of the ANN output from a straight line, the ideal response,
Ideally, the value of
The values of
We made a readjustment in the ANN
The error regarding the best fitting line that was obtained with
This paper has described a methodology to design and evaluate an algorithm for the autocalibration of intelligent sensors that can be used in the design of a measurement system with any sensor.
The major problems in sensors, such as offset, gain variation and nonlinearity, can be solved by the ANN method. Most of the available literature on autocalibration methods relies on several experiments to determine performance, a process that impacts the time and cost of a reconfigurable system. This paper described a new autocalibration methodology for nonlinear intelligent sensors based on ANN. The artificial neural network was tested with different levels of nonlinear input signals. The proposed method was compared against the piecewise and polynomial linearization methods, using different numbers of calibration points and several nonlinearity levels of the input signal. The proposed method turned out to have better overall accuracy than the other two methods. For example, for 20% relative nonlinearity, the piecewise method has an error above 1%, the polynomial method an error of around 0.71%, and the proposed method an error of 0.17% using five calibration points in simulation, as shown in
The ANN method requires fewer operations than the other two methods, and the number of operations does not increase with the number of calibration points. This can be observed in
A further major advantage is that, with a low number of calibration points, the ANN method achieves better results than the piecewise and polynomial methods; as a consequence, a measurement system using it requires less time and cost for maintenance.
In conclusion, the proposed method is an improvement over classic autocalibration methodologies because it impacts the design process of intelligent sensors, autocalibration methodologies and their associated factors, such as time and cost.
Architecture of the artificial neural network.
Flow chart of the LMBP algorithm.
Performance of the ANN for self-calibration: a) with five calibration points, b) with six calibration points, c) with seven calibration points and d) with eight calibration points.
Structure of temperature measurement system.
Characteristics of input signal
ANN performance with five calibration points. a) ANN output signal. b) ANN output relative error.
Training algorithms evaluation.

Algorithm                      No. of cycles   Mean squared error (MSE)
Backpropagation                500             0.0576
Backpropagation with momentum  500             0.0372
Levenberg-Marquardt            22              1.986e-16
Autocalibration methods: number of operations required and computation time.

Method       Number of operations (5/6/7/8 points)   Time in microseconds (5/6/7/8 points)
Piecewise    80 / 102 / 112 / 128                    4915 / 5898 / 6881 / 7566
Polynomial   101 / 162 / 245 / 352                   5609 / 8778 / 12974 / 18287
ANN          28                                      3523
Time spent in one cycle of ANN training.

Calibration points   20 MHz clock   40 MHz clock
5                    0.69 s         0.36 s
6                    0.74 s         0.38 s
7                    0.79 s         0.40 s
8                    0.84 s         0.43 s