# Self-Calibration and Optimal Response in Intelligent Sensors Design Based on Artificial Neural Networks


## Abstract


## 1. Introduction

## 2. Basic System Design Considerations

The response **x'** of any sensor to the input variable **v'** to be measured is defined by:

the output **y** of the ANN, obtained by:

the **x** term from equation (3).

## 3. Artificial Neural Network Design to Self-Calibration of Intelligent Sensors

#### 3.1. Artificial Neural Network Design

#### 3.2. Training Algorithm

For **N** calibration points we define the input sensor signal $v_n$, the output sensor signal $x_n$ and its desired target $t_n$, with values normalized by equations (2)-(4), as:

- Step 1:
- Present all the N calibration points and compute the sum of MSE with equation (6).
- Step 2:
- Determine the Jacobian matrix by:$$J(p)=\left[\begin{array}{ccccccccccccc}\frac{\partial \epsilon_{1}}{\partial w_{1}^{1}}& \frac{\partial \epsilon_{1}}{\partial w_{2}^{1}}& \frac{\partial \epsilon_{1}}{\partial w_{3}^{1}}& \frac{\partial \epsilon_{1}}{\partial w_{4}^{1}}& \frac{\partial \epsilon_{1}}{\partial b_{1}^{1}}& \frac{\partial \epsilon_{1}}{\partial b_{2}^{1}}& \frac{\partial \epsilon_{1}}{\partial b_{3}^{1}}& \frac{\partial \epsilon_{1}}{\partial b_{4}^{1}}& \frac{\partial \epsilon_{1}}{\partial w_{1}^{2}}& \frac{\partial \epsilon_{1}}{\partial w_{2}^{2}}& \frac{\partial \epsilon_{1}}{\partial w_{3}^{2}}& \frac{\partial \epsilon_{1}}{\partial w_{4}^{2}}& \frac{\partial \epsilon_{1}}{\partial b_{1}^{2}}\\ \frac{\partial \epsilon_{2}}{\partial w_{1}^{1}}& \frac{\partial \epsilon_{2}}{\partial w_{2}^{1}}& \frac{\partial \epsilon_{2}}{\partial w_{3}^{1}}& \frac{\partial \epsilon_{2}}{\partial w_{4}^{1}}& \frac{\partial \epsilon_{2}}{\partial b_{1}^{1}}& \frac{\partial \epsilon_{2}}{\partial b_{2}^{1}}& \frac{\partial \epsilon_{2}}{\partial b_{3}^{1}}& \frac{\partial \epsilon_{2}}{\partial b_{4}^{1}}& \frac{\partial \epsilon_{2}}{\partial w_{1}^{2}}& \frac{\partial \epsilon_{2}}{\partial w_{2}^{2}}& \frac{\partial \epsilon_{2}}{\partial w_{3}^{2}}& \frac{\partial \epsilon_{2}}{\partial w_{4}^{2}}& \frac{\partial \epsilon_{2}}{\partial b_{1}^{2}}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac{\partial \epsilon_{N}}{\partial w_{1}^{1}}& \frac{\partial \epsilon_{N}}{\partial w_{2}^{1}}& \frac{\partial \epsilon_{N}}{\partial w_{3}^{1}}& \frac{\partial \epsilon_{N}}{\partial w_{4}^{1}}& \frac{\partial \epsilon_{N}}{\partial b_{1}^{1}}& \frac{\partial \epsilon_{N}}{\partial b_{2}^{1}}& \frac{\partial \epsilon_{N}}{\partial b_{3}^{1}}& \frac{\partial \epsilon_{N}}{\partial b_{4}^{1}}& \frac{\partial \epsilon_{N}}{\partial w_{1}^{2}}& \frac{\partial \epsilon_{N}}{\partial w_{2}^{2}}& \frac{\partial \epsilon_{N}}{\partial w_{3}^{2}}& \frac{\partial \epsilon_{N}}{\partial w_{4}^{2}}& \frac{\partial \epsilon_{N}}{\partial b_{1}^{2}}\end{array}\right]$$
- Step 3:
- Now compute the variation or delta of the ANN parameters, $\Delta p_k$:$$\Delta p_{k}={\left[J^{T}(p_{k})\,J(p_{k})+\mu_{k}I\right]}^{-1}J^{T}(p_{k})\,\epsilon(p_{k})$$where $\mu_{k}$ is the learning parameter, $I$ is the identity matrix and $J(p_{k})$ is the Jacobian matrix evaluated with the ANN parameters $p_{k}$. - Step 4:
- Recompute the sum of squared errors with the updated parameters, equation (6), step 1, using the parameter update:$$p_{k+1}=p_{k}^{T}+\Delta p_{k}^{T}$$

- Step 5:
- If $\epsilon_{k+1,mse}<\epsilon_{k,mse}$, then evaluate $\mu_{k+1}=\frac{\mu_{k}}{c}$, where c is a constant value, and continue with step 2. If the sum of squares is not reduced, $\epsilon_{k+1,mse}>\epsilon_{k,mse}$, evaluate $\mu_{k+1}=\mu_{k}c$ and go to step 2. All this is repeated until the desired error or k cycles are reached. Note that the initial values of μ and c are key to correct convergence; the recommended values are μ = 0.01 and c = 5. Figure 2 shows the flow chart of the LMBP algorithm.
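Steps 1-5 above can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the authors' MCU implementation: it uses the paper's 1-4-1 logsig/purelin topology and the LM update of step 3, but replaces the analytic Jacobian of equation (17) with a finite-difference approximation, and the random initialization and cycle count are illustrative choices.

```python
import numpy as np

def logsig(z):
    """Log-sigmoid activation used in the hidden layer."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(p, x):
    # p packs the 13 parameters of the 1-4-1 network:
    # w1 (4 input weights), b1 (4 hidden biases), w2 (4 output weights), b2 (1 bias)
    w1, b1, w2, b2 = p[0:4], p[4:8], p[8:12], p[12]
    h = logsig(np.outer(x, w1) + b1)          # N x 4 hidden activations
    return h @ w2 + b2                        # purelin (linear) output layer

def jacobian(p, x, h=1e-6):
    # Finite-difference Jacobian of the residual: N rows (calibration points)
    # by 13 columns (parameters), standing in for the analytic equation (17).
    base = forward(p, x)
    J = np.empty((x.size, p.size))
    for j in range(p.size):
        q = p.copy()
        q[j] += h
        J[:, j] = (forward(q, x) - base) / h
    return J

def train_lm(x, t, cycles=300, mu=0.01, c=5.0, seed=0):
    """Levenberg-Marquardt training over the N calibration points (x, t)."""
    p = np.random.default_rng(seed).uniform(-0.5, 0.5, 13)
    r = forward(p, x) - t                     # Step 1: residuals at the points
    for _ in range(cycles):
        J = jacobian(p, x)                    # Step 2: Jacobian matrix
        # Step 3: delta = [J^T J + mu I]^-1 J^T r
        delta = np.linalg.solve(J.T @ J + mu * np.eye(p.size), J.T @ r)
        # Steps 4-5: keep the step and shrink mu only if the MSE improved,
        # otherwise grow mu and try again from the current parameters
        r_new = forward(p - delta, x) - t
        if np.mean(r_new**2) < np.mean(r**2):
            p, r = p - delta, r_new
            mu = max(mu / c, 1e-12)
        else:
            mu *= c
    return p, float(np.mean(r**2))
```

With five calibration points and thirteen free parameters, the network can interpolate the targets, so the MSE drops many orders of magnitude within a few hundred cycles, consistent with the fast convergence reported for LMBP in Table 1.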

#### 3.3. Simulation Evaluation and Results Comparison

**x**. The artificial neural network was tested with different levels of nonlinear input signals. The proposed method was compared against the piecewise [35] and polynomial [36] linearization methods using simulation software. Figure 3 illustrates the comparison of the ANN, piecewise and polynomial methods. Each panel shows the performance of these methods for different calibration points at different levels of input nonlinearity, from 10% to 65%.

## 4. Temperature Intelligent Sensor Design using ANN on a Small MCU

$R_{o}$ = 10,000 ohms at 25 °C, assembled in a voltage divider to build a measurement system over the range from 0 to 100 °C, as shown in Figure 4.

**x**, the input signal to the ANN. This signal is the response of the sensor to the temperature, and its values are normalized to the range [0, 1].

**t** (a straight line) can be observed. Figure 5b shows the percentage of nonlinearity of this signal **x**, evaluated according to equation (5). In this figure all the error signals are shown: the simulated signal, the theoretical thermistor response, and the real thermistor response. From Figure 5b, the similarity of the three signals can easily be appreciated. Taking into consideration only the parameter β and its ±10% tolerance, the real signal is within these limits and the proposed methodology is acceptable.
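The divider response described above can be reproduced with the standard NTC beta model. In this sketch, `beta = 3977` K and the 10 kΩ fixed resistor are assumed values chosen for illustration; the paper only states $R_{o}$ = 10 kΩ at 25 °C and a ±10% beta tolerance.

```python
import numpy as np

def thermistor_divider(T_c, beta=3977.0, R0=10_000.0, T0_c=25.0,
                       R_fixed=10_000.0, Vcc=5.0):
    # NTC beta model: R(T) = R0 * exp(beta * (1/T - 1/T0)), T in kelvin.
    # beta = 3977 K and R_fixed = 10 kOhm are assumed values, not from the paper.
    T, T0 = T_c + 273.15, T0_c + 273.15
    R = R0 * np.exp(beta * (1.0 / T - 1.0 / T0))
    # Divider with the thermistor on the high side: output rises with temperature.
    return Vcc * R_fixed / (R + R_fixed)

T = np.linspace(0.0, 100.0, 5)            # five calibration temperatures
v = thermistor_divider(T)
x = (v - v.min()) / (v.max() - v.min())   # normalize to [0, 1] as in the paper
```

The resulting **x** is monotonic but visibly nonlinear in temperature, which is exactly the curvature the ANN is trained to remove.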

**x** in the intelligent sensor will be described.

**x**, **y** and **t** were calculated to obtain the vectors of equation (8):

- Step 1:
- Using equations (10) and (12), the output signal **y** of the ANN from equation (7) is computed, followed by the error with equation (5) and the MSE with equation (6):$$\begin{array}{l}y_{n}=\text{purelin}\left[w^{2}\left(\text{logsig}(w^{1}x_{n}+b^{1})\right)+b^{2}\right],\quad n=1\ \text{to}\ 5\\ y=[0.5968,\ 0.5837,\ 0.5708,\ 0.5636,\ 0.5601]\\ \epsilon_{1}=[-0.5968\ \ -0.3537\ \ -0.0708\ \ 0.1964\ \ 0.4399]\\ \epsilon_{1,mse}=0.3791\end{array}$$ - Step 2:
- The Jacobian matrix is computed. In this example, with five calibration points, the size of the Jacobian matrix according to equation (17) is 5 × 13, defined as:$$J=\left[\begin{array}{ccccccccccccc}\frac{\partial \epsilon_{1}}{\partial w_{1}^{1}}& \frac{\partial \epsilon_{1}}{\partial w_{2}^{1}}& \frac{\partial \epsilon_{1}}{\partial w_{3}^{1}}& \frac{\partial \epsilon_{1}}{\partial w_{4}^{1}}& \frac{\partial \epsilon_{1}}{\partial b_{1}^{1}}& \frac{\partial \epsilon_{1}}{\partial b_{2}^{1}}& \frac{\partial \epsilon_{1}}{\partial b_{3}^{1}}& \frac{\partial \epsilon_{1}}{\partial b_{4}^{1}}& \frac{\partial \epsilon_{1}}{\partial w_{1}^{2}}& \frac{\partial \epsilon_{1}}{\partial w_{2}^{2}}& \frac{\partial \epsilon_{1}}{\partial w_{3}^{2}}& \frac{\partial \epsilon_{1}}{\partial w_{4}^{2}}& \frac{\partial \epsilon_{1}}{\partial b_{1}^{2}}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac{\partial \epsilon_{5}}{\partial w_{1}^{1}}& \frac{\partial \epsilon_{5}}{\partial w_{2}^{1}}& \frac{\partial \epsilon_{5}}{\partial w_{3}^{1}}& \frac{\partial \epsilon_{5}}{\partial w_{4}^{1}}& \frac{\partial \epsilon_{5}}{\partial b_{1}^{1}}& \frac{\partial \epsilon_{5}}{\partial b_{2}^{1}}& \frac{\partial \epsilon_{5}}{\partial b_{3}^{1}}& \frac{\partial \epsilon_{5}}{\partial b_{4}^{1}}& \frac{\partial \epsilon_{5}}{\partial w_{1}^{2}}& \frac{\partial \epsilon_{5}}{\partial w_{2}^{2}}& \frac{\partial \epsilon_{5}}{\partial w_{3}^{2}}& \frac{\partial \epsilon_{5}}{\partial w_{4}^{2}}& \frac{\partial \epsilon_{5}}{\partial b_{1}^{2}}\end{array}\right]$$$$J\left(p_{1}\right)=\left[\begin{array}{rrrrrrrrrrrrr}0& 0& 0& 0& 0.1157& 0.0268& -0.0224& 0.0116& -0.7311& -0.7311& -0.7311& -0.7311& -1\\ 0.0381& 0.0091& -0.0080& -0.0036& 0.1095& 0.0260& -0.0231& -0.0102& -0.7529& -0.7428& -0.7181& -0.7778& -1\\ 0.0720& 0.0176& -0.0166& -0.0061& 0.1031& 0.0253& -0.0237& -0.0088& -0.7736& -0.7543& -0.7047& -0.8186& -1\\ 0.0894& 0.0223& -0.0217& -0.0072& 0.0993& 0.0248& -0.0241& -0.0080& -0.7849& -0.7608& -0.6968& -0.8393& -1\\ 0.0975& 0.0246& -0.0242& -0.0076& 0.0975& 0.0246& -0.0242& -0.0076& -0.7904& -0.7640& -0.6928& -0.8489& -1\end{array}\right]$$
- Step 3:
- Solve equation (10) to get $\Delta p_{k}$. For the first iteration, with k = 1, $\mu_{1}$ = 0.01 and c = 5, the result is:$$\Delta p_{1}^{T}=\left[-2.3307\ \ -0.5974\ \ 0.6054\ \ 0.1711\ \ 0.5757\ \ 0.0839\ \ 0.0211\ \ -0.1123\ \ 0.6563\ \ 0.0195\ \ -1.7367\ \ 2.0186\ \ -1.0764\right]$$ - Step 4:
- Recompute the sum of squared errors with the updated parameters, equation (6), step 1. For this case:$$p_{k+1}=p_{2}=p_{1}^{T}+\Delta p_{1}^{T}$$

- Step 5:
- If $\epsilon_{2,mse}<\epsilon_{1,mse}$, then evaluate $\mu_{2}=\frac{\mu_{1}}{c}$ and continue with step 2. If the sum of squared errors is not reduced, $\epsilon_{2,mse}>\epsilon_{1,mse}$, evaluate $\mu_{2}=\mu_{1}c$ and go to step 2. All this is repeated until the desired error or k cycles are reached. The process described above was repeated for **k** = 500 cycles, and the final parameters are:

## 5. Tests and Results

The ANN output signal **y** compared with the target straight line can be seen in Figure 6a. To better visualize the error between the ANN output and the target straight line, the percentage of relative nonlinearity error computed with equation (5) is shown in Figure 6b. In summary, Figure 6b shows the difference between the ideal output and the output provided by the ANN. It can also be observed that the maximum percentage of relative nonlinearity error is approximately 0.7%, below the 1% predicted from simulation and illustrated in Figure 3a with five calibration points.
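The error plotted in Figure 6b is straightforward to compute once **y** and **t** are available. In this sketch, the error is taken relative to the full-scale span of the target, which is one common reading of equation (5); the paper's exact normalization may differ, and the `y` values used here are hypothetical, not the paper's measurements.

```python
import numpy as np

def relative_nonlinearity_pct(y, t):
    # Percent relative error between the ANN output y and the target line t,
    # referred to the full-scale span of t (one reading of equation (5)).
    return 100.0 * (y - t) / (t.max() - t.min())

t = np.linspace(0.0, 1.0, 5)                        # target straight line
y = np.array([0.001, 0.252, 0.498, 0.751, 0.997])   # hypothetical ANN output
err = relative_nonlinearity_pct(y, t)
worst = np.abs(err).max()                           # worst-case error, percent
```

For these illustrative values the worst-case error is 0.3%, the same order as the ~0.7% observed in Figure 6b.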

where $E_{FRS}$ is the full-scale voltage range and n is the number of ADC bits. In our example $E_{FRS}$ = 5 volts and n = 8, so the sensor resolution is 19.6 mV; that is, the temperature sensor is limited to detecting temperature changes of 0.38 °C. In a case where this resolution might represent a problem in a specific application, the MCU can be replaced with one having a 10-bit ADC to improve the resolution to 4.9 mV (0.09 °C). Another alternative is the use of an external ADC with more than 10 bits.
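The resolution figures follow from the quoted formula. This sketch divides by $2^{n}$ and assumes the 0-100 °C span maps linearly onto the full voltage range; note the arithmetic gives 19.5 mV and 0.39 °C for 8 bits, slightly different from the rounded 19.6 mV and 0.38 °C quoted in the text (dividing by $2^{n}-1$ counts would give 19.6 mV).

```python
def adc_resolution(vfrs=5.0, bits=8, span_c=100.0):
    # One ADC LSB in volts, and the corresponding temperature step,
    # assuming the 0-100 C span maps linearly onto the full voltage range.
    lsb_v = vfrs / 2**bits
    return lsb_v, span_c / 2**bits

lsb8, step8 = adc_resolution(bits=8)      # ~19.5 mV and ~0.39 C per LSB
lsb10, step10 = adc_resolution(bits=10)   # ~4.9 mV and ~0.10 C per LSB
```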

## 6. Evaluation of Results

**error**” [41].

^{−3}, the error for b is −0.0281 °C.

^{−4}°C, respectively.

## 7. Conclusions

## References and Notes

- Khachab, N. I.; Mohammed, I. Linearization Techniques for n-th Order Sensor Model in MOS VLSI Technology. IEEE Transactions on Circuits and Systems
**1991**, 38, 1439–1449. [Google Scholar] - Iglesias, G. E.; Iglesias, E. A. Linearization of Transducer Signal Using an Analog to Digital Converter. IEEE Transactions on Instrumentation and Measurement
**1988**, 37, 53–57. [Google Scholar] - Vargha, B.; Zoltán, I. Calibration Algorithm for Current-Output R-2R Ladders. IEEE Transactions On Instrumentation And Measurement
**2001**, 50, 1216–1220. [Google Scholar] - Dent, A.C.; Colin, F.N. Linearization of Analog to Digital Converters. IEEE Transactions on Circuits and Systems
**1990**, 37, 729–737. [Google Scholar] - Kaliyugavaradan, S.; Sankaran, P.; Murti, V.G.K. A New Compensation Scheme for Thermistors and its Implementation for Response Linearization Over a wide Temperature Range. IEEE Transactions on Instrumentation and Measurement
**1993**, 42, 952–956. [Google Scholar] - Cristaldi, L.; Ferro, A.; Lazzaroni, M.; Ottoboni, R. A Linearization Method for Commercial Hall-Effect Current Transducers. IEEE Transactions on Instrumentation and Measurement
**2001**, 50, 1149–1153. [Google Scholar] - James, H. T.; Antoniotti, A. J. Linearization Algorithm for Computer Aided Control Engineering. IEEE Control Systems
**1993**, 58–64. [Google Scholar] - Patranbis, D.; Gosh, D. A novel software-based transducer linearizer. IEEE Transactions on Instrumentation and Measurement
**1989**, 38, 1122–1126. [Google Scholar] - Malcovati, P.; Leme, C.A.; O′Leary, P.; Maloberti, F.; Baltes, H. Smart sensor interface with A/D conversion and programmable calibration. IEEE Journal of Solid State Circuits
**1994**, 29, 963–966. [Google Scholar] - Yamada, M.; Watanabe, K. A capacitive pressure sensor interface using oversampling Δ-Σ demodulation techniques. IEEE Transactions on Instrumentation and Measurement
**1997**, 46, 3–7. [Google Scholar] - Rahman, M. H. R.; Devanathan, R.; Kuanyi, Z. Neural Network Approach for Linearizing Control of Nonlinear Process Plant. IEEE Transactions on Industrial Electronics
**2000**, 47, 470–476. [Google Scholar] - Shing, K. H.; Chao, H. J. Adaptive Reinforcement Learning System for Linearization Control. IEEE Transactions on Industrial Electronics
**2000**, 47, 1185–1188. [Google Scholar] - Dai, X.; He, D.; Zhang, T.; Zhang, K. ANN generalized inversion for the linearisation and control of nonlinear systems. IEE Proceeding in Control Theory Appl
**2003**, 150, 267–277. [Google Scholar] - Schoukens, J.; Németh, J. G.; Vandersteen, G.; Pintelon, R.; Crama, P. Linearization of Nonlinear Dynamics Systems. IEEE Transactions on Instrumentation and Measurement
**2004**, 53, 1245–1248. [Google Scholar] - Ashhab, M. S.; Salaymeh, A. Optimization of hot-wire thermal flow sensor based on a neural net model. Applied Thermal Engineering
**2006**, 26, 948–955. [Google Scholar] - Ciminski, A. S. Neural network based adaptable control method for linearization of high power amplifiers. International Journal of Electronics and Communications
**2005**, 59, 239–243. [Google Scholar] - Capitán, V. L. F.; Arroyo, G. E.; Fernández, R. M. D.; Cuadros, R. L. Logit Linearization of analytical response curves in optical disposable sensors based on coextraction for monovalent anions. Analytical Chemical
**2006**, 561, 156–163. [Google Scholar] - Hutchins, M. A. Twenty-First Century Calibration. The Institution of Electrical Engineers the IEE. Savoy Place, London WC2R OBL, UK
**1996**, 1–6. [Google Scholar] - Dack, P. So What Does Industry Want From Calibration? The Institution of Electrical Engineers. by the IEE, Savoy Place, London WCZR OBL, UK.
**1995**, 1–6. [Google Scholar] - Williams, T. A. Quality Management: Quality Spending Nears $3B Mark. Quality Magazine: Improving your Manufacturing Process
**2004**. [Google Scholar] - Robins, M. Quality Management: $4.4 Billion and Growing. Quality Magazine: Improving your Manufacturing Process
**2005**. [Google Scholar] - King, D.; Lyons, W. B.; Flanagan, C.; Lewis, E. An Optical-Fiber Sensor for Use in Water Systems Utilizing Digital Signal Processing Techniques and Artificial Neural Network Pattern Recognition. IEEE Sensor Journal
**2004**, 4, 21–27. [Google Scholar] - Zhao, S.; Li, B.; Yuan, J.; Zhao, D. Key Elements Extraction Based On Particle Feature and RBFNN In New Meter Calibration Method. IEEE International Conference on Industrial Technology
**2005**, 795–798. [Google Scholar] - Zhang, G.; Saw, S.; Liu, J.; Sterrantino, S.; Johnson, D. K.; Jung, S. An Accurate Current Source With On-Chip Self-Calibration Circuits for Low-Voltage Current-Mode Differential Drivers. IEEE Transactions on Circuits And Systems
**2006**, 53, 40–47. [Google Scholar] - Kikuchi, S.; Tsuji, H.; Sano, A. Autocalibration Algorithm for Robust Capon Beamforming. IEEE Antennas and Wireless Propagation Letters
**2006**, 5, 251–255. [Google Scholar] - Depari, A.; Flammini, A.; Marioli, D.; Taroni, A. Application of an ANFIS Algorithm to Sensor Data Processing. IEEE Transactions on Instrumentation and Measurement
**2007**, 56, 75–79. [Google Scholar] - Depari, A.; Ferrari, P.; Ferrari, V.; Flammini, A.; Ghisla, A.; Marioli, D.; Taroni, A. Digital Signal Processing for Biaxial Position Measurement With a Pyroelectric Sensor Array. IEEE Transactions on Instrumentation and Measurement
**2006**, 55, 501–506. [Google Scholar] - Hooshmand, R. A.; Joorabian, M. Design and optimisation of electromagnetic flowmeter for conductive liquids and its calibration based on neural networks. IEE Proceedings Science Measurements and Technologies
**2006**, 153, 139–146. [Google Scholar] - Abu, N. K.; Lonsmann, J. J. I. Calibration of a Sensor Array (an Electronic Tongue) for Identification and Quantification of Odorants from Livestock Buildings. Sensors
**2007**, 7, 103–128. [Google Scholar] - Patra, J. C.; Luang, A.; Meher, P. K. A Novel Neural Network-based Linearization and Auto-compensation Technique for Sensors. ISCAS
**2006**, 1167–1170. [Google Scholar] - Ji, T.; Pang, Q.; Liu, X. An Intelligent Pressure Sensor Using Rough Set Neural Networks. IEEE International Conference on Information Acquisition
**2006**, 717–721. [Google Scholar] - Hang, T. M.; Howard, D. B.; Mark, B. Neural Net Work Design.; PWS Publishing Company: Beijing, 2002; pp. 12.1–12.27. [Google Scholar]
- Tai, C. C.; Da, J. H. Acceleration of Levenberg-Marquardt Training of Neural Network with Variable Decay Rate. IEEE Proceeding of the International Joint Conference on Neural Networks
**2003**, 1873–1878. [Google Scholar] - Syed, M. A.; Tahseen, A. J.; Ardil, C. Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rates Forecasting. Transactions on Engineering Computing and Technology, World Enformatika Society
**2004**, 316–321. [Google Scholar] - Van der Horn, G.; Huijsing, J. L. Integrated Smart Sensors Design and Calibration.; Kluwer Academic Publisher: Boston, MA, 1998. [Google Scholar]
- Fouad, L. K.; Van der Horn, G.; Huijsing, J. L. A Noniterative Polynomial 2-D Calibration Method Implemented in a Microcontroller. IEEE Transactions on Instrumentation and Measurement
**1997**, 46, 752–756. [Google Scholar] - Maruyama, M.; Girosi, F.; Poggio, T. A. connection between GRBF and MLP. Technical Memo. AIM-1291. Massachusetts Institute of Technology, Artificial Intelligence Laboratory
**1992**, 1–37. [Google Scholar] - Hu, Y. H.; Hwang, J. N. Handbook of Neural Network Signal Processing; CRC Press: Washington D.C., 2002. [Google Scholar]
- Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice Hall: Upper Saddle River, NJ, 1999. [Google Scholar]
- Dobelin, E. Measurement Systems Application and Design; Mc Graw Hill: New York, 2004. [Google Scholar]
- Ronald, E. W.; Raymond, H. M.; Sharon, L.M. Probability and Statistics for Engineers and Scientist.; Prentice Hall: Upper Saddle River NJ, 1999. [Google Scholar]
- ISO Guide to the expression of uncertainty in measurement; ISO Publishing, 1995.
- NIST. Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results.
**1994**. [Google Scholar] - ISO International vocabulary of basic and general terms in metrology, Second Edition ed; ISO Publishing, 1993.
- Hernandez, W. Improving the Response of a Rollover Sensor Placed in a Car under Performance Tests by Using a RLS Lattice Algorithm. Sensors
**2005**, 5, 613–632. [Google Scholar]

**Figure 3.** Performance of the ANN self-calibration. a) With five calibration points, b) with six calibration points, c) with seven calibration points and d) with eight calibration points.

**Figure 6.**ANN performance with five calibration points. a) Output ANN signal. b) ANN output relative error.

| Algorithm | No. of cycles | Mean square error (MSE) |
|---|---|---|
| Backpropagation | 500 | 0.0576 |
| Backpropagation with momentum | 500 | 0.0372 |
| Levenberg-Marquardt | 22 | 1.986e-16 |

| Method | Operations, N=5 | N=6 | N=7 | N=8 | Time (μs), N=5 | N=6 | N=7 | N=8 |
|---|---|---|---|---|---|---|---|---|
| Piecewise | 80 | 102 | 112 | 128 | 4915 | 5898 | 6881 | 7566 |
| Polynomial | 101 | 162 | 245 | 352 | 5609 | 8778 | 12974 | 18287 |
| ANN | 28 (all N) | | | | 3523 (all N) | | | |

| Number of calibration points | Training time per cycle, 20 MHz clock | 40 MHz clock |
|---|---|---|
| N=5 | 0.69 s | 0.36 s |
| N=6 | 0.74 s | 0.38 s |
| N=7 | 0.79 s | 0.40 s |
| N=8 | 0.84 s | 0.43 s |

© 2007 by MDPI ( http://www.mdpi.org). Reproduction is permitted for noncommercial purposes.

## Share and Cite

**MDPI and ACS Style**

Rivera, J.; Carrillo, M.; Chacón, M.; Herrera, G.; Bojorquez, G.
Self-Calibration and Optimal Response in Intelligent Sensors Design Based on Artificial Neural Networks. *Sensors* **2007**, *7*, 1509-1529.
https://doi.org/10.3390/s7081509
