This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Artificial neural network (ANN) based prediction of the response of a microbend fiber optic sensor is presented. To the best of our knowledge, no similar work has previously been reported in the literature. The microbend sensor used parallel corrugated plates with three deformation cycles, a 6 mm thick spacer material and a 16 mm mechanical periodicity between deformations. Multilayer Perceptron (MLP) networks with different training algorithms, a Radial Basis Function (RBF) network and a General Regression Neural Network (GRNN) are used as the ANN models in this work. All of these models predict the sensor responses with acceptably small errors. RBF has the best performance, with the smallest mean square error (MSE) on both training and test data. Among the MLP training algorithms, Levenberg-Marquardt performs best, and GRNN also gives good results. Since these models successfully predict the sensor responses, ANNs can serve as a useful tool in the design of more robust fiber optic sensors.

Fiber optic sensors offer several advantages compared to conventional sensors. First, fiber optic sensors can easily be integrated into structures due to their small size and cylindrical geometry, forming what is called smart structures. They are also all-dielectric and immune to electromagnetic interference. Moreover, they are lightweight, robust and have large bandwidths. In addition to these advantages, improvements and cost reductions in fiber optic technology have also stimulated interest in fiber optic sensors in the last decade.

Microbend sensors were among the earliest fiber optic sensors developed and have been used to sense many parameters since the 1980s. Typical applications include sensing of acoustic pressure and displacement [

When a fiber is subjected to small deformations (microbends), light rays in the core of the fiber exceed the critical angle. This causes a redistribution of energy between core and cladding modes: the guided higher-order core modes couple to the cladding modes, causing the optical power propagating in the core to decrease. This mode coupling can be achieved by placing the fiber in contact with a series of periodically positioned deformers. Hence, microbending causes the transmitted light intensity to decrease due to light leakage into the cladding. By monitoring and correlating the loss of light intensity, different types of microbend sensors can be designed [

Artificial Neural Networks (ANNs) have become a popular artificial intelligence technique due to their ability to learn and fast real-time operation. They have been extensively investigated as a computational paradigm in which a deterministic description of the computation is either difficult to identify or too complex [

Researchers have also used ANNs in fiber optic sensor design in the following areas: recovery of information about strain and temperature from fiber optic sensors [

In this paper, we have used the normalized intensity as a function of applied force obtained from a microbend sensor to predict the response of the sensor by using ANNs since these can generate appropriate outputs for given inputs without any necessity for mathematical formulations between input and output data. To the best of our knowledge, no similar work has been presented in the literature.

Artificial Neural Networks (ANNs) are built from neuron-like processing elements that try to mimic simple nervous systems. An artificial neuron model comprises input(s) with weight(s), activation function(s) and output(s). The weights are adjusted until the desired output is generated for a given input. Several types of activation functions can be used in applications; hyperbolic tangent and sigmoidal functions are among those most commonly used. ANNs are mainly used for classification, function approximation, clustering and regression.
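The neuron model described above can be sketched as follows. This is a minimal illustration, not the model used in the paper; the input values, weights and bias are arbitrary assumptions:

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: weighted sum of the inputs plus a bias,
    passed through a hyperbolic-tangent activation function."""
    return np.tanh(np.dot(w, x) + b)

# Hypothetical 3-input neuron with hand-picked weights and bias.
x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.1, 0.4, -0.2])   # weights
b = 0.05                          # bias
y = neuron(x, w, b)               # output, a scalar in (-1, 1)
```

Training adjusts `w` and `b` until `y` matches the desired output for each input, which is the learning process the paragraph refers to.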

ANNs have different types of connection schemes. Feed forward neural networks are the most popular model and are used in many applications. They are also called Multi Layer Perceptron (MLP) networks. An MLP generally consists of an input layer, one or more hidden layers and an output layer. The processing units are arranged in layers [

The Radial Basis Function (RBF) network is another model, belonging to a class of hybrid learning networks. Whilst they are essentially three-layer feed forward networks, RBF networks differ from the classical MLP in three significant ways: there is only one set of trainable weights, from the hidden layer to the output layer; the nodes' activation functions are non-standard; and learning is effected by both supervised and unsupervised techniques [
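A minimal sketch of an exact-interpolation RBF network illustrates the point about a single set of trainable weights: one Gaussian centre is placed at each training sample (the unsupervised part here is trivial), and only the hidden-to-output weights are solved for. The force/intensity samples and the width `sigma` are assumed for illustration:

```python
import numpy as np

def rbf_fit(x_train, y_train, sigma=5.0):
    """Solve for the hidden-to-output weights of an RBF network with
    one Gaussian centre per training sample (exact interpolation)."""
    Phi = np.exp(-((x_train[:, None] - x_train[None, :]) ** 2) / (2 * sigma**2))
    return np.linalg.solve(Phi, y_train)

def rbf_predict(x, x_train, w, sigma=5.0):
    """Evaluate the trained RBF network at new inputs x."""
    Phi = np.exp(-((x[:, None] - x_train[None, :]) ** 2) / (2 * sigma**2))
    return Phi @ w

# Illustrative force (N) vs. normalized-intensity samples (assumed values).
F = np.array([4.905, 14.715, 24.525, 44.145, 53.955])
I = np.array([0.997, 0.988, 0.971, 0.945, 0.923])
w = rbf_fit(F, I)
I_hat = rbf_predict(F, F, w)  # reproduces the training targets exactly
```

Because the network interpolates, the training MSE is essentially zero, which is consistent with the very small RBF training MSE reported later in the paper.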

The General Regression Neural Network (GRNN), which is a kind of radial basis network, is a powerful regression tool with a dynamic network structure. The network training speed is extremely fast. It is simple and can be easily implemented [
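The speed and simplicity of GRNN come from the fact that it is essentially kernel (Nadaraya-Watson) regression: the prediction is a Gaussian-weighted average of the stored training targets, so no iterative training is needed. A minimal sketch, with sample values and smoothing parameter `sigma` assumed for illustration:

```python
import numpy as np

def grnn_predict(x, x_train, y_train, sigma=3.0):
    """GRNN prediction: Gaussian-weighted average of the training
    targets; the only free parameter is the smoothing width sigma."""
    w = np.exp(-((x - x_train) ** 2) / (2 * sigma**2))
    return np.sum(w * y_train) / np.sum(w)

# Illustrative force (N) vs. normalized-intensity samples (assumed values).
F = np.array([4.905, 14.715, 24.525, 44.145, 53.955])
I = np.array([0.997, 0.988, 0.971, 0.945, 0.923])
pred = grnn_predict(30.0, F, I)  # prediction for an unseen force value
```

Since the output is a weighted average, predictions always lie within the range of the training targets, a useful property for a monotone sensor response.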

There are many training algorithms in the literature, so it is very difficult to know which training algorithm will be the fastest or more accurate for a given problem, as this depends on several factors, including complexity of the problem, the number of weights and biases in the network, the number of data points in the training set, the error goal and the task the network is to be used for [

The DA is a variant of gradient descent in which the learning rate is allowed to change during the training process; with standard steepest descent, the learning rate is held constant throughout training. The purpose of the RB (resilient backpropagation) method is to eliminate the harmful effects of the magnitudes of the partial derivatives: only the sign of the derivative is used to determine the direction of the weight update, the magnitude of the derivative has no effect, and the size of the weight update is determined by a separate update value. The FRU is a network training function that updates weight and bias values according to conjugate gradient backpropagation with Fletcher-Reeves updates; it has the smallest storage requirements of the conjugate gradient algorithms. The PRU is a network training function that updates weight and bias values according to conjugate gradient backpropagation with Polak-Ribiere updates. The CGB is a network training function that updates weight and bias values according to conjugate gradient backpropagation with Powell-Beale restarts. The SCG, also a member of the conjugate gradient class, is the only conjugate gradient algorithm that requires no line search. The BFGS is a quasi-Newton method that is an alternative to the conjugate gradient methods for fast optimization. The LM is a network training function that updates weight and bias values according to the Levenberg-Marquardt optimization; the LM algorithm was designed to approach second-order training speed without explicitly computing the Hessian matrix [
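To illustrate the LM idea of approaching second-order speed without an exact Hessian, the following sketch fits a two-parameter exponential model with damped Gauss-Newton steps. This is a generic least-squares example on synthetic data, not the sensor model or the toolbox implementation used in the paper:

```python
import numpy as np

def levenberg_marquardt(x, y, p0, n_iter=50):
    """Minimal Levenberg-Marquardt sketch for the model
    y = p[0] * exp(p[1] * x).  The Hessian is approximated by J^T J
    (first derivatives only), and the damping factor mu blends between
    gradient descent (large mu) and Gauss-Newton (small mu)."""
    p = np.array(p0, dtype=float)
    mu = 1e-3
    for _ in range(n_iter):
        f = p[0] * np.exp(p[1] * x)
        r = y - f                                     # residuals
        J = np.column_stack([np.exp(p[1] * x),        # df/dp0
                             p[0] * x * np.exp(p[1] * x)])  # df/dp1
        A = J.T @ J + mu * np.eye(2)                  # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        p_new = p + step
        f_new = p_new[0] * np.exp(p_new[1] * x)
        if np.sum((y - f_new) ** 2) < np.sum(r ** 2):
            p, mu = p_new, mu * 0.5   # accept: trust Gauss-Newton more
        else:
            mu *= 2.0                 # reject: lean towards gradient descent
    return p

# Recover known parameters from noise-free synthetic data.
x = np.linspace(0.0, 5.0, 20)
y = 2.0 * np.exp(-0.3 * x)
p = levenberg_marquardt(x, y, p0=[1.0, -0.1])
```

The adaptive damping is what lets LM combine the robustness of steepest descent far from the optimum with near-quadratic convergence close to it.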

The geometry of the sensing region of a microbend sensor is shown in the figure, where Λ_s denotes the mechanical periodicity of the deformer plates.

Intensity modulation caused by microbending in multimode fibers can be exploited as a transduction mechanism for detecting environmental changes. The change in light transmission, ΔT, produced by an applied force ΔF can be written as ΔT = (∂T/∂X) ΔF/(k_s + k_f), where ∂T/∂X is the change in transmission per unit deformation, k_s is the spring constant of the spacer material (proportional to its Young's modulus E_s), and k_f is the spring constant of the fiber.
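A minimal numeric sketch of the spring-constant picture of a microbend sensor: the applied force displaces the deformer plates by ΔX = ΔF/(k_s + k_f), and the transmission changes by (∂T/∂X)·ΔX. All numbers below are illustrative assumptions, not measured values from this sensor:

```python
# Assumed spring constants and sensitivity (illustrative values only).
ks = 5.0e4     # spacer spring constant (N/m), proportional to E_s
kf = 1.0e4     # fiber spring constant (N/m)
dT_dX = -2.0   # transmission change per metre of deformation (1/m)

dF = 9.81             # applied force (N), roughly a 1 kg load
dX = dF / (ks + kf)   # plate displacement (m)
dT = dT_dX * dX       # change in normalized transmission (negative: loss)
```

The sign of `dT_dX` is negative because microbending couples light out of the core, so transmission falls as the force increases.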

From this relation, the transmission change is proportional to the applied force, so the sensor response can be characterized by measuring intensity as a function of force.

To understand the behaviour of fiber optic microbend sensors, an experiment was conducted in which the optical intensity was measured as a function of the force applied to the microbend deformer set. The experimental setup consisted of a 16 mW solid-state laser with a 650 nm wavelength, a step index multimode silica fiber with a numerical aperture (NA) of 0.37 and core/cladding/jacket diameters of 200/230/500 μm, a microbend deformer and a power meter. A schematic and a picture of the experimental setup are shown in

MLP networks with different training algorithms, an RBF network and a GRNN have been used to estimate the microbend sensor response. The input and output variables of the ANN models are the force (F) and the normalized intensity, respectively. The general structure of the networks proposed in this paper is shown in

F is a known parameter and can be utilized for the prediction of the sensor response. A training dataset consisting of randomly selected F vs. normalized intensity pairs is presented to the network and the network is trained. After the training process, new responses are predicted for unseen F values.

Thirteen samples were used in the network. Eight of these samples were used in the training dataset and the remaining five were reserved for testing. The performance of the algorithms used in the network is compared in terms of their mean square errors (MSEs). The training algorithms and the number of hidden neurons used in the MLP network are given in
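The train/predict workflow described above can be sketched with a simple kernel-regression stand-in for the ANN models: fit on randomly selected (F, intensity) pairs, predict the held-out forces, and score with MSE. The sample values below are assumed for illustration:

```python
import numpy as np

# Illustrative force (N) vs. normalized-intensity samples (assumed values).
F = np.array([4.905, 9.81, 14.715, 24.525, 34.335, 44.145, 53.955])
I = np.array([0.997, 0.992, 0.988, 0.971, 0.958, 0.945, 0.923])

# Random train/test split, as in the paper's procedure.
rng = np.random.default_rng(0)
idx = rng.permutation(len(F))
train, test = idx[:5], idx[5:]

def grnn(x, xt, yt, sigma=6.0):
    """GRNN-style prediction: Gaussian-weighted average of training targets."""
    w = np.exp(-((x[:, None] - xt[None, :]) ** 2) / (2 * sigma**2))
    return (w @ yt) / w.sum(axis=1)

pred = grnn(F[test], F[train], I[train])     # responses for unseen F values
mse = np.mean((pred - I[test]) ** 2)         # test-set mean square error
```

The same split-train-score loop applies unchanged whichever model (MLP, RBF or GRNN) sits in place of the `grnn` stand-in.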

The training MSEs results of RBF and GRNN are given in

The prediction performances of different ANN models proposed in this work have been tested with five experimental data sets obtained from the fiber optic microbend sensor described in the experimental procedure in Section 2. The comparisons of the sensor responses and the ANN model outputs are given in
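The test MSEs behind this comparison can be reproduced directly from the comparison table. The sketch below uses the measured column and the first and last model-output columns of that table (which model each output column corresponds to is not recoverable from the extracted layout, so they are labelled generically here):

```python
import numpy as np

# Values copied from the comparison table: measured normalized intensity
# and two of the tabulated model-output columns (generic labels).
measured = np.array([0.997, 0.988, 0.971, 0.945, 0.923])
model_a  = np.array([0.9956, 0.9800, 0.9687, 0.9507, 0.9263])  # first output column
model_b  = np.array([0.9945, 0.9830, 0.9700, 0.9455, 0.9275])  # last output column

mse_a = np.mean((measured - model_a) ** 2)
mse_b = np.mean((measured - model_b) ** 2)
```

Both columns deviate from the measured response by only a few parts in a thousand, which is what the paper means by the models predicting the sensor response successfully.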

Different ANN approaches have been proposed in this work to predict the response of a fiber optic microbend sensor. First, it can be said that ANN approaches can tolerate measurement errors of fiber optic microbend sensors. Performance comparisons demonstrate that all of the ANN models used in this paper predict the sensor responses with acceptably small errors. RBF has the best performance, with the smallest MSE values on both training and test results. The GRNN and, among the MLP training algorithms, Levenberg-Marquardt also give good results. These models successfully predict the sensor responses. From this work it can be concluded that the proposed technique is a useful tool in the design of more robust fiber optic sensors.

The geometry of the sensing region.

Schematic of experimental setup.

Experimental setup.

The sensor response (normalized intensity) as a function of the applied force F.

General structure of the networks proposed in this work.

Comparisons of the best and the worst ANN outputs with the sensor responses.

The training algorithms and number of hidden neurons in MLP.

| Network | Training algorithm | Number of hidden neurons |
|---|---|---|
| MLP | DA | 6 |
| MLP | RB | 6 |
| MLP | FRU | 5 |
| MLP | PRU | 4 |
| MLP | CGB | 4 |
| MLP | SCG | 3 |
| MLP | BFGS | 5 |
| MLP | LM | 4 |

The MSEs of different training algorithms for MLP.

| Network | Training algorithm | MSE |
|---|---|---|
| MLP | DA | 1.9320E-07 |
| MLP | RB | 1.7005E-07 |
| MLP | FRU | 1.7415E-09 |
| MLP | PRU | 5.9327E-09 |
| MLP | CGB | 1.9657E-07 |
| MLP | SCG | 1.0030E-06 |
| MLP | BFGS | 5.2094E-09 |
| MLP | LM | 8.2732E-16 |

The MSEs of RBF and GRNN.

| Network | MSE |
|---|---|
| RBF | 4.1600E-32 |
| GRNN | 3.1917E-20 |

Comparison of the sensor responses and the ANN outputs.

| F (N) | Measured | DA | RB | FRU | PRU | CGB | SCG | BFGS | LM | RBF | GRNN |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 4.905 | 0.997 | 0.9956 | 0.9944 | 0.9912 | 0.9915 | 0.9912 | 0.9950 | 0.9912 | 0.9917 | 0.9951 | 0.9945 |
| 14.715 | 0.988 | 0.9800 | 0.9839 | 0.9846 | 0.9861 | 0.9859 | 0.9829 | 0.9847 | 0.9862 | 0.9837 | 0.9830 |
| 24.525 | 0.971 | 0.9687 | 0.9649 | 0.9693 | 0.9667 | 0.9673 | 0.9701 | 0.9683 | 0.9664 | 0.9696 | 0.9700 |
| 44.145 | 0.945 | 0.9507 | 0.9452 | 0.9409 | 0.9429 | 0.9444 | 0.9488 | 0.9409 | 0.9424 | 0.9472 | 0.9455 |
| 53.955 | 0.923 | 0.9263 | 0.9261 | 0.9336 | 0.9319 | 0.9294 | 0.9244 | 0.9336 | 0.9321 | 0.9237 | 0.9275 |