Article

Deep Learning-Enabled De-Noising of Fiber Bragg Grating-Based Glucose Sensor: Improving Sensing Accuracy of Experimental Data

1. Amity Institute of Nanotechnology, Amity University, Noida 201301, Uttar Pradesh, India
2. Physics Division, Department of Applied Sciences, National Institute of Technology Delhi, GT Karnal Road, Delhi 110036, India
3. Department of Computer Science & Engineering, Indian Institute of Technology Patna, Bihta, Patna 801106, India
4. Department of Computer Science and Engineering, National Institute of Technology Delhi, GT Karnal Road, Delhi 110036, India
5. Department of Electronics & Communication Engineering, National Institute of Technology Delhi, GT Karnal Road, Delhi 110036, India
6. Department of Electronics & Communication Engineering, MNNIT Allahabad, Prayagraj 211004, Uttar Pradesh, India
7. CICECO—Aveiro Institute of Materials, Department of Physics, University of Aveiro, 3810-193 Aveiro, Portugal
8. Department of Physics, VSB—Technical University of Ostrava, 70800 Ostrava, Czech Republic
* Author to whom correspondence should be addressed.
Current address: Space and Resilient Communications and Systems, Centre Tecnològic de Telecomunicacions de Catalunya, 08860 Barcelona, Spain.
Photonics 2024, 11(11), 1058; https://doi.org/10.3390/photonics11111058
Submission received: 15 September 2024 / Revised: 27 October 2024 / Accepted: 1 November 2024 / Published: 12 November 2024
(This article belongs to the Special Issue Optical Fiber Sensors: Recent Progress and Future Prospects)

Abstract

This paper outlines the successful utilization of deep learning (DL) techniques to elevate data quality for assessing Au-TFBG (tilted fiber Bragg grating) sensor performance. Our approach involves a well-structured DL-assisted framework integrating a hierarchical composite attention mechanism. To mitigate the high variability in the experimental data, we initially employ a seasonal decomposition using moving averages (SDMA) statistical model to filter out redundant data points. Subsequently, sequential DL models extrapolate the normalized transmittance (Tn) vs. wavelength spectra, with promising results from our SpecExLSTM model. Furthermore, we introduce the AttentiveSpecExLSTM model, integrating a composite attention mechanism to improve Tn sequence prediction accuracy. Evaluation metrics demonstrate its superior performance, including a root mean square error of 1.73 ± 0.05 × 10−2, a mean absolute error of 1.20 ± 0.04 × 10−2, and a symmetric mean absolute percentage error of 2.22 ± 0.05 × 10−2, among others. Additionally, our novel minima difference (Min-dif.) metric achieves a value of 1.08 ± 0.46, quantifying the deviation in the wavelength of the global minimum within the Tn sequence. The composite attention mechanism in the AttentiveSpecExLSTM adeptly captures both high-level and low-level dependencies, refining the model's comprehension and guiding informed decisions. Hierarchical dot and additive attention within this model enable nuanced attention refinement across model layers; dot attention focuses on high-level dependencies, while additive attention fine-tunes its focus on low-level dependencies within the sequence. This strategy enables accurate estimation of the spectral width (full width at half maximum) of the Tn curve, surpassing what the raw data alone can support. These findings contribute significantly to data quality enhancement and sensor performance analysis. Insights from this study hold promise for future sensor applications, enhancing sensitivity and accuracy by improving experimental data quality and sensor performance assessment.

1. Introduction

In recent years, fiber optic sensors (FOS) have gained significant attention due to their exceptional sensitivity, reliability, and immunity to electromagnetic interference. These sensors have been extensively employed in various fields, such as telecommunications, aerospace, and structural health monitoring, where accurate measurement of physical parameters is critical [1,2,3,4]. Among different types of FOS, the fiber Bragg grating (FBG) sensor has emerged as a promising technology for precise and distributed sensing applications.
Tilted FBG (TFBG) sensors have been used to measure various physical or environmental parameters, such as temperature, strain, pressure, and refractive index (RI) [5,6,7,8]. The tilted grating structure in a TFBG sensor induces coupling between the fundamental core mode and cladding modes of the fiber, leading to higher sensitivity than conventional FBG sensors. By measuring the shift in Bragg wavelength caused by changes in surrounding physical/environmental parameters, a TFBG sensor can accurately measure the desired parameter [9].
One key parameter that plays a crucial role in FBG-based sensing is the spectral width of the FBG spectrum, which determines the resolution and precision of the sensor system [10]. However, precise estimation of the spectral width presents a significant challenge due to several factors, including sensor properties, environmental noise, and the limitations of conventional data processing techniques.
Recent advancements in artificial intelligence (AI) have significantly bolstered the accuracy and robustness of TFBG sensor measurements, with notable progress in machine learning (ML) and deep learning (DL) algorithms. ML employs algorithms and statistical models for task execution based on patterns and inference. DL, which is a subset of ML, employs multi-layered neural networks to discern intricate patterns in the given data. Notably, ML has been pivotal in advancing photonic and optical fiber-based devices, surmounting challenges like processing speed constraints, noise reduction over long fiber lengths, data management, nanophotonic meta-surface generation, and addressing cross-sensitivity issues [11,12,13].
In a recent study, unsupervised ML techniques, specifically k-means clustering and principal component analysis (PCA), were applied for signal analysis in plasmonic-based sensors coated with Au-Pd polymers [14]. Another study proposed the use of PCA as a data analysis technique for monitoring multiple parameters, identifying correlations, and reducing dimensionality in aquaculture data analysis [15]. Furthermore, a study introduced a FBG temperature sensor array and employed various ML algorithms for fluid level detection, achieving a root mean square error (RMSE) of 3.56 cm [16]. Avellar et al. presented a transmission–reflection analysis system that utilized dielectric nanoparticles (NPs)-doped fibers and AI to achieve high spatial resolution in distributed sensing [17]. ML techniques enable the analysis and optimization of micro-structured film designs, including v-cut, lenticular shapes, and patterned holes, to transform light distribution, enhance efficiency, and maintain a low unified glare rating (UGR) in LEDs [18]. ML algorithms can also facilitate accurate data analysis for high-performance fuel gauging sensors [19]. In the realm of fiber-optic biosensors, ML techniques enhance accuracy and reliability by extracting information from spectra, enabling sensitive detection of cortisol levels [20]. Several studies have demonstrated the potential of generative ML algorithms in optimizing and developing novel grating structures [21,22].
For optimizing and enhancing the performance of FBG sensors, ML algorithms have been explored in several studies. These advancements aim to improve the estimation of measurands based on the optical characteristics of the signal. Various regression algorithms, such as decision tree, random forest, K-nearest neighbour (KNN), and Gaussian process regression (GPR) models, have been investigated to optimize the estimation of measurands [23]. Additionally, an optimization method based on the nondominated sorting genetic algorithm II (NSGA-II) has been proposed to determine the optimum grating parameters for FBG sensors based on application requirements [24]. A demodulation system for FBG sensors based on a long-period fiber grating (LPG) driven by AI techniques was developed, reporting high-precision wavelength interrogation [25].
Focusing on the FBG sensor, this research paper leverages the capability of DL models to enable precise spectral width (full width at half maximum, i.e., FWHM) measurement in FBG sensor spectra through advanced experimental data processing. Specifically, we explore the application of DL algorithms to improve the estimation of FWHM and investigate their performance compared to conventional techniques.
The main contributions of this research are as follows:
  • Development of a novel framework for estimating the FWHM of TFBG-based glucose sensor data: We developed a novel DL-based framework for FWHM estimation in TFBG-based glucose sensor data.
  • Comprehensive evaluation of the proposed approach: Extensive experiments were conducted to rigorously evaluate the performance and applicability of the proposed DL model on real-world TFBG glucose sensor data.
  • Proposing a more robust data analysis method: The sensor data are reformulated using a comprehensive DL-assisted framework. This approach addresses the high variability in experimental data, leading to more accurate and reliable results.
Overall, this research presents a significant advancement in FOS by proposing a refined methodology for analyzing sensor data. By integrating DL techniques with innovative attention mechanisms and evaluation metrics, it establishes a new standard for enhancing the performance and data quality of the FOS.
This paper is organized as follows: Section 2.1 provides an overview of the collected TFBG sensor data, Section 2.2 introduces the proposed ML framework for precise FWHM estimation, and Section 2.3 provides details of the experimental setup and training. Section 3 then presents the results and discusses the performance of the DL-enabled approach. Finally, Section 4 concludes the work and outlines future research directions. This research combines the strengths of DL algorithms with the inherent advantages of FBG sensors and aims to improve experimental data handling. This paves the way for next-generation systems that demonstrate improved accuracy, reliability, and adaptability across diverse application scenarios.

2. Proposed Methodology

2.1. Data Exploration

The TFBG data collected relate to an experimental study of the Au-TFBG sensor, where Tn values are measured at varying glucose concentration (Cg) values, as presented in Figure 1a [26]. The experiments were conducted over three days, with Cg ranging from 0% to 50% (0%, 1%, 5%, 10%, 20%, 30%, 40%, and 50%), to capture the variability in the transmittance, mostly due to coupling variations between the optical fiber and the light source. A representative subset of the dataset is shown in Table 1.
For each Cg value, at least three measurements were taken per day. The dataset contains wavelength (λ) values ranging from 1500 nm to 1620 nm, with an actual laser resolution of 0.049 nm and an input resolution of 0.06 nm. As a result, the array has dimensions (3 × 2001 × 8 × 2), where the first axis represents the day dimension, the second axis the λ dimension, and the third axis the Cg dimension. The last dimension holds the two values (i.e., λ and Tn). In the upcoming sections, this dataset is called the FWHM-data. A detailed methodology (SM-2) is provided in the Supplementary Materials file.
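The array layout described above can be sketched in a few lines; the container below is a hypothetical, zero-filled stand-in for the measured FWHM-data (the variable names are illustrative), used only to make the dimensions concrete:

```python
import numpy as np

# Hypothetical, zero-filled stand-in for the FWHM-data container:
# 3 days x 2001 wavelength samples x 8 glucose concentrations x 2 values (lambda, Tn).
DAYS, N_LAMBDA, N_CG = 3, 2001, 8
wavelengths = np.linspace(1500.0, 1620.0, N_LAMBDA)  # nm; 0.06 nm spacing
cg_values = [0, 1, 5, 10, 20, 30, 40, 50]            # glucose concentrations (%)

fwhm_data = np.zeros((DAYS, N_LAMBDA, N_CG, 2))
fwhm_data[..., 0] = wavelengths[None, :, None]       # broadcast lambda over days/Cg

assert fwhm_data.shape == (3, 2001, 8, 2)
print(round(wavelengths[1] - wavelengths[0], 4))     # 0.06 nm input resolution
```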

2.2. Proposed Models

It is noteworthy that the variability in experimental data can arise due to factors like coupling fluctuations, environmental changes, and human errors, which may lead to hindrances in the analysis and measurement of the FWHM. To address this issue, we propose a two-step approach to negate the variability effects and improve the FWHM estimations.
Firstly, we utilize a statistical model called seasonal decomposition using moving averages (SDMA) to extract the trend and discard redundant Tn values during FWHM estimation. SDMA deconstructs time series data into their underlying components (trend, seasonal, and residual) by taking a moving average of a fixed window size over the time series. The trend component is derived using an additive model (original series (Y) = trend (T) + seasonality (S) + residuals (R)) and is further processed for extrapolation modeling [27,28], as represented in Figure 1b.
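The trend-extraction idea can be illustrated with a plain centred moving average, which is the core of the SDMA trend component; the sketch below makes that assumption and uses a synthetic dip-plus-ripple series (the authors' exact decomposition routine may differ):

```python
import numpy as np

def moving_average_trend(y, window=50):
    """Centred moving-average trend of an additive series Y = T + S + R.
    A minimal SDMA-style sketch; edge positions are left as NaN."""
    kernel = np.ones(window) / window
    core = np.convolve(y, kernel, mode="valid")
    pad = len(y) - len(core)
    left = pad // 2
    return np.concatenate([np.full(left, np.nan), core,
                           np.full(pad - left, np.nan)])

# Synthetic transmittance dip (trend) plus a fast ripple (the redundant part)
x = np.linspace(0.0, 1.0, 500)
trend_true = 1.0 - np.exp(-((x - 0.5) ** 2) / 0.05)
noisy = trend_true + 0.05 * np.sin(200 * np.pi * x)

trend_est = moving_average_trend(noisy, window=50)
mask = ~np.isnan(trend_est)
err = np.max(np.abs(trend_est[mask] - trend_true[mask]))
print(err < 0.05)  # ripple is averaged out; only a small smoothing bias remains
```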
In the second stage, we use the trend data extracted by SDMA with the proposed sequential modeling approach to extrapolate the Tn spectra for improved FWHM estimation. This approach utilizes a recurrent neural network (RNN) framework, specifically the long short-term memory (LSTM) framework, which learns complex hidden patterns by treating the data sequentially [29,30,31,32]. The LSTM models the regression function in a supervised fashion, using a sequence of λ with day and Cg values to predict a sequence of Tn values (extracted from SDMA).
The proposed models for accomplishing the aforementioned task are as follows.

2.2.1. Spectral Extrapolator LSTM (SpecExLSTM)

The SpecExLSTM is a type of sequential deep belief network (DBN), designed to work in a sequential setting [33,34]. The primary goal of Seq-to-Seq translation here is to map an input sequence of λ, conditioned on the Cg value, into the Tn sequence. To accomplish this, the LSTM serves as an efficient tool for capturing both long-term and short-term dependencies within the data in a sequential manner. The LSTM excels at handling sequential information, where λ is treated sequentially, by retaining and processing information over the spectral dimension, making it apt for sequence extrapolation. SpecExLSTM is equipped with two LSTM layers that perform bidirectional computations (Bi-LSTM) and a single unit for transformation. The Bi-LSTM layers contain 64 and 32 units, respectively. The Bi-LSTM output activations are set to the linear function in the proposed model to provide sparsity to the learnable weight values. In addition, the single transformation unit uses a sigmoid activation function; multiple activation functions were also experimented with, as discussed in the following sections. The time complexity of this model is O(l·n²), where l is the sequence length and n is the number of units.

2.2.2. Spectral Extrapolation with Attention (AttentiveSpecExLSTM)

The AttentiveSpecExLSTM incorporates an additional attention layer component. The proposed model employs the composition of two different types of attention mechanisms: dot attention (Luong attention) and add attention (Bahdanau attention) [31,35]. The introduction of the attention architecture brings several enhancements compared to the SpecExLSTM, including improved context awareness, translation quality between input and output sequences, and interpretability. Additionally, in photonic applications, alternative attention variants with LSTM have demonstrated notably enhanced performance relative to their counterparts without attention [36].
The base architecture of the model is the same as SpecExLSTM, with the additional complexity of attention layers. The dot attention layer is applied after the first Bi-LSTM layer and computes attention from that layer's outputs. The additive attention layer is applied after the second Bi-LSTM layer; it likewise computes attention from the LSTM outputs and then feeds them into a single-unit perceptron layer, as represented in Figure 2. This design implies that the attention mechanism attends to the LSTM output sequence solely based on its own features, without considering any external contextual information. This model has a time complexity of O(2·l²·n²), where l is the sequence length and n denotes the number of units.
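To make the two attention types concrete, the following standalone NumPy sketch contrasts dot (Luong) and additive (Bahdanau) self-attention over a stand-in Bi-LSTM output sequence; the weight matrices, dimensions, and random inputs are illustrative, not the trained model's parameters:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dot_attention(h):
    """Luong-style self-attention over outputs h of shape (l, d):
    scores are plain dot products between sequence positions."""
    scores = h @ h.T                                      # (l, l)
    weights = softmax(scores, axis=-1)
    return weights @ h, weights

def additive_attention(h, Wq, Wk, v):
    """Bahdanau-style self-attention: scores come from a small
    feed-forward network, v^T tanh(Wq q + Wk k)."""
    q, k = h @ Wq, h @ Wk                                 # (l, d_a) each
    scores = np.tanh(q[:, None, :] + k[None, :, :]) @ v   # (l, l)
    weights = softmax(scores, axis=-1)
    return weights @ h, weights

rng = np.random.default_rng(0)
l, d, d_a = 10, 8, 4              # sequence length, feature dim, attention dim
h = rng.normal(size=(l, d))       # stand-in Bi-LSTM output sequence

ctx_dot, w_dot = dot_attention(h)
ctx_add, w_add = additive_attention(h, rng.normal(size=(d, d_a)),
                                    rng.normal(size=(d, d_a)),
                                    rng.normal(size=(d_a,)))

# Each row of an attention map is a probability distribution over positions
assert np.allclose(w_dot.sum(axis=-1), 1.0) and np.allclose(w_add.sum(axis=-1), 1.0)
assert ctx_dot.shape == ctx_add.shape == (l, d)
```

In the composite model, the dot map would be computed from the first Bi-LSTM layer's outputs and the additive map from the second layer's outputs, in sequence.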

2.3. Data Preparation

To ensure data standardization prior to modeling, min–max scaling was applied to the Tn values of each day, resulting in data sampled between the bounds of 0 and 1. Additionally, the spectral width can be estimated from the trend series of the FWHM data. Hence, we performed trend extraction on the FWHM data using the SDMA model. This model employs a rolling average technique to decompose the input series into its constituents: trend, seasonality, and residuals. The trend component was extracted using a window size of 50, and the resulting trend plots are displayed in Figure 1b.
In this sequence, Figure 3 presents the comparison between TFBG data collected on the same day for a subset of Cg values after the application of SDMA. It is evident that while the Tn magnitudes exhibit shifts for each day, the corresponding λ values for the Tn minima remain consistent for the given Cg values. This indicates the repeatability and consistency of the sensor data.
However, calculating FWHM becomes complex due to variations in the damping of the curve and the absence of the curve intersection. Nonetheless, we have addressed these issues using the application of the ML/DL model, discussed further below.
Before applying ML/DL models, the data were partitioned based on Cg values. The Cg values for each day were divided into training, validation, and test sets to ensure unbiased splits across the days. The partition ratio for each day's Cg values across the train, test, and validation sets was 7.0:1.5:1.5, resulting in 15, 6, and 3 Cg values, respectively.
Furthermore, λ and Cg values were standardized, and the days were encoded into a one-hot vector representation. The data were then augmented by breaking them into smaller overlapping sequences using a window size of l = 10 (sequence length) and a stride of 5, following an up-sampling of the training split. Consequently, the inputs were transformed into an array of dimensions (Batch size × l × 5), where the last axis contained values for λ, Cg, and the one-hot vector representation of the days (with the vector length being the total number of days).
Similarly, the outputs, namely, the Tn values, were transformed into an array of dimensions (Batch size × l × 1), with the batch size set to 512 to align with the model requirements.
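The windowing step above (window size l = 10, stride 5) can be sketched as follows, using a stand-in trace for one (day, Cg) combination; the array contents are illustrative:

```python
import numpy as np

def make_windows(features, targets, l=10, stride=5):
    """Split aligned feature/target series into overlapping sub-sequences
    of length l with the given stride (the augmentation step above)."""
    starts = range(0, len(features) - l + 1, stride)
    X = np.stack([features[s:s + l] for s in starts])
    y = np.stack([targets[s:s + l] for s in starts])
    return X, y

n = 100                                           # samples for one (day, Cg) trace
lam = np.linspace(-1.0, 1.0, n)                   # standardized wavelength
cg = np.full(n, 0.3)                              # standardized Cg (illustrative)
day_onehot = np.tile([1.0, 0.0, 0.0], (n, 1))     # day-1 one-hot (3 days)

features = np.column_stack([lam, cg, day_onehot]) # (n, 5): lambda, Cg, one-hot
targets = np.random.rand(n, 1)                    # stand-in Tn values

X, y = make_windows(features, targets)
print(X.shape, y.shape)                           # (19, 10, 5) (19, 10, 1)
```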

2.4. Model Training

The SpecExLSTM and AttentiveSpecExLSTM models were optimized with the mean-squared error (MSE) as the loss function and trained exclusively using the ADAM optimizer with a cyclic learning rate policy. The evaluation of these models utilized six commonly employed metrics: mean absolute error (MAE); root mean square error (RMSE); symmetric mean absolute percentage error (SMAPE); non-linear regression multiple correlation coefficient (R2); percentage of correct direction (PCD); and dynamic time warping similarity (DTW-sim.) [37]. Additionally, we introduced a novel metric, the minima difference (Min-dif.), to assess the accuracy of the models in tracking the minima and measuring the corresponding λ values for the Tn minima. The formulations of these metrics (SM-1) are provided in the Supplementary Materials file.
The Min-dif. metric is pivotal for evaluating the effectiveness of the proposed models in their implicit tracking of the spectral width. In essence, the proposed models acquire a shared representation of Tn sequences with respect to λ sequences, Cg values, and day labels. Their collective goal during inference is to predict Tn sequences, accomplished by interpolating missing Cg values and extrapolating the Tn sequence over absent λ ranges in order to calculate the FWHM of the Tn curve.
For precise FWHM calculation, it is imperative to measure the λ value at the extremum of the Tn curve. Existing evaluation metrics focus on average error, forecast accuracy, goodness of fit, or temporal deviation, but they may not align with the specific requirements of the FWHM calculation. The introduced Min-dif. metric addresses this gap by measuring λ at the Tn extremum in both the predicted and true sequences. This adds a vital dimension to model evaluation, quantifying the model's efficacy in utilizing shared representations for FWHM calculation based on λ, Cg values, and day labels. The metric formulation is straightforward: we begin by identifying the index of the minimum Tn value in both the unprocessed predicted and true sequences (the minima are computed for the unprocessed Tn sequences; the FWHM measurements are performed on the processed sequences as described in Section 3.2; computing minima for the unprocessed Tn sequences yields the same results as computing maxima for the processed Tn sequences). From this index, we extract the corresponding λ values and calculate the absolute difference between the predicted and true λ values. The mathematical representation is as follows:
Let $y$ and $\hat{y}$ be the true and predicted Tn sequences, respectively, and let $\lambda[\cdot]$ denote indexing into the wavelength array; then

$$\text{Min-dif.} = \left|\, \lambda[\arg\min y] - \lambda[\arg\min \hat{y}] \,\right|. \tag{1}$$
In Equation (1), the “argmin” function returns the indices of the minimum value for the given sequence.
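Equation (1) translates directly into code; the Gaussian dips below are synthetic stand-ins for the true and predicted Tn sequences:

```python
import numpy as np

def min_dif(lam, y_true, y_pred):
    """Minima difference (Equation (1)): absolute wavelength gap between
    the global minima of the true and predicted Tn sequences."""
    return abs(lam[np.argmin(y_true)] - lam[np.argmin(y_pred)])

lam = np.linspace(1500.0, 1620.0, 2001)                 # nm, 0.06 nm steps
y_true = 1.0 - np.exp(-((lam - 1550.0) ** 2) / 2.0)     # dip near 1550.0 nm
y_pred = 1.0 - np.exp(-((lam - 1550.3) ** 2) / 2.0)     # dip shifted by 0.3 nm

print(round(min_dif(lam, y_true, y_pred), 2))           # 0.3
```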

3. Results and Discussion

In this section, we present and analyze the results of the tests conducted on the models discussed in the previous sections. The models were evaluated using the test metrics discussed in Section 2.4, with the objective of comparing and selecting the best-performing model. Our aim is to choose a model that consistently performs well across all metrics, displaying low RMSE, MAE, SMAPE, and Min-dif. values, as well as high R2, PCD, and DTW-sim. values.
Table 2 presents the average evaluation metric scores for the top-three performing SpecExLSTM and AttentiveSpecExLSTM models. Based on these metrics, we observe that the AttentiveSpecExLSTM model generally outperformed the SpecExLSTM model. The AttentiveSpecExLSTM model exhibited slightly lower RMSE, MAE, and SMAPE values, indicating greater accuracy in predicting the target values. However, the Min-dif. for the AttentiveSpecExLSTM model was higher than that of the SpecExLSTM model, indicating a larger difference between predicted and actual λ values corresponding to Tn minima. On the other hand, the R2 value for the AttentiveSpecExLSTM model was slightly higher, indicating a better fit to the data. Additionally, the AttentiveSpecExLSTM model had a higher PCD value, suggesting a stronger degree of correlation. Both models had identical DTW-sim. values.
The AttentiveSpecExLSTM model demonstrates superior performance over SpecExLSTM across various evaluation metrics, showcasing enhanced predictive capabilities. However, specific challenges arise, notably in the Min-dif. metric: SpecExLSTM achieves an average score of 0.543 with a deviation of 0.12, whereas AttentiveSpecExLSTM yields an average score of 1.086 with a deviation of 0.469. While the attention mechanism in AttentiveSpecExLSTM contributes to lower RMSE and MAE, it also influences the Tn curve by shifting the minima, reflected in its worse (higher) Min-dif. score. This shift is pivotal, as it directly impacts the FWHM calculation. To improve performance, incorporating Min-dif. directly into the loss function or enhancing the network's awareness of the underlying physics could be explored.
Additionally, upon analyzing the predicted Tn curve of both models, particularly in Figure 4a,c,e compared to Figure 4g,i,k, within the highlighted region marked by a red circle, AttentiveSpecExLSTM exhibits significant improvement by reducing variability. However, accuracy in prediction remains a challenge, evident from the high deviation in R2 and DTW-sim. metrics scores. To further enhance the proposed methodology’s performance, scaling the experimental setup by collecting more data and inducing generalizability could be explored. This approach would expose the model to various Tn curve variations, contributing to improved accuracy.
Overall, the AttentiveSpecExLSTM model exhibited reasonably superior performance across most evaluation metrics, demonstrating its enhanced predictive capabilities compared to the SpecExLSTM model. This improved performance can be attributed to the inclusion of a composite attention mechanism in the AttentiveSpecExLSTM model. This mechanism enhanced the model’s ability to capture both short-range and long-range dependencies within the sequence.
In the next section, we conduct a comprehensive empirical analysis to explore the effects of the attention layer, specifically additive attention and dot product attention.

3.1. Empirical Analysis of Composite Attention Mechanism

This section provides an evaluation and analysis of the proposed composite attention mechanism, AttentiveSpecExLSTM. This model employs a hierarchical attention approach, combining dot product attention in the initial layer and additive attention in the deep layer.
To assess the contribution of each attention mechanism in the composite setup, ablation experiments were conducted with different attention layer configurations. The attention layer was removed from the AttentiveSpecExLSTM model, resulting in two ablated models: L1-Dot Attn. only and L1-Add Attn. only. The former applies dot attention after the first Bi-LSTM layer, while the latter applies additive attention. Evaluation metrics were recorded for these models, and Table 3 presents the average results of the top three best-performing models.
From Table 3, it is evident that the L1-Dot Attn. only model consistently outperformed the other models in terms of the evaluation metrics. This model achieved the lowest MAE (1.28 ± 0.01 × 10−2), SMAPE (2.36 ± 0.01 × 10−2), and Min-dif. (0.60 ± 0.16), together with an R2 of 98.87 ± 0.13 × 10−2, a PCD of 58.86 ± 0.4 × 10−2, and a DTW-sim. of 0.35 ± 0.02. Additionally, the L1-Dot Attn. only model achieved the lowest RMSE (1.87 ± 0.11 × 10−2).
Figure 5a,b depict the activation map of the attention layer for the L1-Dot Attn. only and L1-Add Attn. only models, respectively. The activation map of the L1-Dot Attn. only model indicates a focus on short-range dependencies, with attention peaks concentrated around specific sequence positions. Conversely, the L1-Add Attn. only model exhibits a broader distribution of attention weights across the sequence, capturing long-range dependencies.
Similar patterns were observed in the second layer ablated models (L2-Dot Attn. only and L2-Add Attn. only), as shown in Figure 5c,d. Notably, the L2-Add Attn. only ablated model outperformed the L2-Dot Attn. only ablated model in terms of RMSE, MAE, SMAPE, Min-dif., R2, and PCD, as noted in Table 3.
When both attention layers were employed with the same attention type (dot attention or additive attention) in the initial and deep layers, additive attention consistently yielded better results in the evaluation metrics compared to dot attention. Table 3 illustrates that both Add-Attn. models achieved the minimum scores for evaluation metrics, including RMSE (1.93 ± 0.11 × 10−2), SMAPE (2.49 ± 0.18 × 10−2), R2 value (98.98 ± 0.128 × 10−2), and PCD (60.72 ± 0.68 × 10−2). The activation pattern of the Both Add-Attn. model (refer to Figure 5f) reveals a sparse distribution in the attention map, indicating its ability to model non-linear interactions and capture long-range dependencies.
To capture both short-range and long-range dependencies effectively, a hierarchical composite attention mechanism was experimented with. This mechanism aimed to learn the low-level and global context of the sequence at different stages. Two hierarchical composite attention models were evaluated, and the L1-Dot Attn./L2-Add Attn. (AttentiveSpecExLSTM) model exhibited superior performance compared to the L1-Add Attn./L2-Dot Attn. model. The L1-Dot Attn./L2-Add Attn. model achieved an RMSE of 1.73 ± 0.05 × 10−2, an MAE of 1.20 ± 0.04 × 10−2, an SMAPE of 2.22 ± 0.05 × 10−2, a Min-dif. of 1.086 ± 0.469, an R2 of 99.08 ± 0.12 × 10−2, a PCD of 60.41 ± 2.68 × 10−2, and a DTW-sim. of 0.35 ± 0.053. Figure 5h visually represents this model, illustrating how attention is applied in both the initial and deep layers in a complementary fashion. This behavior is also observed in the Both Dot-Attn., Both Add-Attn., and L1-Add Attn./L2-Dot Attn. models.
Furthermore, attention weights are broadly distributed in the damped region of the sequence, while they concentrate towards a narrow position in the sequence towards the extremes. This consistent behavior was observed across all experiments.
In summary, the AttentiveSpecExLSTM model with the proposed hierarchical composite attention demonstrates its capability to capture fine-grained details across the entire sequence through initial dot product attention. The subsequent additive attention in the deep layer focuses on global dependencies and relevant context, utilizing the information extracted by the previous layers. This hierarchical attention contributes to the model’s superior performance by progressively refining its understanding and making more informed decisions.
The next section applies the AttentiveSpecExLSTM model with the proposed hierarchical composite attention for curve enhancement, aiming to improve the FWHM estimation.

3.2. FWHM Estimation

As discussed in the previous sections, the FWHM-data contains Cg values for three days across a range of λ from 1500 nm to 1620 nm, with a spectral resolution of 0.06 nm. The calculation of the spectral width is challenging due to the high variability in the Tn values for the same Cg and λ value of different days. This is because the Tn curve is not intersected with the half-maxima value in the current λ range for days 2 and 3, whereas the Tn curve is only intersected with the half-maxima value in the current λ range for day 1, as shown in Figure 6.
To address this issue, we employed a two-fold approach: the high variability in the experimental data was first removed with the SDMA model, and the sequential ML/DL model was then used for extrapolation. The proposed AttentiveSpecExLSTM model was selected to extrapolate the λ range beyond the bounds of the training dataset until the Tn curve intersects the half-maximum value, allowing the FWHM of the curve to be calculated and the performance of the Au-TFBG sensor to be quantified.
It was found that only the left bound of λ in the dataset needed to be extended for extrapolation since the Tn curve did not intersect with the half-maxima in this region. The Tn values of λ, ranging from 1450 nm to 1620 nm, were predicted while keeping the resolution the same as the training resolution of 0.06 nm. The predicted Tn values were concatenated with the true Tn values of the un-extrapolated data to calculate the FWHM or spectral width.
To calculate the FWHM value, the predicted scaled Tn values were subtracted from 1 to obtain an inverse plot of the Tn. The spectral width calculation was then performed, which yielded FWHM values as shown in Table 4. Figure 7 shows the spectral width calculation of the experimental Au-TFBG glucose sensor data. The FWHM calculation for day 1 did not require extrapolation, whereas for days 2 and 3, FWHM was calculated after extrapolating the λ left-bound range.
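The inversion-and-half-maximum procedure can be sketched as below, using a synthetic Gaussian dip in place of the predicted Tn curve (the authors' exact spectral-width routine may differ; the linear interpolation at the crossings is an assumption of this sketch):

```python
import numpy as np

def fwhm(lam, t_n):
    """FWHM of an inverted, scaled transmittance dip: invert (1 - Tn) so the
    dip becomes a peak, then measure the width where the peak crosses half
    its maximum, interpolating linearly between samples."""
    peak = 1.0 - t_n
    half = peak.max() / 2.0
    above = peak >= half
    i = np.argmax(above)                          # first sample at/above half max
    j = len(above) - np.argmax(above[::-1]) - 1   # last sample at/above half max
    left = np.interp(half, [peak[i - 1], peak[i]], [lam[i - 1], lam[i]])
    right = np.interp(half, [peak[j + 1], peak[j]], [lam[j + 1], lam[j]])
    return right - left

lam = np.linspace(1450.0, 1620.0, 2834)   # extended lambda range, ~0.06 nm steps
sigma = 5.0                               # nm; Gaussian FWHM = 2*sqrt(2 ln 2)*sigma
t_n = 1.0 - np.exp(-((lam - 1550.0) ** 2) / (2 * sigma ** 2))

print(round(fwhm(lam, t_n), 1))           # ~11.8 nm (= 2.3548 * sigma)
```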

3.3. Quantifying Sensor Performance with Figure of Merit (FOM) Estimation

In this section, we delve deeper into the computation of the FWHM values using the AttentiveSpecExLSTM model. The introduction of the figure of merit (FOM) offers a more nuanced and comprehensive metric for the assessment of the sensor’s performance. This metric serves as a guiding factor in the pursuit of optimizing sensor functionality for a diverse range of enhanced sensing applications. The FOM calculation involves these steps:
  1. For glucose sensing with the TFBG sensor, the sensitivity is defined as the change in resonance wavelength (Δλ) in response to a variation in the glucose concentration (ΔCg):
Sensitivity = Δλ/ΔCg.
The refractive index (n) of the solution is influenced by the glucose concentration (Cg), which can be expressed as
n = n0 + k·Cg,
where n0 is the refractive index of the pure solvent and k is a proportionality constant. As the glucose concentration increases, the refractive index also increases, causing a corresponding shift in the resonance wavelength of the TFBG sensor. Thus, the sensor's sensitivity to glucose concentration is intrinsically linked to its response to changes in the refractive index, and it can equivalently be expressed as
Sensitivity = Δλ/Δn.
  2. Estimation of the FWHM, as discussed in Section 3.2, which is vital for the FOM calculation.
  3. Computation of the FOM as the ratio of the sensitivity to the FWHM:
FOM = Sensitivity × (1/FWHM).
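The steps above reduce to a short computation. The sketch below uses made-up illustrative numbers (the shift, index change, and spectral width are not values from the experiment), and the function name is our own:

```python
def figure_of_merit(delta_lambda_nm, delta_n, fwhm_nm):
    """FOM = sensitivity / FWHM, where sensitivity is the resonance-
    wavelength shift per unit change in refractive index (nm/RIU)."""
    sensitivity = delta_lambda_nm / delta_n
    return sensitivity / fwhm_nm

# Illustrative numbers only: a 0.5 nm shift for a 0.01 RIU index change,
# with a 60 nm spectral width.
fom = figure_of_merit(delta_lambda_nm=0.5, delta_n=0.01, fwhm_nm=60.0)
```

Because the FWHM sits in the denominator, any error in the spectral-width estimate propagates directly into the FOM, which is why the extrapolation-based FWHM measurement matters.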
As a point of reference, we performed FOM computations on the initial FWHM dataset, which contained high variability in the data points. This involved empirical FWHM measurements, followed by sensitivity and FOM calculations. The calculated FOM values provide insights into the sensor's performance across different days. On day 1, the average FOM was 0.012, with a standard deviation of 0.0038; day 2 showed a slightly higher average of 0.0131, with a standard deviation of 0.0035; and day 3 had an average FOM of 0.0106, with a narrower standard deviation of 0.001. These variations in FOM values across days could be attributed to various factors; notably, the empirical measurement of the FWHM from the high-variability data also leads to inaccurate FOM calculations, as shown in Figure 1a.
However, a significant shift in FOM values was observed when incorporating the AttentiveSpecExLSTM model to reduce the variability of the data and extrapolate for the spectral width measurements. The model helped to mitigate the impact of measurement variability by extracting meaningful patterns from the data. The FOM values on day 1 exhibit an average of 0.0006, with a standard deviation of 4.9 × 10−5; whereas day 2 and day 3 exhibit FOM values with averages of 0.0065 and 0.00652, respectively, and relatively low standard deviations.
This marked improvement in the FOM measurement after the model's implementation underscores the AttentiveSpecExLSTM model's ability to elevate data quality and reduce measurement uncertainty. The model focuses on the relevant spectral features and discards the variability, contributing to more accurate FOM calculations. The decreasing standard deviations further signify improved data consistency and repeatability, bolstering the reliability of the sensor's performance evaluation. Ultimately, this advancement in FOM assessment substantiates the sensor's optimization and attests to the efficacy of the TFBG sensor, guided by the refined insights provided by the AttentiveSpecExLSTM model.

3.4. Comparison of the Proposed Scheme with the Existing Schemes

As mentioned earlier, we have treated the transmittance spectrum as a time series in the wavelength dimension, framing the problem as time-series forecasting (extrapolation). Several reports in the literature operate on similar data types, treating the data as either dependent (sequential) or independent. Dwivedi et al. used Gaussian process regression (GPR) to model the FOM of surface plasmon resonance (SPR) sensors [37]. This study treated the sensor's data as independent, achieving an RMSE of 185.52, an MAE of 78.32, and an R2 of 0.927. In one of our previous studies, we approached the problem as a sequential one, employing RNN-based models to forecast a series of FOM values from the wavelength and the corresponding metal-layer thickness values [38], achieving superior results, with an RMSE of 2.21, an MAE of 0.54, and an R2 of 0.99 on the test dataset. Salmela et al. used an RNN-based model, specifically an LSTM, to predict the temporal and spatial evolution of light waves from the initial conditions of light pulses using simulation data [39]. Despite achieving an RMSE of 0.161, their model struggled with extreme values in the wavelength spectrum and was limited to learning only from the provided simulation data. Liu et al. proposed optimization methods, including cuckoo search and orthogonal least squares, to optimize the architecture of an RNN model for a microwave heating system; despite achieving an RMSE of 0.67 and an MAE of 0.536, their methodology was computationally expensive due to its complexity [40].
In contrast, our proposed methodology preserves the true nature of TFBG’s experimental data while effectively capturing trends and patterns within the transmittance spectrum. This enables us to extrapolate the transmittance spectrum for the enhanced estimation of FWHM. Table 5 presents the comparative scenario as per the above discussion.

4. Conclusions and Future Work

This research effectively employs DL techniques to improve data analysis for quantifying the performance of Au-TFBG-based glucose sensors. We introduced a novel metric, the minima difference (Min-dif.), to evaluate model accuracy in tracking the Tn minima and their corresponding λ values. Additionally, we proposed two new sequential architectures and a hierarchical composite attention mechanism.
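The Min-dif. metric can be sketched in a few lines. This is an illustrative reconstruction from its description (the synthetic spectra, the 1.2 nm shift, and the function name are our own assumptions, not the paper's code):

```python
import numpy as np

def minima_difference(lam, tn_true, tn_pred):
    """Min-dif.: absolute difference (in nm) between the wavelengths at
    which the true and predicted Tn sequences attain their global minima."""
    lam = np.asarray(lam, dtype=float)
    return float(abs(lam[int(np.argmin(tn_true))] - lam[int(np.argmin(tn_pred))]))

# Synthetic example: two Tn dips whose minima are offset by 1.2 nm.
lam = np.linspace(1500.0, 1620.0, 2001)
tn_true = 1.0 - 0.8 * np.exp(-((lam - 1560.0) / 10.0) ** 2)
tn_pred = 1.0 - 0.8 * np.exp(-((lam - 1561.2) / 10.0) ** 2)
min_dif = minima_difference(lam, tn_true, tn_pred)
```

Unlike point-wise errors such as RMSE, this quantity directly measures how well a model locates the resonance dip, which is the feature that carries the sensing information.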
The baseline SpecExLSTM model demonstrated promising performance, achieving an RMSE of 1.75 ± 0.03 × 10−2 and a PCD of 59.12 ± 1.11 × 10−2. The integration of the hierarchical composite attention mechanism in the AttentiveSpecExLSTM model further enhanced prediction accuracy, resulting in an RMSE of 1.73 ± 0.05 × 10−2 and a PCD of 60.41 ± 2.68 × 10−2. This attention mechanism plays a crucial role in capturing both high-level and low-level dependencies, allowing the model to refine its understanding and improve accuracy.
Our methodology uses a two-step approach to address variability in the experimental data: first, SDMA is applied to filter out noise; second, DL/ML models extrapolate the transmittance values with respect to λ. This approach enabled accurate measurement of the spectral width of the Tn curve, which was previously unachievable with the raw data. A notable improvement was achieved, with FOM values of 0.0006 ± 4.9 × 10−5 for day 1 and averages of 0.0065 and 0.00652 for day 2 and day 3, respectively. This underscores the methodology's impact in mitigating variability and rectifying inaccuracies while quantifying the performance of the Au-TFBG glucose sensor.
Our method is designed for high accuracy across various solutions, not just glucose. While specifically tailored to TFBG sensor data, its adaptability extends to datasets with similar attributes. Future work will focus on refining the proposed models by incorporating the physical information of the system into the neural network, along with introducing bias, to fine-tune the trade-off between generalization and the performance of the learning algorithm, as well as overall system performance.
Additionally, we aim to scale experimental data to create a more generalized dataset, further boosting system performance. However, this scaling poses challenges, including increased computational time and cross-sensitivity (e.g., the DL model inadvertently learning sensor-specific behavior from the training data over time with respect to the Tn from the glucose concentration). When dealing with high-resolution concentration values, minor differences in concentrations may lead to overlapping latent representations in the model. This could be addressed by exploring physics-informed neural networks.
For computational efficiency, sequential learning models, such as transformers, offer promise due to their ability to parallelize processes. In regression tasks, transformers can treat decimal points as categorical values, which are then concatenated to form floating-point predictions. This approach could significantly reduce computational overhead and improve prediction precision.
In summary, this research significantly advances data quality and performance in fiber optic sensors through innovative DL and ML techniques, providing valuable insights for future photonic sensor applications.

Supplementary Materials

The supporting information (SM-1 and SM-2) can be downloaded in a single file at: https://www.mdpi.com/article/10.3390/photonics11111058/s1.

Author Contributions

Conceptualization, H.T., Y.S.D., R.S., A.K.S. (Anuj K. Sharma) and C.M.; methodology, H.T.; software, H.T.; validation, Y.S.D., R.S., A.K.S. (Anuj K. Sharma) and C.M.; formal analysis, H.T. and R.S.; investigation, H.T., A.K.S. (Anuj K. Sharma) and N.S.S.; resources, A.K.S. (Anuj K. Sharma) and C.M.; data curation, H.T.; writing—original draft preparation, H.T.; writing—review and editing, H.T., A.K.S. (Anuj K. Sharma), R.S., Y.S.D., R.K., Y.K.P. and C.M.; visualization, H.T.; supervision, A.K.S. (Anuj K. Sharma); project administration, A.K.S. (Ajay Kumar Sharma); funding acquisition, A.K.S. (Anuj K. Sharma) and C.M. All authors have read and agreed to the published version of the manuscript.

Funding

Anuj K. Sharma and Ajay Kumar Sharma received funding from the Department of Science and Technology (India) through research grant No. DST/ICD/BRICS/Call-5/3DBioPhoto/2023 under the BRICS STI Framework Programme. The work of C. Marques was supported by the projects CICECO (LA/P/0006/2020, UIDB/50011/2020 & UIDP/50011/2020) and DigiAqua (PTDC/EEI-EEE/0415/2021), financed by national funds through the Portuguese Science and Technology Foundation/MCTES (FCT I.P.). The research was co-funded by the financial support of the EU under the REFRESH—Research Excellence For REgion Sustainability and High-tech Industries project no. CZ.10.03.01/00/22_003/0000048 via the Operational Programme Just Transition.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

Anuj K. Sharma and Ajay Kumar Sharma acknowledge the Department of Science and Technology (India) for their research grant (No. DST/ICD/BRICS/Call-5/3DBioPhoto/2023) under the BRICS STI Framework Programme.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cibira, G.; Glesk, I.; Dubovan, J. Dynamic Bandwidth Allocation for C-Band Shared FBG Sensing and Telecommunications. IEEE Internet Things J. 2022, 9, 23272–23284.
  2. Hegde, G.; Asokan, S.; Hegde, G. Fiber Bragg grating sensors for Aerospace Applications: A Review. ISSS J. Micro Smart Syst. 2022, 11, 257–275.
  3. Kinet, D.; Mégret, P.; Goossen, K.W.; Qiu, L.; Heider, D.; Caucheteur, C. Fiber Bragg grating sensors toward structural health monitoring in composite materials: Challenges and solutions. Sensors 2014, 14, 7394–7419.
  4. Rohan, R.; Venkadeshwaran, K.; Ranjan, P. Recent advancements of fiber Bragg grating sensors in Biomedical Application: A Review. J. Opt. 2023, 53, 282–293.
  5. Li, X.; Yang, Y.; Zhang, W.; Wang, Z.; Yuan, Y.; Hu, H.; Xu, D. An FBG pressure sensor based on spring-diaphragm elastic structure for ultimate pressure detection. IEEE Sens. J. 2022, 22, 2213–2220.
  6. Ma, K.-P.; Wu, C.-W.; Tsai, Y.-T.; Hsu, Y.-C.; Chiang, C.-C. Internal residual strain measurements in carbon fiber-reinforced polymer laminates curing process using embedded tilted fiber Bragg grating sensor. Polymers 2020, 12, 1479.
  7. Chehura, E.; James, S.W.; Tatam, R.P. Temperature and strain discrimination using a single tilted fibre Bragg grating. Opt. Commun. 2007, 275, 344–347.
  8. Soares, M.; Marques, C. Fiber gratings–based plasmonic sensors. In Plasmonics-Based Optical Sensors and Detectors; Jenny Stanford Publishing: Dubai, United Arab Emirates, 2023; pp. 133–161.
  9. Dong, X.; Zhang, H.; Liu, B.; Miao, Y. Tilted Fiber Bragg gratings: Principle and sensing applications. Photonic Sens. 2010, 1, 6–30.
  10. Hisham, H. Full width-half maximum characteristics of FBG for petroleum sensor applications. Iraqi J. Electr. Electron. Eng. 2020, 16, 99–103.
  11. Mall, A.; Patil, A.; Tamboli, D.; Sethi, A.; Kumar, A. Fast design of plasmonic metasurfaces enabled by deep learning. J. Phys. D Appl. Phys. 2020, 53, 49LT01.
  12. Mall, A.; Patil, A.; Sethi, A.; Kumar, A. A cyclical deep learning based framework for simultaneous inverse and forward design of nanophotonic metasurfaces. Sci. Rep. 2020, 10, 19427.
  13. Genty, G.; Salmela, L.; Dudley, J.M.; Brunner, D.; Kokhanovskiy, A.; Kobtsev, S.; Turitsyn, S.K. Machine Learning and Applications in ultrafast photonics. Nat. Photonics 2020, 15, 91–101.
  14. Leal-Junior, A.; Lopes, G.; Marques, C. Development and analysis of multifeature approaches in SPR sensor development. Photonics 2023, 10, 694.
  15. Silva, L.C.; Lopes, B.; Pontes, M.J.; Blanquet, I.; Segatto, M.E.; Marques, C. Fast decision-making tool for monitoring recirculation aquaculture systems based on a multivariate statistical analysis. Aquaculture 2021, 530, 735931.
  16. Nascimento, K.P.; Frizera-Neto, A.; Marques, C.; Leal-Junior, A.G. Machine learning techniques for liquid level estimation using FBG temperature sensor array. Opt. Fiber Technol. 2021, 65, 102612.
  17. Avellar, L.; Frizera, A.; Rocha, H.; Silveira, M.; Díaz, C.; Blanc, W.; Marques, C.; Leal-Junior, A. Machine learning-based analysis of multiple simultaneous disturbances applied on a transmission-reflection analysis based distributed sensor using a nanoparticle-doped fiber. Photonics Res. 2023, 11, 364–372.
  18. Hsu, K.-F.; Lin, C.-W.; Hwang, J.-M. High efficiency batwing thin-film design for LED flat panel lighting. In Proceedings of the 9th International Microsystems, Packaging, Assembly and Circuits Technology Conference (IMPACT), Taipei, Taiwan, 22–24 October 2014; pp. 480–483.
  19. Marques, A.F.; Pospori, A.; Sáez-Rodríguez, D.; Nielsen, K.; Bang, O.; Webb, D.J. Aviation fuel gauging sensor utilizing multiple diaphragm sensors incorporating polymer optical fiber Bragg gratings. IEEE Sens. J. 2016, 16, 6122–6129.
  20. Soares, M.S.; Silva, L.C.B.; Vidal, M.; Loyez, M.; Facão, M.; Caucheteur, C.; Segatto, M.E.V.; Costa, F.M.; Leitão, C.; Pereira, S.O.; et al. Label-free plasmonic immunosensor for cortisol detection in a D-shaped optical fiber. Biomed. Opt. Express 2022, 13, 3259–3274.
  21. Pospori, A.; Marques, C.A.; Bang, O.; Webb, D.J.; André, P. Polymer optical fiber Bragg grating inscription with a single UV laser pulse. Opt. Express 2017, 25, 9028–9038.
  22. Min, R.; Ortega, B.; Marques, C.A.F. Fabrication of tunable chirped mPOF Bragg gratings using a uniform phase mask. Opt. Express 2018, 26, 4411–4420.
  23. Raju; Kumar, R.; Dhanalakshmi, S. Design and Analysis of Prediction method for FBG based Humidity Sensor. In Proceedings of the 2023 5th International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 23–25 January 2023; pp. 364–370.
  24. Elsayed, Y.; Gabbar, H.A. Enhancing FBG Sensing in the Industrial Application by Optimizing the Grating Parameters Based on NSGA-II. Sensors 2022, 22, 8203.
  25. Pal, D.; Kumar, A.; Gautam, A.; Thangaraj, J. FBG Based Optical Weight Measurement System and Its Performance Enhancement Using Machine Learning. IEEE Sens. J. 2022, 22, 4113–4121.
  26. Marques, C. (Department of Physics and I3N, University of Aveiro, Portugal) provided the TFBG spectra data.
  27. Bandara, K.; Hyndman, R.J.; Bergmeir, C. MSTL: A seasonal-trend decomposition algorithm for time series with multiple seasonal patterns. arXiv 2021, arXiv:2107.13462.
  28. Cleveland, R.B.; Cleveland, W.S.; McRae, J.E.; Terpenning, I. STL: A Seasonal-Trend Decomposition Procedure Based on Loess (with Discussion). J. Off. Stat. 1990, 6, 3–73.
  29. Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306.
  30. Pascanu, R.; Mikolov, T.; Bengio, Y. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 1310–1318.
  31. Cho, K.; van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014.
  32. Guo, T.; Lin, T.; Lu, Y. An interpretable LSTM neural network for autoregressive exogenous model. arXiv 2018, arXiv:1804.05251.
  33. Zhang, Y.; Xiao, F.; Qian, F.; Li, X. VGM-RNN: HRRP Sequence Extrapolation and Recognition Based on a Novel Optimized RNN. IEEE Access 2020, 8, 70071–70081.
  34. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
  35. Luong, T.; Pham, H.; Manning, C.D. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015.
  36. Sakoe, H.; Chiba, S. Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoust. Speech Signal Process. 1978, 26, 43–49.
  37. Dwivedi, Y.S.; Singh, R.; Sharma, A.K.; Sharma, A.K. Enhancing the performance of photonic sensor using machine learning approach. IEEE Sens. J. 2023, 23, 2320–2327.
  38. Tiwari, H.; Dwivedi, Y.S.; Singh, R.; Kaur, B.; Prajapati, Y.K.; Krishna, R.; Singha, N.S.; Sharma, A.K. Exploring Deep Learning Models Aimed at Favorable Optimization and Enhancement of Fiber Optic Sensor's Performance. IEEE Sens. J. 2023, 23, 20330–20337.
  39. Salmela, L.; Tsipinakis, N.; Foi, A.; Billet, C.; Dudley, J.M.; Genty, G. Predicting ultrafast nonlinear dynamics in fibre optics with a recurrent neural network. Nat. Mach. Intell. 2021, 3, 344–354.
  40. Liu, T.; Liang, S.; Xiong, Q.; Wang, K. Integrated CS optimization and OLS for recurrent neural network in modeling microwave thermal process. Neural Comput. Appl. 2019, 32, 12267–12280.
Figure 1. (a) Au-TFBG data for days 1–3 with varying glucose levels; X-axis: λ (in nm); Y-axis: Tn. (b) Reduced variability in Au-TFBG for days 1–3 using SDMA across glucose levels.
Figure 2. High-level representation of the AttentiveSpecExLSTM model for improving spectral-width estimations, where the input sequence represents the wavelength sequence with a one-hot encoded day vector. The model contains four layers with a hierarchical attention architecture.
Figure 3. Relationship between trans. minima and glucose concentration over time. The graph demonstrates a consistent λ (in nm) value corresponding to each trans. minimum, indicating a stable correlation between glucose concentration and the trans. minima. For day 1, the two different sensor readings of trans. minima for a given concentration (shown in subplots (a,b)) highlight the reliability of this parameter in assessing glucose levels. This consistency is similarly observed for day 2 and day 3, as reflected in subplots (c–f), respectively.
Figure 4. SpecExLSTM (af) and AttentiveSpecExLSTM (gl) models’ re-constructed predictions on the test data split. The x-axis in the subplots represents λ in nm.
Figure 5. Attention maps with respect to dynamic changes in the temporal dimension of the ablated models: (a) L1-Dot Attn. only; (b) L1-Add Attn. only; (c) L2-Dot Attn. only; (d) L2-Add Attn. only; (e) Both-Dot Attn.; (f) Both-Add Attn.; (g) L1-Add/L2-Dot Attn.; and (h) L1-Dot/L2-Add Attn. The x-axis in the subplots (a–h) represents λ in nm. The attention maps' x- and y-axes represent the number of attention units. The sub-subfigures (i–iv) in each subfigure show the attention maps for the respective temporal range (highlighted with a red line).
Figure 6. FWHM calculation for day 1 without extrapolation for Cg of 1% (x-axis represents λ in nm).
Figure 7. FWHM measurement after extrapolating the left bound of the λ range (x-axis represents λ in nm).
Table 1. Subset of the gathered dataset showcasing correlations between transmittance, wavelength in nanometers, and concentration for days 1, 2, and 3.
λ (nm)    | Day 1: Conc. 0%, 5%, 20%, 50%  | Day 2: Conc. 0%, 5%, 20%, 50%  | Day 3: Conc. 0%, 5%, 20%, 50%
1500      | 1.000, 0.111, 0.117, 0.119     | 0.253, 0.226, 0.252, 0.236     | 0.681, 0.682, 0.687, 0.687
1500.06   | 0.972, 0.109, 0.113, 0.120     | 0.277, 0.266, 0.254, 0.257     | 0.678, 0.677, 0.683, 0.679
1500.12   | 0.938, 0.109, 0.109, 0.119     | 0.251, 0.286, 0.250, 0.246     | 0.680, 0.677, 0.683, 0.685
…         |                                |                                |
1550.04   | 0.617, 0.451, 0.341, 0.350     | 0.187, 0.200, 0.309, 0.180     | 0.464, 0.535, 0.425, 0.693
1550.1    | 0.615, 0.511, 0.349, 0.385     | 0.187, 0.208, 0.307, 0.217     | 0.478, 0.523, 0.531, 0.695
1550.16   | 0.624, 0.574, 0.397, 0.414     | 0.188, 0.213, 0.299, 0.247     | 0.482, 0.515, 0.617, 0.691
…         |                                |                                |
1619.88   | 0.118, 0.135, 0.154, 0.154     | 0.365, 0.376, 0.386, 0.386     | 0.987, 0.991, 0.995, 0.996
1619.94   | 0.117, 0.130, 0.139, 0.148     | 0.368, 0.376, 0.387, 0.392     | 0.981, 0.984, 0.986, 0.989
1620      | 0.116, 0.131, 0.150, 0.143     | 0.367, 0.377, 0.387, 0.393     | 0.981, 0.979, 0.981, 0.981
Table 2. SpecExLSTM and AttentiveSpecExLSTM performance on evaluation metrics.
Metric         | SpecExLSTM    | AttentiveSpecExLSTM
RMSE (×10−2)   | 1.75 ± 0.03   | 1.73 ± 0.05
MAE (×10−2)    | 1.21 ± 0.01   | 1.20 ± 0.04
SMAPE (×10−2)  | 2.24 ± 0.03   | 2.22 ± 0.05
Min-dif.       | 0.543 ± 0.12  | 1.086 ± 0.469
R2 (×10−2)     | 99.01 ± 0.05  | 99.08 ± 0.12
PCD (×10−2)    | 59.12 ± 1.11  | 60.41 ± 2.68
DTW-sim.       | 0.35 ± 0.016  | 0.35 ± 0.053
Table 3. Ablated model experimental results on evaluation metrics.
Model                      | RMSE (×10−2) | MAE (×10−2) | SMAPE (×10−2) | Min-dif.    | R2 (×10−2)   | PCD (×10−2)  | DTW-Sim.
Both-Add Attn.             | 1.93 ± 0.11  | 1.44 ± 0.03 | 2.49 ± 0.18   | 0.68 ± 0.23 | 98.98 ± 1.28 | 60.72 ± 0.68 | 0.37 ± 0.01
Both-Dot Attn.             | 2.11 ± 0.01  | 1.39 ± 0.05 | 2.58 ± 0.05   | 0.39 ± 0.42 | 98.83 ± 0.01 | 56.24 ± 3.14 | 0.45 ± 0.06
L1-Dot Attn./L2-Add Attn.  | 1.73 ± 0.05  | 1.20 ± 0.04 | 2.22 ± 0.05   | 1.08 ± 0.46 | 99.08 ± 0.12 | 60.41 ± 2.68 | 0.35 ± 0.05
L1-Add Attn./L2-Dot Attn.  | 1.90 ± 0.04  | 1.35 ± 0.07 | 2.52 ± 0.16   | 0.25 ± 0.05 | 98.81 ± 0.05 | 56.71 ± 3.91 | 0.33 ± 0.04
L1-Dot Attn. only          | 1.89 ± 0.02  | 1.28 ± 0.01 | 2.36 ± 0.01   | 0.60 ± 0.16 | 98.87 ± 0.13 | 58.86 ± 0.40 | 0.35 ± 0.02
L1-Add Attn. only          | 1.87 ± 0.11  | 1.31 ± 0.06 | 2.43 ± 0.12   | 0.79 ± 0.25 | 98.85 ± 0.17 | 60.26 ± 0.89 | 0.34 ± 0.03
L2-Dot Attn. only          | 2.09 ± 0.10  | 1.40 ± 0.06 | 2.58 ± 0.07   | 0.84 ± 0.68 | 98.58 ± 0.28 | 48.83 ± 2.53 | 0.44 ± 0.06
L2-Add Attn. only          | 1.80 ± 0.15  | 1.24 ± 0.06 | 2.25 ± 0.02   | 0.77 ± 0.34 | 99.15 ± 0.09 | 59.29 ± 0.56 | 0.41 ± 0.08
Table 4. Measured FWHM values. The AttentiveSpecExLSTM model is utilized for λ extrapolation only for day 2 and day 3.
Conc. (%) | Day 1 | Day 2 | Day 3
0         | 65.33 | 77.58 | 79.31
1         | 61.74 | 79.38 | 81.18
5         | 59.76 | 87.95 | 86.81
10        | 56.82 | 85.56 | 84.53
20        | 56.82 | 83.64 | 80.52
30        | 55.07 | 74.16 | 74.52
40        | 55.44 | 75.30 | 75.95
50        | 55.50 | 68.88 | 72.11
Table 5. Comparative table of algorithms/methodologies applied to similar data types.
Reference           | Data Type                                                                                      | Methodology                                              | RMSE   | R2    | MAE
Dwivedi et al. [37] | Optical sensor's FOM data varying with λ and metal-layer thickness                             | Gaussian process regression                              | 185.52 | 0.927 | 78.32
Tiwari et al. [38]  | SPR-based optical fiber sensor data, i.e., FOM, λ, and metal-layer thickness                   | Recurrent neural network                                 | 2.21   | 0.99  | 0.54
Salmela et al. [39] | Ensembled numerical simulation data encompassing 3000 simulations and 2900 realizations        | Recurrent neural network                                 | 0.161  | –     | –
Liu et al. [40]     | Modeled microwave power and conveyor-speed parameters augmented by an optical fiber temperature sensor | Recurrent neural network                          | 0.67   | –     | 0.536
This study          | Transmittance spectra corresponding to Au-TFBG sensors for diverse glucose concentrations      | Hierarchical composite attention recurrent neural network | 1.73   | 0.99  | 1.20
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
