# Convolutional Neural Networks for Challenges in Automated Nuclide Identification


## Abstract


## 1. Introduction

## 2. Spectra for Training and Testing

#### 2.1. Template Spectra

#### 2.2. Test Spectra

## 3. Convolutional Neural Networks

## 4. Methods

#### 4.1. Model Definition and Learning

#### 4.2. Performance Evaluation

## 5. Results and Discussion

#### 5.1. Reference Model

#### 5.2. Reference Model Performance

#### 5.3. Challenging Conditions

#### 5.4. Optimising for Deployment

#### 5.4.1. Data Set Size

#### 5.4.2. Including Real Data

#### 5.4.3. Generalisation

## 6. Conclusions

## 7. Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
|---|---|
| RIID | Radio-Isotope IDentification |
| CNN | Convolutional Neural Network |
| ANN | Artificial Neural Network |
| ROC | Receiver Operator Characteristic |
| TPR | True Positive Rate |
| FPR | False Positive Rate |
| PPR | Perfect Performance Rate |
| AUC | Area Under Curve |

## References


**Figure 1.** Example $4 \times 10^{3}$ count NaI spectrum (7.05(1)% resolution at 662 keV) containing natural background, $^{108m}$Ag, $^{44}$Ti and $^{60}$Co. A 31 keV threshold and a gain shift of ×1.7 are applied. Note the ambiguity in model predictions (see Section 5.2) for $^{207}$Bi, which is extremely similar in profile to $^{44}$Ti.

**Figure 2.** Diagram of the basic test scenario. A 3″ × 3″ NaI detector, biased to +800 V, was placed 1, 5, and 10 cm from the source. The signal was passed through an Ortec 571 amplifier (2 μs shaping time) and digitised with an Ortec Easy-MCA-8K Multi-Channel Analyser (MCA). Metal plates were placed between the source and detector for shielding.

**Figure 3.** Simplified CNN structure for radio-isotope identification. The sequential model consists of several convolution modules, each performing 1D convolution operations to extract relevant features. For classification, the resulting tensor is flattened into a vector to interface with a small, fully-connected ANN. The final output layer provides the probability of each source being present in a gamma spectrum.

**Figure 4.** Representation of the convolution module. An array of intensities represents the spectrum. Two convolution layers train first H and then K filters to emphasise key features in the spectra. Pooling and dropout then reduce the dimensionality and combat over-fitting. For demonstration purposes, the shapes exclude a dimension for batch size. See text for details.
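The convolution-and-pooling operations in the module above can be sketched in plain numpy. This is an illustrative toy, not the authors' implementation: the spectrum values and the single hand-set filter are invented, and 'valid' padding is used for brevity where the reference model uses 'same' padding.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Slide a 1D filter across the spectrum (no padding, for brevity)."""
    n = len(x) - len(kernel) + 1
    return np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])

def max_pool(x, length=2, stride=2):
    """Down-sample by keeping the maximum of each window (length 2, stride 2)."""
    return np.array([x[i:i + length].max() for i in range(0, len(x) - length + 1, stride)])

# Toy 'spectrum' with a single peak; an edge-sensitive filter responds near it.
spectrum = np.array([0.0, 0.0, 1.0, 4.0, 1.0, 0.0, 0.0, 0.0])
edge_filter = np.array([-1.0, 0.0, 1.0])

features = conv1d_valid(spectrum, edge_filter)   # -> [1, 4, 0, -4, -1, 0]
pooled = max_pool(features)                      # -> [4, 0, 0]
```

In the trained model each convolution layer learns many such filters (40 per layer in the reference configuration) rather than using a single hand-set one.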

**Figure 5.** Representation of the classification module. The convolution module output is flattened into a vector, and a small fully-connected ANN makes the final predictions of which sources are present, each as a 0–1 probability through Sigmoid activation. See text for details.
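The flatten → dense → sigmoid flow of the classification module can be sketched in numpy. The shapes and random weights below are assumptions for illustration only; only the data flow matches the figure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Assumed toy shapes: 8 pooled positions x 4 filters, one dense layer,
# and one output node per candidate nuclide (5 here).
feature_maps = rng.standard_normal((8, 4))
flat = feature_maps.reshape(-1)            # flatten: (8, 4) -> (32,)

W_dense = 0.1 * rng.standard_normal((16, 32))
z = W_dense @ flat
hidden = np.where(z > 0, z, 0.1 * z)       # Leaky ReLU, alpha = 0.1

W_out = 0.1 * rng.standard_normal((5, 16))
probs = sigmoid(W_out @ hidden)            # independent 0-1 probability per source
present = probs > 0.5                      # apply a decision threshold
```

Sigmoid (rather than softmax) outputs let several sources be flagged simultaneously, which suits the multi-label source mixtures in the test spectra.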

**Figure 6.** Variation in performance rates as the dense-layer size of the classification module is varied. Bold lines represent the average across all training and test sets. Diminishing returns quickly encourage a compromise, as dense layers rapidly increase in complexity with scale.

**Figure 7.** ROC curves extended to the multi-label case for the reference model, comparing performance for each source as the decision threshold is varied. The average (red) is included in (**a**) as a reference for model performance across all classes. (**b**) shows $^{22}$Na, $^{60}$Co, and $^{44}$Ti as marginally responsible for more false classifications.
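The ROC construction behind such curves can be illustrated with a small numpy example. The scores and labels below are invented for illustration and do not come from the paper's data.

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """(FPR, TPR) pairs for one class as the decision threshold is swept."""
    pts = []
    for t in thresholds:
        pred = scores >= t
        tpr = np.sum(pred & (labels == 1)) / np.sum(labels == 1)
        fpr = np.sum(pred & (labels == 0)) / np.sum(labels == 0)
        pts.append((float(fpr), float(tpr)))
    return pts

labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.6, 0.2, 0.1])   # one confusable pair

points = sorted(roc_points(scores, labels, thresholds=[0.0, 0.5, 1.1]))
# Trapezoidal area under the (FPR, TPR) curve gives the AUC.
auc = sum((f2 - f1) * (t1 + t2) / 2.0
          for (f1, t1), (f2, t2) in zip(points, points[1:]))
```

A perfect classifier would pass through (0, 1) and give an AUC of 1; this toy example, with one confusable pair of scores, gives an AUC of 2/3.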

**Figure 8.** Comparison of performance with the number of training samples. Performance rises sharply as more training spectra are added, but plateaus for these particular data. For more complex data sets, additional training samples are expected to become far more important.

| | Actual Label: 1 | Actual Label: 0 |
|---|---|---|
| Prediction: 1 | True Positive (TP) | False Positive (FP) |
| Prediction: 0 | False Negative (FN) | True Negative (TN) |
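These four outcomes map directly onto the true and false positive rates used later; a minimal sketch, with invented per-label truth and predictions:

```python
import numpy as np

# Invented per-label truth and predictions for a batch (1 = source present).
actual    = np.array([1, 1, 1, 1, 0, 0, 0, 0])
predicted = np.array([1, 1, 1, 0, 1, 0, 0, 0])

tp = int(np.sum((predicted == 1) & (actual == 1)))   # true positives
fp = int(np.sum((predicted == 1) & (actual == 0)))   # false positives
fn = int(np.sum((predicted == 0) & (actual == 1)))   # false negatives
tn = int(np.sum((predicted == 0) & (actual == 0)))   # true negatives

tpr = tp / (tp + fn)   # true positive rate (sensitivity)
fpr = fp / (fp + tn)   # false positive rate
```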

| Parameter | Value |
|---|---|
| Optimiser | Adam |
| Initialisers (kernel, bias) | He normal, zeros |
| Activation function | Leaky ReLU ($\alpha = 0.1$) |
| Batch size | 32 |
| Dense layer nodes | 700 |
| Dropout (spatial, dense) | 0.5, 0.5 |
| Filters per Conv. layer | 40 |
| Filter size | 7 |
| Pooling type | Max pooling (length 2, stride 2) |
| Padding | Same |
| Loss function | Binary cross-entropy |
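The binary cross-entropy loss listed above treats each candidate source as an independent yes/no decision. A small numpy sketch (the example probabilities are invented):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean of one independent binary log-loss term per candidate source."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)   # guard against log(0)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))

y_true = np.array([1.0, 0.0, 1.0])        # sources 1 and 3 present
confident = np.array([0.9, 0.1, 0.8])     # close to the truth -> small loss
poor      = np.array([0.2, 0.9, 0.3])     # far from the truth -> large loss

loss_confident = binary_cross_entropy(y_true, confident)
loss_poor = binary_cross_entropy(y_true, poor)
```

Unlike categorical cross-entropy over a softmax, this loss does not force the per-source probabilities to sum to one, so mixtures of several nuclides can be scored correctly.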

| | Actual: 1 | Actual: 0 |
|---|---|---|
| Predicted: 1 | 96.5(1)% | 3.1(2)% |
| Predicted: 0 | 3.5(1)% | 96.9(2)% |

**Table 4.** Average perfect classification rates across data sets. All models were trained using the reference model architecture. Reference model developed on marked data set (*).

| Stand-Off Data | AUC | Perfect Rate [%] | Shielded Data | AUC | Perfect Rate [%] |
|---|---|---|---|---|---|
| Point, 10 cm * | 0.996(1) | 74.4(9) * | 2 cm Al | 0.994(1) | 73.9(9) |
| Point, 5 cm | 0.995(1) | 73.9(6) | 4 cm Al | 0.986(2) | 59.9(7) |
| Point, 1 cm | 0.991(2) | 66.5(5) | 6 cm Al | 0.951(4) | 34.9(8) |

**Table 5.** Average perfect classification rates for all scenarios. The generalised model is compared with those trained individually on each set. Reference model developed on marked data set (*).

| Test Set | Individual (%) | Generalised (%) |
|---|---|---|
| 1 cm stand-off | 66.5(5) | 72.6(9) |
| 5 cm stand-off | 73.9(6) | 90.5(4) |
| 10 cm stand-off * | 74.4(9) * | 90.9(4) |
| 1 Al plate | 73.9(9) | 88.5(7) |
| 2 Al plates | 59.9(7) | 81.5(6) |
| 3 Al plates | 34.9(8) | 60.4(7) |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Turner, A.N.; Wheldon, C.; Wheldon, T.K.; Gilbert, M.R.; Packer, L.W.; Burns, J.; Freer, M.
Convolutional Neural Networks for Challenges in Automated Nuclide Identification. *Sensors* **2021**, *21*, 5238.
https://doi.org/10.3390/s21155238
