Article

Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, Examined with FSGM

Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 3JD, UK
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Rudd-Orthner, R.N.M.; Mihaylova, L. Non-Random Weight Initialisation in Deep Convolutional Networks Applied to Safety Critical Artificial Intelligence. In Proceedings of the 2020 13th International Conference on Developments in eSystems Engineering (DeSE), Liverpool, UK, 14–17 December 2020.
Academic Editors: Abir Hussain, Dhiya Al-Jumeily, Hissam Tawfik and Panos Liatsis
Sensors 2021, 21(14), 4772; https://doi.org/10.3390/s21144772
Received: 29 May 2021 / Revised: 2 July 2021 / Accepted: 9 July 2021 / Published: 13 July 2021
(This article belongs to the Collection Robotics, Sensors and Industry 4.0)
This paper presents a repeatable and deterministic non-random weight initialization method for the convolutional layers of neural networks, examined with the Fast Gradient Sign Method (FSGM). The FSGM approach is used as a technique to measure the initialization effect under controlled distortions in transferred learning, varying the numerical similarity of the datasets. The focus is on convolutional layers, with earlier learning induced through the use of striped forms for image classification. The method provided higher accuracy in the first epoch, with improvements of between 3% and 5% in a well-known benchmark model, and of ~10% in a color image dataset (MTARSI2) using a dissimilar model architecture. The proposed method is robust in comparison with limit-optimization approaches such as Glorot/Xavier and He initialization. Arguably, the approach belongs to a new category of weight initialization methods: a number-sequence substitution for random numbers, without a tether to the dataset. When examined under the FSGM approach with transferred learning, the proposed method with higher distortions (numerically dissimilar datasets) is less compromised against the original cross-validation dataset, at ~31% accuracy instead of ~9%, which is an indication of higher retention of the original fitting in transferred learning.
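The two ideas in the abstract can be sketched in a few lines: a deterministic initializer that fills convolutional kernels with striped patterns drawn from a fixed number sequence (so every run yields identical weights), and the FSGM perturbation that distorts inputs along the sign of the loss gradient. This is a minimal illustrative sketch, not the authors' exact scheme; the sequence choice and stripe layout here are assumptions.

```python
import numpy as np

def striped_kernels(n_filters, ksize, n_channels):
    """Illustrative deterministic initializer: each kernel is a horizontal
    striped pattern taken from a fixed linear number sequence, so repeated
    runs produce identical weights (repeatable determinism)."""
    seq = np.linspace(-0.5, 0.5, n_filters * ksize)  # deterministic sequence
    kernels = np.zeros((n_filters, n_channels, ksize, ksize))
    for f in range(n_filters):
        stripe = seq[f * ksize:(f + 1) * ksize]          # one row pattern
        kernels[f] = np.tile(stripe, (n_channels, ksize, 1))  # repeat rows
    return kernels

def fsgm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: shift each input element by eps in the
    direction of the sign of the loss gradient, clipped to [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

Because the initializer is a pure function of its shape arguments, two networks built with it start from identical weights, which is what makes first-epoch accuracy comparisons across runs meaningful.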
Keywords: repeatable determinism; weight initialization; convolutional layers; adversarial perturbation attack; FSGM; transferred learning; machine learning; smart sensors

MDPI and ACS Style

Rudd-Orthner, R.N.M.; Mihaylova, L. Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, Examined with FSGM. Sensors 2021, 21, 4772. https://doi.org/10.3390/s21144772

AMA Style

Rudd-Orthner RNM, Mihaylova L. Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, Examined with FSGM. Sensors. 2021; 21(14):4772. https://doi.org/10.3390/s21144772

Chicago/Turabian Style

Rudd-Orthner, Richard N.M., and Lyudmila Mihaylova. 2021. "Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, Examined with FSGM" Sensors 21, no. 14: 4772. https://doi.org/10.3390/s21144772

