Article

Robustness Evaluations of Sustainable Machine Learning Models against Data Poisoning Attacks in the Internet of Things

School of Engineering and Information Technology, University of New South Wales, Canberra 2612, Australia
*
Author to whom correspondence should be addressed.
Sustainability 2020, 12(16), 6434; https://doi.org/10.3390/su12166434
Received: 29 June 2020 / Revised: 27 July 2020 / Accepted: 4 August 2020 / Published: 10 August 2020
(This article belongs to the Special Issue Networked Intelligent Systems for a Sustainable Future)
With the increasing popularity of Internet of Things (IoT) platforms, the cyber security of these platforms is a highly active area of research. One key technology underpinning smart IoT systems is machine learning, which classifies and predicts events from the large-scale data generated in IoT networks. Machine learning is susceptible to cyber attacks, particularly data poisoning attacks, which inject false data while machine learning models are being trained. Data poisoning attacks degrade the performance of machine learning models, and developing trustworthy models that remain resilient and sustainable against such attacks in IoT networks is an ongoing research challenge. We studied the effects of data poisoning attacks on machine learning models, including the gradient boosting machine, random forest, naive Bayes, and feed-forward deep learning, to determine the extent to which the models can be trusted and considered reliable in real-world IoT settings. In the training phase, a label modification function is developed to manipulate legitimate input classes. The function is applied at data poisoning rates of 5%, 10%, 20%, and 30%, allowing the poisoned models to be compared and their performance degradation to be measured. The machine learning models were evaluated using the ToN_IoT and UNSW-NB15 datasets, as they include a wide variety of recent legitimate and attack vectors. The experimental results revealed that model performance, in terms of accuracy and detection rate, degrades if the number of legitimate training observations is not significantly larger than the amount of poisoned data. At a data poisoning rate of 30% or greater on the input data, machine learning performance is significantly degraded.
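The label modification function described in the abstract can be sketched as a simple label-flipping routine applied to a fraction of the training set. The following is a minimal illustration, not the authors' exact implementation: the synthetic dataset, the random forest settings, and the function name `poison_labels` are assumptions introduced here for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def poison_labels(y, rate, n_classes=2, seed=0):
    """Flip the labels of a random `rate` fraction of training samples
    to a different class, simulating a data poisoning attack."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Shift each selected label by a nonzero offset so it always changes class.
    y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, n_classes, size=n_flip)) % n_classes
    return y_poisoned

# Synthetic binary-classification data standing in for IoT network records.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Train one model per poisoning rate and compare test accuracy on clean labels.
for rate in (0.0, 0.05, 0.10, 0.20, 0.30):
    y_poisoned = poison_labels(y_tr, rate)
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_tr, y_poisoned)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"poison rate {rate:.0%}: test accuracy {acc:.3f}")
```

Because only the training labels are poisoned while evaluation uses clean test labels, the printed accuracies show how the model's reliability erodes as the poisoning rate rises, mirroring the comparison performed in the paper at 5%, 10%, 20%, and 30%.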
Keywords: adversarial machine learning; sustainable machine learning; data poisoning; deep learning; Internet of Things

MDPI and ACS Style

Dunn, C.; Moustafa, N.; Turnbull, B. Robustness Evaluations of Sustainable Machine Learning Models against Data Poisoning Attacks in the Internet of Things. Sustainability 2020, 12, 6434. https://doi.org/10.3390/su12166434

