Open Access Article
Appl. Sci. 2018, 8(1), 136; https://doi.org/10.3390/app8010136

Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning

1 Science and Technology on Complex Electronic System Simulation Laboratory, Space Engineering University, Beijing 101400, China
2 Network Management Center, CAPF, Beijing 100089, China
3 College of Humanities and Sciences, University of Montana, 32 Campus Dr, Missoula, MT 59812, USA
4 Centre for Instructional Design and Technology, Open University Malaysia, Jalan Tun Ismail, 50480 Kuala Lumpur, Malaysia
* Author to whom correspondence should be addressed.
Received: 11 December 2017 / Revised: 12 January 2018 / Accepted: 16 January 2018 / Published: 18 January 2018
(This article belongs to the Section Computer Science and Electrical Engineering)
Abstract

Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows its preferred reward distribution, the advantages of Pareto optimization can be retained by applying a scalarization algorithm prior to MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game, using Pareto optimization and MORL to determine optimal solutions, and comparing those solutions against different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) to assist network administrators with decision-making, specifically with defense strategy selection. The experimental results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than the linear scalarization or GUESS methods. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
Keywords: Pareto front; zero-sum game; multi-objective optimization; network security
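The two ingredients the abstract combines — Pareto-optimal filtering of multi-objective payoffs and scalarization under a known preference distribution — can be sketched as follows. This is an illustrative Python sketch, not the paper's PDSSS implementation: the payoff vectors and the equal-weight preference are hypothetical, and linear scalarization is shown as the simplest of the three approaches the paper compares (linear, GUESS, STOM).

```python
# Sketch: filter the Pareto front of candidate defense strategies, then
# select one by linear scalarization of the defender's objectives.
# Payoff values below are hypothetical, for illustration only.

def dominates(a, b):
    """a Pareto-dominates b: >= in every objective, > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(payoffs):
    """Keep only payoff vectors not dominated by any other candidate."""
    return [p for p in payoffs
            if not any(dominates(q, p) for q in payoffs if q != p)]

def linear_scalarize(payoff, weights):
    """Weighted sum of objectives; weights encode the defender's preferences."""
    return sum(w * x for w, x in zip(weights, payoff))

# Hypothetical per-strategy payoffs: (security gain, usability gain).
payoffs = [(3, 1), (2, 4), (1, 2), (4, 0)]
front = pareto_front(payoffs)        # (1, 2) is dominated by (2, 4)
weights = (0.5, 0.5)                 # assumed preference distribution
best = max(front, key=lambda p: linear_scalarize(p, weights))
```

With equal weights the scalarized scores of the front are 2.0, 3.0, and 2.0, so `best` is `(2, 4)`; a different preference distribution would select a different Pareto-optimal strategy, which is why the choice of scalarization method matters.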
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Sun, Y.; Li, Y.; Xiong, W.; Yao, Z.; Moniz, K.; Zahir, A. Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning. Appl. Sci. 2018, 8, 136.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Appl. Sci. EISSN 2076-3417. Published by MDPI AG, Basel, Switzerland.