Open Access Article
Conceiving Human Interaction by Visualising Depth Data of Head Pose Changes and Emotion Recognition via Facial Expressions
Computers 2017, 6(3), 25; doi:10.3390/computers6030025
Abstract
Affective computing in general, and human activity and intention analysis in particular, comprise a rapidly growing field of research. Head pose and emotion changes present serious challenges when applied to a player’s training and ludology experience in serious games, to the analysis of customer satisfaction with broadcast and web services, or to the monitoring of a driver’s attention. Given the increasing prominence and utility of depth sensors, it is now feasible to perform large-scale collection of three-dimensional (3D) data for subsequent analysis. Discriminative random regression forests were selected in order to rapidly and accurately estimate head pose changes in an unconstrained environment. To complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. A lightweight data exchange format (JavaScript Object Notation (JSON)) was then employed to manipulate the data extracted from the two aforementioned settings. Motivated by the need to generate comprehensible visual representations from different sets of data, in this paper we introduce a system capable of monitoring human activity through head pose and emotion changes, utilising an affordable 3D sensing technology (the Microsoft Kinect sensor). Full article
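The abstract mentions exchanging the extracted head-pose and emotion data as JSON. Below is a minimal sketch of what one such per-frame record could look like; the field names, units, and value ranges are illustrative assumptions, not the schema used by the authors.

```python
import json

# Hypothetical record combining head-pose and emotion estimates for one frame.
# Field names and value conventions are illustrative assumptions only.
frame_record = {
    "timestamp_ms": 1498732800123,
    "head_pose": {            # Euler angles in degrees, as a depth-based
        "yaw": -12.4,          # regression forest might report them
        "pitch": 5.1,
        "roll": 0.8,
    },
    "emotion": {               # scores for the four dominant expressions
        "happiness": 0.72,
        "anger": 0.03,
        "sadness": 0.05,
        "surprise": 0.20,
    },
}

# Serialise for exchange between the capture and visualisation components.
payload = json.dumps(frame_record)
print(json.loads(payload)["head_pose"]["yaw"])  # -12.4
```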
Open Access Article
BICM-ID with Physical Layer Network Coding in TWR Free Space Optical Communication Links
Computers 2017, 6(3), 24; doi:10.3390/computers6030024
Abstract
Physical layer network coding (PNC) is a promising technique to improve the network throughput in a two-way relay (TWR) channel in which two users exchange messages across a wireless network. The PNC technique incorporating a TWR channel is embraced by a free space optical (FSO) communication link for full utilization of network resources, namely TWR-FSO PNC. In this paper, bit-interleaved coded modulation with iterative decoding (BICM-ID) is adopted to combat the deleterious effect of the turbulence channel by protecting the transmitted message, thereby increasing the reliability of the system. Based on this technique, comparative studies between end-to-end BICM-ID coded, non-iterative convolutional coded, and uncoded systems are carried out. Furthermore, this paper presents extrinsic information transfer (ExIT) charts to evaluate the performance of the BICM-ID code combined with the TWR-FSO PNC system. The simulation results show that the proposed scheme can achieve a significant bit error rate (BER) performance improvement through the introduction of an iterative process between a soft demapper and decoder, and Monte Carlo simulation results are provided to support these findings. The ExIT functions of the two receiver components are then thoroughly analysed for a variety of parameters under the influence of turbulence-induced channel fading, demonstrating the convergence behaviour of BICM-ID and enabling the TWR-FSO PNC system to effectively mitigate the impact of the fading turbulence channel. Full article
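As a rough illustration of the physical-layer network coding idea behind the two-way relay exchange (not the authors' BICM-ID receiver), the sketch below shows how a relay that recovers the XOR of the two users' bits lets each user extract the other's message; modulation, channel coding, and the FSO turbulence channel are all omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Messages from the two end nodes A and B (one bit per symbol slot).
bits_a = rng.integers(0, 2, size=8)
bits_b = rng.integers(0, 2, size=8)

# Multiple-access phase: in ideal PNC the relay maps the superimposed
# signal directly to the XOR of the two messages (network-coded bits),
# rather than decoding A and B separately.
relay_bits = bits_a ^ bits_b

# Broadcast phase: the relay sends relay_bits back to both nodes.
# Each node removes its own contribution to recover the other message.
recovered_b_at_a = relay_bits ^ bits_a
recovered_a_at_b = relay_bits ^ bits_b

assert np.array_equal(recovered_b_at_a, bits_b)
assert np.array_equal(recovered_a_at_b, bits_a)
print("Both messages exchanged in two time slots instead of four.")
```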
Open Access Article
Data Partitioning Technique for Improved Video Prioritization
Computers 2017, 6(3), 23; doi:10.3390/computers6030023
Abstract
A compressed video bitstream can be partitioned according to the coding priority of the data, allowing prioritized wireless communication or selective dropping in a congested channel; in the H.264/Advanced Video Coding (AVC) codec this is known as data partitioning. This paper introduces a further sub-partition of one of the H.264/AVC codec’s three data partitions. Results show a 5 dB improvement in Peak Signal-to-Noise Ratio (PSNR) through this innovation. In particular, the data partition containing intra-coded residuals is sub-divided into data from those macroblocks (MBs) that are naturally intra-coded and those MBs forcibly inserted for non-periodic intra-refresh. Interactive user-to-user video streaming can benefit, since in that setting HTTP adaptive streaming is inappropriate and the High Efficiency Video Coding (HEVC) codec is too energy demanding. Full article
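A minimal sketch of the sub-partitioning idea: splitting the intra-coded residual data by whether each macroblock was naturally intra-coded or force-inserted for intra-refresh. The MB records and the split rule are assumptions for illustration, not the H.264/AVC syntax or the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Macroblock:
    index: int
    intra: bool           # carries intra-coded residuals?
    forced_refresh: bool  # inserted for non-periodic intra-refresh?
    payload: bytes

def sub_partition(mbs):
    """Split intra-coded residual data into two sub-partitions so that
    naturally intra-coded MBs can be prioritised over forced-refresh MBs."""
    natural, refresh = [], []
    for mb in mbs:
        if not mb.intra:
            continue  # inter-coded residuals belong to a different partition
        (refresh if mb.forced_refresh else natural).append(mb)
    return natural, refresh

frame = [
    Macroblock(0, intra=True,  forced_refresh=False, payload=b"\x10"),
    Macroblock(1, intra=False, forced_refresh=False, payload=b"\x22"),
    Macroblock(2, intra=True,  forced_refresh=True,  payload=b"\x31"),
]
natural, refresh = sub_partition(frame)
print(len(natural), "naturally intra MBs;", len(refresh), "forced-refresh MBs")
```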
Open Access Article
Towards Recognising Learning Evidence in Collaborative Virtual Environments: A Mixed Agents Approach
Computers 2017, 6(3), 22; doi:10.3390/computers6030022
Abstract
Three-dimensional (3D) virtual environments bring people together in real time irrespective of their geographical location to facilitate collaborative learning and working together in an engaging and fulfilling way. However, it can be difficult to amass suitable data to gauge how well students perform in these environments. With this in mind, the current study proposes a methodology for monitoring students’ learning experiences in 3D virtual worlds (VWs). It integrates a computer-based mechanism that mixes software agents with natural agents (users), in conjunction with a fuzzy logic model, to reveal evidence of learning in collaborative pursuits, replicating the sort of observation that would normally be made in a conventional classroom setting. Software agents are used to infer the extent of interaction based on the number of clicks, the actions of users, and other events, while natural agents are employed to evaluate the students and the way in which they perform. Such an approach offers an effective method for assessing learning activities in 3D virtual environments. Full article
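A toy sketch of how a fuzzy-logic layer might turn raw interaction counts reported by software agents into a learning-evidence score; the membership functions, rules, and thresholds are invented for illustration and are not the authors' model.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def learning_evidence(clicks, actions):
    # Fuzzify the two interaction measures (ranges are assumptions).
    low_c, high_c = tri(clicks, 0, 0, 20), tri(clicks, 10, 40, 40)
    low_a, high_a = tri(actions, 0, 0, 10), tri(actions, 5, 20, 20)
    # Two illustrative rules:
    #   IF clicks high AND actions high THEN evidence strong (1.0)
    #   IF clicks low  OR  actions low  THEN evidence weak   (0.2)
    strong = min(high_c, high_a)
    weak = max(low_c, low_a)
    # Weighted-average defuzzification.
    total = strong + weak
    return (strong * 1.0 + weak * 0.2) / total if total else 0.0

print(round(learning_evidence(clicks=32, actions=14), 2))
```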
Open Access Article
Enhancing BER Performance Limit of BCH and RS Codes Using Multipath Diversity
Computers 2017, 6(2), 21; doi:10.3390/computers6020021
Abstract
Modern wireless communication systems suffer from phase shifting and, more importantly, from interference caused by multipath propagation. Multipath propagation results in an antenna receiving two or more copies of the signal sequence sent from the same source but delivered via different paths. These multipath components are treated as redundant copies of the original data sequence and are used to improve the performance of forward error correction (FEC) codes without extra redundancy, in order to improve data transmission reliability and increase the bit rate over the wireless communication channel. As a proof of concept, Bose, Ray-Chaudhuri, and Hocquenghem (BCH) and Reed-Solomon (RS) codes were used for FEC and their bit error rate (BER) performances compared. The results showed that the wireless multipath components significantly improve the performance of FEC. Furthermore, FEC codes with low error correction capability that exploit the multipath phenomenon are enhanced to perform better than FEC codes with somewhat higher error correction capability that do not utilise the multipath. Consequently, the bit rate is increased and communication reliability is improved without extra redundancy. Full article
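An illustrative sketch of the diversity idea, assuming BPSK over an AWGN channel: several delay-aligned copies of the same sequence are averaged (a simple form of diversity combining) before hard decision, which lowers the raw error rate seen by the FEC decoder without adding redundancy. The channel model and combining rule are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_with_diversity(n_bits=200_000, n_copies=3, noise_std=1.0):
    bits = rng.integers(0, 2, size=n_bits)
    symbols = 1.0 - 2.0 * bits                # BPSK: 0 -> +1, 1 -> -1
    # Each multipath component is modelled as the same symbol sequence
    # with independent noise (delays assumed already aligned).
    copies = symbols + noise_std * rng.standard_normal((n_copies, n_bits))
    combined = copies.mean(axis=0)            # equal-gain style combining
    decided = (combined < 0).astype(int)      # hard decision
    return np.mean(decided != bits)

print("BER, 1 copy  :", ber_with_diversity(n_copies=1))
print("BER, 3 copies:", ber_with_diversity(n_copies=3))
```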
Open Access Article
Comparison of Four SVM Classifiers Used with Depth Sensors to Recognize Arabic Sign Language Words
Computers 2017, 6(2), 20; doi:10.3390/computers6020020
Abstract
The objective of this research was to recognize the hand gestures of Arabic Sign Language (ArSL) words using two depth sensors. The researchers developed a model to examine 143 signs gestured by 10 users for 5 ArSL words (the dataset). The sensors captured depth images of the upper human body, from which 235 angles (features) were extracted for each joint and between each pair of bones. The dataset was divided into a training set (109 observations) and a testing set (34 observations). The support vector machine (SVM) classifier was set using different parameters on the gestured words’ dataset to produce four SVM models, with linear kernel (SVMLD and SVMLT) and radial kernel (SVMRD and SVMRT) functions. The overall identification accuracy for the corresponding words in the training set for the SVMLD, SVMLT, SVMRD, and SVMRT models was 88.92%, 88.92%, 90.88%, and 90.884%, respectively. The accuracy from the testing set for SVMLD, SVMLT, SVMRD, and SVMRT was 97.059%, 97.059%, 94.118%, and 97.059%, respectively. Therefore, since the two kernels in the models were close in performance, it is far more efficient to use the less complex model (linear kernel) set with a default parameter. Full article
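A rough sketch of the four-model comparison using scikit-learn, assuming a feature matrix of 235 joint/bone angles per observation; the split sizes follow the abstract, but the feature data, the tuned hyperparameter values, and the variable names are placeholders rather than the authors' settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(143, 235))      # placeholder for the 235 extracted angles
y = rng.integers(0, 5, size=143)     # placeholder labels for the 5 ArSL words

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=109, test_size=34, random_state=0)

models = {
    "SVMLD": SVC(kernel="linear"),                    # linear, default parameters
    "SVMLT": SVC(kernel="linear", C=10.0),            # linear, tuned (assumed value)
    "SVMRD": SVC(kernel="rbf"),                       # radial, default parameters
    "SVMRT": SVC(kernel="rbf", C=10.0, gamma=0.01),   # radial, tuned (assumed values)
}

for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: test accuracy = {acc:.3f}")
```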
Open Access Article
Design of a Convolutional Two-Dimensional Filter in FPGA for Image Processing Applications
Computers 2017, 6(2), 19; doi:10.3390/computers6020019
Abstract
Exploiting the Bachet weight decomposition theorem, a new two-dimensional filter is designed. The filter can be adapted to different multimedia applications, but in this work it is specifically targeted at image processing applications. The method emulates standard 32-bit floating-point multipliers using a chain of fixed-point adders and a logic unit that manages the exponent, in order to obtain IEEE-754-compliant results. The proposed design allows a more compact implementation of a floating-point filtering architecture when a fixed set of coefficients and a fixed range of input values are used. Data processing proceeds in raster-scan order and can operate directly on the data coming from the acquisition source, thanks to a careful organization of the memories that avoids frame buffers or any aligning circuitry. The proposed architecture achieves a state-of-the-art critical path delay of 4.7 ns when implemented on a Xilinx Virtex 7 FPGA board. Full article
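A behavioural sketch (in Python, not HDL) of raster-scan filtering with line buffers instead of a frame buffer: each arriving pixel updates two previous-row buffers and a sliding 3x3 window, so outputs stream out as pixels arrive. The window size, coefficients, and border handling are assumptions for illustration, not the proposed architecture.

```python
def raster_scan_filter(pixels, width, coeffs):
    """Stream a 3x3 convolution over pixels arriving in raster-scan order,
    using two line buffers instead of a full frame buffer."""
    buf2 = [0] * width                    # line buffer holding row y-2
    buf1 = [0] * width                    # line buffer holding row y-1
    win = [[0] * 3 for _ in range(3)]     # sliding 3x3 window
    for i, p in enumerate(pixels):
        x = i % width
        col = (buf2[x], buf1[x], p)       # new rightmost window column
        buf2[x], buf1[x] = buf1[x], p     # advance the line buffers
        for r in range(3):                # shift window left, insert column
            win[r][0], win[r][1], win[r][2] = win[r][1], win[r][2], col[r]
        if i >= 2 * width and x >= 2:     # window fully inside the image
            yield sum(coeffs[r][c] * win[r][c]
                      for r in range(3) for c in range(3))

blur = [[1 / 9] * 3 for _ in range(3)]    # simple averaging kernel
image = list(range(16))                   # 4x4 test frame in raster order
print([round(v, 3) for v in raster_scan_filter(image, width=4, coeffs=blur)])
```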
Open Access Article
The Right to Remember: Implementing a Rudimentary Emotive-Effect Layer for Frustration on AI Agent Gameplay Strategy
Computers 2017, 6(2), 18; doi:10.3390/computers6020018
Abstract
AI (Artificial Intelligence) is often viewed as a logical way to develop a game agent that methodically evaluates options and delivers rational or irrational solutions. This paper is based on developing an AI agent that plays a game with emotive content similar to that of a human. The purpose of the study was to see whether the incorporation of this emotive content would influence the outcomes within the game Love Letter. To do this, an AI agent with an emotive layer was developed and made to play the game over a million times. A lower win/loss ratio demonstrates that, to some extent, this methodology was vindicated, and a 100 per cent win rate for the AI agent did not occur. Machine learning techniques were modelled purposely to match extreme models of behavioural change. The results demonstrated a win/loss ratio of 0.67 for the AI agent and, in many ways, reflected the frustration that a normal player would exhibit during game play. As was hypothesised, the final agent investment value was, on average, lower after match play than its initial value. Full article
Open Access Article
Research on Similarity Measurements of 3D Models Based on Skeleton Trees
Computers 2017, 6(2), 17; doi:10.3390/computers6020017
Abstract
There is a growing need to be able to accurately and efficiently recognize similar models from existing model sets, in particular for 3D models. This paper proposes a method of similarity measurement of 3D models, in which the similarity between 3D models is easily, accurately and automatically calculated by means of skeleton trees constructed by a simple rule. The skeleton operates well as a key descriptor of a 3D model. Specifically, a skeleton tree represents node features (including connection and orientation) that can reflect the topology and branch features (including region and bending degree) of 3D models geometrically. Node feature distance is first computed by the dot product between the node connection distance, which is defined by the 2-norm, and the node orientation distance, which is defined by the tangent space distance. Then branch feature distances are computed by the weighted sum of the average regional distances, as defined by the generalized Hausdorff distance, and the average bending degree distance, as defined by curvature. Overall similarity is expressed as the weighted sum of topology and geometry similarity. The similarity calculation is efficient and accurate because it is not necessary to perform other operations such as rotation or translation and it considers more topological and geometric information. The experiment demonstrates the feasibility and accuracy of the proposed method. Full article
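A simplified sketch of the weighted-sum structure of such a similarity measure: a node (topology) distance from corresponding node positions and a directed-Hausdorff branch-region distance, combined with assumed weights. The actual node/branch features and weighting scheme in the paper are richer; this only illustrates the combination step.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def node_connection_distance(nodes_a, nodes_b):
    """Mean 2-norm between corresponding skeleton-node positions (assumes
    the trees have been matched so nodes correspond one-to-one)."""
    return float(np.mean(np.linalg.norm(nodes_a - nodes_b, axis=1)))

def branch_region_distance(branch_a, branch_b):
    """Symmetrised Hausdorff distance between two branch point sets."""
    return max(directed_hausdorff(branch_a, branch_b)[0],
               directed_hausdorff(branch_b, branch_a)[0])

def similarity(nodes_a, nodes_b, branch_a, branch_b, w_topology=0.5):
    d_topo = node_connection_distance(nodes_a, nodes_b)
    d_geom = branch_region_distance(branch_a, branch_b)
    # Map distances to (0, 1] similarities and take the weighted sum.
    s_topo, s_geom = 1.0 / (1.0 + d_topo), 1.0 / (1.0 + d_geom)
    return w_topology * s_topo + (1.0 - w_topology) * s_geom

rng = np.random.default_rng(0)
nodes_a = rng.normal(size=(6, 3)); nodes_b = nodes_a + 0.05
branch_a = rng.normal(size=(40, 3)); branch_b = branch_a + 0.05
print(round(similarity(nodes_a, nodes_b, branch_a, branch_b), 3))
```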
Open Access Feature Paper Review
Reliability of NAND Flash Memories: Planar Cells and Emerging Issues in 3D Devices
Computers 2017, 6(2), 16; doi:10.3390/computers6020016
Abstract
We review the state of the art in the understanding of planar NAND Flash memory reliability and discuss how the recent move to three-dimensional (3D) devices has affected this field. Particular emphasis is placed on mechanisms developing along the lifetime of the memory array, as opposed to time-zero or technological issues, and the viewpoint is focused on the understanding of the root causes. The impressive amount of published work demonstrates that Flash reliability is a complex yet well-understood field, where nonetheless tighter and tighter constraints are set by device scaling. Three-dimensional NAND has departed from the traditional scaling scenario, leading to an improvement in performance and reliability while raising new issues to be dealt with, determined by the newer and more complex cell and array architectures as well as operation modes. A thorough understanding of the complex phenomena involved in the operation and reliability of NAND cells remains vital for the development of future technology nodes. Full article
Open Access Article
Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm
Computers 2017, 6(2), 15; doi:10.3390/computers6020015
Abstract
In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been conducted to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality. Full article
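A compact sketch of the adaptive idea: the probability of each crossover operator is updated according to how often its offspring improve on their parents. The operators, update rule, and toy fitness function are illustrative assumptions, not the AGA described in the paper.

```python
import random

random.seed(0)

# Toy problem: assign 8 tasks to 3 VMs; fitness = negative makespan.
TASK_COST = [4, 2, 7, 3, 5, 1, 6, 2]
N_VMS = 3

def fitness(chrom):
    loads = [0] * N_VMS
    for task, vm in enumerate(chrom):
        loads[vm] += TASK_COST[task]
    return -max(loads)                         # shorter makespan is better

def one_point(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def uniform(p1, p2):
    return [random.choice(g) for g in zip(p1, p2)]

CROSSOVERS = {"one_point": one_point, "uniform": uniform}
weights = {name: 1.0 for name in CROSSOVERS}   # adaptive operator weights

def pick_operator():
    names = list(weights)
    return random.choices(names, [weights[n] for n in names])[0]

pop = [[random.randrange(N_VMS) for _ in TASK_COST] for _ in range(20)]
for generation in range(100):
    p1, p2 = random.sample(pop, 2)
    op = pick_operator()
    child = CROSSOVERS[op](p1, p2)
    if random.random() < 0.1:                  # simple mutation
        child[random.randrange(len(child))] = random.randrange(N_VMS)
    improved = fitness(child) > max(fitness(p1), fitness(p2))
    # Adapt: reward operators that produce improving offspring.
    weights[op] = max(0.1, weights[op] + (0.2 if improved else -0.05))
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    if fitness(child) > fitness(pop[worst]):
        pop[worst] = child                     # steady-state replacement

best = max(pop, key=fitness)
print("best makespan:", -fitness(best), "operator weights:", weights)
```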
Open Access Article
Emotion Elicitation in a Socially Intelligent Service: The Typing Tutor
Computers 2017, 6(2), 14; doi:10.3390/computers6020014
Abstract
This paper presents an experimental study on modeling machine emotion elicitation in a socially intelligent service, the typing tutor. The aim of the study is to evaluate the extent to which machine emotion elicitation can influence the affective state (valence and arousal) of the learner during a tutoring session. The tutor provides continuous real-time emotion elicitation via graphically rendered emoticons, as emotional feedback on the learner’s performance. Good performance is rewarded by the positive emoticon, based on the notion of positive reinforcement. Facial emotion recognition software is used to analyze the affective state of the learner for later evaluation. Experimental results show that the correlation between the positive emoticon and the learner’s affective state is significant for all 13 (100%) test participants on the arousal dimension and for 9 (69%) test participants on both affective dimensions. The results also confirm our hypothesis and show that the machine emotion elicitation is significant for 11 (85%) of 13 test participants. We conclude that machine emotion elicitation with simple graphical emoticons has promising potential for the future development of the tutor. Full article
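A minimal sketch of the evaluation step: correlating the emoticon feedback signal with the arousal estimate returned by facial emotion recognition. The per-second signals below are synthetic placeholders, not the study's data, and the significance threshold is an assumption.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Assumed per-second signals for one tutoring session:
#  - feedback: 1 when the positive emoticon is shown (good typing), else 0
#  - arousal:  arousal estimate from the facial emotion recognition software
feedback = rng.integers(0, 2, size=300).astype(float)
arousal = 0.4 * feedback + 0.1 * rng.standard_normal(300)   # synthetic link

r, p_value = pearsonr(feedback, arousal)
print(f"correlation r = {r:.2f}, significant at 0.05: {p_value < 0.05}")
```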
Open Access Article
Towards Trustworthy Collaborative Editing
Computers 2017, 6(2), 13; doi:10.3390/computers6020013
Abstract
Real-time collaborative editing applications are drastically different from typical client–server applications in that every participant has a copy of the shared document. In this type of environment, each participant acts as both a client and a server replica. In this article, we elaborate on how to adapt Byzantine fault tolerance (BFT) mechanisms to enhance the trustworthiness of such applications. It is apparent that traditional BFT algorithms cannot be used directly because they would dictate that all updates submitted by participants be applied sequentially, which would defeat the purpose of collaborative editing. The goal of this study is to design and implement an efficient BFT solution by exploiting the application semantics and by conducting a threat analysis of these types of applications. Our solution can be considered a form of optimistic BFT in that the local states maintained by each participant may diverge temporarily. The states of the participants are made consistent with each other by a periodic synchronization mechanism. Full article
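A toy sketch of the periodic synchronization idea, not the authors' protocol: replicas compare digests of their local document state, and any replica whose digest disagrees with the majority adopts the majority state. Real BFT requires signed messages and more rounds; the quorum rule noted in the comment is the standard n >= 3f + 1 assumption.

```python
import hashlib
from collections import Counter

def digest(state):
    """Hash a replica's local document state for comparison."""
    return hashlib.sha256(state.encode("utf-8")).hexdigest()

def synchronize(replica_states):
    """Periodic sync: replicas vote on the state digest; any replica whose
    digest disagrees with the majority adopts the majority state.  With n
    replicas, tolerating f faulty ones classically requires n >= 3f + 1."""
    digests = {rid: digest(s) for rid, s in replica_states.items()}
    majority_digest, votes = Counter(digests.values()).most_common(1)[0]
    canonical = next(s for rid, s in replica_states.items()
                     if digests[rid] == majority_digest)
    repaired = {rid: (s if digests[rid] == majority_digest else canonical)
                for rid, s in replica_states.items()}
    return repaired, votes

states = {"r1": "Hello world", "r2": "Hello world",
          "r3": "Hello w0rld", "r4": "Hello world"}
repaired, votes = synchronize(states)
print(votes, "of", len(states), "replicas agreed; r3 now:", repaired["r3"])
```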
Open Access Article
Body-Borne Computers as Extensions of Self
Computers 2017, 6(1), 12; doi:10.3390/computers6010012
Abstract
The opportunities for wearable technologies go well beyond always-available information displays or health sensing devices. The concept of the cyborg introduced by Clynes and Kline, along with works in various fields of research and the arts, offers a vision of what technology integrated with the body can offer. This paper identifies different categories of research aimed at augmenting humans. The paper specifically focuses on three areas of augmentation of the human body and its sensorimotor capabilities: physical morphology, skin display, and somatosensory extension. We discuss how such digital extensions relate to the malleable nature of our self-image. We argue that body-borne devices are no longer simply functional apparatus, but offer a direct interplay with the mind. Finally, we also showcase some of our own projects in this area and shed light on future challenges. Full article
Open Access Article
Exploring a New Security Framework for Remote Patient Monitoring Devices
Computers 2017, 6(1), 11; doi:10.3390/computers6010011
Abstract
Security has been an issue of contention in healthcare. The lack of familiarity and the poor implementation of security in healthcare leave patients’ data vulnerable to attackers. The main issue is assessing how security can be provided in a remote patient monitoring (RPM) infrastructure. The findings in the literature show that there is little empirical evidence on the proper implementation of security, so there is an urgent need to address cybersecurity issues in medical devices. Through a review of the relevant literature on remote patient monitoring and the use of a Microsoft threat modelling tool, we identify and explore current vulnerabilities and threats in IEEE 11073 standard devices in order to propose a new security framework for RPM devices. Additionally, current RPM devices limit the number of people who can share a single device; we therefore propose the use of NFC for identification in RPM devices in multi-user environments, where multiple people share a single device, to reduce errors associated with incorrect user identification. We finally show how several techniques have been used to build the proposed framework. Full article
Open Access Article
Discrete Event Simulation Method as a Tool for Improvement of Manufacturing Systems
Computers 2017, 6(1), 10; doi:10.3390/computers6010010
Abstract
The problem of production flow in manufacturing systems is analyzed. The machines can be operated by workers or by robots; because breakdowns and human factors destabilize the production processes, robots are often preferred to perform them. The problem is how to determine the real difference in work efficiency between humans and robots. We present an analysis of the production efficiency and reliability of press shop lines operated by human operators or industrial robots. This is a problem from the field of Operations Research for which the Discrete Event Simulation (DES) method has been used. Three models have been developed, covering the manufacturing line before and after robotization and taking into account stochastic parameters of the availability and reliability of the machines, operators, and robots. We apply the OEE (Overall Equipment Effectiveness) indicator to show how the availability, reliability, and quality parameters influence the performance of the workstations, in both the short run and the long run. In addition, the stability of the simulation model was analyzed. This approach enables a better representation of real manufacturing processes. Full article
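A minimal sketch of the OEE indicator used to compare workstations, with the standard decomposition into availability, performance, and quality; the sample shift figures are invented for illustration.

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# Invented example: an 8-hour shift (480 min), 45 min of breakdowns,
# ideal cycle time of 0.6 min/part, 650 parts made, 630 of them good.
print(f"OEE = {oee(480, 45, 0.6, 650, 630):.1%}")
```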
Open Access Article
Traffic Priority-Aware Adaptive Slot Allocation for Medium Access Control Protocol in Wireless Body Area Network
Computers 2017, 6(1), 9; doi:10.3390/computers6010009
Abstract
Biomedical sensors (BMSs) monitor the heterogeneous vital signs of patients. They have diverse Quality of Service (QoS) requirements, including reduced collision, delay, loss, and energy consumption in the transmission of data, which may be non-constrained, delay-constrained, reliability-constrained, or critical. In this context, this paper proposes a traffic priority-aware adaptive slot allocation-based medium access control (TraySL-MAC) protocol. Firstly, a reduced-contention adaptive slot allocation algorithm is presented to minimize contention rounds. Secondly, a low-threshold vital signs criticality-based adaptive slot allocation algorithm is developed for high priority data. Thirdly, a high-threshold vital signs criticality-based adaptive slot allocation algorithm is designed for low priority data. Simulations are performed to comparatively evaluate the performance of the proposed protocol against state-of-the-art MAC protocols. From the analysis of the results, it is evident that the proposed protocol is beneficial in terms of lower packet delivery delay and energy consumption, and higher throughput, in realistic biomedical environments. Full article
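A toy sketch of traffic-priority-aware slot allocation: nodes are ordered by the criticality of their vital-sign traffic and granted superframe slots accordingly, with critical data served first. The priority classes, slot counts, and deferral rule are assumptions for illustration, not the TraySL-MAC algorithms.

```python
# Assumed priority order: lower number = more critical traffic class.
PRIORITY = {"critical": 0, "reliability": 1, "delay": 2, "non-constrained": 3}

def allocate_slots(requests, slots_per_superframe=16):
    """Assign contiguous slots in priority order; returns {node: [slot, ...]}.
    requests: list of (node_id, traffic_class, n_slots_requested)."""
    schedule, next_slot = {}, 0
    for node, traffic, n_slots in sorted(requests,
                                         key=lambda r: PRIORITY[r[1]]):
        granted = list(range(next_slot,
                             min(next_slot + n_slots, slots_per_superframe)))
        schedule[node] = granted
        next_slot += len(granted)
        if next_slot >= slots_per_superframe:
            break                  # remaining nodes defer to the next superframe
    return schedule

requests = [("spo2", "delay", 3), ("ecg", "critical", 6),
            ("temp", "non-constrained", 4), ("bp", "reliability", 5)]
print(allocate_slots(requests))
```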
Open Access Review
A Survey of Soft-Error Mitigation Techniques for Non-Volatile Memories
Computers 2017, 6(1), 8; doi:10.3390/computers6010008
Abstract
Non-volatile memories (NVMs) offer superior density and energy characteristics compared to conventional memories; however, NVMs suffer from severe reliability issues that can easily eclipse their energy efficiency advantages. In this paper, we survey architectural techniques for improving the soft-error reliability of NVMs, specifically PCM (phase change memory) and STT-RAM (spin transfer torque RAM). We focus on soft errors, such as resistance drift and write disturbance in PCM, and read disturbance and write failures in STT-RAM. By classifying the research works based on key parameters, we highlight their similarities and distinctions. We hope that this survey will underline the crucial importance of addressing NVM reliability for ensuring their system integration and will be useful for researchers, computer architects and processor designers. Full article
Open Access Article
Assessing Efficiency of Prompts Based on Learner Characteristics
Computers 2017, 6(1), 7; doi:10.3390/computers6010007
Abstract
Personalized prompting research has shown the significant learning benefit of prompting. The current paper outlines and examines a personalized prompting approach aimed at eliminating performance differences on the basis of a number of learner characteristics (capturing learning strategies and traits). The learner characteristics of interest were the need for cognition, work effort, computer self-efficacy, the use of surface learning, and the learner’s confidence in their learning. The approach was tested in two e-modules, using similar assessment forms (experimental group n = 413; control group n = 243). Several prompts corresponding to the learner characteristics were implemented, including an explanation prompt, a motivation prompt, a strategy prompt, and an assessment prompt. All learner characteristics were significant correlates of at least one of the outcome measures (test performance, errors, and omissions). However, only the assessment prompt increased test performance. On this basis, and drawing upon the testing effect, this prompt may be a particularly promising option to increase performance in e-learning and similar personalized systems. Full article
Open Access Article
A Comparative Experimental Design and Performance Analysis of Snort-Based Intrusion Detection System in Practical Computer Networks
Computers 2017, 6(1), 6; doi:10.3390/computers6010006
Abstract
As one of the most reliable technologies, a network intrusion detection system (NIDS) allows the monitoring of incoming and outgoing traffic to identify unauthorised usage and mishandling by attackers in computer network systems. To this end, this paper investigates the experimental performance of a Snort-based NIDS (S-NIDS) in a practical network with the latest technology, in various network scenarios including high data speeds, heavy traffic, and large packet sizes. An effective testbed is designed based on Snort using different multi-core processors, e.g., i5 and i7, with different operating systems, e.g., Windows 7, Windows Server and Linux. Furthermore, considering an enterprise network consisting of multiple virtual local area networks (VLANs), a centralised parallel S-NIDS (CPS-NIDS) is proposed with the support of a centralised database server to deal with high data speed and heavy traffic. Experimental evaluation is carried out for each network configuration to evaluate the performance of the S-NIDS in different network scenarios as well as to validate the effectiveness of the proposed CPS-NIDS. In particular, by analysing packet analysis efficiency, an improvement of up to 10% is shown to be achieved with Linux over other operating systems, while an improvement of up to 8% can be achieved with i7 over i5 processors. Full article