
Computers, Volume 9, Issue 2 (June 2020) – 32 articles

Cover Story (view full-size image): This article explores the enhanced Mobile Broadband (eMBB) service class, defined within the new 5G communication paradigm, and evaluates the impact of the transition from 4G to 5G access technology on the radio access network and the transport network. Simulation results are obtained with ns3, and the performance analyses focus on 6 GHz radio scenarios for the radio access network, where a non-standalone 5G configuration has been assumed, and on SDN-based scenarios for the transport network. Inspired by the 5G Transformer model, we describe and simulate each element of the three main functional planes of the proposed architecture to provide a preliminary evaluation of end-to-end system performance. View this paper
Article
Non-Fragmented Network Flow Design Analysis: Comparison IPv4 with IPv6 Using Path MTU Discovery
Computers 2020, 9(2), 54; https://doi.org/10.3390/computers9020054 - 26 Jun 2020
Viewed by 2012
Abstract
With the expansion in the number of devices connected to the Internet, a new area, known as the Internet of Things (IoT), has appeared. It became necessary to migrate from the IPv4 protocol to the IPv6 protocol due to the scarcity of IPv4 addresses. One of the advances of IPv6 over its predecessor is the Path MTU Discovery protocol, whose effectiveness this work aims to demonstrate in a virtual environment. Using the VirtualBox virtualization program, a testbed of fifteen machines running the Debian operating system is defined with two network scenarios, one using an IPv4 network configuration and the other an IPv6 network configuration. In both cases, the MTU values of all machines were varied to carry out performance tests with UDP traffic. The fragmentation of packets demonstrated the effectiveness of the Path MTU Discovery protocol. The results point to stable bandwidth and jitter when Path MTU Discovery is used and to fluctuation when it is not applied, confirming its effectiveness. Full article
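As a rough illustration of what the experiment measures, the sketch below (not from the paper; header sizes follow the IPv6 specification, the function name is invented) counts the IPv6 fragments a UDP datagram would need at a given path MTU. With Path MTU Discovery, the sender sizes datagrams to the discovered path MTU and avoids fragmentation entirely:

```python
def ipv6_fragments(payload_len, path_mtu):
    """Number of IPv6 fragments needed for an upper-layer payload.

    Each fragment carries a 40-byte IPv6 header and an 8-byte Fragment
    header; every fragment except the last holds a payload that is a
    multiple of 8 bytes.
    """
    per_fragment = path_mtu - 40 - 8        # data bytes that fit per fragment
    per_fragment -= per_fragment % 8        # non-final fragments align to 8
    if per_fragment <= 0:
        raise ValueError("path MTU too small")
    return -(-payload_len // per_fragment)  # ceiling division

print(ipv6_fragments(8000, 1500))  # 8000-byte datagram over Ethernet: 6 fragments
print(ipv6_fragments(1400, 1500))  # fits without fragmentation: 1
```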

Article
Towards a RINA-Based Architecture for Performance Management of Large-Scale Distributed Systems
Computers 2020, 9(2), 53; https://doi.org/10.3390/computers9020053 - 25 Jun 2020
Cited by 2 | Viewed by 1812
Abstract
Modern society is increasingly dependent on reliable performance of distributed systems. In this paper, we provide a precise definition of performance using the concept of quality attenuation; discuss its properties, measurement and decomposition; identify sources of such attenuation; outline methods of managing performance hazards automatically using the capabilities of the Recursive InterNetworking Architecture (RINA); demonstrate procedures for aggregating both application demands and network performance to achieve scalability; discuss dealing with bursty and time-critical traffic; propose metrics to assess the effectiveness of a performance management system; and outline an architecture for performance management. Full article

Article
Cloud Computing for Climate Modelling: Evaluation, Challenges and Benefits
Computers 2020, 9(2), 52; https://doi.org/10.3390/computers9020052 - 22 Jun 2020
Cited by 2 | Viewed by 2220
Abstract
Cloud computing is a mature technology that has already shown benefits for a wide range of academic research domains that, in turn, utilize a wide range of application design models. In this paper, we discuss the use of cloud computing as a tool to improve the range of resources available for climate science, presenting the evaluation of two different climate models. Each was customized in a different way to run in public cloud computing environments (hereafter cloud computing) provided by three different public vendors: Amazon, Google and Microsoft. The adaptations and procedures necessary to run the models in these environments are described. The computational performance and cost of each model within this new type of environment are discussed, and an assessment is given in qualitative terms. Finally, we discuss how cloud computing can be used for geoscientific modelling, including issues related to the allocation of resources by funding bodies. We also discuss problems related to computing security, reliability and scientific reproducibility. Full article

Article
Formation of Unique Characteristics of Hiding and Encoding of Data Blocks Based on the Fragmented Identifier of Information Processed by Cellular Automata
Computers 2020, 9(2), 51; https://doi.org/10.3390/computers9020051 - 19 Jun 2020
Cited by 1 | Viewed by 1393
Abstract
Currently, the following applications of the theory of cellular automata are known: symmetric encryption, data compression, digital image processing and some others. There are also studies suggesting the possibility of building a public-key system based on cellular automata, but this problem has not been solved. The purpose of the study is to develop an algorithm for hiding and encoding data blocks based on a fragmented identifier of information processed by cellular automata at the scale of binary data streams, using an original method containing a public parameter in the conversion key. A mathematical model of the formation of unique data characteristics is considered, based on the use of patterns that determine the individual neighborhood of elements in cell encryption. A multi-threaded computing scheme has been developed for processing confidential data using the single-key method with a public parameter based on cellular automata and using data segmentation. To study individual chains in data blocks, a software module has been developed that allows one to evaluate the uniformity of information distribution during encryption. A variant of estimating the distribution of bits is proposed that indirectly reflects the cryptographic strength of the method. Based on the developed theoretical principles, a software module is synthesized that implements a transformation rule taking into account the individual neighborhood of the processed element on the basis of a cellular automaton. Experimental studies have shown that this modification increased the speed of the method by up to 13 percent, owing to segmentation and the possibility of parallel processing of the original matrix, and increased cryptographic strength through the use of a unique chain of pseudo-random neighborhoods (hereinafter referred to as PRN) defined by the transformation key. At the same time, it was possible to maintain uniformity of distribution of the output chain at the bit level and to ensure that the number of inversions fell within the confidence interval. Full article
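CA-based stream encryption of the general kind the abstract builds on can be illustrated with a classic elementary cellular automaton used as a keystream generator. The sketch below uses Rule 30, a well-known CA pseudo-random generator, not the paper's fragmented-identifier scheme; the key width and tap position are arbitrary choices:

```python
def rule30_keystream(key, n_bytes, width=64):
    """Derive a keystream by iterating Rule 30 from a key-seeded row."""
    cells = [(key >> i) & 1 for i in range(width)]
    out = []
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            # Rule 30: new cell = left XOR (center OR right), wrapping edges
            cells = [cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
                     for i in range(width)]
            byte = (byte << 1) | cells[width // 2]  # tap the center cell
        out.append(byte)
    return bytes(out)

def xor_encrypt(data, key):
    ks = rule30_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"cellular automata"
ct = xor_encrypt(msg, key=0xDEADBEEF)
assert xor_encrypt(ct, key=0xDEADBEEF) == msg  # XOR stream cipher is symmetric
```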
(This article belongs to the Special Issue Selected Papers from MICSECS 2019)

Article
Risk Reduction Optimization of Process Systems under Cost Constraint Applying Instrumented Safety Measures
Computers 2020, 9(2), 50; https://doi.org/10.3390/computers9020050 - 19 Jun 2020
Cited by 1 | Viewed by 1386
Abstract
This article is devoted to an approach to developing a process safety system according to functional safety standards. With the development of technology and the increase in the specific energy stored in equipment, the issue of safety during operation becomes more urgent. The adequacy of decisions on safety measures made during the early stages of planning facilities and processes helps to avoid technological incidents and the corresponding losses. A risk-based approach to safety system design is proposed. The approach is based on a methodology for identifying and assessing risks and then developing the necessary set of safety measures to ensure that the specified safety indicators are achieved. A classification of safety measures is given, and a model of risk reduction based on deterministic analysis of the process is considered. It is shown that the task of changing the composition of safety measures can be represented as the discrete knapsack optimization problem, and the solution is based on the Monte Carlo method. A numerical example is provided to illustrate the approach. The example contains a description of failure conditions, an analysis of the types and consequences of failures that could lead to accidents, and a list of safety measures. The optimization problem was solved using real reliability parameters and equipment costs. Based on the simulation results, the optimal composition of the safety measures minimizing cost is given. This research is relevant to engineering departments that specialize in planning and designing technological solutions. Full article
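The knapsack formulation mentioned in the abstract can be sketched in a few lines: pick the subset of safety measures with the greatest total risk reduction whose total cost fits the budget, searching by random sampling in the Monte Carlo spirit. All measure names, costs, risk-reduction values and the budget below are invented for illustration:

```python
import random

measures = [  # (name, cost, risk reduction) -- hypothetical values
    ("pressure relief valve", 30, 0.25),
    ("redundant sensor",      20, 0.15),
    ("emergency shutdown",    50, 0.40),
    ("operator alarm",        10, 0.08),
]
BUDGET = 70

def monte_carlo_knapsack(measures, budget, trials=20000, seed=1):
    """Randomly sample subsets; keep the best feasible one seen."""
    rng = random.Random(seed)
    best, best_value = (), 0.0
    for _ in range(trials):
        subset = tuple(m for m in measures if rng.random() < 0.5)
        cost = sum(c for _, c, _ in subset)
        value = sum(v for _, _, v in subset)
        if cost <= budget and value > best_value:
            best, best_value = subset, value
    return best, best_value

chosen, reduction = monte_carlo_knapsack(measures, BUDGET)
print([name for name, _, _ in chosen], round(reduction, 2))
```

For four measures exhaustive enumeration would also work; random sampling becomes attractive as the number of candidate measures grows.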
(This article belongs to the Special Issue Selected Papers from MICSECS 2019)

Article
The Architecture of the Access Protocols of the Global Infocommunication Resources
Computers 2020, 9(2), 49; https://doi.org/10.3390/computers9020049 - 09 Jun 2020
Cited by 1 | Viewed by 1423
Abstract
One of the important functions of cyberspace is to provide people and devices with access to global infocommunication resources, and as the network infrastructure develops, the number of access options increases, including those based on wireless technologies. This wide variety of access technologies leads to the formation of heterogeneous broadcast networks. Following the Always Best Connected concept and striving for rational use of access network resources, developers today use Vertical Handover procedures. This approach assumes a selection criterion that makes it possible to prefer a particular network over the other available networks able to provide the required connection and services, and a selection procedure that computes the characteristics of access for each acceptable option. When implementing a vertical handover, it should be taken into account that the rational choice depends on the moment in time and the point in space at which the terminal device issued a request to establish a connection. The corresponding procedures can be implemented in decentralized or centralized architectures. In the first case, the choice is made by the hardware and software of the terminal devices. The disadvantage of this implementation is complexity and, as a result, the higher cost of terminal devices, each of which must provide the performance and memory required by the selection procedure. Another negative consequence of the decentralized approach is a decrease in last-mile network utilization due to the inability to make complex decisions. The article discusses a centralized architecture of protocols for access to global infocommunication resources. In accordance with it, the access network is selected by a new centralized network device not previously used in communication networks. The protocols that this network element implements should be located between the first (physical) and second (data link) layers of the open systems interconnection model. The purpose of the study is to develop an effective architectural solution for access networks and to create a mathematical model for evaluating the efficiency of last-mile resource use and the quality of user service. The object of research is architectural solutions for last-mile networks. The subject of research is teletraffic-theory models that allow the qualitative characteristics of the corresponding process to be evaluated. To achieve this goal, the following tasks were solved in the article: analysis of known approaches to selecting one of several available access networks; development of a centralized architecture that changes the basic model of interaction between open systems; description of the metadata exchange scenario between network elements of the new architecture; development of a mathematical model of the data transmission process in the radio access network; and numerical estimation of the probabilistic and temporal characteristics of the proposed procedures. Full article
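As one example of the kind of teletraffic building block such an evaluation relies on (a standard textbook formula, not necessarily the authors' exact model), the Erlang B formula estimates the probability that a connection request is blocked when an access network with a fixed number of channels is offered a given traffic load:

```python
def erlang_b(offered_erlangs, channels):
    """Blocking probability via the numerically stable recurrence
    B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    a = offered_erlangs
    b = 1.0
    for n in range(1, channels + 1):
        b = a * b / (n + a * b)
    return b

# A last-mile cell with 10 channels offered 5 Erlangs of traffic
# blocks roughly 1.8% of connection requests.
print(round(erlang_b(5.0, 10) * 100, 2), "%")
```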
(This article belongs to the Special Issue Selected Papers from MICSECS 2019)

Review
A Review of Memory Errors Exploitation in x86-64
Computers 2020, 9(2), 48; https://doi.org/10.3390/computers9020048 - 08 Jun 2020
Viewed by 2542
Abstract
Memory errors are still a serious threat affecting millions of devices worldwide. Recently, bounty programs have reached a new record, paying up to USD 2.5 million for a single vulnerability in Android and up to USD 2 million for Apple’s operating system. In almost all cases, memory errors are exploited in one or more stages to fully compromise those devices. In this paper, we review and discuss the importance of memory error vulnerabilities, and more specifically stack buffer overflows, to provide a full view of how memory errors are exploited. We identify the root causes that make those attacks possible on the modern x86-64 architecture in the presence of modern protection techniques. We have analyzed how unsafe library functions are prone to buffer overflows, revealing that although there are secure versions of those functions, they do not actually prevent buffer overflows from happening. Using secure functions does not result in software free from vulnerabilities, and it requires developers to be security-aware. To overcome this problem, we discuss the three main security protection techniques present in all modern operating systems: the non-eXecutable bit (NX), the Stack Smashing Protector (SSP) and Address Space Layout Randomization (ASLR). After discussing their effectiveness, we conclude that although they provide a strong level of protection against classical exploitation techniques, modern attacks can bypass them. Full article

Article
Model Based Approach to Cyber–Physical Systems Status Monitoring
Computers 2020, 9(2), 47; https://doi.org/10.3390/computers9020047 - 07 Jun 2020
Viewed by 1599
Abstract
The distinctive feature of new-generation information systems is not only their complexity in terms of the number of elements, number of connections and hierarchy levels, but also their constantly changing structure and behavior. In this situation, the problem of obtaining up-to-date information about the current status of a complex Cyber–Physical System (CPS) under observation becomes a rather difficult task. This information is needed by stakeholders for tasks such as keeping the system operational, improving its efficiency and ensuring security. Known approaches to determining the actual status of complex distributed CPSs are not sufficiently effective. The authors propose a model-based approach to monitoring the status of complex CPSs. There are a number of known model-based approaches to complex distributed CPS monitoring, but their main difference from the suggested one is that they mostly use static models built manually by experts, which takes considerable human effort and often results in errors. Our idea is that automata models of the structure and behavior of the observed system are used, and both models are built and kept up to date automatically on the basis of log-file information. The proposed approach is based, on the one hand, on the results of the authors' research in the field of automatic synthesis of multi-level automata models of observed systems and, on the other hand, on well-known process mining algorithms. The paper describes typical monitoring tasks and presents generalized algorithms for solving them using the proposed system of models. An example of a real-life system based on the suggested approach is given. The approach can be recommended for building CPSs of medium and high complexity, characterized by high structural dynamics and cognitive behavior. Full article
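The core idea of building a behavior model from logs automatically can be shown in miniature: mine transition frequencies from event traces and flag transitions never seen before. The event names and traces below are invented, and real process-mining algorithms (such as those the authors build on) are far richer:

```python
from collections import defaultdict

traces = [  # hypothetical log traces from an observed CPS
    ["start", "read_sensor", "compute", "actuate", "stop"],
    ["start", "read_sensor", "compute", "log", "actuate", "stop"],
    ["start", "read_sensor", "compute", "actuate", "stop"],
]

def mine_transitions(traces):
    """Count how often each event directly follows another."""
    counts = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

model = mine_transitions(traces)

def is_anomalous(a, b, model):
    # A transition never observed in the logs is flagged during monitoring.
    return (a, b) not in model

assert not is_anomalous("compute", "actuate", model)
assert is_anomalous("actuate", "read_sensor", model)
```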
(This article belongs to the Special Issue Selected Papers from MICSECS 2019)

Article
Classification of Vowels from Imagined Speech with Convolutional Neural Networks
Computers 2020, 9(2), 46; https://doi.org/10.3390/computers9020046 - 01 Jun 2020
Cited by 3 | Viewed by 1846
Abstract
Imagined speech is a relatively new electroencephalography (EEG) neuro-paradigm, which has seen little use in Brain-Computer Interface (BCI) applications. Imagined speech can allow physically impaired patients to communicate with and use smart devices by imagining desired commands, which are then detected and executed by the device. The goal of this research is to verify previous classification attempts and then design a new, more efficient neural network that is noticeably less complex (fewer layers) while achieving comparable classification accuracy. The classifiers are designed to distinguish between EEG signal patterns corresponding to imagined speech of different vowels and words. This research uses a dataset in which 15 subjects imagine saying the five main vowels (a, e, i, o, u) and six different words. Two previous studies on imagined speech classification are verified, as they used the same dataset, and the replicated results are compared. The main goal of this study is to take the convolutional neural network (CNN) model proposed in one of the replicated studies and make it much simpler and less complex, while attempting to retain a similar accuracy. The pre-processing of the data is described, and a new CNN classifier with three different transfer learning methods is introduced and used to classify the EEG signals. Classification accuracy is used as the performance metric. The new proposed CNN, which uses half as many layers and less complex pre-processing methods, achieved a considerably lower accuracy, but still outperformed the initial model proposed by the authors of the dataset by a considerable margin. It is recommended that further studies investigating the classification of imagined speech use more data and more powerful machine learning techniques. Transfer learning proved beneficial and should be used to improve the effectiveness of neural networks. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)

Article
GeoQoE-Vanet: QoE-Aware Geographic Routing Protocol for Video Streaming over Vehicular Ad-hoc Networks
Computers 2020, 9(2), 45; https://doi.org/10.3390/computers9020045 - 31 May 2020
Viewed by 1810
Abstract
Video streaming is one of the challenging issues in vehicular ad-hoc networks (VANETs) due to their highly dynamic topology and frequent connectivity disruptions. Recent developments in the routing protocol methods used in VANETs have contributed to improvements in the quality of experience (QoE) of the received video. One of these methods is the selection of the next-hop relay vehicle. In this paper, a QoE-aware geographic protocol for video streaming over VANETs is proposed. The selection process of the next relay vehicle is based on a correlated formula of QoE and quality of service (QoS) factors to enhance the users’ QoE. The simulation results show that the proposed GeoQoE-Vanet outperforms both GPSR and GPSR-2P protocols in providing the best end-user QoE of video streaming service. Full article
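A next-hop selection of the kind described, combining QoE and QoS factors into one score, might look like the following sketch. The weights, factor names and values are invented, not taken from the GeoQoE-Vanet paper:

```python
def score(neighbor, w_qoe=0.5, w_progress=0.3, w_stability=0.2):
    """Weighted combination of a QoE estimate and two QoS factors."""
    return (w_qoe * neighbor["qoe"]              # predicted video quality, 0..1
            + w_progress * neighbor["progress"]  # geographic progress to dest, 0..1
            + w_stability * neighbor["stability"])  # expected link lifetime, 0..1

neighbors = [  # hypothetical candidate relay vehicles
    {"id": "v1", "qoe": 0.9, "progress": 0.4, "stability": 0.8},
    {"id": "v2", "qoe": 0.5, "progress": 0.9, "stability": 0.6},
]
best = max(neighbors, key=score)
print(best["id"])  # v1: higher QoE outweighs v2's better geographic progress
```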

Article
A Comprehensive and Systematic Survey on the Internet of Things: Security and Privacy Challenges, Security Frameworks, Enabling Technologies, Threats, Vulnerabilities and Countermeasures
Computers 2020, 9(2), 44; https://doi.org/10.3390/computers9020044 - 30 May 2020
Cited by 4 | Viewed by 2276
Abstract
The Internet of Things (IoT) has experienced constant growth in the number of devices deployed and the range of applications in which such devices are used. They vary widely in size, computational power, storage capacity, and energy. The explosive growth and integration of IoT in different domains and areas of our daily lives has created an Internet of Vulnerabilities (IoV). In the rush to build and implement IoT devices, security and privacy have not been adequately addressed. IoT devices, many of which are highly constrained, are vulnerable to cyber attacks, which threaten the security and privacy of users and systems. This survey provides a comprehensive overview of IoT in regard to areas of application, security architecture frameworks, and recent security and privacy issues, as well as a review of recent similar studies on IoT security and privacy. In addition, the paper presents a comprehensive taxonomy of attacks on IoT based on the three-layer architecture model (perception, network, and application layers), together with the impact of these attacks on CIA objectives in representative devices. Moreover, the study proposes mitigations and countermeasures, taking a multi-faceted approach rather than a per-layer approach. Open research areas are also covered to provide researchers with the most urgent open questions in regard to securing the IoT ecosystem. Full article
(This article belongs to the Special Issue IoT: Security, Privacy and Best Practices)

Article
Performance Evaluation of 5G Access Technologies and SDN Transport Network on an NS3 Simulator
Computers 2020, 9(2), 43; https://doi.org/10.3390/computers9020043 - 27 May 2020
Cited by 1 | Viewed by 2252
Abstract
In this article, we deal with the enhanced Mobile Broadband (eMBB) service class, defined within the new 5G communication paradigm, to evaluate the impact of the transition from 4G to 5G access technology on the Radio Access Network and on the Transport Network. Simulation results are obtained with ns3, and performance analyses are focused on 6 GHz radio scenarios for the Radio Access Network, where a Non-Standalone 5G configuration has been assumed, and on SDN-based scenarios for the Transport Network. Inspired by the 5G Transformer model, we describe and simulate each element of the three main functional planes of the proposed architecture to provide a preliminary evaluation of the end-to-end system performance. Full article

Article
Evaluation of a Cyber-Physical Computing System with Migration of Virtual Machines during Continuous Computing
Computers 2020, 9(2), 42; https://doi.org/10.3390/computers9020042 - 23 May 2020
Cited by 5 | Viewed by 1828
Abstract
A Markov model of the reliability of a failover cluster performing calculations in a cyber-physical system is considered. The continuity of the cluster computing process in the event of a failure of the physical resources of the servers is provided on the basis of virtualization technology and is associated with the migration of virtual machines. The distinctive feature of the proposed model is that it considers restrictions on the allowable interruption time of the computational process during cluster recovery. This limitation is due to the fact that if two physical servers fail, control of the object is lost, which is unacceptable. A system failure occurs if the servers' recovery time exceeds the maximum allowable interruption time of the computing process. The modes of operation of the cluster with and without system recovery, in the event of a failure of part of the system resources that does not lead to loss of continuity of the computing process, are considered. The results of the article make it possible to assess the probability of cluster operability while supporting the continuity of computation, and the time to a failure that interrupts the computational (control) process beyond the maximum permissible time. A calculation example for the presented models showed that the mean time to failure with recovery, under conditions of supporting the continuity of the computing process, increases by more than two orders of magnitude. Full article
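The order-of-magnitude effect of repair reported in the abstract can be reproduced with a textbook two-server absorbing Markov chain. The rates are hypothetical, and this is not the paper's full model, which additionally bounds the interruption time:

```python
def mttf_with_repair(lam, mu):
    """Mean time to absorption from 'both servers up' for the chain
    2 up --2λ--> 1 up --λ--> failed, with repair 1 up --μ--> 2 up.
    First-step analysis:
        T2 = 1/(2λ) + T1
        T1 = 1/(λ+μ) + μ/(λ+μ) · T2
    which solves to T2 = (3λ + μ) / (2λ²).
    """
    return (3 * lam + mu) / (2 * lam ** 2)

def mttf_no_repair(lam):
    # Without repair: two failures in sequence, 1/(2λ) + 1/λ = 3/(2λ).
    return 3 / (2 * lam)

lam, mu = 1e-4, 0.5   # assumed per-server failure and repair rates, 1/h
print(round(mttf_with_repair(lam, mu) / mttf_no_repair(lam)))
```

With these assumed rates, repair multiplies the mean time to failure by a factor of roughly 1.7 thousand, i.e. more than two orders of magnitude, consistent with the abstract's finding.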
(This article belongs to the Special Issue Selected Papers from MICSECS 2019)

Article
A Unified Methodology for Heartbeats Detection in Seismocardiogram and Ballistocardiogram Signals
Computers 2020, 9(2), 41; https://doi.org/10.3390/computers9020041 - 22 May 2020
Viewed by 1725
Abstract
This work presents a methodology to analyze and segment both seismocardiogram (SCG) and ballistocardiogram (BCG) signals in a unified fashion. An unsupervised approach is followed to extract a template of SCG/BCG heartbeats, which is then used to fine-tune temporal waveform annotation. Rigorous performance assessment is conducted in terms of sensitivity, precision, Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) of annotation. The methodology is tested on four independent datasets, covering different measurement setups and time resolutions. A wide application range is therefore explored, which better characterizes the robustness and generality of the method with respect to a single dataset. Overall, sensitivity and precision scores are uniform across all datasets (p > 0.05 from the Kruskal–Wallis test): the average sensitivity among datasets is 98.7%, with 98.2% precision. On the other hand, a slight yet significant difference in RMSE and MAE scores was found (p < 0.01) in favor of datasets with higher sampling frequency. The best RMSE scores for SCG and BCG are 4.5 and 4.8 ms, respectively; similarly, the best MAE scores are 3.3 and 3.6 ms. The results were compared to relevant recent literature and are found to improve both detection performance and temporal annotation errors. Full article
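The template-matching step at the heart of such segmentation can be sketched as a plain cross-correlation: slide a heartbeat template over the signal and mark positions where the match score approaches the template's own energy. The synthetic signal, template shape and threshold below are invented; the paper's pipeline (unsupervised template extraction, annotation fine-tuning) is considerably more involved:

```python
def cross_correlate(signal, template):
    """Sliding dot product of the template against the signal."""
    n = len(template)
    return [sum(signal[i + j] * template[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

template = [0.0, 1.0, -1.0, 0.0]                    # crude "heartbeat" shape
signal = [0.0] * 5 + template + [0.0] * 3 + template + [0.0] * 5

scores = cross_correlate(signal, template)
peak_energy = sum(t * t for t in template)          # score of a perfect match
beats = [i for i, s in enumerate(scores) if s >= 0.9 * peak_energy]
print(beats)  # beat onsets found at samples 5 and 12
```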

Article
Inertial Sensor Based Solution for Finger Motion Tracking
Computers 2020, 9(2), 40; https://doi.org/10.3390/computers9020040 - 12 May 2020
Cited by 1 | Viewed by 1906
Abstract
Hand motion tracking plays an important role in virtual reality systems for immersion and interaction purposes. This paper discusses the problem of finger tracking and proposes the application of the extension of the Madgwick filter and a simple switching (motion recognition) algorithm as a comparison. The proposed algorithms utilize the three-link finger model and provide complete information about the position and orientation of the metacarpus. The numerical experiment shows that this approach is feasible and overcomes some of the major limitations of inertial motion tracking. The paper’s proposed solution was created in order to track a user’s pointing and grasping movements during the interaction with the virtual reconstruction of the cultural heritage of historical cities. Full article
(This article belongs to the Special Issue Selected Papers from MICSECS 2019)

Article
Quality of Service (QoS) Management for Local Area Network (LAN) Using Traffic Policy Technique to Secure Congestion
Computers 2020, 9(2), 39; https://doi.org/10.3390/computers9020039 - 12 May 2020
Cited by 2 | Viewed by 1660
Abstract
This study presents the proposed testbed implementation for the Advanced Technology Training Center (ADTEC) Batu Pahat, one of Malaysia’s industrial training institutes. The objectives of this study are to discover the issues regarding network congestion, propose a suitable method to overcome such issues, and generate output data for comparing the results before and after the proposed implementation. The internet is directly connected to internet service providers (ISPs), which neither impose any rules nor filter the traffic components; all connections rely on the best-effort service provided by the ISP. The congestion problem has been raised several times, and the information technology (IT) department has been receiving complaints about poor and sometimes intermittent internet connections. Such issues suggest a possible solution, since the end client is the human-resource core business; in addition, budget constraints contribute to the problem. After a comprehensive review of the related literature and discussions with experts, the implementation of quality of service through add-on rules, such as traffic policing on network traffic, was proposed. The proposed testbed also classifies the traffic. Results show that the proposed testbed is stable. After the implementation of the generated solution, the IT department no longer receives complaints, thus fulfilling the goal of zero internet connection issues. Full article
Article
Indiscernibility Mask Key for Image Steganography
Computers 2020, 9(2), 38; https://doi.org/10.3390/computers9020038 - 11 May 2020
Cited by 2 | Viewed by 1934
Abstract
Our concern in this paper is to explore the possibility of using rough inclusions for image steganography. We present our initial research using the indiscernibility relation as a steganographic key for hiding information in the stego carrier by means of a fixed mask. The information can be embedded into the stego carrier in a semi-random way, whereas the reconstruction is performed in a deterministic way. The information is placed in selected bytes that are indiscernible with the mask to a fixed degree. The bits indiscernible at other ratios (smaller or greater) form random gaps that make the presence of hidden information somewhat unpredictable. Our technique can modify any bits whose change does not cause a visual modification detectable by human sight, so we do not limit ourselves to the least significant bit; the only assumption is that we do not use a position already used by the mask we define. For simplicity’s sake, in this work we present its operation and features using the Least Significant Bit (LSB) method. In the experimental part, we implemented our method in the context of hiding an image within an image. The LSB technique in its simplest form is not resistant to steganalysis, so we used the well-known LSB matching method to mask the use of our steganographic key. To verify the resistance to steganalysis, we conducted and discuss the Chi-square and LSB enhancement tests. The positive features of our method include its simplicity and speed: to decode a message we only need to hide, or pass through another channel, a several-bit mask, the degree of indiscernibility, and the size of the hidden file. We hope that our method will find application in the art of creating steganographic keys. Full article
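The baseline LSB technique that the paper builds on can be shown in a few lines: overwrite the least significant bit of successive cover bytes with the message bits, then read them back. This is only the plain LSB baseline; the mask-based indiscernibility key of the paper is not reproduced, and the pixel values are invented.

```python
# Plain LSB embedding and extraction over a byte sequence.

def embed(pixels, bits):
    """Overwrite the LSB of the first len(bits) bytes with the message bits."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, n):
    """Read back the first n least significant bits."""
    return [p & 1 for p in pixels[:n]]

cover = [120, 57, 201, 33, 90, 18, 77, 255]
secret = [1, 0, 1, 1, 0]
stego = embed(cover, secret)
print(extract(stego, len(secret)))  # [1, 0, 1, 1, 0]
```

Note that each cover byte changes by at most 1, which is why naive LSB is visually invisible yet statistically detectable, motivating LSB matching and the Chi-square test mentioned above.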
Article
Complex Data Imputation by Auto-Encoders and Convolutional Neural Networks—A Case Study on Genome Gap-Filling
Computers 2020, 9(2), 37; https://doi.org/10.3390/computers9020037 - 11 May 2020
Cited by 2 | Viewed by 2459
Abstract
Missing data imputation has been a hot topic in the past decade, and many state-of-the-art works have been presented to propose novel, interesting solutions that have been applied in a variety of fields. In the past decade, the successful results achieved by deep learning techniques have opened the way to their application for solving difficult problems where human skill is not able to provide a reliable solution. Not surprisingly, some deep learners, mainly exploiting encoder-decoder architectures, have also been designed and applied to the task of missing data imputation. However, most of the proposed imputation techniques have not been designed to tackle “complex data”, that is, high-dimensional data belonging to datasets with huge cardinality that describe complex problems. Precisely, they often need critical parameters to be set manually, or exploit complex architectures and/or training phases that make their computational load impractical. In this paper, after clustering the state-of-the-art imputation techniques into three broad categories, we briefly review the most representative methods and then describe our data imputation proposals, which exploit deep learning techniques specifically designed to handle complex data. Comparative tests on genome sequences show that our deep learning imputers outperform the state-of-the-art KNN-imputation method when filling gaps in human genome sequences. Full article
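The KNN-imputation baseline used for comparison above works roughly as follows: a missing entry is filled with the average of that column over the k rows closest in the observed columns. The sketch below is a toy illustration with invented values, not the exact variant benchmarked in the paper.

```python
# Toy KNN imputation: fill one missing entry from the k nearest complete rows.

def knn_impute(rows, target, missing_col, k):
    """Impute rows[target][missing_col] as the mean of that column
    over the k rows closest to the target in the other columns."""
    def dist(a, b):
        # squared Euclidean distance over the observed columns only
        return sum((a[j] - b[j]) ** 2 for j in range(len(a)) if j != missing_col)
    others = [r for i, r in enumerate(rows) if i != target]
    others.sort(key=lambda r: dist(r, rows[target]))
    return sum(r[missing_col] for r in others[:k]) / k

data = [
    [1.0, 2.0, 3.0],
    [1.1, 2.1, 3.2],
    [9.0, 9.0, 9.0],
    [1.05, 2.05, None],   # third feature is missing
]
value = knn_impute(data, target=3, missing_col=2, k=2)
print(round(value, 2))  # 3.1
```

The two nearby rows dominate the estimate while the distant outlier row is ignored; deep imputers aim to do better than this when the data are high-dimensional and the neighborhoods are no longer meaningful.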
Article
Advanced Convolutional Neural Network-Based Hybrid Acoustic Models for Low-Resource Speech Recognition
Computers 2020, 9(2), 36; https://doi.org/10.3390/computers9020036 - 02 May 2020
Viewed by 1938
Abstract
Deep neural networks (DNNs) have achieved great success in acoustic modeling for speech recognition tasks. Among these networks, the convolutional neural network (CNN) is effective at representing the local properties of speech formants. However, CNNs are not well suited to modeling the long-term context dependencies between speech signal frames. Recently, recurrent neural networks (RNNs) have shown great ability to model long-term context dependencies. However, the performance of RNNs on low-resource speech recognition tasks is poor, and can even be worse than that of conventional feed-forward neural networks. Moreover, these networks often overfit severely on the training corpus in low-resource speech recognition tasks. This paper presents the results of our work on combining CNNs and conventional RNNs with gate, highway, and residual networks to reduce the above problems. The optimal neural network structures and training strategies for the proposed neural network models are explored. Experiments were conducted on the Amharic and Chaha datasets, as well as on the limited language packages (10 h) of the benchmark datasets released under the Intelligence Advanced Research Projects Activity (IARPA) Babel Program. The proposed neural network models achieve 0.1–42.79% relative performance improvements over their corresponding feed-forward DNN, CNN, bidirectional RNN (BRNN), or bidirectional gated recurrent unit (BGRU) baselines across six language collections. These approaches are promising candidates for developing better-performing acoustic models for low-resource speech recognition tasks. Full article
(This article belongs to the Special Issue Artificial Neural Networks in Pattern Recognition)
Article
Generating Trees for Comparison
Computers 2020, 9(2), 35; https://doi.org/10.3390/computers9020035 - 29 Apr 2020
Viewed by 1906
Abstract
Tree comparisons are used in various areas with various statistical or dissimilarity measures. Given that data in various domains are diverse, and a particular comparison approach could be more appropriate for specific applications, there is a need to evaluate different comparison approaches. As gathering real data is often an extensive task, using generated trees provides a faster evaluation of the proposed solutions. This paper presents three algorithms for generating random trees, parametrized by tree size, shape (based on the node distribution), and the amount of difference between the generated trees. The motivation for the algorithms came from unordered trees that are created from class hierarchies in object-oriented programs. The presented algorithms are evaluated by statistical and dissimilarity measures to observe stability, behavior, and impact on node distribution. The results of the dissimilarity measure evaluation show that the algorithms are suitable for tree comparison. Full article
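A minimal sketch of size-parametrized random tree generation is shown below: each new node attaches to a uniformly chosen earlier node, yielding a random unordered tree of the requested size. The uniform-attachment rule is an assumption for illustration; the paper's algorithms additionally control shape and inter-tree difference.

```python
# Generate a random tree of a given size as a parent list.
import random

def random_tree(size, seed=None):
    """Return a parent list: parent[0] is None (the root), parent[i] < i."""
    rng = random.Random(seed)
    parent = [None]
    for i in range(1, size):
        parent.append(rng.randrange(i))  # attach node i to a random earlier node
    return parent

tree = random_tree(8, seed=42)
print(len(tree))                                         # 8
print(all(p < i for i, p in enumerate(tree) if i > 0))   # True
```

Because every parent index precedes its child, the structure is guaranteed to be a tree, which makes such generators convenient test-data factories for comparison measures.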
Article
Employing Behavioral Analysis to Predict User Attitude towards Unwanted Content in Online Social Network Services: The Case of Makkah Region in Saudi Arabia
Computers 2020, 9(2), 34; https://doi.org/10.3390/computers9020034 - 20 Apr 2020
Cited by 2 | Viewed by 1938
Abstract
The high volume of user-generated content caused by the popular use of online social network services exposes users to different kinds of content that can be harmful or unwanted. Solutions to protect user privacy from such unwanted content cannot be generalized due to different perceptions of what is considered as unwanted for each individual. Thus, there is a substantial need to design a personalized privacy protection mechanism that takes into consideration differences in users’ privacy requirements. To achieve personalization, a user attitude about certain content must be acknowledged by the automated protection system. In this paper, we investigate the relationship between user attitude and user behavior among users from the Makkah region in Saudi Arabia to determine the applicability of considering users’ behaviors as indicators of their attitudes towards unwanted content. We propose a semi-explicit attitude measure to infer user attitude from user-selected examples. Results revealed that semi-explicit attitude is a more reliable attitude measure to represent users’ actual attitudes than self-reported preferences for our sample. In addition, results show a statistically significant relationship between a user’s commenting behavior and the user’s semi-explicit attitude within our sample. Thus, commenting behavior is an effective indicator of the user’s semi-explicit attitude towards unwanted content for a user from the Makkah region in Saudi Arabia. We believe that our findings can have positive implications for designing an effective automated personalized privacy protection mechanism, and that reproducing the study with other populations would extend them. Full article
Article
Evaluation of Features in Detection of Dislike Responses to Audio–Visual Stimuli from EEG Signals
Computers 2020, 9(2), 33; https://doi.org/10.3390/computers9020033 - 20 Apr 2020
Cited by 4 | Viewed by 2137
Abstract
There is a strong correlation between the like/dislike responses to audio–visual stimuli and the emotional arousal and valence reactions of a person. In the present work, our attention is focused on the automated detection of dislike responses based on EEG activity when music videos are used as audio–visual stimuli. Specifically, we investigate the discriminative capacity of the Logarithmic Energy (LogE), Linear Frequency Cepstral Coefficients (LFCC), Power Spectral Density (PSD) and Discrete Wavelet Transform (DWT)-based EEG features, computed with and without segmentation of the EEG signal, on the dislike detection task. We carried out a comparative evaluation with eighteen modifications of the above-mentioned EEG features that cover different frequency bands and use different energy decomposition methods and spectral resolutions. For that purpose, we made use of Naïve Bayes classifier (NB), Classification and regression trees (CART), k-Nearest Neighbors (kNN) classifier, and support vector machines (SVM) classifier with a radial basis function (RBF) kernel trained with the Sequential Minimal Optimization (SMO) method. The experimental evaluation was performed on the well-known and widely used DEAP dataset. A classification accuracy of up to 98.6% was observed for the best performing combination of pre-processing, EEG features and classifier. These results support that the automated detection of like/dislike reactions based on EEG activity is feasible in a personalized setup. This opens opportunities for the incorporation of such functionality in entertainment, healthcare and security applications. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
Article
An Approach to Chance Constrained Problems Based on Huge Data Sets Using Weighted Stratified Sampling and Adaptive Differential Evolution
Computers 2020, 9(2), 32; https://doi.org/10.3390/computers9020032 - 16 Apr 2020
Viewed by 1787
Abstract
In this paper, a new approach to solve Chance Constrained Problems (CCPs) using huge data sets is proposed. Specifically, instead of the conventional mathematical model, a huge data set is used to formulate the CCP, since such large data sets are available nowadays thanks to advanced information technologies. Since the data set is too large to evaluate the probabilistic constraint of the CCP directly, a new data reduction method called Weighted Stratified Sampling (WSS) is proposed to describe a relaxation problem of the CCP. An adaptive Differential Evolution combined with a pruning technique is also proposed to solve the relaxation problem efficiently. The performance of WSS is compared with that of a well-known method, Simple Random Sampling. Then, the proposed approach is applied to a real-world application, namely flood control planning formulated as a CCP. Full article
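For contrast with Simple Random Sampling, plain stratified sampling can be sketched as follows: split the data set into strata and let each stratum contribute samples in proportion to its size. This is only the unweighted baseline idea; the weighting step of the proposed WSS method is not reproduced here, and the strata are invented.

```python
# Proportional stratified sampling over pre-defined strata.
import random

def stratified_sample(strata, total, seed=None):
    """Draw `total` samples, allocated across strata in proportion to size."""
    rng = random.Random(seed)
    n = sum(len(s) for s in strata)
    picked = []
    for s in strata:
        k = round(total * len(s) / n)     # this stratum's share of the budget
        picked.extend(rng.sample(s, k))
    return picked

strata = [list(range(0, 80)), list(range(80, 100))]   # sizes 80 and 20
sample = stratified_sample(strata, total=10, seed=1)
print(len(sample))                        # 10
print(sum(1 for x in sample if x < 80))   # 8 come from the large stratum
```

Proportional allocation guarantees every stratum is represented, which is exactly what a simple random draw of 10 points cannot guarantee for small strata.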
Article
Design and Validation of a Minimal Complexity Algorithm for Stair Step Counting
Computers 2020, 9(2), 31; https://doi.org/10.3390/computers9020031 - 16 Apr 2020
Viewed by 2391
Abstract
Wearable sensors play a significant role in monitoring the functional ability of the elderly and, in general, in promoting active ageing. One of the relevant variables to be tracked is the number of stair steps (single stair steps) performed daily, which is more challenging than counting flights of stairs or detecting stair climbing. In this study, we propose a minimal-complexity algorithm composed of a hierarchical classifier and a linear model to estimate the number of stair steps performed during everyday activities. The algorithm was calibrated on accelerometer and barometer recordings measured using a sensor platform worn at the wrist by 20 healthy subjects. It was then tested on 10 older people, specifically enrolled for the study. The algorithm was compared with three other state-of-the-art methods, which used the accelerometer, the barometer or both. The experiments showed the good performance of our algorithm (stair step counting error: 13.8%), comparable with the best state-of-the-art methods (p > 0.05), but with a lower computational load and model complexity. Finally, the algorithm was successfully implemented in a low-power smartwatch prototype with a memory footprint of about 4 kB. Full article
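The barometric part of such a counter can be illustrated with a toy sketch: convert pressure samples to relative altitude (assumed already done here) and count upward increments of roughly one stair riser. The step height and the altitude trace are illustrative assumptions; the paper's hierarchical classifier and linear model are not reproduced.

```python
# Toy barometer-based stair-step counter over a relative-altitude trace (metres).

STEP_HEIGHT_M = 0.17  # a typical stair riser, assumed for illustration

def count_steps(altitudes):
    """Count upward altitude increments of about one stair step."""
    steps, ref = 0, altitudes[0]
    for a in altitudes[1:]:
        if a - ref >= STEP_HEIGHT_M:
            steps += int((a - ref) / STEP_HEIGHT_M)
            ref = a
        elif a < ref:
            ref = a  # track descents so later climbs count from the bottom
    return steps

# A climb of about one metre sampled in small increments.
trace = [0.0, 0.05, 0.12, 0.20, 0.35, 0.51, 0.70, 0.86, 1.02]
print(count_steps(trace))  # 4 step-sized rises detected
```

A threshold counter like this undercounts when rises straddle sample boundaries, which is one reason the paper fuses the barometer with accelerometer features.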
Article
Eliminating Nonuniform Smearing and Suppressing the Gibbs Effect on Reconstructed Images
Computers 2020, 9(2), 30; https://doi.org/10.3390/computers9020030 - 15 Apr 2020
Viewed by 1851
Abstract
In this work, the problem of eliminating a nonuniform rectilinear smearing of an image is considered, using a mathematical- and computer-based approach. An example of such a problem is a picture of several cars, moving at different speeds, taken with a fixed camera. The problem is described by a set of one-dimensional Fredholm integral equations (IEs) of the first kind of convolution type, with a one-dimensional point spread function (PSF), in the case of uniform smearing, and by a set of new one-dimensional IEs of a general type (i.e., not of convolution type), with a two-dimensional PSF, in the case of nonuniform smearing. The problem can also be described by a two-dimensional IE of convolution type with a two-dimensional PSF in the case of uniform smearing, and by a new two-dimensional IE of a general type with a four-dimensional PSF in the case of nonuniform smearing. The problem of solving a Fredholm IE of the first kind is ill-posed (i.e., unstable). Therefore, IEs of convolution type are solved by the Fourier transform (FT) method with Tikhonov’s regularization (TR), and IEs of the general type are solved by the quadrature/cubature and TR methods. Moreover, the magnitude of the image smear, Δ, is determined by an original “spectral method”, which increases the accuracy of image restoration. It is shown that using a set of one-dimensional IEs is preferable to a single two-dimensional IE in the case of nonuniform smearing. In the inverse problem (i.e., image restoration), the Gibbs effect (the appearance of false waves) may occur in the image, either as an edge effect or as an inner effect. The edge effect is well suppressed by the proposed technique of “diffusing the edges”. The inner effect is difficult to eliminate, but the image smearing itself acts as diffusion and largely suppresses the inner Gibbs effect.
It is also shown that, in the presence of impulse noise in an image, the well-known Tukey median filter can distort the image itself, and the Gonzalez adaptive filter also distorts the image, though to a lesser extent. We propose a modified adaptive filter. A software package was developed in MATLAB, and illustrative calculations are performed. Full article
(This article belongs to the Special Issue Selected Papers from MICSECS 2019)
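The impulse-noise filtering discussed above can be illustrated with a minimal 1-D median filter: each sample is replaced by the median of its neighborhood, which removes isolated "salt and pepper" outliers. This is only the textbook median filter, not the paper's modified adaptive filter; the signal values are invented.

```python
# Minimal 1-D median filter for impulse noise.

def median_filter(signal, radius=1):
    """Replace each sample by the median of its (2*radius + 1) neighborhood,
    truncated at the signal borders."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = sorted(signal[lo:hi])
        out.append(window[len(window) // 2])
    return out

noisy = [10, 10, 255, 10, 10, 0, 10, 10]  # two impulse outliers
print(median_filter(noisy))  # [10, 10, 10, 10, 10, 10, 10, 10]
```

Both impulses vanish while the flat background is untouched; the distortion the paper attributes to median-type filters appears on genuine edges and fine detail, which this flat example does not exercise.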
Article
Deep Transfer Learning in Diagnosing Leukemia in Blood Cells
Computers 2020, 9(2), 29; https://doi.org/10.3390/computers9020029 - 15 Apr 2020
Cited by 9 | Viewed by 2440
Abstract
Leukemia is a fatal disease that threatens the lives of many patients. Early detection can effectively improve its rate of remission. This paper proposes two automated classification models based on blood microscopic images to detect leukemia by employing transfer learning, rather than traditional approaches that have several disadvantages. In the first model, blood microscopic images are pre-processed; then, features are extracted by a pre-trained deep convolutional neural network named AlexNet and classified by several well-known classifiers. In the second model, after pre-processing the images, AlexNet is fine-tuned for both feature extraction and classification. Experiments conducted on a dataset of 2820 images show that the second model outperforms the first, achieving 100% classification accuracy. Full article
Article
Insights into Mapping Solutions Based on OPC UA Information Model Applied to the Industry 4.0 Asset Administration Shell
Computers 2020, 9(2), 28; https://doi.org/10.3390/computers9020028 - 14 Apr 2020
Cited by 6 | Viewed by 2449
Abstract
In the context of Industry 4.0, a great deal of effort is being put into achieving interoperability among industrial applications. As the definition and adoption of communication standards are of paramount importance for realizing interoperability, during the last few years different organizations have developed reference architectures to align standards in the context of the fourth industrial revolution. One of the main examples is the reference architecture model for Industry 4.0, which defines the asset administration shell as the cornerstone of interoperability between applications managing manufacturing systems. Within Industry 4.0 there is also considerable interest in the standard open platform communications unified architecture (OPC UA), which is listed as the recommendation for realizing the communication layer of the reference architecture model. The contribution of this paper is to give some insights into the modelling techniques that should be adopted when defining an OPC UA Information Model exposing information of the very recent metamodel defined for the asset administration shell. All the general rationales and solutions provided here are compared with the existing OPC UA-based representations of the asset administration shell found in the literature. Specifically, differences are pointed out, giving the reader the advantages and disadvantages of each solution. Full article
Article
Cognification of Program Synthesis—A Systematic Feature-Oriented Analysis and Future Direction
Computers 2020, 9(2), 27; https://doi.org/10.3390/computers9020027 - 12 Apr 2020
Viewed by 2094
Abstract
Program synthesis is defined as a software development step that aims at automatically generating code satisfying high-level specifications. There are various program synthesis applications built on Machine Learning (ML) and Natural Language Processing (NLP) based approaches. Recently, there have been remarkable advancements in the Artificial Intelligence (AI) domain, with a notable rise in advanced ML techniques. Deep Learning (DL), for instance, is a currently attractive research field that has led to advances in ML and NLP. With this advancement, there is a need to gain greater benefits from these approaches to cognify synthesis processes for a next-generation model-driven engineering (MDE) framework. In this work, a systematic domain analysis is conducted to explore the extent to which automatic code generation can be enabled via the next generation of cognified MDE frameworks that support recent DL and NLP techniques. After identifying critical features that might be considered when distinguishing synthesis systems, it becomes possible to introduce a conceptual design for future program synthesis/MDE frameworks. By searching different research database sources, 182 articles related to program synthesis approaches and their applications were identified. After defining research questions, structuring the domain analysis, and applying inclusion and exclusion criteria to the classification scheme, 170 of the 182 articles were considered in a three-phase systematic analysis guided by the research questions. This analysis is introduced as a key contribution. The results are documented using feature diagrams as a comprehensive feature model of program synthesis, showing alternative techniques and architectures. The achieved outcomes motivate a conceptual architectural design for the next generation of cognified MDE frameworks. Full article
Review
Survey on Decentralized Fingerprinting Solutions: Copyright Protection through Piracy Tracing
Computers 2020, 9(2), 26; https://doi.org/10.3390/computers9020026 - 03 Apr 2020
Cited by 3 | Viewed by 2277
Abstract
Copyright protection is one of the most relevant challenges in the network society. This paper focuses on digital fingerprinting, a technology that facilitates the tracing of the source of an illegal redistribution, making it possible for the copyright holder to take legal action in case of copyright violation. The paper reviews recent digital fingerprinting solutions that are available for two particularly relevant scenarios: peer-to-peer distribution networks and broadcasting. After analyzing those solutions, a discussion is carried out to highlight the properties and the limitations of those techniques. Finally, some directions for further research on this topic are suggested. Full article
Article
A Multi-Hop Data Dissemination Algorithm for Vehicular Communication
Computers 2020, 9(2), 25; https://doi.org/10.3390/computers9020025 - 31 Mar 2020
Cited by 3 | Viewed by 2144
Abstract
In vehicular networks, efficient multi-hop message dissemination can be used for various purposes, such as informing the driver about a recent emergency event or propagating the local dynamic map of a predefined region. Disseminating warning information over a longer distance can reduce accidents on the road: it gives the driver additional time to react to situations adequately and assists in finding a safe route towards the destination. The adopted V2X standards, ETSI C-ITS and IEEE 1609/IEEE 802.11p, specify only primitive multi-hop message dissemination schemes. The IEEE 1609.4 standard disseminates broadcast messages by flooding, which causes high redundancy, severe congestion, and long delays during multi-hop propagation. To address these problems, we propose an effective broadcast message dissemination method. It introduces a source Lateral Crossing Line (LCL) algorithm, which elects a set of relay vehicles for each hop based on vehicle locations in a way that reduces redundant retransmissions and congestion, consequently minimizing delays. Our simulation results demonstrate that the proposed method achieves about a 15% reduction in delays and a twofold enhancement in propagation distance compared with previous methods. Full article
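The relay-election idea can be illustrated with a toy 1-D model: at each hop, the vehicle farthest along the road but still within radio range rebroadcasts, so the message advances with the fewest retransmissions. This greedy farthest-in-range rule is only a sketch in the spirit of location-based relaying; the actual LCL algorithm, positions, and range below are not from the paper.

```python
# Greedy farthest-in-range relay chain over 1-D vehicle positions (metres).

def elect_relays(positions, src, radio_range):
    """Return the chain of relay positions elected hop by hop from src."""
    relays, current = [], src
    reachable = sorted(p for p in positions if p > src)
    for _ in range(len(reachable)):          # bounded loop; current always advances
        in_range = [p for p in reachable if current < p <= current + radio_range]
        if not in_range:
            break                            # coverage gap: the message stops here
        current = max(in_range)              # farthest vehicle in range relays next
        relays.append(current)
    return relays

vehicles = [0, 40, 90, 130, 180, 260, 300]
print(elect_relays(vehicles, src=0, radio_range=100))  # [90, 180, 260, 300]
```

Four retransmissions cover 300 m here, whereas flooding would have all six downstream vehicles rebroadcast; that redundancy gap is what relay election exploits.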