
Computers, Volume 8, Issue 4 (December 2019) – 22 articles

Cover Story: In this paper, a robotic vision system is proposed that constantly uses a 3D camera, while actively switching to use a second RGB camera in cases where it is necessary. It detects objects in the view seen by the 3D camera, which is mounted on a PR2 humanoid robot’s head, and in the event of low confidence regarding the detection correctness, the secondary camera, which is installed on the robot’s arm, is moved toward the object to obtain another perspective of it. Detections in the two camera views are matched and their recognitions are fused through a novel approach based on the Dempster–Shafer evidence theory. Significant improvements in object detection performance are observed after employing the proposed active vision system.
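The Dempster–Shafer fusion step mentioned in the cover story can be sketched with Dempster's rule of combination. A minimal illustration (the object labels and mass values below are hypothetical, not taken from the paper):

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of object labels."""
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to disjoint hypotheses
    # Normalise by the non-conflicting mass so the result sums to 1
    return {s: w / (1.0 - conflict) for s, w in fused.items()}

# Head-camera and arm-camera detections of the same object; mass on the
# full frame of discernment expresses "unsure".
frame = frozenset({"mug", "bottle"})
head = {frozenset({"mug"}): 0.6, frame: 0.4}
arm = {frozenset({"mug"}): 0.7, frame: 0.3}
fused = combine(head, arm)
```

When the two views agree, belief concentrates on the shared hypothesis: here the fused mass on "mug" rises to 0.88, above either camera's individual confidence.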
Open Access Article
An Investigation of a Feature-Level Fusion for Noisy Speech Emotion Recognition
Computers 2019, 8(4), 91; https://doi.org/10.3390/computers8040091 - 13 Dec 2019
Cited by 2 | Viewed by 2530
Abstract
Because one of the key issues in improving the performance of Speech Emotion Recognition (SER) systems is the choice of an effective feature representation, most of the research has focused on developing feature-level fusion using a large set of features. In our study, we propose a relatively low-dimensional feature set that combines three features: baseline Mel Frequency Cepstral Coefficients (MFCCs), MFCCs derived from Discrete Wavelet Transform (DWT) sub-band coefficients (denoted DMFCC), and pitch-based features. Moreover, the performance of the proposed feature extraction method is evaluated in clean conditions and in the presence of several real-world noises. Furthermore, conventional Machine Learning (ML) and Deep Learning (DL) classifiers are employed for comparison. The proposal is tested on speech utterances from both the Berlin Emotional Database (EMO-DB) and the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database through speaker-independent experiments. Experimental results show improvement in speech emotion detection over baselines. Full article
(This article belongs to the Special Issue Mobile, Secure and Programmable Networking (MSPN'2019))
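The feature-level fusion described in the abstract can be illustrated with a dependency-free sketch: a one-level Haar DWT supplies the sub-band whose features are concatenated with baseline features and a pitch value. The log-energy features below are a crude stand-in for MFCCs (which would normally come from a speech library), and `f0` is assumed to be precomputed:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail sub-bands."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass (detail)
    return a, d

def frame_energy_features(x, n=4):
    """Log-energy of n equal frames; a crude stand-in for MFCCs."""
    frames = np.array_split(np.asarray(x, float), n)
    return np.array([np.log(np.sum(f ** 2) + 1e-12) for f in frames])

def fused_features(signal, f0):
    """Feature-level fusion: concatenate baseline features, features of
    the DWT approximation sub-band, and a pitch-based feature (f0)."""
    a, _ = haar_dwt(np.asarray(signal, float))
    return np.concatenate([frame_energy_features(signal),
                           frame_energy_features(a),
                           [f0]])
```

The point of the sketch is the fusion pattern itself: each feature family is computed independently and the final vector is their concatenation, which a downstream ML or DL classifier then consumes.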

Open Access Article
Beyond Platform Economy: A Comprehensive Model for Decentralized and Self-Organizing Markets on Internet-Scale
Computers 2019, 8(4), 90; https://doi.org/10.3390/computers8040090 - 09 Dec 2019
Viewed by 2670
Abstract
The platform economy denotes a subset of economic activities enabled by platforms such as Amazon, Alibaba, and Uber. Due to their tremendous success, more and more offerings concentrate around platforms, increasing the platforms’ positional power and hence leading towards a de-facto centralization of previously decentralized online markets. Furthermore, platform models work well for individual products and services or predefined combinations of these. However, they fall short in supporting complex products (personalized combinations of individual products and services) whose combination is required to fulfill a particular consumer need, consequently increasing transaction costs for consumers looking for such products. To address these issues, we envision a “post-platform economy”—an economy facilitated by decentralized and self-organized online structures named Distributed Market Spaces. This work proposes a comprehensive model to serve as a guiding framework for the analysis, design, and implementation of Distributed Market Spaces. The proposed model leverages the St. Gallen Media Reference Model by adjusting existing entities and elements and adding new ones. The resulting multidimensional and multi-view model defines how a reference Distributed Market Space (a) works on the strategic and operational levels, (b) enables market exchange for complex products, and (c) might unfold in its instances during different life stages. In a case study, we demonstrate the application of our model and evaluate its suitability for meeting the primary objectives it was designed for. Full article

Open Access Article
MoDAr-WA: Tool Support to Automate an MDA Approach for MVC Web Application
Computers 2019, 8(4), 89; https://doi.org/10.3390/computers8040089 - 05 Dec 2019
Cited by 2 | Viewed by 2644
Abstract
Model-driven engineering (MDE) uses models throughout the application development process. MDE is notably based on model-driven architecture (MDA), one of the important standards of the Object Management Group (OMG). MDA aims to generate source code from abstract models through several model transformations between and inside the different MDA levels: the computation independent model (CIM), the platform independent model (PIM), and the platform specific model (PSM), before code. In this context, several methods and tools have been proposed in the literature and in industry that aim to automatically generate source code from the MDA levels. However, researchers still face many constraints: model specification, transformation automation, and level traceability. In this paper, we present a tool support, Model-Driven Architecture for Web Application (MoDAr-WA), that implements our proposed approach, aiming to automate transformations from the highest MDA level (CIM) to the lowest one (code) while ensuring traceability. This paper is a continuation of our previous works, where we automated the transformation from the CIM level to the PIM level. To this end, we present a set of meta-models, QVT and Acceleo transformations, and the tools used to develop our Eclipse plug-in, MoDAr-WA. In particular, we used QVT rules for transformations between models and Acceleo for generating code from models. Finally, we use MoDAr-WA to apply the proposed approach to the MusicStore case study and compare the generated code with the original application code. Full article

Open Access Article
Implementation of a PSO-Based Security Defense Mechanism for Tracing the Sources of DDoS Attacks
Computers 2019, 8(4), 88; https://doi.org/10.3390/computers8040088 - 04 Dec 2019
Viewed by 2509
Abstract
Most existing approaches for solving the distributed denial-of-service (DDoS) problem focus on specific security mechanisms, for example, network intrusion detection system (NIDS) detection and firewall configuration, rather than on packet routing approaches that defend against DDoS threats with new flow management techniques. To defend against DDoS attacks, the present study proposes a modified particle swarm optimization (PSO) scheme based on an IP traceback (IPTBK) technique, designated PSO-IPTBK, to solve the IP traceback problem. Specifically, this work focuses on analyzing the detection of DDoS attacks to predict the possible attack routes in a distributed network. In the proposed approach, PSO-IPTBK identifies the source of DDoS attacks by reconstructing the probable attack routes from collected network packets. The performance of the PSO-IPTBK algorithm in reconstructing the attack route was investigated through a series of simulations using OMNeT++ 5.5.1 and the INET 4 Framework. The results show that the proposed scheme can determine the most probable route between the attackers and the victim to defend against DDoS attacks. Full article
(This article belongs to the Special Issue Selected Papers from IIKII 2019 Conferences in Computers)
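A generic particle swarm optimizer of the kind the PSO-IPTBK scheme builds on can be sketched as follows; the cost function here is a placeholder for the paper's route-reconstruction objective, and all parameter values are conventional defaults, not the paper's settings:

```python
import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimise cost(position) over dim-dimensional real vectors."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best position
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + pull towards personal best + pull towards swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In the paper's setting the "position" would encode a candidate attack route and the cost would score how well that route explains the collected packets; the sphere function used in the test below is purely illustrative.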

Open Access Review
Neural Net-Based Approach to EEG Signal Acquisition and Classification in BCI Applications
Computers 2019, 8(4), 87; https://doi.org/10.3390/computers8040087 - 04 Dec 2019
Cited by 1 | Viewed by 2790
Abstract
The following contribution describes a neural net-based, noninvasive methodology for electroencephalographic (EEG) signal classification. The application concerns a brain–computer interface (BCI) allowing disabled people to interact with their environment using only brain activity. It consists of classifying the user’s thoughts in order to translate them into commands, such as controlling wheelchairs, moving a cursor, or spelling. The proposed method follows a functional model, as is the case for any BCI, achieved through three main phases: data acquisition and preprocessing, feature extraction, and classification of brain activities. For this purpose, we propose an interpretation model implementing a quantization method that uses the fast Fourier transform with root mean square error for feature extraction and a self-organizing-map-based neural network to generate classifiers, allowing better interpretation of brain activities. In order to show the effectiveness of the proposed methodology, an experimental study was conducted on five mental activities acquired by a G.tec BCI system with 16 simultaneously sampled 24-bit bio-signal channels, with experiments performed on 10 randomly chosen subjects. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)

Open Access Article
Network Intrusion Detection with a Hashing Based Apriori Algorithm Using Hadoop MapReduce
Computers 2019, 8(4), 86; https://doi.org/10.3390/computers8040086 - 02 Dec 2019
Cited by 1 | Viewed by 2700
Abstract
The ubiquitous nature of Internet services across the globe has undoubtedly expanded the strategies and operational modes used by cybercriminals to perpetrate their unlawful activities through intrusions on various networks. Network intrusion has led to many financial losses and privacy problems for Internet users across the globe. In order to safeguard the network and to prevent Internet users from being regular victims of cyber-criminal activities, new solutions are needed. This research proposes a solution for intrusion detection using an improved hashing-based Apriori algorithm implemented on the Hadoop MapReduce framework, capable of using association rule mining for identifying and detecting network intrusions. We used the KDD dataset to evaluate the effectiveness and reliability of the solution. Our results show that this approach provides a reliable and effective means of detecting network intrusion. Full article
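The hashing-based improvement to Apriori can be illustrated with a PCY-style two-pass sketch over flow records: a first pass hashes pair counts into buckets so that the second pass counts only candidate pairs that can possibly be frequent. The transactions below are hypothetical alert-attribute sets, not the KDD data, and the single-machine loop stands in for the Hadoop MapReduce passes:

```python
from itertools import combinations

def hashed_frequent_pairs(transactions, min_support, n_buckets=101):
    """PCY-style hashed Apriori for frequent pairs."""
    # Pass 1: count single items, and hash every pair into a bucket counter
    item_count = {}
    buckets = [0] * n_buckets
    for t in transactions:
        for item in t:
            item_count[item] = item_count.get(item, 0) + 1
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    frequent_items = {i for i, c in item_count.items() if c >= min_support}
    # Pass 2: count only pairs of frequent items whose bucket passed the
    # threshold (a bucket count below min_support rules the pair out)
    pair_count = {}
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            if (pair[0] in frequent_items and pair[1] in frequent_items
                    and buckets[hash(pair) % n_buckets] >= min_support):
                pair_count[pair] = pair_count.get(pair, 0) + 1
    return {p: c for p, c in pair_count.items() if c >= min_support}
```

The bucket filter can only over-count (collisions add mass), so it never discards a truly frequent pair; it just shrinks the candidate set that the second pass must count, which is the memory saving the hashing variant is after.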

Open Access Article
A Proposed DoS Detection Scheme for Mitigating DoS Attack Using Data Mining Techniques
Computers 2019, 8(4), 85; https://doi.org/10.3390/computers8040085 - 26 Nov 2019
Cited by 1 | Viewed by 2565
Abstract
A denial of service (DoS) attack in a computer network is an attack on the availability of computer resources that prevents users from having access to those resources over the network. Denial of service attacks can be costly, capable of reaching $100,000 per hour. The development of easily accessible, simple DoS tools has increased the frequency and reduced the level of expertise needed to launch an attack. Though these attack tools have been available for years, no defense mechanism has been proposed that targets them specifically. Most defense mechanisms in the literature are designed to defend against attacks captured in datasets like the KDD Cup 99 dataset from 20 years ago, generated by tools no longer used in modern attacks. In this paper, we capture and analyze traffic generated by some of these DoS attack tools using the Wireshark network analyzer and propose a signature-based DoS detection mechanism based on an SVM classifier to defend against attacks launched by these tools. Our proposed detection mechanism was tested with the Snort IDS and compared with existing defense mechanisms in the literature, showing high detection accuracy, a low false positive rate and fast detection time. Full article
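Classifying flows from traffic-derived features, as the abstract describes, can be sketched without dependencies. A plain perceptron stands in here for the paper's SVM (both learn a linear decision boundary); the (normalized packet rate, SYN ratio) features and all values are hypothetical:

```python
def train_linear_classifier(X, y, epochs=100):
    """Perceptron training: X is a list of feature vectors, y holds
    labels in {-1, +1} (+1 = attack traffic)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # Update only on misclassified samples
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
                errors += 1
        if errors == 0:          # converged on separable data
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical flows: (normalized packet rate, SYN ratio)
X = [[0.90, 0.95], [0.85, 0.80], [0.80, 0.90],   # flood traffic
     [0.05, 0.10], [0.10, 0.05], [0.03, 0.20]]   # benign traffic
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_classifier(X, y)
```

An SVM would additionally maximise the margin of this boundary; the perceptron is used only to keep the sketch self-contained.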

Open Access Article
Statistical-Hypothesis-Aided Tests for Epilepsy Classification
Computers 2019, 8(4), 84; https://doi.org/10.3390/computers8040084 - 20 Nov 2019
Cited by 1 | Viewed by 2644
Abstract
In this paper, an efficient, accurate, and nonparametric epilepsy detection and classification approach based on electroencephalogram (EEG) signals is proposed. The proposed approach mainly depends on a feature extraction process conducted using a set of statistical tests. Among the many existing tests, those that fit the processed data and the purpose of the proposed approach were used. From each test, various output scalars were extracted and used as features in the proposed detection and classification task. Experiments conducted on a Bonn University dataset showed that the proposed approach had very accurate results (98.4%) in the detection task and outperformed state-of-the-art methods in a similar task on the same dataset. The proposed approach also had accurate results (94.0%) in the classification task, but it did not outperform state-of-the-art methods in a similar task on the same dataset. However, the proposed approach had lower time complexity than the methods that achieved better results. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)

Open Access Article
A New Modular Petri Net for Modeling Large Discrete-Event Systems: A Proposal Based on the Literature Study
Computers 2019, 8(4), 83; https://doi.org/10.3390/computers8040083 - 15 Nov 2019
Cited by 2 | Viewed by 2674
Abstract
The Petri net is a highly useful tool for modeling discrete-event systems. However, Petri net models of real-life systems are enormous, and their state spaces are usually of infinite size, which makes analysis of the model difficult. Hence, slicing of Petri nets has been suggested to reduce their size. However, the existing slicing algorithms are ineffective for real-world systems. Therefore, there is a need for alternative methodologies that are effective for Petri net models of large real-life systems. This paper proposes a new modular Petri net as a solution. In a modular Petri net, large Petri net models are decomposed into modules. These modules are compact, and their state spaces are compact enough to be exhaustively analyzed. The research contributions of this paper are the following: firstly, an exhaustive literature study is done on modular Petri nets; secondly, from the conclusions drawn from the literature study, a new Petri net is proposed that supports module composition with clearly defined syntax; thirdly, the new Petri net is implemented in the software GPenSIM, which is crucial so that real-life discrete-event systems can be modeled, analyzed, and performance-optimized with GPenSIM. Full article
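The basic place/transition semantics that any such module builds on can be sketched in a few lines; this Python fragment only illustrates token flow and enabling (the paper's actual implementation is GPenSIM, a MATLAB toolbox), and the place/transition names are hypothetical:

```python
class PetriModule:
    """A minimal place/transition module: transitions consume tokens
    from input places and produce tokens in output places."""

    def __init__(self, marking):
        self.marking = dict(marking)      # place name -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        """inputs/outputs map place names to arc weights."""
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n
```

Modules of this kind can be composed by letting two instances share a named place, which is the composition idea the paper formalises with explicit syntax.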

Open Access Article
Smart Health and Safety Equipment Monitoring System for Distributed Workplaces
Computers 2019, 8(4), 82; https://doi.org/10.3390/computers8040082 - 11 Nov 2019
Cited by 1 | Viewed by 2963
Abstract
This paper presents the design and prototype of an IoT-based health and safety monitoring system using a MATLAB GUI. This system, called the Smart Health and Safety Monitoring System, is aimed at reducing the time, cost and manpower requirements of distributed workplaces. The proposed system is a real-time control and monitoring system that can access the status of consumable devices in the workplace on-line via the internet and prioritise the critical locations that need replenishing. The system dynamically updates the status of all locations, such as first aid boxes, earplug dispensers and fire extinguishers. Simulation results of the proposed system give shorter paths, times and costs in comparison to manual maintenance systems. Full article
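The prioritisation of critically low locations can be sketched with a heap keyed on the remaining stock fraction, so the most depleted location is serviced first. The site names, items and levels below are hypothetical:

```python
import heapq

def replenishment_queue(locations):
    """Order locations by criticality: lowest stock fraction first.
    Each entry is (site, item, current_level, capacity)."""
    heap = []
    for site, item, level, capacity in locations:
        # The heap orders tuples by their first element: level/capacity
        heapq.heappush(heap, (level / capacity, site, item))
    return [(site, item) for _, site, item in
            (heapq.heappop(heap) for _ in range(len(heap)))]
```

In the monitored system the levels would arrive from the IoT sensors in real time; re-running the queue after each update keeps the dispatch order current.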

Open Access Article
IP Spoofing In and Out of the Public Cloud: From Policy to Practice
Computers 2019, 8(4), 81; https://doi.org/10.3390/computers8040081 - 09 Nov 2019
Cited by 1 | Viewed by 2939
Abstract
In recent years, a trend that has been gaining particular popularity among cybercriminals is the use of the public Cloud to orchestrate and launch distributed denial of service (DDoS) attacks. One of the suspected catalysts for this trend appears to be the increased tightening of regulations and controls against IP spoofing by worldwide Internet service providers (ISPs). The three main contributions of this paper are: (1) for the first time in the research literature, we provide a comprehensive look at a number of possible attacks that involve the transmission of spoofed packets from or towards the virtual private servers hosted by a public Cloud provider; (2) we summarize the key findings of our research on the regulation of IP spoofing in the acceptable-use and terms-of-service policies of 35 real-world Cloud providers, which reveal that in over 50% of cases these policies make no explicit mention or prohibition of IP spoofing, thus failing to serve as a potential deterrent; (3) finally, we describe the results of our experimental study on the practical feasibility of IP spoofing involving a select number of real-world Cloud providers. These results show that most of the tested public Cloud providers do a very good job of preventing (potential) hackers from using their virtual private servers to launch spoofed-IP campaigns on third-party targets. However, the virtual private servers of these same Cloud providers appear themselves vulnerable to a number of attacks that involve the use of spoofed IP packets, and could be deployed as packet reflectors in attacks on third-party targets. We hope the paper serves as a call for awareness and action, and motivates public Cloud providers to deploy better techniques for detecting and eliminating spoofed IP traffic. Full article

Open Access Article
Design and Implementation of SFCI: A Tool for Security Focused Continuous Integration
Computers 2019, 8(4), 80; https://doi.org/10.3390/computers8040080 - 01 Nov 2019
Viewed by 2712
Abstract
Software security is a component of software development that should be integrated throughout the entire development lifecycle, and not simply as an afterthought. If security vulnerabilities are caught early in development, they can be fixed before the software is released to production environments. Furthermore, finding a software vulnerability early in development warns the programmer and lessens the likelihood of this type of programming error being repeated in other parts of the software project. Using Continuous Integration (CI) to check for security vulnerabilities every time new code is committed to a repository can alert developers of security flaws almost immediately after they are introduced. Finally, continuous integration tests for security give software developers the option of making the test results public, assuring users or potential users that the software is well tested for security flaws. While general-purpose continuous integration tools such as Jenkins-CI and GitLab-CI already exist, our tool is primarily focused on integrating third-party security testing programs and generating reports on the classes of vulnerabilities found in a software project. Our tool performs all tests in a snapshot (stateless) virtual machine so as to have reproducible tests in an environment similar to the deployment environment. This paper introduces the design and implementation of a tool for security-focused continuous integration. The test cases used demonstrate the ability of the tool to effectively uncover security vulnerabilities, even in open source software products such as ImageMagick and a smart grid application, Emoncms. Full article
(This article belongs to the Special Issue Code Generation, Analysis and Quality Testing)

Open Access Article
Prevention of Crypto-Ransomware Using a Pre-Encryption Detection Algorithm
Computers 2019, 8(4), 79; https://doi.org/10.3390/computers8040079 - 01 Nov 2019
Cited by 7 | Viewed by 3143
Abstract
Ransomware is a relatively new type of intrusion attack, made with the objective of extorting a ransom from its victim. There are several types of ransomware attacks, but the present paper focuses only on crypto-ransomware, because it makes data unrecoverable once the victim’s files have been encrypted. Therefore, in this research, we propose using machine learning to detect crypto-ransomware before it starts its encryption function, i.e., at the pre-encryption stage. Successful detection at this stage is crucial to stop the attack from achieving its objective. Once the victim is aware of the presence of crypto-ransomware, valuable data and files can be backed up to another location, and an attempt can then be made to clean the ransomware with minimum risk. We therefore propose a pre-encryption detection algorithm (PEDA) that consists of two phases. In PEDA-Phase-I, the Windows application programming interface (API) calls generated by a suspicious program are captured and analyzed using the learning algorithm (LA). The LA can determine whether the suspicious program is crypto-ransomware or not through API pattern recognition. This approach is used to ensure the most comprehensive detection of both known and unknown crypto-ransomware, but it may have a high false positive rate (FPR). If the prediction is crypto-ransomware, PEDA generates a signature of the suspicious program and stores it in the signature repository used in Phase-II. In PEDA-Phase-II, the signature repository allows the detection of crypto-ransomware at a much earlier stage, the pre-execution stage, through signature matching. This method can only detect known crypto-ransomware, and although very rigid, it is accurate and fast. The two phases of PEDA form two layers of early detection for crypto-ransomware to ensure zero files lost to the user. In this research, however, we focused on Phase-I, the LA. Based on our results, the LA had the lowest FPR of 1.56% compared to Naive Bayes (NB), Random Forest (RF), an Ensemble (NB and RF) and EldeRan (a machine learning approach to analyze and classify ransomware). A low FPR indicates that the LA has a low probability of wrongly predicting goodware. Full article
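The Phase-II signature matching described in the abstract can be sketched as a hash-based repository: Phase-I adds a signature for every sample the learning algorithm flags, and later samples are checked against it before execution. SHA-256 over the program bytes is an assumption for illustration; the paper does not specify its signature scheme:

```python
import hashlib

class SignatureRepository:
    """PEDA Phase-II sketch: store hashes of programs flagged by the
    Phase-I learning algorithm and match new samples pre-execution."""

    def __init__(self):
        self._signatures = set()

    def add(self, program_bytes):
        """Called when Phase-I predicts crypto-ransomware."""
        self._signatures.add(hashlib.sha256(program_bytes).hexdigest())

    def is_known_ransomware(self, program_bytes):
        """Fast pre-execution check against stored signatures."""
        return hashlib.sha256(program_bytes).hexdigest() in self._signatures
```

This mirrors the trade-off the abstract describes: exact matching is rigid (any byte change evades it) but accurate and fast, which is why the LA layer remains necessary for unknown samples.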

Open Access Article
On the Stability of a Hardware Compensation Mechanism for Embedded Energy Harvesting Emulators
Computers 2019, 8(4), 78; https://doi.org/10.3390/computers8040078 - 31 Oct 2019
Viewed by 2565
Abstract
The possibility of emulating renewable energy sources by means of portable, low-cost embedded devices is a key factor for the design and validation of ultra-low-power networked embedded systems. Full characterisation of the hardware-software platforms used for reliably and adaptively generating energy traces is therefore needed in order to clearly understand their adoption for testing energy harvesting devices or protocols. In this study we investigate a recently proposed embedded ultra-low-power solution, which targets the emulation of energy harvesting sources with real-time responsiveness. The analyzed platform has previously been evaluated in terms of accuracy and reactiveness. However, given the presence of a positive feedback mechanism implemented by means of a compensation circuit, the possibility of unstable dynamics could hinder its applicability. It is therefore of interest to delineate the conditions which guarantee the stability of the system. The aim of this article is to investigate the problem, to formally derive the electrical loads to be powered that allow operation in a stable regime, and to experimentally assess these properties in realistic scenarios. Theoretical and experimental results highlight the flexibility of the analyzed platform in terms of its capability to quickly adapt to changes in load conditions, while retaining bounded output dynamics. Full article
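The stability question can be illustrated with a toy discrete-time model of a positive-feedback loop: the output stays bounded only while the loop gain (compensation gain times load) is below one. This scalar model is a deliberate simplification for intuition, not the paper's compensation circuit:

```python
def simulate_compensation(gain, load, steps=200, v_in=1.0):
    """Iterate v[k+1] = v_in + gain * load * v[k].
    For |gain * load| < 1 the output converges to v_in / (1 - gain*load);
    otherwise it diverges geometrically."""
    v, trace = 0.0, []
    for _ in range(steps):
        v = v_in + gain * load * v   # positive feedback through the load
        trace.append(v)
    return trace
```

The same qualitative picture motivates the paper's formal derivation: the admissible loads are exactly those that keep the effective loop gain inside the stable region.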

Open Access Review
Review on Techniques for Plant Leaf Classification and Recognition
Computers 2019, 8(4), 77; https://doi.org/10.3390/computers8040077 - 21 Oct 2019
Cited by 6 | Viewed by 3077
Abstract
Plants can be classified and recognized based on their reproductive system (flowers) and leaf morphology. Neural networks are among the most popular machine learning algorithms for plant leaf classification. Commonly used classifiers include the artificial neural network (ANN), probabilistic neural network (PNN), convolutional neural network (CNN), k-nearest neighbor (KNN) and support vector machine (SVM); some studies have used combined techniques for accuracy improvement. The use of varying preprocessing techniques and characteristic parameters in feature extraction appears to improve the performance of plant leaf classification. The findings of previous studies are critically compared in terms of their accuracy based on the applied techniques. This paper aims to review and analyze the implementation and performance of various methodologies for plant classification. Each technique has its advantages and limitations in leaf pattern recognition. The quality of leaf images plays an important role, and therefore a reliable source of leaf database must be used to train the machine learning algorithm prior to leaf recognition and validation. Full article

Open Access Article
Lithuanian Speech Recognition Using Purely Phonetic Deep Learning
Computers 2019, 8(4), 76; https://doi.org/10.3390/computers8040076 - 18 Oct 2019
Cited by 2 | Viewed by 2830
Abstract
Automatic speech recognition (ASR) has been one of the biggest and hardest challenges in the field. A large majority of research in this area focuses on widely spoken languages such as English, and the problems of automatic Lithuanian speech recognition have attracted little attention so far. Due to the complicated language structure and scarcity of data, models proposed for other languages such as English cannot be directly adopted for Lithuanian. In this paper we propose an ASR system for the Lithuanian language, which is based on deep learning methods and can identify spoken words purely from their phoneme sequences. Two encoder-decoder models are used to solve the ASR task: a traditional encoder-decoder model and a model with an attention mechanism. The performance of these models is evaluated on an isolated speech recognition task (with an accuracy of 0.993) and a long-phrase recognition task (with an accuracy of 0.992). Full article
Open AccessArticle
An Efficient Group-Based Control Signalling within Proxy Mobile IPv6 Protocol
Computers 2019, 8(4), 75; https://doi.org/10.3390/computers8040075 - 04 Oct 2019
Cited by 1 | Viewed by 2762
Abstract
Providing a seamless handover in Internet of Things (IoT) applications with minimal effort is a big challenge in mobility management protocols. Several research efforts have attempted to maintain the connectivity of nodes while performing mobility-related signalling, in order to enhance system performance. However, these studies still fall short in the presence of short-term continuous movements of mobile nodes within the same network, which is a requirement in several applications. In this paper, we propose an efficient group-based handoff scheme for Mobile Nodes (MNs) that reduces handover overhead during their roaming. This scheme is named Enhanced Cluster Sensor Proxy Mobile IPv6 (E-CSPMIPv6). E-CSPMIPv6 introduces a fast handover scheme by implementing two mechanisms. In the first mechanism, we cluster mobile nodes that are moving as a group in order to register them ahead of their actual handoff. In the second mechanism, we manipulate the mobility-related signalling of the MNs, triggering their handover signalling simultaneously. The efficiency of the proposed scheme is validated through extensive simulation experiments and numerical analyses, in comparison to state-of-the-art mobility management protocols under different scenarios and operating conditions. The results demonstrate that the E-CSPMIPv6 scheme significantly improves overall system performance by reducing handover delay, signalling cost, and end-to-end delay. Full article
Open AccessArticle
Strategizing Information Systems: An Empirical Analysis of IT Alignment and Success in SMEs
Computers 2019, 8(4), 74; https://doi.org/10.3390/computers8040074 - 27 Sep 2019
Cited by 3 | Viewed by 2865
Abstract
IT investment is a crucial issue, as it not only influences performance in Small-Medium Enterprises (SMEs) but also helps executives align business strategy with organizational performance. Admittedly, though, Information Systems (IS) are often used ineffectively due to a lack of strategic planning and of formal processes, resulting in executives' failure to develop IS plans and achieve long-term sustainability. Therefore, the purpose of this paper is to examine which phases of the Strategic Information Systems Planning (SISP) process contribute most to success, so that guidelines for implementing the process in SMEs can be provided. Data were collected from 160 IS executives in Greek SMEs between February and May 2017. Multivariate Regression Analysis was applied to the detailed items of the SISP process and success constructs. The results of this survey show that managers should be aware of the strategic use of IS planning so as to increase competitive advantage. Senior executives should choose the appropriate IT infrastructure (related to their business strategy and organizational structure), so as to align business strategy with organizational structure. The findings of this paper could help IS executives concentrate their efforts on business objectives and recognize the great value of the planning process for their business. Full article
(This article belongs to the Special Issue Information Systems - EMCIS 2018)

Open AccessCommunication
Big Data Use and Challenges: Insights from Two Internet-Mediated Surveys
Computers 2019, 8(4), 73; https://doi.org/10.3390/computers8040073 - 24 Sep 2019
Cited by 1 | Viewed by 2832
Abstract
Big data and analytics have received great attention from practitioners and academics, nowadays representing a key resource for the renewed interest in artificial intelligence, especially machine learning techniques. In this article, we explore the use of big data and analytics by different types of organizations, from various countries and industries, including ones with limited size and capabilities compared to corporations or new ventures. In particular, we are interested in organizations where the exploitation of big data and analytics may have social value in terms of, e.g., public and personal safety. Hence, this article discusses the results of two multi-industry and multi-country surveys carried out on a sample of public and private organizations. The results show a low rate of utilization of the data collected, due to, among other issues, privacy and security concerns, as well as a lack of staff trained in data analysis. The two surveys also show that reaching an appropriate level of effectiveness in the use of big data and analytics remains a challenge, due to a shortage of the right tools and, again, capabilities, often related to a low rate of digital transformation. Full article
(This article belongs to the Special Issue Information Systems - EMCIS 2018)
Open AccessArticle
An Application of Deep Neural Networks for Segmentation of Microtomographic Images of Rock Samples
Computers 2019, 8(4), 72; https://doi.org/10.3390/computers8040072 - 24 Sep 2019
Cited by 5 | Viewed by 2980
Abstract
Image segmentation is a crucial step of almost any Digital Rock workflow. In this paper, we propose an approach for generating a labelled dataset and investigate the application of three popular convolutional neural network (CNN) architectures for segmentation of 3D microtomographic images of samples of various rocks. Our dataset contains eight pairs of images of five specimens of sand and sandstones. For each sample, we obtain a single set of microtomographic shadow projections, but run reconstruction twice: one regular high-quality reconstruction, and one using just a quarter of all available shadow projections. Careful manual Indicator Kriging (IK) segmentation of the full-quality image is used as the ground truth for segmentation of the reduced-quality images. We assess the generalization capability of the CNNs by splitting our dataset into training and validation sets in five different ways. In addition, we compare the neural network results with segmentation by IK and by thresholding. Segmentation outcomes of the 2D and 3D U-nets are comparable to IK, but the deep neural networks operate automatically, and there is considerable room for improvement in CNN-based solutions. The main difficulties are associated with the segmentation of fine structures that are relatively uncommon in our dataset. Full article
Open AccessArticle
Active Eye-in-Hand Data Management to Improve the Robotic Object Detection Performance
Computers 2019, 8(4), 71; https://doi.org/10.3390/computers8040071 - 23 Sep 2019
Viewed by 2851
Abstract
Adding to the number of sources of sensory information can be efficacious in enhancing the object detection capability of robots. In vision-based object detection, in addition to improving general detection performance, observing objects of interest from different points of view can be central to handling occlusions. In this paper, a robotic vision system is proposed that constantly uses a 3D camera, while actively switching to a second RGB camera when necessary. The proposed system detects objects in the view seen by the 3D camera, which is mounted on a humanoid robot’s head, and computes a confidence measure for its recognitions. In the event of low confidence in the correctness of a detection, the secondary camera, which is installed on the robot’s arm, is moved toward the object to obtain another perspective of it. Objects detected in the scene viewed by the hand camera are matched to the detections of the head camera, and subsequently their recognition decisions are fused together. The decision fusion method is a novel approach based on the Dempster–Shafer evidence theory. Significant improvements in object detection performance are observed after employing the proposed active vision system. Full article
(This article belongs to the Special Issue Vision, Image and Signal Processing (ICVISP))
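The Dempster–Shafer fusion step can be illustrated for the simplified case in which each camera assigns belief mass only to singleton class hypotheses; this is a generic sketch of Dempster's rule of combination, not the paper's specific formulation, and the class names and mass values below are hypothetical.

```python
def dempster_combine(m1, m2):
    """Fuse two mass functions over singleton hypotheses with Dempster's rule.

    Masses assigned to different singletons conflict (their intersection is
    empty); the conflicting mass is discarded and the remainder renormalized.
    """
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                fused[a] = fused.get(a, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {h: m / (1.0 - conflict) for h, m in fused.items()}

# Head camera is unsure between two classes; hand camera favours "mug".
head = {"mug": 0.6, "bowl": 0.4}
hand = {"mug": 0.8, "bowl": 0.2}
fused = dempster_combine(head, hand)
```

In the full theory, mass may also be assigned to composite hypotheses (sets of classes), which lets a source express ignorance; restricted to singletons as above, the rule reduces to normalized Bayesian-style fusion.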
Open AccessArticle
Font Design—Shape Processing of Text Information Structures in the Process of Non-Invasive Data Acquisition
Computers 2019, 8(4), 70; https://doi.org/10.3390/computers8040070 - 23 Sep 2019
Cited by 4 | Viewed by 3315
Abstract
Computer fonts can be a solution that supports the protection of information against electromagnetic penetration; however, not every font has features that counteract this process. The distinctive features of a font’s characters define the font. This article presents two new sets of computer fonts. These fonts are fully usable in everyday work. Additionally, they make it impossible to obtain information using non-invasive methods. The names of these fonts are directly related to the shapes of their characters. Each character in these fonts is built using only vertical and horizontal lines. The difference between the fonts lies in the widths of the vertical lines. The Safe Symmetrical font is built from vertical lines of the same width. The Safe Asymmetrical font is built from vertical lines of two different widths. However, appropriate proportions between the line widths and clearances of each character need to be met for the safe fonts. The structures of the characters of the safe fonts ensure a high level of similarity between characters. Additionally, these fonts do not make it difficult to read text in its primary form. However, sensitive transmissions are free from distinctive features, and recognizing each character in reconstructed images is very difficult, in contrast to traditional fonts, such as the Sang Mun font and Null Pointer font, which have many distinctive features. The usefulness of the computer fonts was assessed by the character error rate (CER); an analysis of this parameter was conducted in this work. The CER reached very high values for the safe fonts, while the values for traditional fonts were much lower. This article presents a new solution in the area of protecting information against electromagnetic penetration, a new approach that could replace older solutions based on heavy shielding, power and signal filters, and electromagnetic gaskets. Additionally, applying these new fonts is very easy, as a user only needs to ensure that either the Safe Asymmetrical font or the Safe Symmetrical font is installed on the computer station that processes the text data. Full article
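The character error rate used to evaluate the fonts is conventionally computed as the character-level Levenshtein (edit) distance between the text recognized from the reconstructed image and the reference text, divided by the reference length; a minimal sketch of that metric, independent of the paper's specific evaluation pipeline:

```python
def cer(reference, hypothesis):
    """Character error rate: edit distance / reference length, via dynamic programming."""
    # prev[j] holds the edit distance between an empty reference prefix
    # and the first j hypothesis characters.
    prev = list(range(len(hypothesis) + 1))
    for i, rc in enumerate(reference, 1):
        cur = [i]
        for j, hc in enumerate(hypothesis, 1):
            cost = 0 if rc == hc else 1
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + cost))  # substitution / match
        prev = cur
    return prev[-1] / len(reference)

print(cer("kitten", "sitting"))  # 3 edits over 6 reference characters -> 0.5
```

On this scale a safe font is effective when recognition of the reconstructed emission yields a CER near 1 (almost every character misread), while a conventional font yields much lower values.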