Table of Contents

Computers, Volume 8, Issue 1 (March 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: We investigated user satisfaction in AR applied to three practical use cases. User satisfaction can [...]
Displaying articles 1-27
Open AccessArticle Symmetric-Key-Based Security for Multicast Communication in Wireless Sensor Networks
Received: 21 February 2019 / Revised: 12 March 2019 / Accepted: 12 March 2019 / Published: 19 March 2019
Viewed by 134 | PDF Full-text (786 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a new key management protocol for group-based communications in non-hierarchical wireless sensor networks (WSNs), applied to a recently proposed IP-based multicast protocol. Confidentiality, integrity, and authentication are established using solely symmetric-key-based operations. The protocol features a cloud-based network multicast manager (NMM), which can create, control, and authenticate groups in the WSN, but is not able to derive the actual constructed group key. Three main phases are distinguished in the protocol. First, in the registration phase, the motes register with the group by sending a request to the NMM. Second, the members of the group calculate the shared group key in the key construction phase. For this phase, two different methods are tested. In the unicast approach, the key material is sent to each member individually using unicast messages, while in the multicast approach, a combination of Lagrange interpolation and a multicast packet is used. Finally, in the multicast communication phase, these keys are used to send confidential and authenticated messages. To investigate the impact of the proposed mechanisms on the WSN, the protocol was implemented in ContikiOS and simulated using COOJA, considering different group sizes and multi-hop communication. These simulations show that, compared to the unicast approach, the multicast approach results in significantly smaller delays, is slightly more energy efficient, and requires roughly the same amount of memory for the code. Full article
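The multicast key-construction phase above combines Lagrange interpolation with a multicast packet. As a rough illustration of the underlying mathematics only (not the paper's actual protocol; the field size, polynomial degree, and share distribution here are invented for the sketch), a group member holding enough points of a secret polynomial over a prime field can recover the group key f(0):

```python
P = 2**13 - 1  # small Mersenne prime for illustration; a real deployment needs a far larger field

def lagrange_at_zero(points):
    """Recover f(0) mod P from k distinct samples (x, f(x)) of a
    degree-(k-1) polynomial, via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        # modular inverse of den by Fermat's little theorem (P is prime)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With a polynomial f(x) = K + 77x + 501x² over the field, any three distributed points reconstruct the group key K = f(0), while fewer points reveal nothing about it.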

Open AccessArticle The Use of an Artificial Neural Network to Process Hydrographic Big Data during Surface Modeling
Received: 30 January 2019 / Revised: 5 March 2019 / Accepted: 11 March 2019 / Published: 14 March 2019
Viewed by 194 | PDF Full-text (3890 KB) | HTML Full-text | XML Full-text
Abstract
At the present time, spatial data are often acquired using varied remote sensing sensors and systems, which produce big data sets. One significant product from these data is a digital model of geographical surfaces, including the surface of the sea floor. To improve data processing, presentation, and management, it is often indispensable to reduce the number of data points. This paper presents research regarding the application of artificial neural networks to bathymetric data reductions. This research considers results from radial networks and self-organizing Kohonen networks. During reconstructions of the seabed model, the results show that neural networks with fewer hidden neurons than the number of data points can replicate the original data set, while the Kohonen network can be used for clustering during big geodata reduction. Practical implementations of neural networks capable of creating surface models and reducing bathymetric data are presented. Full article
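The Kohonen self-organizing clustering used above for big-geodata reduction can be sketched compactly. This toy 1-D map is only a stand-in for the networks evaluated in the paper; the node count, learning-rate decay, and neighbourhood schedule below are arbitrary illustrative choices:

```python
import random

def som_reduce(points, k, epochs=400, lr=0.5):
    """Reduce (x, y, depth) soundings to k prototype nodes with a toy 1-D
    self-organizing map: repeatedly pick a sample, find the best-matching
    node, and pull it (and, early on, its map neighbours) toward the sample."""
    rng = random.Random(42)
    nodes = [list(rng.choice(points)) for _ in range(k)]  # init on data points
    for t in range(epochs):
        alpha = lr * (1 - t / epochs)          # decaying learning rate
        radius = 1 if t < epochs // 2 else 0   # shrinking neighbourhood
        p = rng.choice(points)
        bmu = min(range(k),
                  key=lambda i: sum((nodes[i][d] - p[d]) ** 2 for d in range(3)))
        for i in range(k):
            if abs(i - bmu) <= radius:
                for d in range(3):
                    nodes[i][d] += alpha * (p[d] - nodes[i][d])
    return nodes
```

Each prototype is a weighted average of nearby soundings, so the k nodes act as a reduced point set for surface reconstruction.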

Open AccessArticle Concepts of a Modular System Architecture for Distributed Robotic Systems
Received: 31 January 2019 / Revised: 7 March 2019 / Accepted: 11 March 2019 / Published: 14 March 2019
Viewed by 199 | PDF Full-text (3445 KB) | HTML Full-text | XML Full-text
Abstract
Modern robots often use more than one processing unit to meet their computational requirements. Robots are frequently designed in a modular manner so that they can be extended for future tasks. The use of multiple processing units leads to a distributed system within one single robot. Therefore, the system architecture is even more important than in single-computer robots. The presented concept of a modular and distributed system architecture was designed for robotic systems. The architecture is based on the Operator–Controller Module (OCM). This article describes the adaptation of the distributed OCM for mobile robots, considering the requirements on such robots, including, for example, real-time and safety constraints. The presented architecture splits the system hierarchically into a three-layer structure of controllers and operators. The controllers interact directly with all sensors and actuators within the system; for that reason, hard real-time constraints must be met. The reflective operator, however, processes the information of the controllers, which can be done by model-based principles using state machines. The cognitive operator is used to optimize the system. The article also shows the exemplary design of the DAEbot, a self-developed robot, and discusses the experience of applying these concepts to this robot. Full article

Open AccessFeature PaperArticle An Evaluation Approach for a Physically-Based Sticky Lip Model
Received: 18 January 2019 / Revised: 28 February 2019 / Accepted: 5 March 2019 / Published: 8 March 2019
Viewed by 219 | PDF Full-text (16961 KB) | HTML Full-text | XML Full-text
Abstract
Physically-based mouth models operate on the principle that a better mouth animation will be produced by simulating physically accurate behaviour of the mouth. In the development of these models, it is useful to have an evaluation approach which can be used to judge the effectiveness of a model and draw comparisons against other models and real-life mouth behaviour. This article presents a set of metrics which can be used to describe the motion of the lips, as well as a process for measuring these from video of real or simulated mouths, implemented using Python and OpenCV. As an example, the process is used to evaluate a physically-based mouth model focusing on recreating the stickiness effect of saliva between the lips. The metrics highlight the changes in behaviour due to the addition of stickiness between the lips in the synthetic mouth model and show quantitatively improved behaviour in relation to real mouth movements. The article concludes that the presented metrics provide a useful approach for evaluation of mouth animation models that incorporate sticky lip effects. Full article

Open AccessArticle An Efficient Multicore Algorithm for Minimal Length Addition Chains
Received: 11 December 2018 / Revised: 12 February 2019 / Accepted: 4 March 2019 / Published: 7 March 2019
Viewed by 225 | PDF Full-text (1710 KB) | HTML Full-text | XML Full-text
Abstract
A minimal length addition chain for a positive integer m is a finite sequence of positive integers such that (1) the first and last elements in the sequence are 1 and m, respectively, (2) any element greater than 1 in the sequence is the addition of two earlier elements (not necessarily distinct), and (3) the length of the sequence is minimal. Generating the minimal length addition chain for m is challenging due to the running time, which increases with the size of m and particularly with the number of 1s in the binary representation of m. In this paper, we introduce a new parallel algorithm to find the minimal length addition chain for m. The experimental studies on multicore systems show that the running time of the proposed algorithm is faster than the sequential algorithm. Moreover, the maximum speedup obtained by the proposed algorithm is 2.5 times the best known sequential algorithm. Full article
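The three defining conditions above translate directly into a search problem. The brute-force breadth-first sketch below is not the paper's parallel algorithm; it merely illustrates the definition by trying every extension of a chain with a sum of two earlier elements, which is feasible only for small m:

```python
from collections import deque

def minimal_addition_chain(m):
    """One shortest addition chain for m, found by breadth-first search:
    every chain starts at 1, and each new element is the sum of two
    (not necessarily distinct) earlier elements. Exponential time in
    general -- intended for small m only."""
    if m == 1:
        return (1,)
    queue = deque([(1,)])
    while queue:
        chain = queue.popleft()
        # all strictly increasing one-step extensions not exceeding m
        sums = {a + b for a in chain for b in chain if chain[-1] < a + b <= m}
        if m in sums:
            return chain + (m,)  # BFS order guarantees minimal length
        for s in sorted(sums):
            queue.append(chain + (s,))
```

For example, the minimal chain for 15 has six elements (five additions), matching the known value l(15) = 5; the running time grows quickly with m, which is exactly the cost the paper's multicore algorithm attacks.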

Open AccessArticle Natural Language Processing in OTF Computing: Challenges and the Need for Interactive Approaches
Received: 22 January 2019 / Revised: 23 February 2019 / Accepted: 3 March 2019 / Published: 6 March 2019
Viewed by 267 | PDF Full-text (3090 KB) | HTML Full-text | XML Full-text
Abstract
The vision of On-the-Fly (OTF) Computing is to compose and provide software services ad hoc, based on requirement descriptions in natural language. Since non-technical users write their software requirements themselves and in unrestricted natural language, deficits such as inaccuracy and incompleteness occur. These deficits are usually addressed by natural language processing methods, which face special challenges in OTF Computing because maximum automation is the goal. In this paper, we present current automatic approaches for resolving inaccuracy and incompleteness in natural language requirement descriptions and elaborate on open challenges. In particular, we discuss the necessity of domain-specific resources and show why, despite far-reaching automation, an intelligent and guided integration of end users into the compensation process is required. In this context, we present our idea of a chatbot that integrates users into the compensation process depending on the given circumstances. Full article

Open AccessArticle J48SS: A Novel Decision Tree Approach for the Handling of Sequential and Time Series Data
Received: 11 February 2019 / Revised: 25 February 2019 / Accepted: 27 February 2019 / Published: 5 March 2019
Viewed by 257 | PDF Full-text (493 KB) | HTML Full-text | XML Full-text
Abstract
Temporal information plays a very important role in many analysis tasks, and can be encoded in at least two different ways. It can be modeled by discrete sequences of events as, for example, in the business intelligence domain, with the aim of tracking the evolution of customer behaviors over time. Alternatively, it can be represented by time series, as in the stock market to characterize price histories. In some analysis tasks, temporal information is complemented by other kinds of data, which may be represented by static attributes, e.g., categorical or numerical ones. This paper presents J48SS, a novel decision tree inducer capable of natively mixing static (i.e., numerical and categorical), sequential, and time series data for classification purposes. The novel algorithm is based on the popular C4.5 decision tree learner, and it relies on the concepts of frequent pattern extraction and time series shapelet generation. The algorithm is evaluated on a text classification task in a real business setting, as well as on a selection of public UCR time series datasets. Results show that it is capable of providing competitive classification performances, while generating highly interpretable models and effectively reducing the data preparation effort. Full article
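Shapelet-based time series classification, mentioned above, rests on a simple primitive: the distance between a series and a candidate shapelet is the best match over all alignments. A minimal sketch of that primitive follows; J48SS itself additionally evolves shapelets and embeds them in C4.5-style splits, which is not reproduced here:

```python
def shapelet_distance(series, shapelet):
    """Distance between a time series and a candidate shapelet: the minimum
    Euclidean distance over every alignment of the shapelet in the series."""
    m = len(shapelet)
    best = float("inf")
    for start in range(len(series) - m + 1):
        d = sum((series[start + i] - shapelet[i]) ** 2 for i in range(m)) ** 0.5
        best = min(best, d)
    return best
```

A decision tree node can then split on a threshold over this distance, e.g. "does the series contain a close match to this shapelet?", which is what makes shapelet-based models highly interpretable.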

Open AccessArticle Software Requirement Specification Based on a Gray Box for Embedded Systems: A Case Study of a Mobile Phone Camera Sensor Controller
Received: 24 January 2019 / Revised: 15 February 2019 / Accepted: 17 February 2019 / Published: 2 March 2019
Viewed by 322 | PDF Full-text (3396 KB) | HTML Full-text | XML Full-text
Abstract
One of the most widely used models for specifying functional requirements is the use case model. The use case model views a system as a black box, focusing on descriptions of external interactions between the system and related environments. However, for embedded systems, which do not disclose most implementation logic outside the system, black box-based use case models may suffer from the drawback that considerable information that must be defined for system development is omitted. To address this shortcoming, several studies have proposed a kind of white box technique in which the dynamic behaviors of embedded systems are first defined using a state diagram and the results are reflected in the requirement specifications. However, white box-based modeling has not been widely adopted by developers because it demands time-consuming tasks during the requirement analysis phase, at the start of the software development life cycle. This study proposes a gray box-based requirement specification method as a trade-off between two contradictory elements, namely the amount of information required to develop an embedded system and the cost of the effort required during the requirement analysis phase, with respect to the black box- and white box-based models. The proposed method suggests an appropriate depth level of embedded system modeling for defining the requirements. This study also proposes a mechanism that automatically generates an application programming interface for each component based on the created model. The proposed method was applied to the development of a camera sensor controller in a mobile phone, and the case results proved the feasibility of the method through discussion of the application results. Full article

Open AccessArticle Automatic Correction of Arabic Dyslexic Text
Received: 31 December 2018 / Revised: 31 January 2019 / Accepted: 1 February 2019 / Published: 21 February 2019
Viewed by 378 | PDF Full-text (436 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes an automatic correction system that detects and corrects dyslexic errors in Arabic text. The system uses a language model based on the Prediction by Partial Matching (PPM) text compression scheme to generate possible alternatives for each misspelled word. The candidate list is generated using edit operations (insertion, deletion, substitution, and transposition), and the correct alternative for each misspelled word is chosen on the basis of the compression codelength of the trigram. The system was compared with widely used Arabic word processing software and the Farasa tool, and it provided good results, with a recall of 43%, a precision of 89%, an F1 score of 58%, and an accuracy of 81%. Full article
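The four edit operations named above are what generate the candidate list for each misspelled word. A generic sketch of that generation step is shown below, using a toy two-letter alphabet; the paper's system works over the Arabic alphabet and then ranks candidates by PPM trigram codelength, which is not reproduced here:

```python
def edit1_candidates(word, alphabet):
    """All strings exactly one edit operation away from `word`:
    deletion, transposition, substitution, and insertion."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {left + right[1:] for left, right in splits if right}
    transposes = {left + right[1] + right[0] + right[2:]
                  for left, right in splits if len(right) > 1}
    substitutes = {left + c + right[1:]
                   for left, right in splits if right for c in alphabet}
    inserts = {left + c + right for left, right in splits for c in alphabet}
    return (deletes | transposes | substitutes | inserts) - {word}
```

Each candidate would then be scored by the language model, with the lowest compression codelength winning.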

Open AccessArticle Resource Allocation Model for Sensor Clouds under the Sensing as a Service Paradigm
Received: 26 January 2019 / Revised: 12 February 2019 / Accepted: 18 February 2019 / Published: 20 February 2019
Viewed by 342 | PDF Full-text (625 KB) | HTML Full-text | XML Full-text
Abstract
Sensing as a Service is emerging as a new Internet of Things (IoT) business model for sensors and data sharing in the cloud. Under this paradigm, a resource allocation model for the assignment of both sensors and cloud resources to clients/applications is proposed. This model, unlike previous approaches, is adequate for emerging IoT Sensing as a Service business models supporting multi-sensing applications and mashups of Things in the cloud. A heuristic algorithm based on this model is also proposed. Results show that the approach is able to incorporate strategies that lead to the allocation of fewer devices, while selecting the most adequate ones for application needs. Full article

Open AccessArticle SoS TextVis: An Extended Survey of Surveys on Text Visualization
Received: 14 January 2019 / Revised: 8 February 2019 / Accepted: 18 February 2019 / Published: 20 February 2019
Viewed by 346 | PDF Full-text (4330 KB) | HTML Full-text | XML Full-text
Abstract
Text visualization is a rapidly growing sub-field of information visualization and visual analytics. Many approaches and techniques are introduced every year to address a wide range of challenges and analysis tasks, enabling researchers from different disciplines to obtain leading-edge knowledge from digitized collections of text. This can be challenging, particularly when the data is massive. Additionally, the sources of digital text have spread substantially in the last decades in various forms, such as web pages, blogs, Twitter, email, electronic publications, and digitized books. In response to the explosion of text visualization research literature, the first text visualization survey article was published in 2010, and a growing number of surveys have since reviewed existing techniques and classified them based on text research methodology. In this work, we present the first Survey of Surveys (SoS), reviewing all of the surveys and state-of-the-art papers on text visualization techniques and providing an SoS classification. We study and compare the 14 surveys and categorize them into five groups: (1) document-centered, (2) user task analysis, (3) cross-disciplinary, (4) multi-faceted, and (5) satellite-themed. We provide survey recommendations for researchers in the field of text visualization. The result is a unique, valuable starting point and overview of the current state of the art in text visualization research literature. Full article

Open AccessArticle Inter-Vehicle Communication Protocol Design for a Yielding Decision at an Unsignalized Intersection and Evaluation of the Protocol Using Radio Control Cars Equipped with Raspberry Pi
Received: 22 December 2018 / Revised: 11 February 2019 / Accepted: 14 February 2019 / Published: 18 February 2019
Viewed by 417 | PDF Full-text (6891 KB) | HTML Full-text | XML Full-text
Abstract
The Japanese government aims to introduce self-driven vehicles by 2020 to reduce the number of accidents and traffic jams. Various methods have been proposed for traffic control at accident-prone intersections to achieve safe and efficient self-driving. Most of them require roadside units to identify and control vehicles; however, it is difficult to install roadside units at all intersections. This paper proposes an inter-vehicle communication protocol that enables vehicles to transmit their vehicle information and moving direction information to nearby vehicles. Vehicles identify nearby vehicles using images captured by vehicle-mounted cameras. These arrangements make it possible for vehicles to exchange yielding intentions at an unsignalized intersection without using a roadside unit. To evaluate the operation of the proposed protocol, we implemented it on Raspberry Pi computers, which were connected to cameras and mounted on radio control cars, and conducted experiments. The experiments simulated an unsignalized intersection where both self-driven and human-driven vehicles were present. The vehicle that had sent a yielding request identified the yielding vehicle by recognizing the colour of each radio control car, which was part of the vehicle information, from the image captured by its camera. We measured the times needed to complete the yielding sequence and evaluated the validity of the yielding decisions. Full article

Open AccessArticle High Dynamic Range Image Deghosting Using Spectral Angle Mapper
Received: 31 December 2018 / Revised: 5 February 2019 / Accepted: 6 February 2019 / Published: 9 February 2019
Viewed by 459 | PDF Full-text (3884 KB) | HTML Full-text | XML Full-text
Abstract
The generation of high dynamic range (HDR) images in the presence of moving objects results in the appearance of blurred objects. These blurred objects are called ghosts. Over the past decade, numerous deghosting techniques have been proposed for removing blurred objects from HDR images. These methods may try to identify moving objects and maximize dynamic range locally or may focus on removing moving objects and displaying static objects while enhancing the dynamic range. The resultant image may suffer from broken/incomplete objects or noise, depending upon the type of methodology selected. Generally, deghosting methods are computationally intensive; however, a simple deghosting method may provide sufficiently acceptable results while being computationally inexpensive. Inspired by this idea, a simple deghosting method based on the spectral angle mapper (SAM) measure is proposed. The advantage of using SAM is that it is intensity independent and focuses only on identifying the spectral—i.e., color—similarity between two images. The proposed method focuses on removing moving objects while enhancing the dynamic range of static objects. The subjective and objective results demonstrate the effectiveness of the proposed method. Full article
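The spectral angle mapper at the heart of the method above reduces to one formula: the angle between two pixel color vectors, which is independent of overall intensity. A minimal sketch of the measure itself (the deghosting pipeline built around it is not reproduced):

```python
import math

def spectral_angle(v1, v2):
    """Angle in radians between two color vectors. Scaling either vector
    (i.e., changing exposure/intensity) leaves the angle unchanged, which
    is why SAM isolates color similarity from brightness differences."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # clamp to guard against floating-point drift outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
```

Pixels whose angle across exposures exceeds some threshold can be flagged as belonging to a moving object, since their color direction, not merely their brightness, has changed.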

Open AccessArticle Robust Computer Vision Chess Analysis and Interaction with a Humanoid Robot
Received: 15 January 2019 / Revised: 2 February 2019 / Accepted: 5 February 2019 / Published: 8 February 2019
Viewed by 442 | PDF Full-text (5162 KB) | HTML Full-text | XML Full-text
Abstract
As we move towards improving the skill of computers to play games like chess against humans, the ability to accurately perceive real-world game boards and game states remains a challenge in many cases, hindering the development of game-playing robots. In this paper, we present a computer vision algorithm developed as part of a chess robot project that detects the chess board, squares, and piece positions in relatively unconstrained environments. Dynamically responding to lighting changes in the environment, accounting for perspective distortion, and using accurate detection methodologies results in a simple but robust algorithm that succeeds 100% of the time in standard environments, and 80% of the time in extreme environments with external lighting. The key contributions of this paper are a dynamic approach to the Hough line transform, and a hybrid edge and morphology-based approach for object/occupancy detection, that enable the development of a robot chess player that relies solely on the camera for sensory input. Full article
(This article belongs to the Special Issue Smart Interfacing)

Open AccessArticle Location Intelligence Systems and Data Integration for Airport Capacities Planning
Received: 27 December 2018 / Revised: 30 January 2019 / Accepted: 3 February 2019 / Published: 7 February 2019
Viewed by 471 | PDF Full-text (8555 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes an approach that introduces location intelligence using open-source software components as a solution for the planning and construction of airport infrastructure. As a case study, the spatial information system of the International Airport in Sarajevo is selected. Due to frequent construction work on new terminals and increases in existing airport capacity, the development team suggested that airport management introduce location intelligence as a measure for more efficient management of the airport infrastructure, that is, upgrade the existing information system with a functional WebGIS solution. This solution is based on the OpenGeo architecture, which includes a set of spatial data management technologies used to create an online internet map and build a location intelligence infrastructure. Full article
(This article belongs to the Special Issue Computer Technologies for Human-Centered Cyber World)

Open AccessFeature PaperArticle Feature-Rich, GPU-Assisted Scatterplots for Millions of Call Events
Received: 14 January 2019 / Revised: 1 February 2019 / Accepted: 2 February 2019 / Published: 5 February 2019
Viewed by 510 | PDF Full-text (8075 KB) | HTML Full-text | XML Full-text
Abstract
The contact center industry represents a large proportion of many countries' economies; for example, 4% of the working population of both the United States and the UK is employed in this sector. As in most modern industries, contact centers generate gigabytes of operational data that require analysis to provide insight and to improve efficiency. Visualization is a valuable approach to data analysis, enabling trends and correlations to be discovered, particularly when using scatterplots. We present a feature-rich application that visualizes large call center data sets using scatterplots that support millions of points. The application features a scatterplot matrix to provide an overview of the call center data attributes, animates call start and end times, and utilizes both CPU and GPU acceleration for processing and filtering. We illustrate the use of the Open Computing Language (OpenCL) to exploit a commodity graphics card for the fast filtering of fields with multiple attributes. We demonstrate the use of the application with millions of call events from a month's worth of real-world data and report domain expert feedback from our industry partner. Full article

Open AccessArticle Automated Hints Generation for Investigating Source Code Plagiarism and Identifying The Culprits on In-Class Individual Programming Assessment
Received: 12 December 2018 / Revised: 27 January 2019 / Accepted: 29 January 2019 / Published: 2 February 2019
Viewed by 482 | PDF Full-text (1965 KB) | HTML Full-text | XML Full-text
Abstract
Most source code plagiarism detection tools rely only on source code similarity to indicate plagiarism. This can be an issue, since not all source code pairs with high similarity are plagiarism. Moreover, the culprits (i.e., the ones who plagiarise) cannot be differentiated from the victims, even though the two need to be educated in different ways. This paper proposes a mechanism to generate hints for investigating source code plagiarism and identifying the culprits in in-class individual programming assessment. The hints are collected from the culprits' copying behaviour during the assessment. According to our evaluation, the hints from the source code creation process and from seating position are 76.88% and at least 80.87% accurate, respectively, for indicating plagiarism. Further, the hints from the source code creation process can be helpful for identifying the culprits, as the culprits' code exhibits at least one of our predefined conditions for copying behaviour. Full article

Open AccessArticle Generalized Majority Voter Design Method for N-Modular Redundant Systems Used in Mission- and Safety-Critical Applications
Received: 12 December 2018 / Revised: 13 January 2019 / Accepted: 22 January 2019 / Published: 28 January 2019
Viewed by 575 | PDF Full-text (2520 KB) | HTML Full-text | XML Full-text
Abstract
Mission- and safety-critical circuits and systems employ redundancy in their designs to overcome faults or failures of constituent circuits and systems during normal operation. In this respect, N-modular redundancy (NMR) is widely used. An NMR system comprises N identical systems, the corresponding outputs of which are majority voted to generate the system outputs. To perform majority voting, a majority voter is required, and the sizes of majority voters vary depending on the NMR system. Majority voters corresponding to NMR systems are physically realized by enumerating the majority input clauses corresponding to an NMR system and then synthesizing the majority logic equation. The issue is that the number of majority input clauses corresponding to an NMR system is governed by a mathematical combination, the complexity of which increases substantially with the level of redundancy. In this context, the design of a majority voter of any size corresponding to an NMR specification, based on a new, generalized design approach, is described. The proposed approach is inherently hierarchical and progressive, since any NMR majority voter can be constructed from an (N − 2)MR majority voter along with additional logic corresponding to the two extra inputs. Further, the proposed approach paves the way for the simultaneous production of NMR system outputs corresponding to different degrees of redundancy, which is not intrinsic to existing methods. This feature is additionally useful for sharing common logic among diverse degrees of redundancy in appropriate portions of an NMR implementation. Full article
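Whatever the gate-level construction, the behavioural specification of an NMR majority voter is compact: the system output is the value produced by more than half of the N modules. The sketch below models only this behaviour; the paper's hierarchical synthesis from an (N − 2)MR voter is not reproduced here:

```python
def nmr_vote(outputs):
    """Behavioural model of an N-modular-redundancy majority voter (odd N):
    returns the bit produced by a majority of the redundant module outputs,
    so up to (N - 1) // 2 faulty modules are masked."""
    n = len(outputs)
    assert n % 2 == 1, "majority voting assumes an odd number of modules"
    return int(sum(outputs) > n // 2)
```

For N = 3 (TMR) this masks one faulty module; for N = 5, two; enumerating the majority input clauses for large N is exactly where the combinatorial blow-up described above appears.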
Open AccessArticle User Satisfaction in Augmented Reality-Based Training Using Microsoft HoloLens
Received: 4 December 2018 / Revised: 14 January 2019 / Accepted: 18 January 2019 / Published: 25 January 2019
Viewed by 852 | PDF Full-text (6821 KB) | HTML Full-text | XML Full-text
Abstract
With the recent developments in augmented reality (AR) technologies comes an increased interest in the use of smart glasses for hands-on training. Whether this interest turns into market success depends, at the least, on whether interaction with smart AR glasses satisfies users, an aspect of AR use that has so far received little attention. With this contribution, we seek to change this. The objective of the article, therefore, is to investigate user satisfaction in AR applied to three practical use cases. User satisfaction with AR can be broken down into satisfaction with the interaction and satisfaction with the delivery device. A total of 142 participants from three different industrial sectors contributed to this study, namely, aeronautics, medicine, and astronautics. In our analysis, we investigated the influence of factors such as age, gender, level of education, level of Internet knowledge, and the participants' roles in the different sectors. Even though users were not familiar with the smart glasses, results show that general computer knowledge has a positive effect on user satisfaction. Further analysis using two-factor interactions showed no significant interaction between the different factors and user satisfaction. The results of the study affirm that the questionnaires developed for user satisfaction with smart glasses and the AR application performed well, but leave room for improvement. Full article
(This article belongs to the Special Issue Augmented and Mixed Reality in Work Context)
Open AccessArticle Hidden Link Prediction in Criminal Networks Using the Deep Reinforcement Learning Technique
Received: 23 October 2018 / Revised: 20 December 2018 / Accepted: 21 December 2018 / Published: 11 January 2019
Viewed by 674 | PDF Full-text (3141 KB) | HTML Full-text | XML Full-text
Abstract
Criminal network activities, which are usually covert and stealthy, present certain difficulties for criminal network analysis (CNA) because complete datasets are rarely available. The criminal activity data collected for these networks tend to be incomplete and inconsistent, which is reflected structurally in the criminal network as missing nodes (actors) and links (relationships). Criminal networks are commonly analyzed using social network analysis (SNA) models. Most machine learning techniques that rely on the metrics of SNA models to develop hidden or missing link prediction models use supervised learning. However, supervised learning usually requires a large dataset to train the link prediction model in order to achieve optimum performance. Therefore, this research explores the application of deep reinforcement learning (DRL) to developing a criminal network hidden link prediction model from the reconstruction of a corrupted criminal network dataset. The experiments conducted on the model indicate that the dataset generated by the DRL model through self-play or self-simulation can be used to train the link prediction model. The DRL link prediction model exhibits better performance than a conventional supervised machine learning technique, such as a gradient boosting machine (GBM) trained with a relatively small domain dataset. Full article
Open AccessEditorial Acknowledgement to Reviewers of Computers in 2018
Published: 10 January 2019
Viewed by 551 | PDF Full-text (440 KB) | HTML Full-text | XML Full-text
Abstract
Rigorous peer review is the cornerstone of high-quality academic publishing [...] Full article
Open AccessArticle Position Certainty Propagation: A Localization Service for Ad-Hoc Networks
Received: 11 November 2018 / Revised: 31 December 2018 / Accepted: 2 January 2019 / Published: 7 January 2019
Viewed by 602 | PDF Full-text (1155 KB) | HTML Full-text | XML Full-text
Abstract
Location services for ad-hoc networks are of indispensable value for a wide range of applications, such as the Internet of Things (IoT) and vehicular ad-hoc networks (VANETs). Each context requires a solution that addresses the specific needs of the application. For instance, IoT sensor nodes have resource constraints (i.e., limited computational capabilities), so a localization service should be highly efficient to conserve the lifespan of these nodes. We propose an optimized, energy-aware, and computationally lightweight solution requiring only three GPS-equipped nodes (anchor nodes) in the network. Moreover, the computations are lightweight and can be performed in a distributed manner among nodes. Knowing the maximum communication range of all nodes and the distances between 1-hop neighbors, each node localizes itself and shares its location with the network efficiently. We simulate our proposed algorithm in the NS-3 simulator and compare our solution with state-of-the-art methods. Our method is capable of localizing more nodes (≈90% of the nodes in a network with an average degree of ≈10). Full article
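A node that has learned its distances to three anchors of known position can fix its own 2-D position by standard trilateration, which the sketch below implements by subtracting circle equations to obtain a linear system. This is only the basic geometric step; the paper's contribution lies in propagating such position certainty through the rest of the network.

```python
def trilaterate(anchors, dists):
    """2-D position fix from three anchor positions and measured distances.
    Subtracting the first circle equation (x - xi)^2 + (y - yi)^2 = di^2
    from the other two yields a 2x2 linear system A [x, y]^T = b."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21  # nonzero when anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With noisy range measurements, a least-squares variant of the same linearization would be used instead of this exact solve.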
Open AccessArticle Robust Cochlear-Model-Based Speech Recognition
Received: 14 October 2018 / Revised: 21 December 2018 / Accepted: 23 December 2018 / Published: 1 January 2019
Viewed by 700 | PDF Full-text (491 KB) | HTML Full-text | XML Full-text
Abstract
Accurate speech recognition can provide a natural interface for human–computer interaction. Recognition rates of modern speech recognition systems are highly dependent on background noise levels, and the choice of acoustic feature extraction method can have a significant impact on system performance. This paper presents a robust speech recognition system based on a front-end motivated by human cochlear processing of audio signals. In the proposed front-end, cochlear behavior is first emulated by the filtering operations of a gammatone filterbank and subsequently by an inner hair cell (IHC) processing stage. Experimental results using a continuous-density Hidden Markov Model (HMM) recognizer show that the proposed Gammatone Hair Cell (GHC) coefficients perform slightly worse under clean speech conditions, but demonstrate significant improvements in noisy conditions compared to the standard Mel-Frequency Cepstral Coefficient (MFCC) baseline. Full article
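For readers unfamiliar with the front-end's building block, the sketch below samples the textbook impulse response of a fourth-order gammatone filter with an ERB-scaled bandwidth (Glasberg–Moore formula). It is a generic formulation of the cochlear channel, not the paper's exact implementation.

```python
import math

def erb(f):
    """Equivalent rectangular bandwidth (Hz) of a cochlear channel
    centred at f Hz (Glasberg & Moore approximation)."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(f, fs, order=4, duration=0.025):
    """Sampled impulse response of a gammatone filter centred at f Hz:
    g(t) = t^(order-1) * exp(-2*pi*b*t) * cos(2*pi*f*t)."""
    b = 1.019 * erb(f)  # bandwidth scaling commonly used with order 4
    out = []
    for k in range(int(duration * fs)):
        t = k / fs
        out.append(t ** (order - 1) * math.exp(-2 * math.pi * b * t)
                   * math.cos(2 * math.pi * f * t))
    return out
```

A full filterbank would instantiate many such filters with centre frequencies spaced on the ERB scale, followed by the IHC stage.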
Open AccessArticle Sentiment Analysis of Lithuanian Texts Using Traditional and Deep Learning Approaches
Received: 27 November 2018 / Revised: 21 December 2018 / Accepted: 24 December 2018 / Published: 1 January 2019
Viewed by 753 | PDF Full-text (4553 KB) | HTML Full-text | XML Full-text
Abstract
We describe sentiment analysis experiments performed on a Lithuanian Internet comment dataset using traditional machine learning (Naïve Bayes Multinomial—NBM and Support Vector Machine—SVM) and deep learning (Long Short-Term Memory—LSTM and Convolutional Neural Network—CNN) approaches. The traditional machine learning techniques used features based on lexical, morphological, and character information. The deep learning approaches were applied on top of two types of word embeddings (Word2Vec continuous bag-of-words with negative sampling and FastText). Both traditional and deep learning approaches had to solve the positive/negative/neutral sentiment classification task on balanced and full dataset versions. The best deep learning result (an accuracy of 0.706) was achieved on the full dataset with a CNN applied on top of the FastText embeddings, with emoticons replaced and diacritics eliminated. The traditional machine learning approaches demonstrated their best performance (an accuracy of 0.735) on the full dataset with the NBM method, with emoticons replaced, diacritics restored, and lemma unigrams as features. Although the traditional machine learning approaches were superior to the deep learning methods, deep learning demonstrated good results when applied to small datasets. Full article
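A minimal sketch of the kind of unigram-based NBM classifier the paper reports as its strongest traditional baseline. The toy Lithuanian tokens below are hypothetical; the authors used lemma unigrams and richer preprocessing (emoticon replacement, diacritic restoration).

```python
import math
from collections import Counter

class MultinomialNB:
    """Tiny multinomial Naive Bayes over token unigrams with Laplace
    smoothing, illustrating the NBM baseline in its simplest form."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels))
                      for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for doc, c in zip(docs, labels):
            toks = doc.lower().split()
            self.counts[c].update(toks)
            self.vocab.update(toks)
        self.totals = {c: sum(self.counts[c].values()) for c in self.classes}
        return self

    def predict(self, doc):
        v = len(self.vocab)
        def score(c):
            s = self.prior[c]
            for t in doc.lower().split():
                # Laplace (add-one) smoothing handles unseen tokens.
                s += math.log((self.counts[c][t] + 1) / (self.totals[c] + v))
            return s
        return max(self.classes, key=score)

# Hypothetical two-class toy corpus for illustration only.
clf = MultinomialNB().fit(
    ["geras filmas", "labai geras", "blogas filmas", "labai blogas"],
    ["pos", "pos", "neg", "neg"],
)
```

The real task is three-class (positive/negative/neutral), which this implementation also supports by simply passing a third label.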
Open AccessArticle Utilizing Transfer Learning and Homomorphic Encryption in a Privacy Preserving and Secure Biometric Recognition System
Received: 4 December 2018 / Revised: 24 December 2018 / Accepted: 26 December 2018 / Published: 29 December 2018
Viewed by 856 | PDF Full-text (8439 KB) | HTML Full-text | XML Full-text
Abstract
Biometric verification systems have become prevalent in the modern world with the wide usage of smartphones. These systems heavily rely on storing sensitive biometric data on the cloud. Because biometric data like fingerprints and irises cannot be changed, storing them on the cloud creates vulnerability and can potentially have catastrophic consequences if these data are leaked. In recent years, in order to preserve the privacy of users, homomorphic encryption has been used to enable computation on encrypted data and to eliminate the need for decryption. This work presents DeepZeroID: a privacy-preserving, cloud-based, multiple-party biometric verification system that uses homomorphic encryption. Via transfer learning, training on sensitive biometric data is eliminated, and one pre-trained deep neural network is used as the feature extractor. By developing an exhaustive search algorithm, this feature extractor is applied to the tasks of biometric verification and liveness detection. By eliminating the need to train on or decrypt the sensitive biometric data, this system preserves privacy, requires zero knowledge of the sensitive data distribution, and is highly scalable. Our experimental results show that DeepZeroID delivers a 95.47% F1 score in the verification of combined iris and fingerprint feature vectors with zero false positives, and 100% accuracy in liveness detection. Full article
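To make the homomorphic-encryption idea concrete, the toy sketch below implements textbook Paillier encryption with deliberately tiny fixed primes: the cloud can add two encrypted values by multiplying their ciphertexts, without ever decrypting. This illustrates only the additive property such systems rely on; it is not the paper's actual scheme, parameters, or protocol, and real deployments use keys of 2048+ bits.

```python
import math
import random

def paillier_keygen(p=1789, q=1861):
    """Toy Paillier key pair from fixed small primes (g = n + 1)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    # L(g^lam mod n^2) = lam, so mu is its modular inverse mod n.
    mu = pow((pow(n + 1, lam, n * n) - 1) // n, -1, n)
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    """c = g^m * r^n mod n^2 with random r coprime to n."""
    (n,) = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(priv, c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    n, lam, mu = priv
    return (pow(c, lam, n * n) - 1) // n * mu % n

def add_encrypted(pub, c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds plaintexts."""
    (n,) = pub
    return c1 * c2 % (n * n)
```

Encrypted feature-vector comparisons in such systems reduce to exactly this kind of ciphertext arithmetic performed server-side.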
Open AccessArticle Neural Network-Based Formula for the Buckling Load Prediction of I-Section Cellular Steel Beams
Received: 29 November 2018 / Revised: 21 December 2018 / Accepted: 21 December 2018 / Published: 26 December 2018
Cited by 1 | Viewed by 1194 | PDF Full-text (3377 KB) | HTML Full-text | XML Full-text
Abstract
Cellular beams are an attractive option for the steel construction industry due to their versatility in terms of strength, size, and weight. A further benefit is the integration of services within the web openings, which reduces the ceiling-to-floor depth (and thus the building's height) and has a great economic impact. Moreover, the complex localized and global failure modes characterizing these members have led several researchers to focus on the development of more efficient design guidelines. This paper proposes an artificial neural network (ANN)-based formula to precisely compute the critical elastic buckling load of simply supported cellular beams under uniformly distributed vertical loads. The 3645-point dataset used in the ANN design was obtained from an extensive parametric finite element analysis performed in ABAQUS. The independent variables adopted as ANN inputs are the beam's length, opening diameter, web-post width, cross-section height, web thickness, flange width, flange thickness, and the distance between the last opening edge and the end support. The proposed model shows strong potential as an effective design tool. The maximum and average relative errors among the 3645 data points were found to be 3.7% and 0.4%, respectively, whereas the average computing time per data point is under a millisecond on any current personal computer. Full article
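Once trained, a one-hidden-layer network of this kind collapses into an explicit design formula, y = W2 · tanh(W1 x + b1) + b2, which is why evaluation takes well under a millisecond. The sketch below evaluates such a formula with placeholder weights; the paper's calibrated weights, activation choices, and eight-input architecture are not reproduced here.

```python
import math

def ann_formula(x, W1, b1, W2, b2):
    """Evaluate a trained one-hidden-layer ANN as a closed-form formula:
    y = W2 . tanh(W1 @ x + b1) + b2. Weights here are placeholders,
    not the paper's calibrated values."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2
```

In the paper's setting, `x` would hold the eight normalized geometric inputs and `y` the predicted critical elastic buckling load.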
Open AccessArticle Prototypes of User Interfaces for Mobile Applications for Patients with Diabetes
Received: 7 October 2018 / Revised: 7 December 2018 / Accepted: 18 December 2018 / Published: 23 December 2018
Viewed by 772 | PDF Full-text (1238 KB) | HTML Full-text | XML Full-text
Abstract
We live in a heavily technologized global society. It is therefore not surprising that efforts are being made to integrate current information technology into the treatment of diabetes mellitus. This paper is dedicated to improving the treatment of this disease through the use of well-designed mobile applications. Our analysis of relevant literature sources and existing solutions has revealed that the current state of mobile applications for diabetics is unsatisfactory. These limitations relate both to the content and to the Graphical User Interface (GUI) of existing applications. Following the analysis of relevant studies, there are four key elements that a diabetes mobile application should contain: (1) blood glucose level monitoring; (2) effective treatment; (3) proper eating habits; and (4) physical activity. As the next step in this study, three prototypes of new mobile applications were designed, each representing one group of applications according to a set of given rules. The solution best matching users' preferences was determined through a questionnaire survey of 30 respondents, who participated after providing informed consent. Participants were aged 15 to 30 years; 13 were male and 17 female. As a result of this study, the specifications of the proposed application were identified, which aim to respond to the findings of the analytical part of the study and to eliminate the limitations of current solutions. All of the respondents preferred an application that includes not only the key functions but also a number of additional functions, namely synchronization with an external device for measuring blood glucose levels, while five-sixths of them considered the suggested additional functions sufficient. Full article
(This article belongs to the Special Issue Computer Technologies in Personalized Medicine and Healthcare)
Computers EISSN 2073-431X Published by MDPI AG, Basel, Switzerland