
Table of Contents

Computers, Volume 8, Issue 1 (March 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Displaying articles 1-8
Open Access Article: Hidden Link Prediction in Criminal Networks Using the Deep Reinforcement Learning Technique
Received: 23 October 2018 / Revised: 20 December 2018 / Accepted: 21 December 2018 / Published: 11 January 2019
Viewed by 128 | PDF Full-text (3141 KB) | HTML Full-text | XML Full-text
Abstract
Criminal network activities, which are usually secret and stealthy, present certain difficulties in conducting criminal network analysis (CNA) because of the lack of complete datasets. The collection of criminal activity data in these networks tends to be incomplete and inconsistent, which is reflected structurally in the criminal network in the form of missing nodes (actors) and links (relationships). Criminal networks are commonly analyzed using social network analysis (SNA) models. Most machine learning techniques that rely on the metrics of SNA models in the development of hidden or missing link prediction models use supervised learning. However, supervised learning usually requires a large dataset to train the link prediction model in order to achieve an optimum performance level. This research therefore explores the application of deep reinforcement learning (DRL) in developing a criminal network hidden link prediction model from the reconstruction of a corrupted criminal network dataset. The experiment conducted on the model indicates that the dataset generated by the DRL model through self-play or self-simulation can be used to train the link prediction model. The DRL link prediction model exhibits better performance than a conventional supervised machine learning technique, such as the gradient boosting machine (GBM), trained with a relatively small domain dataset.
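The comparison with a supervised GBM baseline implies a standard pipeline of SNA-metric link features feeding a classifier. A minimal sketch of such a baseline follows; it is not the authors' code, and the use of networkx for the SNA metrics and scikit-learn's GradientBoostingClassifier is an assumption made purely for illustration.

```python
# Minimal sketch, not the authors' code: SNA-metric features for candidate links
# and a gradient boosting baseline. networkx and scikit-learn are assumed.
import networkx as nx
from sklearn.ensemble import GradientBoostingClassifier

def link_features(G, u, v):
    """Simple SNA-based features for a candidate link (u, v)."""
    common = len(list(nx.common_neighbors(G, u, v)))
    jaccard = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
    adamic_adar = next(nx.adamic_adar_index(G, [(u, v)]))[2]
    return [common, jaccard, adamic_adar]

def train_gbm_baseline(G, positive_pairs, negative_pairs):
    """Fit a GBM on labelled node pairs (1 = hidden link exists, 0 = no link)."""
    X = [link_features(G, u, v) for u, v in positive_pairs + negative_pairs]
    y = [1] * len(positive_pairs) + [0] * len(negative_pairs)
    return GradientBoostingClassifier().fit(X, y)
```

In the paper's setting, the labelled pairs would come from the DRL model's self-play reconstruction of the corrupted network rather than from a large hand-labelled dataset.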
Open Access Editorial: Acknowledgement to Reviewers of Computers in 2018
Published: 10 January 2019
Viewed by 132 | PDF Full-text (440 KB) | HTML Full-text | XML Full-text
Abstract
Rigorous peer review is the cornerstone of high-quality academic publishing [...]
Open Access Article: Position Certainty Propagation: A Localization Service for Ad-Hoc Networks
Received: 11 November 2018 / Revised: 31 December 2018 / Accepted: 2 January 2019 / Published: 7 January 2019
Viewed by 133 | PDF Full-text (1155 KB) | HTML Full-text | XML Full-text
Abstract
Location services for ad-hoc networks are of indispensable value for a wide range of applications, such as the Internet of Things (IoT) and vehicular ad-hoc networks (VANETs). Each context requires a solution that addresses the specific needs of the application. For instance, IoT sensor nodes have resource constraints (e.g., limited computational capabilities), so a localization service should be highly efficient to conserve the lifespan of these nodes. We propose an optimized, energy-aware, low-computation solution requiring only three GPS-equipped nodes (anchor nodes) in the network. Moreover, the computations are lightweight and can be distributed among the nodes. Knowing the maximum communication range of all nodes and the distances between 1-hop neighbors, each node localizes itself and shares its location with the network in an efficient manner. We simulate the proposed algorithm in the NS-3 simulator and compare our solution with state-of-the-art methods. Our method is capable of localizing more nodes (≈90% of the nodes in a network with an average degree of ≈10).
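The core geometric step a node can perform once it knows its distances to three already-localized (anchor) neighbors is trilateration. The sketch below is a generic 2-D trilateration, not the authors' Position Certainty Propagation algorithm; NumPy is assumed, and the anchor positions and ranges are made-up example values.

```python
# Hypothetical trilateration sketch (not the authors' implementation): a node that
# knows its distances d1, d2, d3 to three anchors at known 2-D positions can solve
# the linearized circle equations for its own position.
import numpy as np

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Estimate a 2-D position from three anchors and measured ranges."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    # Subtracting the circle equations pairwise gives a linear system A x = b.
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Example: anchors at (0, 0), (10, 0), (0, 10); true position (3, 4).
print(trilaterate((0, 0), (10, 0), (0, 10), 5.0, np.hypot(7, 4), np.hypot(3, 6)))
```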
Open Access Article: Robust Cochlear-Model-Based Speech Recognition
Received: 14 October 2018 / Revised: 21 December 2018 / Accepted: 23 December 2018 / Published: 1 January 2019
Viewed by 226 | PDF Full-text (491 KB) | HTML Full-text | XML Full-text
Abstract
Accurate speech recognition can provide a natural interface for human–computer interaction. Recognition rates of modern speech recognition systems are highly dependent on background noise levels, and the choice of acoustic feature extraction method can have a significant impact on system performance. This paper presents a robust speech recognition system based on a front-end motivated by human cochlear processing of audio signals. In the proposed front-end, cochlear behavior is first emulated by the filtering operations of a gammatone filterbank and subsequently by an inner hair cell (IHC) processing stage. Experimental results using a continuous-density Hidden Markov Model (HMM) recognizer show that the proposed Gammatone Hair Cell (GHC) coefficients yield slightly lower recognition rates under clean speech conditions, but a significant improvement in performance under noisy conditions compared to the standard Mel-Frequency Cepstral Coefficient (MFCC) baseline.
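As an illustration of the kind of cochlear front-end described, the sketch below builds gammatone impulse responses at given centre frequencies and filters a signal with them. It is not the authors' implementation; the 4th-order gammatone form, the Glasberg and Moore ERB formula, and the 1.019 bandwidth factor are standard textbook choices assumed here, and the subsequent IHC stage and HMM recognizer are omitted.

```python
# Illustrative sketch (not the paper's code): a 4th-order gammatone filterbank
# front-end approximated by direct FIR convolution with gammatone impulse
# responses. NumPy only; parameters are typical textbook values, not the authors'.
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4):
    """Impulse response g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t)."""
    t = np.arange(0, duration, 1.0 / fs)
    b = 1.019 * erb(fc)  # commonly used bandwidth scaling
    g = t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def gammatone_filterbank(signal, fs, centre_freqs):
    """Filter the signal with one gammatone channel per centre frequency."""
    return np.stack([np.convolve(signal, gammatone_ir(fc, fs), mode="same")
                     for fc in centre_freqs])
```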
Open Access Article: Sentiment Analysis of Lithuanian Texts Using Traditional and Deep Learning Approaches
Received: 27 November 2018 / Revised: 21 December 2018 / Accepted: 24 December 2018 / Published: 1 January 2019
Viewed by 271 | PDF Full-text (4553 KB) | HTML Full-text | XML Full-text
Abstract
We describe sentiment analysis experiments performed on a Lithuanian Internet comment dataset using traditional machine learning (Naïve Bayes Multinomial (NBM) and Support Vector Machine (SVM)) and deep learning (Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN)) approaches. The traditional machine learning techniques were used with features based on lexical, morphological, and character information. The deep learning approaches were applied on top of two types of word embeddings (Word2Vec continuous bag-of-words with negative sampling and FastText). Both the traditional and deep learning approaches had to solve the positive/negative/neutral sentiment classification task on the balanced and full dataset versions. The best deep learning result (an accuracy of 0.706) was achieved on the full dataset with a CNN applied on top of the FastText embeddings, with replaced emoticons and eliminated diacritics. The traditional machine learning approaches demonstrated their best performance (an accuracy of 0.735) on the full dataset with the NBM method, replaced emoticons, restored diacritics, and lemma unigrams as features. Although the traditional machine learning approaches were superior to the deep learning methods, deep learning demonstrated good results when applied to small datasets.
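For readers unfamiliar with the deep learning setup, the sketch below shows the general shape of a Conv1D sentiment classifier over pre-trained word embeddings such as FastText. It is not the authors' configuration; Keras is assumed, and the filter count, kernel size, and dense layer width are placeholders.

```python
# Hypothetical sketch of a CNN over pre-trained (e.g. FastText) word embeddings
# for positive/negative/neutral classification. Not the authors' architecture.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(vocab_size, embedding_dim, embedding_matrix):
    # embedding_matrix: (vocab_size, embedding_dim) array of pre-trained vectors.
    model = keras.Sequential([
        layers.Embedding(vocab_size, embedding_dim,
                         embeddings_initializer=keras.initializers.Constant(embedding_matrix),
                         trainable=False),
        layers.Conv1D(128, kernel_size=3, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(3, activation="softmax"),  # positive / negative / neutral
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```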
Open Access Article: Utilizing Transfer Learning and Homomorphic Encryption in a Privacy Preserving and Secure Biometric Recognition System
Received: 4 December 2018 / Revised: 24 December 2018 / Accepted: 26 December 2018 / Published: 29 December 2018
Viewed by 299 | PDF Full-text (8439 KB) | HTML Full-text | XML Full-text
Abstract
Biometric verification systems have become prevalent in the modern world with the wide usage of smartphones. These systems heavily rely on storing sensitive biometric data in the cloud. Because biometric data such as fingerprints and irises cannot be changed, storing them in the cloud creates vulnerability and can potentially have catastrophic consequences if these data are leaked. In recent years, in order to preserve the privacy of users, homomorphic encryption has been used to enable computation on encrypted data and to eliminate the need for decryption. This work presents DeepZeroID: a privacy-preserving, cloud-based, multi-party biometric verification system that uses homomorphic encryption. Via transfer learning, training on sensitive biometric data is eliminated and one pre-trained deep neural network is used as the feature extractor. By developing an exhaustive search algorithm, this feature extractor is applied to the tasks of biometric verification and liveness detection. By eliminating the need to train on and decrypt the sensitive biometric data, the system preserves privacy, requires zero knowledge of the sensitive data distribution, and is highly scalable. Our experimental results show that DeepZeroID delivers a 95.47% F1 score in the verification of combined iris and fingerprint feature vectors with zero false positives, and 100% accuracy in liveness detection.
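As a toy illustration of computing on encrypted feature vectors (the general principle only, not the authors' scheme, which would require a more capable homomorphic encryption library), the sketch below uses the python-paillier (`phe`) package to evaluate a dot product between a client's encrypted features and a server-side plaintext template without decrypting the features.

```python
# Toy illustration of computation on encrypted data, not the paper's scheme:
# an additively homomorphic dot product with python-paillier (`phe`).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

client_features = [0.12, -0.40, 0.88, 0.05]            # extracted feature vector
encrypted_features = [public_key.encrypt(x) for x in client_features]

template_weights = [0.10, -0.35, 0.90, 0.00]           # server-side plaintext template

# Paillier supports ciphertext + ciphertext and ciphertext * plaintext scalar,
# which is enough for a dot product when one operand stays in plaintext.
encrypted_score = encrypted_features[0] * template_weights[0]
for c, w in zip(encrypted_features[1:], template_weights[1:]):
    encrypted_score = encrypted_score + c * w

print(private_key.decrypt(encrypted_score))             # score decrypted by the key holder
```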
Open Access Article: Neural Network-Based Formula for the Buckling Load Prediction of I-Section Cellular Steel Beams
Received: 29 November 2018 / Revised: 21 December 2018 / Accepted: 21 December 2018 / Published: 26 December 2018
Viewed by 555 | PDF Full-text (3377 KB) | HTML Full-text | XML Full-text
Abstract
Cellular beams are an attractive option for the steel construction industry due to their versatility in terms of strength, size, and weight. A further benefit is the integration of services, thereby reducing the ceiling-to-floor depth (and thus the building's height), which has a great economic impact. Moreover, the complex localized and global failures characterizing these members have led several researchers to focus on developing more efficient design guidelines. This paper proposes an artificial neural network (ANN)-based formula to precisely compute the critical elastic buckling load of simply supported cellular beams under uniformly distributed vertical loads. The 3645-point dataset used in the ANN design was obtained from an extensive parametric finite element analysis performed in ABAQUS. The independent variables adopted as ANN inputs are the beam's length, opening diameter, web-post width, cross-section height, web thickness, flange width, flange thickness, and the distance between the last opening edge and the end support. The proposed model shows strong potential as an effective design tool. The maximum and average relative errors among the 3645 data points were found to be 3.7% and 0.4%, respectively, while the average computing time per data point is less than a millisecond on any current personal computer.
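The described model is, in essence, a small feed-forward regression network over eight geometric inputs. The sketch below shows that general setup with scikit-learn's MLPRegressor; it is not the authors' trained network, and the hidden-layer size and the randomly generated data are placeholders standing in for the FE-derived dataset.

```python
# Hypothetical sketch of an ANN regression over the eight listed geometric inputs,
# not the authors' network or formula. scikit-learn is assumed; data are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FEATURES = ["length", "opening_diameter", "web_post_width", "section_height",
            "web_thickness", "flange_width", "flange_thickness", "end_distance"]

# X: (n_samples, 8) geometric parameters from FE runs; y: critical buckling loads.
X = np.random.rand(200, len(FEATURES))   # placeholder for the FE dataset
y = np.random.rand(200)                  # placeholder target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(model.predict(X[:1]))              # predicted buckling load for one design
```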
Open Access Article: Prototypes of User Interfaces for Mobile Applications for Patients with Diabetes
Received: 7 October 2018 / Revised: 7 December 2018 / Accepted: 18 December 2018 / Published: 23 December 2018
Viewed by 305 | PDF Full-text (1238 KB) | HTML Full-text | XML Full-text
Abstract
We live in a heavily technologized global society. It is therefore not surprising that efforts are being made to integrate current information technology into the treatment of diabetes mellitus. This paper is dedicated to improving the treatment of this disease through the use of well-designed mobile applications. Our analysis of relevant literature sources and existing solutions revealed that the current state of mobile applications for diabetics is unsatisfactory. These limitations relate both to the content and to the Graphical User Interface (GUI) of existing applications. Following the analysis of relevant studies, four key elements that a diabetes mobile application should contain were identified: (1) blood glucose level monitoring; (2) effective treatment; (3) proper eating habits; and (4) physical activity. As the next step in this study, three prototypes of new mobile applications were designed, each representing one group of applications according to a set of given rules. The preferred solution, based on users' preferences, was determined using a questionnaire survey conducted with a sample of 30 respondents who participated after providing their informed consent. Participants were aged 15 to 30 years; 13 were male and 17 were female. As a result of this study, the specifications of the proposed application were identified, which aims to respond to the findings of the analytical part of the study and to eliminate the limitations of the current solutions. All of the respondents expressed a preference for an application that includes not only the key functions but also a number of additional functions, namely synchronization with an external device for measuring blood glucose levels, while five-sixths of them found the suggested additional functions sufficient.
(This article belongs to the Special Issue Computer Technologies in Personalized Medicine and Healthcare)
Computers (EISSN 2073-431X) is published by MDPI AG, Basel, Switzerland.