
Table of Contents

Computers, Volume 5, Issue 4 (December 2016)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-13

Research

Jump to: Review

Open Access Article Array Multipliers for High Throughput in Xilinx FPGAs with 6-Input LUTs
Computers 2016, 5(4), 20; doi:10.3390/computers5040020
Received: 19 June 2016 / Revised: 13 September 2016 / Accepted: 18 September 2016 / Published: 23 September 2016
Cited by 1 | PDF Full-text (2172 KB) | HTML Full-text | XML Full-text
Abstract
Multiplication is the dominant operation for many applications implemented on field-programmable gate arrays (FPGAs). Although most current FPGA families have embedded hard multipliers, soft multipliers using lookup tables (LUTs) in the logic fabric remain important. This paper presents a novel two-operand addition circuit (patent pending) that combines radix-4 partial-product generation with addition and shows how it can be used to implement two’s-complement array multipliers. The circuit is specific to modern Xilinx FPGAs that are based on a 6-input LUT architecture. The proposed pipelined multipliers use 42%–52% fewer LUTs, and some versions can be clocked up to 23% faster than delay-optimized LogiCORE IP multipliers. This allows 1.72–2.10-times as many multipliers to be implemented in the same logic fabric and potentially offers 1.86–2.58-times the throughput by increasing the clock frequency. Full article
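The abstract mentions radix-4 partial-product generation; the patented circuit itself is not described here, but the recoding it builds on is standard modified-Booth radix-4. A minimal Python sketch of that recoding (an illustration only, not the paper's FPGA circuit):

```python
def booth_radix4_digits(b, n):
    """Recode an n-bit two's-complement multiplier b (n even) into
    radix-4 Booth digits in {-2, -1, 0, 1, 2}, least significant first."""
    bits = [(b >> i) & 1 for i in range(n)]
    prev = 0  # implicit bit below the LSB
    digits = []
    for i in range(0, n, 2):
        lo, hi = bits[i], bits[i + 1]
        digits.append(-2 * hi + lo + prev)  # overlapping 3-bit group
        prev = hi
    return digits

def multiply(a, b, n=8):
    """Signed multiply: sum the radix-4 partial products d_i * a * 4**i."""
    mask = (1 << n) - 1  # view b as an n-bit two's-complement pattern
    return sum(d * a * 4 ** i
               for i, d in enumerate(booth_radix4_digits(b & mask, n)))
```

Each digit selects only 0, ±a or ±2a, which halves the number of partial products relative to radix-2 and keeps each one cheap to generate in LUT fabric.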

Open Access Article Non-Invasive Sensor Technology for the Development of a Dairy Cattle Health Monitoring System
Computers 2016, 5(4), 23; doi:10.3390/computers5040023
Received: 29 July 2016 / Revised: 7 October 2016 / Accepted: 8 October 2016 / Published: 12 October 2016
PDF Full-text (578 KB) | HTML Full-text | XML Full-text
Abstract
The intention of this research is to establish a relationship between dairy cattle diseases and various non-invasive sensors for the development of a health monitoring system. This paper expands on the conference paper titled “Sensor technology for animal health monitoring” published in the International Journal on Smart Sensing and Intelligent Systems (S2IS) for the proceedings of the International Conference on Sensing Technology (ICST) 2014. This paper studies and explores particular characteristics of dairy cattle’s health and behavioural symptoms. The aim is to consider the nature of the diseases a cow may have and relate each with one or more sensors that are suitable for accurate measurement of the behavioural changes. The research uses ontological relationship mapping, or ontology matching, to integrate heterogeneous databases of diseases and sensors, and explains this in detail. This study identifies the sensors needed to determine illnesses in a dairy cow and how they would be beneficial for the development of a non-invasive, wearable, smart dairy cattle health monitoring system worn on the cow’s neck. It also explains how the primary sensors identified by this research can be used to forecast cattle health in a simple, basic manner. The scope of this paper is limited to a discussion of the non-invasive, wearable sensors that are needed to determine cattle diseases. We focused only on non-invasive sensors because, unlike invasive sensors, they are easy to install on cows and require no training to install. The development of such a system and its evaluation are beyond the scope of this paper and are left for our next paper. Full article
(This article belongs to the Special Issue Theory, Design and Prototyping of Wearable Electronics and Computing)

Open Access Article Low Noise Low Power CMOS Telescopic-OTA for Bio-Medical Applications
Computers 2016, 5(4), 25; doi:10.3390/computers5040025
Received: 11 August 2016 / Revised: 12 October 2016 / Accepted: 14 October 2016 / Published: 25 October 2016
PDF Full-text (2427 KB) | HTML Full-text | XML Full-text
Abstract
The preamplifier block is crucial in bio-medical signal processing. The power-intensive Operational Transconductance Amplifier (OTA) is considered, and the performance of the preamplifier is studied. A low noise, low power telescopic OTA is proposed in this work. To reduce the noise contribution of the active load transistors, a source degeneration technique is incorporated in the current stealing branch of the OTA. The OTA design optimization is achieved by the gm/Id methodology, which helps to determine the device geometrical parameters (W/L ratio). The proposed design was implemented in 90 nm CMOS with a bias current of 1.6 µA and a supply voltage of 1.2 V. The post-layout simulation results of the proposed amplifier gave a gain of 62 dB with a phase margin of 57°, a CMRR of 78 dB, an input-referred noise of 3.2 µVrms, a Noise Efficiency Factor (NEF) of 1.86 and a power consumption of 1.92 µW. Full article

Open Access Article Development and Evaluation of the Virtual Prototype of the First Saudi Arabian-Designed Car
Computers 2016, 5(4), 26; doi:10.3390/computers5040026
Received: 31 August 2016 / Revised: 13 October 2016 / Accepted: 14 October 2016 / Published: 25 October 2016
PDF Full-text (7891 KB) | HTML Full-text | XML Full-text
Abstract
Prototyping and evaluation are imperative phases of the present product design and development process. Although digital modeling and analysis methods are widely employed at various product development stages, building a physical prototype still makes the typical process expensive and time-consuming. Therefore, it is necessary to implement new technologies, such as virtual prototyping, which can enable industry to have a rapid and more controlled decision-making process. Virtual prototyping has come a long way in recent years, and current environments enable stereoscopic visuals, surround sound and ample interaction with the generated models. It is also important to evaluate how representative the developed virtual prototype is when compared to its real-world counterpart, as well as the sense of presence reported by users of the virtual prototype. This paper describes the systematic procedure used to develop a virtual prototype of Gazal-1 (i.e., the first car prototype designed by Saudi engineers) in a semi-immersive virtual environment. The steps to develop a virtual prototype from CAD (computer-aided design) models are explained in detail. Various issues involved in the different phases of the development of the virtual prototype are also discussed comprehensively. The paper further describes the results of the subjective assessment of the developed virtual prototype of the Saudi Arabian-designed automobile. Users’ feedback is recorded using a presence questionnaire. Based on the user study, it is revealed that the virtual prototype is representative of the real Saudi Arabian car and offers a flexible environment to analyze design features when compared against its physical prototype. The capabilities of the virtual environment are validated with the application of the car prototype. Finally, vital requirements and directions for future research are also presented. Full article

Open Access Article A Security Analysis of Cyber-Physical Systems Architecture for Healthcare
Computers 2016, 5(4), 27; doi:10.3390/computers5040027
Received: 23 June 2016 / Revised: 4 October 2016 / Accepted: 25 October 2016 / Published: 31 October 2016
PDF Full-text (2950 KB) | HTML Full-text | XML Full-text
Abstract
This paper surveys the available system architectures for cyber-physical systems. Several candidate architectures are examined using a series of essential qualities for cyber-physical systems for healthcare. Next, diagrams detailing the expected functionality of infusion pumps in two of the architectures are analyzed. The STRIDE Threat Model is then used to decompose each to determine possible security issues and how they can be addressed. Finally, a comparison of the major security issues in each architecture is presented to help determine which is most adaptable to meet the security needs of cyber-physical systems in healthcare. Full article
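As background for the STRIDE decomposition described above: each STRIDE threat category is conventionally paired with the security property it violates. A small Python encoding of that standard mapping (illustrative only; the paper's own analysis is carried out on its architecture diagrams):

```python
# The six STRIDE threat categories and the security property each violates.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service": "Availability",
    "Elevation of privilege": "Authorization",
}

def affected_properties(threats):
    """Map a list of identified threats to the security properties at risk."""
    return sorted({STRIDE[t] for t in threats})
```

For example, `affected_properties(["Tampering", "Denial of service"])` returns `["Availability", "Integrity"]`, summarising which guarantees an infusion-pump design would need to defend.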

Open Access Article DeepCAD: A Computer-Aided Diagnosis System for Mammographic Masses Using Deep Invariant Features
Computers 2016, 5(4), 28; doi:10.3390/computers5040028
Received: 5 August 2016 / Revised: 28 September 2016 / Accepted: 26 October 2016 / Published: 31 October 2016
Cited by 2 | PDF Full-text (1985 KB) | HTML Full-text | XML Full-text
Abstract
The development of a computer-aided diagnosis (CAD) system for differentiation between benign and malignant mammographic masses is a challenging task due to the use of extensive pre- and post-processing steps and ineffective feature sets. In this paper, a novel CAD system called DeepCAD is proposed, which uses four phases to overcome these problems. The speeded-up robust features (SURF) and local binary pattern variance (LBPV) descriptors are extracted from each mass. These descriptors are then transformed into invariant features. Afterwards, the deep invariant features (DIFs) are constructed in supervised and unsupervised fashion through a multilayer deep-learning architecture. A fine-tuning step is integrated to determine the features, and the final decision is performed via a softmax linear classifier. To evaluate the DeepCAD system, a dataset of 600 region-of-interest (ROI) masses, including 300 benign and 300 malignant masses, was obtained from two publicly available data sources. The performance of the DeepCAD system is compared with state-of-the-art methods in terms of area under the receiver operating characteristic (ROC) curve (AUC). The difference between the AUC of DeepCAD and the other methods is statistically significant, as DeepCAD demonstrates a sensitivity (SN) of 92%, specificity (SP) of 84.2%, accuracy (ACC) of 91.5% and AUC of 0.91. The experimental results indicate that the proposed DeepCAD system is reliable for providing aid to radiologists without the need for explicit design. Full article

Open Access Article An Improved Retrievability-Based Cluster-Resampling Approach for Pseudo Relevance Feedback
Computers 2016, 5(4), 29; doi:10.3390/computers5040029
Received: 31 July 2016 / Revised: 3 November 2016 / Accepted: 10 November 2016 / Published: 15 November 2016
PDF Full-text (633 KB) | HTML Full-text | XML Full-text
Abstract
Cluster-based pseudo-relevance feedback (PRF) is an effective approach for finding relevant documents for relevance feedback. The standard approach constructs clusters for PRF only on the basis of high similarity between retrieved documents. This works quite well if the retrieval bias of the retrieval model has no effect on the retrievability of documents. In our experiments, we observed that when a collection contains retrieval bias, highly retrievable documents of clusters are frequently retrieved at top positions for most of the queries, and these drift the relevance feedback away from relevant documents. To reduce this (retrieval bias) noise, we enhance the standard cluster construction approach by constructing clusters on the basis of both high similarity and retrievability. We call this retrievability- and cluster-based PRF. This enhanced approach keeps in the clusters only those documents that are not frequently retrieved due to retrieval bias. Although this approach improves effectiveness, it penalizes highly retrievable documents even if they are most relevant to the clusters. To handle this problem, in a second approach, we extend the basic retrievability concept by mining frequent neighbors of the clusters. The frequent-neighbors approach keeps in the clusters only those documents that are frequently retrieved with other neighbors of the clusters and infrequently retrieved with documents that are not part of the clusters. Experimental results show that the two proposed extensions are helpful for identifying relevant documents for relevance feedback and for increasing the effectiveness of queries. Full article
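The retrievability scores that the first extension builds on can be computed directly from the ranked lists a retrieval model returns per query. The sketch below uses the common cumulative model (a top-c cutoff) and a hypothetical threshold filter; the paper's exact criterion is not given in the abstract:

```python
from collections import Counter

def retrievability(rankings, c=10):
    """Cumulative retrievability r(d): the number of queries for which
    document d appears within the top-c results."""
    r = Counter()
    for ranked_docs in rankings:      # one ranked result list per query
        for doc in ranked_docs[:c]:
            r[doc] += 1
    return r

def debias_cluster(cluster, r, threshold):
    """Keep only cluster documents whose retrievability does not exceed
    a bias threshold (hypothetical cutoff, for illustration)."""
    return [d for d in cluster if r[d] <= threshold]
```

A document such as "a" below, retrieved at the top for every query, would dominate the feedback set; the filter drops it from the cluster while keeping its low-bias neighbours.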

Open Access Article Store-Carry and Forward-Type M2M Communication Protocol Enabling Guide Robots to Work together and the Method of Identifying Malfunctioning Robots Using the Byzantine Algorithm
Computers 2016, 5(4), 30; doi:10.3390/computers5040030
Received: 25 August 2016 / Revised: 14 November 2016 / Accepted: 21 November 2016 / Published: 28 November 2016
PDF Full-text (7521 KB) | HTML Full-text | XML Full-text
Abstract
This paper concerns a service in which multiple guide robots in an area display arrows to guide individual users to their destinations. It proposes a method of identifying malfunctioning robots and robots that give wrong directions to users. In this method, users’ mobile terminals and robots form a store-carry and forward-type M2M communication network, and a distributed cooperative protocol is used to enable robots to share information and identify malfunctioning robots using the Byzantine algorithm. The robots do not directly communicate with each other, but through users’ mobile terminals. We have introduced the concept of the quasi-synchronous number, so whether a certain robot is malfunctioning can be determined even when items of information held by all of the robots are not synchronized. Using simulation, we have evaluated the proposed method in terms of the rate of identifying malfunctioning robots, the rate of reaching the destination and the average length of time to reach the destination. Full article
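The abstract does not detail the protocol, but the core intuition of Byzantine fault identification is detecting "two-faced" behaviour once information has been relayed: a correct robot presents every observer the same value. A hedged Python sketch with hypothetical data shapes (the paper's actual scheme adds the quasi-synchronous number to cope with unsynchronized information):

```python
def flag_two_faced(received):
    """received[robot][observer] = the value that observer obtained
    (directly or relayed via users' terminals) from that robot.
    A correct robot presents one consistent value, so any robot whose
    relayed values disagree across observers is flagged as malfunctioning."""
    return {robot for robot, vals in received.items()
            if len(set(vals.values())) > 1}
```

For example, a robot that tells one user "east" and another "west" for the same destination is exactly the inconsistency this check surfaces.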

Open Access Article An N100-P300 Spelling Brain-Computer Interface with Detection of Intentional Control
Computers 2016, 5(4), 31; doi:10.3390/computers5040031
Received: 1 October 2016 / Revised: 18 November 2016 / Accepted: 24 November 2016 / Published: 2 December 2016
PDF Full-text (1672 KB) | HTML Full-text | XML Full-text
Abstract
A brain-computer interface (BCI) is a tool to communicate with a computer via brain signals without the user making any physical movements, thus enabling disabled people to communicate with their environment and with others. P300-based ERP spellers are a widely used spelling visual BCI using the P300 component of event-related potential (ERP). However, they have a technical problem in that at least 2 N flashes are required to present N characters. This prevents the improvement of accuracy and restricts the typing speed. To address this issue, we propose a method that uses N100 in addition to P300. We utilize novel stimulus images to detect the user’s gazing position by using N100. By using both P300 and N100, the proposed visual BCI reduces the number of flashes and improves the accuracy of the P300 speller. We also propose using N100 to classify non-control (NC) and intentional control (IC) states. In our experiments, the detection accuracy of N100 was significantly higher than that of P300 and the proposed method exhibited a higher information transfer rate (ITR) than the P300 speller. Full article
(This article belongs to the Special Issue Event-Related Potential Brain-Computer Interfaces)

Open Access Article BSEA: A Blind Sealed-Bid E-Auction Scheme for E-Commerce Applications
Computers 2016, 5(4), 32; doi:10.3390/computers5040032
Received: 4 September 2016 / Revised: 7 November 2016 / Accepted: 5 December 2016 / Published: 14 December 2016
PDF Full-text (889 KB) | HTML Full-text | XML Full-text
Abstract
Due to an increase in the number of internet users, electronic commerce has grown significantly during the last decade. The electronic auction (e-auction) is one of the best-known e-commerce applications. Even so, the security and robustness of e-auction schemes remain a challenge. Requirements like the anonymity and privacy of the bid value are under threat from attackers. An auction protocol must not compromise the anonymity or the privacy of the bid value of an honest Bidder. Keeping these requirements in mind, we first propose a controlled traceable blind signature scheme (CTBSS), because e-auction schemes should be able to trace the Bidders. Using CTBSS, a blind sealed-bid electronic auction scheme (BSEA) is proposed. We incorporate the notion of the blind signature into e-auction schemes. Moreover, both schemes are based upon elliptic curve cryptography (ECC), which provides a similar level of security with a comparatively smaller key size than discrete logarithm problem (DLP)-based e-auction protocols. The analysis shows that BSEA fulfills all the requirements of an e-auction protocol, and its total computation overhead is lower than that of existing schemes. Full article

Review

Jump to: Research

Open Access Review A Survey of 2D Face Recognition Techniques
Computers 2016, 5(4), 21; doi:10.3390/computers5040021
Received: 24 June 2016 / Revised: 18 September 2016 / Accepted: 20 September 2016 / Published: 28 September 2016
Cited by 1 | PDF Full-text (1445 KB) | HTML Full-text | XML Full-text
Abstract
Despite the existence of various biometric techniques, like fingerprints, iris scans and hand geometry, the most efficient and most widely used one is face recognition. This is because it is inexpensive, non-intrusive and natural. Therefore, researchers have developed dozens of face recognition techniques over the last few years. These techniques can generally be divided into three categories, based on the face data processing methodology: methods that use the entire face as input data for the recognition system, methods that consider not the whole face but only some features or areas of the face, and methods that use global and local face characteristics simultaneously. In this paper, we present an overview of some well-known methods in each of these categories. First, we expose the benefits of, as well as the challenges to, the use of face recognition as a biometric tool. Then, we present a detailed survey of the well-known methods by expressing each method’s principle. After that, a comparison between the three categories of face recognition techniques is provided. Furthermore, the databases used in face recognition are mentioned, and some results of the applications of these methods on face recognition databases are presented. Finally, we highlight some new promising research directions that have recently appeared. Full article

Open Access Review Ambient Technology to Assist Elderly People in Indoor Risks
Computers 2016, 5(4), 22; doi:10.3390/computers5040022
Received: 10 August 2016 / Revised: 16 September 2016 / Accepted: 26 September 2016 / Published: 10 October 2016
Cited by 2 | PDF Full-text (2061 KB) | HTML Full-text | XML Full-text
Abstract
While elderly people perform their daily indoor activities, they are subjected to several risks. To improve the quality of life of elderly people and promote healthy aging and independent living, elderly people need to be provided with an assistive technology platform to rely on during their activities. We reviewed the literature and identified the major indoor risks addressed by assistive technology that elderly people face during their indoor activities. In this paper, we identify these risks as: fall, wrong self-medication management, fire, burns, intoxication by gas/smoke, and the risk of inactivity. In addition, we discuss the existing assistive technology systems and classify the risk detection algorithms, techniques and the basic system principles and interventions to enhance safety of elderly people. Full article
(This article belongs to the Special Issue Theory, Design and Prototyping of Wearable Electronics and Computing)

Open Access Review Quantum Genetic Algorithms for Computer Scientists
Computers 2016, 5(4), 24; doi:10.3390/computers5040024
Received: 7 July 2016 / Revised: 2 October 2016 / Accepted: 11 October 2016 / Published: 15 October 2016
Cited by 2 | PDF Full-text (11230 KB) | HTML Full-text | XML Full-text
Abstract
Genetic algorithms (GAs) are a class of evolutionary algorithms inspired by Darwinian natural selection. They are popular heuristic optimisation methods based on simulated genetic mechanisms, such as mutation and crossover, and population dynamical processes such as reproduction and selection. Over the last decade, the possibility of emulating a quantum computer (a computer using quantum-mechanical phenomena to perform operations on data) has led to a new class of GAs known as “Quantum Genetic Algorithms” (QGAs). In this review, we present a discussion of this new class of GAs, including its future potential, pros and cons. The review is oriented towards computer scientists interested in QGAs while “avoiding” the possible difficulties of quantum-mechanical phenomena. Full article
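As a concrete toy version of the QGA idea the review surveys, here is a hedged Python sketch of one common formulation: qubit chromosomes whose genes are angles theta with P(bit = 1) = sin(theta)^2, "measured" into classical bitstrings and updated by a rotation gate toward the best solution found. It is applied to the OneMax problem (maximise the number of 1s); this is an illustration, not any specific algorithm from the review:

```python
import math
import random

def qga_onemax(n_bits=16, pop=8, gens=50, delta=0.05 * math.pi, seed=1):
    """Minimal quantum-inspired GA for OneMax; returns the best fitness found."""
    rng = random.Random(seed)
    # Start every qubit in an equal superposition of |0> and |1>.
    thetas = [[math.pi / 4] * n_bits for _ in range(pop)]
    best, best_fit = None, -1
    for _ in range(gens):
        for t in thetas:
            # "Measure" the chromosome: collapse each qubit to 0 or 1.
            bits = [1 if rng.random() < math.sin(th) ** 2 else 0 for th in t]
            fit = sum(bits)  # OneMax fitness: count of ones
            if fit > best_fit:
                best, best_fit = bits, fit
        for t in thetas:
            # Rotation-gate update: move each amplitude toward the best solution.
            for i, b in enumerate(best):
                t[i] += delta if b == 1 else -delta
                t[i] = min(max(t[i], 0.0), math.pi / 2)  # keep angles valid
    return best_fit
```

Note the contrast with a classical GA: the population stores probability amplitudes rather than bitstrings, and the rotation gate plays the role that selection pressure and crossover play classically.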

Journal Contact

MDPI AG
Computers Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18
Editorial Board