Special Issue "Information Technology: New Generations (ITNG 2018)"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: closed (30 September 2018)

Special Issue Editors

Guest Editor
Prof. Dr. Shahram Latifi

Department of Electrical & Computer Engineering, University of Nevada, Las Vegas, NV, USA
Interests: image processing; data and image compression; gaming and statistics; information coding; sensor networks; reliability; applied graph theory; biometrics; bio-surveillance; computer networks; fault tolerant computing; parallel processing; interconnection networks
Guest Editor
Assist. Prof. Dr. Doina Bein

Department of Computer Science, California State University, Fullerton, CA, USA
Interests: automatic dynamic decision-making; computational sensing; distributed algorithms; energy-efficient wireless networks; fault tolerant data structures; fault tolerant network coverage; graph embedding; multi-modal sensor fusion; randomized algorithms; routing and broadcasting in wireless networks; secure network communication; self-stabilizing algorithms; self-organizing ad-hoc networks; supervised machine learning; urban sensor networks; wireless sensor networks

Special Issue Information

Dear Colleagues,

Information proposes a Special Issue on “Information Technology: New Generations” (ITNG 2018). Contributors are invited to submit original papers dealing with state-of-the-art technologies pertaining to digital information and communications for publication in this Special Issue of the journal. Papers should be submitted to the Guest Editor by email: [email protected] (or to the Information Editorial Office: [email protected]). Please follow the journal's instructions regarding page limits and formatting. Research papers should reach us no later than 31 July 2018.

Prof. Dr. Shahram Latifi
Assist. Prof. Dr. Doina Bein
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Networking and wireless communications
  • Internet of Things (IoT)
  • Software Defined Networking
  • Cyber Physical Systems
  • Machine learning
  • Robotics
  • High performance computing
  • Software engineering and testing
  • Cybersecurity and privacy
  • Big Data
  • Cryptography
  • E-health
  • Sensor networks
  • Algorithms
  • Education

Published Papers (5 papers)


Research


Open Access Article: An Empirical Study of Exhaustive Matching for Improving Motion Field Estimation
Information 2018, 9(12), 320; https://doi.org/10.3390/info9120320
Received: 20 October 2018 / Revised: 6 December 2018 / Accepted: 7 December 2018 / Published: 12 December 2018
Abstract
Optical flow is defined as the motion field of pixels between two consecutive images. Traditionally, the pixel motion field (or optical flow) is estimated with an energy model composed of (i) a data term and (ii) a regularization term. The data term estimates the optical flow error, and the regularization term imposes spatial smoothness. Traditional variational models linearize the data term; this linearized version fails when the displacement of an object is larger than its own size. Recently, the precision of optical flow methods has increased through the use of additional information, obtained from correspondences computed between the two images by methods such as SIFT, deep matching, and exhaustive search. This work presents an empirical study evaluating different strategies for locating exhaustive correspondences to improve flow estimation. We considered different locations for matching: random locations, uniform locations, and locations of maximum gradient magnitude. Additionally, we tested the combination of large and medium gradients with uniform locations. We evaluated our methodology on the MPI-Sintel database, which represents the state of the art in evaluation databases. Our results on MPI-Sintel show that our proposal outperforms classical methods such as Horn-Schunck, TV-L1, and LDOF, and performs similarly to MDP-Flow.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
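The exhaustive search the abstract refers to can be illustrated with a minimal sketch (not the authors' code): block matching that tries every displacement in a search window around a chosen location and keeps the one minimizing the sum of absolute differences (SAD). The function names and the tiny frames are illustrative assumptions.

```python
# Exhaustive block matching between two grayscale frames (lists of rows).
# The patch at (x, y) in f0 is compared against every displaced patch in
# f1 within the search window, and the lowest-SAD displacement wins.

def sad(f0, f1, x, y, dx, dy, r):
    """SAD between the (2r+1)x(2r+1) patch at (x, y) in f0 and the
    patch displaced by (dx, dy) in f1."""
    total = 0
    for j in range(-r, r + 1):
        for i in range(-r, r + 1):
            total += abs(f0[y + j][x + i] - f1[y + dy + j][x + dx + i])
    return total

def exhaustive_match(f0, f1, x, y, r=1, search=2):
    """Return the displacement (dx, dy) minimizing SAD over the
    exhaustive window [-search, search]^2. The f0 patch at (x, y) is
    assumed to lie inside the frame."""
    h, w = len(f0), len(f0[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # skip displacements whose f1 patch falls outside the frame
            if not (r <= x + dx < w - r and r <= y + dy < h - r):
                continue
            cost = sad(f0, f1, x, y, dx, dy, r)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]
```

Because every displacement is scored, this search does not suffer from the large-displacement failure of linearized data terms, at the cost of a quadratic search window.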

Open Access Article: A Diabetes Management Information System with Glucose Prediction
Information 2018, 9(12), 319; https://doi.org/10.3390/info9120319
Received: 31 October 2018 / Revised: 6 December 2018 / Accepted: 7 December 2018 / Published: 12 December 2018
Abstract
Diabetes has become a serious health concern. The use and popularization of blood glucose measurement devices have led to a marked improvement in health outcomes for diabetics. Tracking and maintaining traceability between glucose measurements, insulin doses, and carbohydrate intake can provide useful information to physicians, health professionals, and patients. This paper presents an information system, called GLUMIS (GLUcose Management Information System), aimed at supporting diabetes management activities. It consists of two modules, one for glucose prediction and one for data visualization, together with a reasoner to aid users in their treatment. Through integration with glucose measurement devices, it is possible to collect historical treatment data. In addition, integration with a tool called the REALI System allows GLUMIS to also process data on insulin doses and eating habits. Quantitative and qualitative data were collected through an experimental case study involving 10 participants, which demonstrated that the GLUMIS system is feasible. The system was able to discover rules for predicting future blood glucose values by processing the history of past measurements. It then presented reports that can help diabetics choose the amount of insulin they should take and the amount of carbohydrate they should consume during the day. Rules found using one patient’s measurements were analyzed by a specialist, who found three of them useful for improving the patient’s treatment. One such rule was “if glucose before breakfast in [47, 89], then glucose at afternoon break in [160, 306]”. The results of the experimental study and the other verifications of the algorithm had a twofold objective. First, participants, through a questionnaire, viewed the visualizations as easy, or very easy, to understand. Second, the algorithm applied in the GLUMIS system gives the decision maker much more precision and less loss of information than algorithms that require the data to be discretized.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
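Rules of the form quoted in the abstract can be applied with a tiny interval-matching sketch. This is a hypothetical illustration, not the GLUMIS implementation; the function and rule names are assumptions.

```python
# Interval-based prediction rules of the kind reported in the paper,
# e.g. "if glucose before breakfast in [47, 89], then glucose at
# afternoon break in [160, 306]".

RULES = [
    # (antecedent interval for the morning reading,
    #  predicted interval for the afternoon reading)
    ((47, 89), (160, 306)),
]

def predict_afternoon(morning_glucose, rules=RULES):
    """Return the predicted afternoon interval for the first rule whose
    antecedent contains the morning reading, or None if no rule fires."""
    for (lo, hi), predicted in rules:
        if lo <= morning_glucose <= hi:
            return predicted
    return None
```

Keeping the consequent as an interval rather than a single discretized label is what preserves precision for the decision maker, as the abstract's closing claim suggests.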

Open Access Article: Prototyping a Traffic Light Recognition Device with Expert Knowledge
Information 2018, 9(11), 278; https://doi.org/10.3390/info9110278
Received: 27 September 2018 / Revised: 18 October 2018 / Accepted: 9 November 2018 / Published: 13 November 2018
Abstract
Traffic light detection and recognition (TLR) research has grown every year. In addition, Machine Learning (ML) has been widely used not only in traffic light research but in every field where it is useful and possible to generalize data and automate human behavior. ML algorithms require a large amount of data to work properly, and thus considerable computational power is required to analyze the data. We argue that expert knowledge should be used to decrease the burden of collecting a huge amount of data for ML tasks. In this paper, we show how such knowledge was used to reduce the amount of data and improve the accuracy rate for traffic light detection and recognition. Results show an improvement in the accuracy rate of around 15%. The paper also proposes a TLR device prototype that uses both the camera and the processing unit of a smartphone and can serve as a driver assistance system. To validate this prototype, a dataset was built and used to test an ML model based on an adaptive background suppression filter (AdaBSF) and Support Vector Machines (SVMs). Results show a precision of 100% and a recall of 65%.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
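The reported figures are easier to interpret through the standard definitions of the two metrics; the sketch below is a generic illustration (the counts are assumed for the example), not data from the paper. A precision of 100% with a recall of 65% means every detection was a real traffic light, but roughly a third of the lights were missed.

```python
# Standard detection metrics from counts of true positives (tp),
# false positives (fp), and false negatives (fn).

def precision(tp, fp):
    """Fraction of detections that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual traffic lights that were detected."""
    return tp / (tp + fn)

# With 65 correct detections, no false alarms, and 35 missed lights:
# precision(65, 0) -> 1.0 and recall(65, 35) -> 0.65.
```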

Open Access Article: Semantic Clustering of Functional Requirements Using Agglomerative Hierarchical Clustering
Information 2018, 9(9), 222; https://doi.org/10.3390/info9090222
Received: 31 July 2018 / Revised: 25 August 2018 / Accepted: 29 August 2018 / Published: 3 September 2018
Abstract
Software applications have become a fundamental part of the daily work of modern society, as they meet different needs of users in different domains. Such needs are known as software requirements (SRs), which are separated into functional (software services) and non-functional (quality attributes). The first step of every software development project is SR elicitation. This step is a challenging task for developers, as they need to understand and analyze SRs manually. For example, the collected functional SRs need to be categorized into different clusters to break the project down into a set of sub-projects with related SRs and devote each sub-project to a separate development team. However, clustering of functional SRs has never been considered in the literature. Therefore, in this paper, we propose an approach to automatically cluster functional requirements based on a semantic measure. An empirical evaluation is conducted using four open-access software projects. The experimental results demonstrate that the proposed approach identifies semantic clusters according to well-known measures in the field.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
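Agglomerative hierarchical clustering of requirements can be sketched in a few lines. This is an illustrative sketch, not the authors' pipeline: a plain bag-of-words cosine similarity stands in for the paper's semantic measure, and average linkage with a similarity threshold stands in for its stopping criterion.

```python
# Average-linkage agglomerative clustering of requirement sentences.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two token lists via word counts."""
    words = set(a) | set(b)
    va = [a.count(w) for w in words]
    vb = [b.count(w) for w in words]
    dot = sum(x * y for x, y in zip(va, vb))
    na = sqrt(sum(x * x for x in va))
    nb = sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

def cluster(requirements, threshold=0.5):
    """Repeatedly merge the two most similar clusters (average pairwise
    similarity) until no pair exceeds the threshold; return clusters as
    lists of requirement indices."""
    docs = [r.lower().split() for r in requirements]
    clusters = [[i] for i in range(len(docs))]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = sum(cosine(docs[a], docs[b])
                          for a in clusters[i] for b in clusters[j])
                sim /= len(clusters[i]) * len(clusters[j])
                if best is None or sim > best[0]:
                    best = (sim, i, j)
        if best[0] < threshold:
            break
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

With three requirements such as "the system shall export reports", "the system shall print reports", and "users can log in with a password", the first two merge into one cluster and the unrelated third stays separate, which is exactly the sub-project split the abstract motivates.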

Review


Open Access Review: The Impact of Code Smells on Software Bugs: A Systematic Literature Review
Information 2018, 9(11), 273; https://doi.org/10.3390/info9110273
Received: 1 October 2018 / Revised: 30 October 2018 / Accepted: 2 November 2018 / Published: 6 November 2018
Abstract
Context: Code smells are associated with poor design and programming style, which often degrade code quality and hamper code comprehensibility and maintainability. Goal: To identify published studies that provide evidence of the influence of code smells on the occurrence of software bugs. Method: We conducted a Systematic Literature Review (SLR) to reach the stated goal. Results: The SLR selected studies from July 2007 to September 2017 that analyzed the source code of open source software projects and several code smells. Based on evidence from the 16 studies covered in this SLR, we conclude that 24 code smells are more influential in the occurrence of bugs than the remaining smells analyzed. In contrast, three studies reported that at least 6 code smells are less influential in such occurrences. Evidence from the selected studies also points to tools, techniques, and procedures that should be applied to analyze the influence of the smells. Conclusions: To the best of our knowledge, this is the first SLR to target this goal. It provides an up-to-date and structured understanding of the influence of code smells on the occurrence of software bugs, based on findings systematically collected from relevant references over the last decade.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2018))
