Special Issue "Information Technology: New Generations (ITNG 2017)"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: closed (23 February 2018)

Special Issue Editors

Guest Editor
Prof. Dr. Shahram Latifi

Department of Electrical & Computer Engineering, University of Nevada, Las Vegas, NV, USA
Interests: image processing; data and image compression; gaming and statistics; information coding; sensor networks; reliability; applied graph theory; biometrics; bio-surveillance; computer networks; fault tolerant computing; parallel processing; interconnection networks
Guest Editor
Assist. Prof. Dr. Doina Bein

Department of Computer Science, California State University, Fullerton, CA, USA
Interests: automatic dynamic decision-making; computational sensing; distributed algorithms; energy-efficient wireless networks; fault tolerant data structures; fault tolerant network coverage; graph embedding; multi-modal sensor fusion; randomized algorithms; routing and broadcasting in wireless networks; secure network communication; self-stabilizing algorithms; self-organizing ad-hoc networks; supervised machine learning; urban sensor networks; wireless sensor networks

Special Issue Information

Dear Colleagues,

Information proposes a Special Issue on “Information Technology: New Generations” (ITNG). Contributors are invited to submit original papers dealing with state-of-the-art technologies pertaining to digital information and communications for publication in this Special Issue of the journal. Papers should be submitted to the Guest Editor by email: dbein@fullerton.edu. Please follow the instructions available here regarding the number of pages and the page formatting. Research papers should reach us no later than 30 June 2017.

Dr. Shahram Latifi
Dr. Doina Bein
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Networking and wireless communications
  • Internet of Things (IoT)
  • Software Defined Networking
  • Cyber Physical Systems
  • Machine learning
  • Robotics
  • High performance computing
  • Software engineering and testing
  • Cybersecurity and privacy
  • Big Data
  • Cryptography
  • E-health
  • Sensor networks
  • Algorithms
  • Education

Published Papers (8 papers)


Research

Jump to: Review

Open Access Article: Multiple Congestion Points and Congestion Reaction Mechanisms for Improving DCTCP Performance in Data Center Networks
Information 2018, 9(6), 139; https://doi.org/10.3390/info9060139
Received: 23 February 2018 / Revised: 22 May 2018 / Accepted: 6 June 2018 / Published: 8 June 2018
Abstract
To address problems such as long delays, latency fluctuations, and frequent timeouts that conventional Transmission Control Protocol (TCP) exhibits in a data center environment, Data Center TCP (DCTCP) has been proposed as a TCP replacement that satisfies the requirements of data center networks. It is gaining popularity in both academia and industry due to its high throughput and low latency, and it is widely deployed in data centers. However, recent research on the performance of DCTCP has found that, most of the time, the sender’s congestion window reduces to one segment, which results in timeouts. In addition, the nonlinear marking mechanism of DCTCP causes severe queue oscillation, which results in low throughput. To address these issues, we propose multiple congestion points using a double threshold and congestion reaction using window adjustment (DT-CWA) mechanisms that improve the performance of DCTCP by reducing the number of timeouts. The results of a series of simulations in a typical data center network topology using the Qualnet network simulator demonstrate that the proposed window-based solution can significantly reduce timeouts and noticeably improve throughput compared to DCTCP under various network conditions.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
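As background for the window-adjustment idea, DCTCP’s standard reaction (per RFC 8257) keeps a running estimate of the fraction of ECN-marked packets and cuts the congestion window in proportion to it. The sketch below shows only that baseline rule — the paper’s DT-CWA thresholds and reaction details are not reproduced here, and the parameter values are illustrative:

```python
def update_alpha(alpha, frac_marked, g=1.0 / 16):
    """EWMA of the fraction of ECN-marked packets per RTT (DCTCP, RFC 8257).

    g is the estimation gain; frac_marked is F, the fraction of packets
    marked during the last round-trip time.
    """
    return (1 - g) * alpha + g * frac_marked


def react_to_congestion(cwnd, alpha, min_cwnd=1.0):
    """Cut the congestion window in proportion to the congestion extent.

    Mild congestion (small alpha) trims the window slightly; persistent
    marking (alpha near 1) approaches classic TCP halving. Clamping at
    min_cwnd mirrors the one-segment floor the abstract mentions.
    """
    return max(min_cwnd, cwnd * (1 - alpha / 2))


# Illustrative run: sustained heavy marking drives alpha up and
# progressively shrinks the window.
alpha, cwnd = 0.0, 100.0
for _ in range(5):
    alpha = update_alpha(alpha, frac_marked=1.0)
    cwnd = react_to_congestion(cwnd, alpha)
```

The proposed DT-CWA mechanism modifies how and when this reduction is triggered; the point of the sketch is only the proportional-cut behavior that distinguishes DCTCP from TCP’s fixed halving.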

Open Access Article: Hadoop Cluster Deployment: A Methodological Approach
Information 2018, 9(6), 131; https://doi.org/10.3390/info9060131
Received: 27 February 2018 / Revised: 24 May 2018 / Accepted: 25 May 2018 / Published: 29 May 2018
Abstract
For a long time, data was treated as a general problem because it merely represents fragments of an event without any inherent purpose. The last decade, however, has been all about information and how to extract it. Seeking meaning in data and trying to solve scalability problems, many frameworks have been developed to improve data storage and analysis. Hadoop was presented as a powerful framework for dealing with large amounts of data. However, doubts remain about how to handle its deployment and whether there is any reliable method to compare the performance of distinct Hadoop clusters. This paper presents a methodology based on benchmark analysis to guide Hadoop cluster deployment. The experiments employed Apache Hadoop and the Hadoop distributions of Cloudera, Hortonworks, and MapR, analyzing the architectures both locally and in the cloud, using centralized and geographically distributed servers. The results show that the methodology can be applied dynamically for a reliable comparison among different architectures. Additionally, the study suggests that the knowledge acquired can be used to improve the data analysis process by understanding the Hadoop architecture.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))

Open Access Article: Hybrid Visualization Approach to Show Documents Similarity and Content in a Single View
Information 2018, 9(6), 129; https://doi.org/10.3390/info9060129
Received: 27 February 2018 / Revised: 16 May 2018 / Accepted: 17 May 2018 / Published: 23 May 2018
Abstract
Multidimensional projection techniques can be employed to project datasets from a higher- to a lower-dimensional space (e.g., 2D space). These techniques can be used to present the relationships of dataset instances based on distance, by grouping or separating clusters of instances in the projected space. Several works have used multidimensional projections to aid in the exploration of document collections. Even though the projection techniques can organize a dataset, the user still needs to read each document to understand how the clusters were formed. Alternatively, techniques such as topic extraction or tag clouds can be employed to present a summary of the document contents. To minimize the exploratory work and to aid in cluster analysis, this work proposes a new hybrid visualization that shows both document relationships and content in a single view, combining multidimensional projections with tag clouds. We show the effectiveness of the proposed approach in the exploration of two document collections composed of world news.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))

Open Access Article: Analysis of Document Pre-Processing Effects in Text and Opinion Mining
Information 2018, 9(4), 100; https://doi.org/10.3390/info9040100
Received: 23 February 2018 / Revised: 10 April 2018 / Accepted: 17 April 2018 / Published: 20 April 2018
Cited by 1
Abstract
Typically, textual information is available as unstructured data, which requires processing before data mining algorithms can handle it; this processing is known as the pre-processing step of the overall text mining process. This paper aims at analyzing the strong impact that the pre-processing step has on most mining tasks. Therefore, we propose a methodology to vary distinct combinations of pre-processing steps and to analyze which pre-processing combination yields high precision. To evaluate different combinations of pre-processing methods, experiments were performed comparing combinations of stemming, term weighting, term elimination based on a low-frequency cut, and stop-word elimination. These combinations were applied to text and opinion mining tasks, from which correct classification rates were computed to highlight the strong impact of the pre-processing combinations. Additionally, we provide graphical representations of each pre-processing combination to show how visual approaches are useful for revealing the processing effects on document similarities and group formation (i.e., cohesion and separation).
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
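To make the idea of varying pre-processing combinations concrete, here is a minimal, self-contained sketch (not the authors’ code; the stop-word list, parameters, and weighting scheme are illustrative) that toggles stop-word elimination and a low-frequency cut, then applies TF-IDF term weighting:

```python
from collections import Counter
import math

STOP_WORDS = {"the", "a", "of", "and", "to", "on", "in", "is"}  # tiny illustrative list


def preprocess(text, remove_stops=True, min_freq=1):
    """One pre-processing combination: lowercasing, optional stop-word
    elimination, and a low-frequency cut, returning a term-count vector."""
    tokens = text.lower().split()
    if remove_stops:
        tokens = [t for t in tokens if t not in STOP_WORDS]
    counts = Counter(tokens)
    return {t: c for t, c in counts.items() if c >= min_freq}


def tf_idf(doc_vectors):
    """Re-weight term counts by inverse document frequency; terms that
    occur in every document get weight zero."""
    n = len(doc_vectors)
    df = Counter(t for vec in doc_vectors for t in vec)
    return [{t: c * math.log(n / df[t]) for t, c in vec.items()}
            for vec in doc_vectors]


docs = ["the cat sat on the mat",
        "the dog chased the cat",
        "a bird is in the sky"]
vectors = tf_idf([preprocess(d) for d in docs])
```

Swapping `remove_stops`, `min_freq`, or the weighting function in and out is exactly the kind of combination grid the methodology evaluates, with each resulting vector space fed to the downstream classifier.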

Open Access Article: Experimental Analysis of Stemming on Jurisprudential Documents Retrieval
Information 2018, 9(2), 28; https://doi.org/10.3390/info9020028
Received: 3 January 2018 / Revised: 24 January 2018 / Accepted: 25 January 2018 / Published: 27 January 2018
Abstract
Stemming algorithms are commonly used during the textual preprocessing phase in order to reduce data dimensionality. However, this reduction achieves different levels of efficacy depending on the domain to which it is applied. For instance, there are reports in the literature showing the effect of stemming when applied to dictionaries or news text bases. On the other hand, we have not found any studies analyzing the impact of stemming on Brazilian judicial jurisprudence, composed of decisions handed down by the judiciary, a fundamental instrument for law professionals. Thus, this work presents two complete experiments, showing the results obtained through the analysis and evaluation of stemmers applied to real jurisprudential documents from the Court of Justice of the State of Sergipe. In the first experiment, the results showed that, among the analyzed algorithms, RSLP (Removedor de Sufixos da Lingua Portuguesa) had the greatest capacity for dimensionality reduction of the data. In the second, through the evaluation of the stemming algorithms on legal document retrieval, RSLP-S (Removedor de Sufixos da Lingua Portuguesa Singular) and UniNE (University of Neuchâtel), less aggressive stemmers, presented the best cost-benefit ratio, since they reduced the dimensionality of the data and increased the effectiveness of the information retrieval evaluation metrics in one of the analyzed collections.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
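The dimensionality reduction being measured here can be illustrated with a deliberately tiny suffix-stripping stemmer — this is not RSLP (whose rule set is far richer); the suffix list and minimum stem length below are made up purely to show how collapsing inflected forms shrinks the vocabulary:

```python
# Illustrative suffix list, checked longest-first -- NOT the RSLP rules.
SUFFIXES = ["istas", "ista", "mente", "s"]


def stem(word):
    """Strip the first matching suffix, keeping at least a 3-letter stem."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word


def vocabulary_reduction(tokens):
    """Fraction by which stemming shrinks the distinct-term count --
    the 'dimensionality reduction' the experiments quantify."""
    before = len(set(tokens))
    after = len({stem(t) for t in tokens})
    return 1 - after / before


tokens = ["jurista", "juristas", "rapidamente"]
reduction = vocabulary_reduction(tokens)  # "jurista"/"juristas" collapse
```

A more aggressive stemmer merges more forms (a larger reduction) but risks conflating unrelated terms, which is exactly the cost-benefit trade-off the abstract reports for RSLP versus RSLP-S and UniNE.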

Open Access Article: Usability as the Key Factor to the Design of a Web Server for the CReF Protein Structure Predictor: The wCReF
Information 2018, 9(1), 20; https://doi.org/10.3390/info9010020
Received: 20 December 2017 / Revised: 11 January 2018 / Accepted: 13 January 2018 / Published: 17 January 2018
Abstract
Protein structure prediction servers use various computational methods to predict the three-dimensional structure of proteins from their amino acid sequence. Predicted models are used to infer protein function and guide experimental efforts. This can contribute to solving the problem of predicting tertiary protein structures, one of the main unsolved problems in bioinformatics. The challenge is to understand the relationship between the amino acid sequence of a protein and its three-dimensional structure, which is related to the function of these macromolecules. This article is an extended version of the article wCReF: The Web Server for the Central Residue Fragment-based Method (CReF) Protein Structure Predictor, published in the 14th International Conference on Information Technology: New Generations. In the first version, we presented wCReF, a protein structure prediction server for the central residue fragment-based method. The wCReF interface was developed with a focus on usability and user interaction. With this tool, users can enter the amino acid sequence of their target protein and obtain its approximate 3D structure without having to install the multitude of necessary tools.
In this extended version, we present the design process of the prediction server in detail, which includes: (A) identification of user needs, aiming to understand the features of a protein structure prediction server, the end-user profiles, and the commonly performed tasks; (B) server usability inspection: in order to define wCReF’s requirements and features, we used heuristic evaluation guided by experts in both the human-computer interaction and bioinformatics domains, applied to the protein structure prediction servers I-TASSER, QUARK and Robetta; as a result, issues were found against all heuristics, totaling 89 usability problems; (C) software requirements document and prototype: the assessment results guided the key features that wCReF must have, compiled in a software requirements document, from which prototyping was carried out; (D) wCReF usability analysis: detection of new usability problems with end users by adapting the Ssemugabi satisfaction questionnaire; the users’ evaluation yielded 80% positive feedback; (E) finally, some specific guidelines for interface design are presented, which may contribute to the design of interactive computational resources for the field of bioinformatics. In addition to the results of the original article, we present the methodology used in wCReF’s design and evaluation process (sample, procedures, evaluation tools) and the results obtained.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))

Open Access Feature Paper: SeMiner: A Flexible Sequence Miner Method to Forecast Solar Time Series
Information 2018, 9(1), 8; https://doi.org/10.3390/info9010008
Received: 12 December 2017 / Revised: 29 December 2017 / Accepted: 2 January 2018 / Published: 4 January 2018
Cited by 1
Abstract
X-rays emitted by the Sun can damage the electronic devices of spaceships, satellites, positioning systems and electricity distribution grids. Thus, forecasting solar X-rays is needed to warn organizations and mitigate undesirable effects. Traditional mining classification methods categorize observations into labels, and we aim to extend this approach to predict future X-ray levels. Therefore, we developed the “SeMiner” method, which allows the prediction of future events. “SeMiner” processes X-rays into sequences using a new algorithm called “Series-to-Sequence” (SS), which employs a sliding-window approach configured by a specialist. The sequences are then submitted to a classifier to generate a model that predicts X-ray levels. An optimized version of “SS” was also developed using parallelization techniques and Graphics Processing Units in order to speed up the entire forecasting process. The results indicate that “SeMiner” is well suited to predicting solar X-rays and solar flares within the defined time range. It reached more than 90% accuracy for a 2-day forecast, and True Positive (TPR) and True Negative (TNR) rates above 80% when predicting X-ray levels. It also reached an accuracy of 72.7%, with a TPR of 70.9% and a TNR of 79.7%, when predicting solar flares. Moreover, the optimized version of “SS” proved to be 4.36 times faster than the initial version.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
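The sliding-window transformation at the heart of “Series-to-Sequence” can be sketched in a few lines — this mirrors only the general idea described in the abstract, not the paper’s implementation; the window size, horizon, and labeling rule stand in for the specialist-configured parameters:

```python
def series_to_sequences(series, window, horizon, label_fn):
    """Turn a numeric time series into (sequence, label) training pairs.

    Each pair consists of `window` past observations and a class label
    derived from the value `horizon` steps after the window -- the form
    a conventional classifier needs in order to act as a forecaster.
    """
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        sequence = tuple(series[i : i + window])             # past observations
        label = label_fn(series[i + window + horizon - 1])   # future level
        pairs.append((sequence, label))
    return pairs


# Toy flux series labeled "high"/"low" two steps ahead of each window.
flux = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
pairs = series_to_sequences(flux, window=3, horizon=2,
                            label_fn=lambda v: "high" if v >= 6 else "low")
```

Feeding such pairs to any standard classifier turns a labeling method into a forecaster, which is the extension of traditional mining classification that the abstract describes.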

Review

Jump to: Research

Open Access Review: Frequent Releases in Open Source Software: A Systematic Review
Information 2017, 8(3), 109; https://doi.org/10.3390/info8030109
Received: 26 June 2017 / Revised: 12 August 2017 / Accepted: 31 August 2017 / Published: 5 September 2017
Cited by 2
Abstract
Context: The need to accelerate software delivery, supporting faster time-to-market and frequent community developer/user feedback, has led to relevant changes in software development practices. One example is the adoption of Rapid Release (RR) by several Open Source Software (OSS) projects. This raises the need to understand how these projects deal with software release approaches. Goal: Identify the main characteristics of software release initiatives in OSS projects, the motivations behind their adoption, the strategies applied, and the advantages and difficulties found. Method: We conducted a Systematic Literature Review (SLR) to reach the stated goal. Results: The SLR includes 33 publications from January 2006 to July 2016 and reveals nine advantages that characterize software release approaches in OSS projects; four challenges; three implementation possibilities and two main motivations towards the adoption of RR; and, finally, four main strategies to implement it. Conclusion: This study provides an up-to-date and structured understanding of software release approaches in the context of OSS projects, based on findings systematically collected from relevant references published in the last decade.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
