Special Issue "Information Technology: New Generations (ITNG 2017)"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Applications".

Deadline for manuscript submissions: 23 February 2018

Special Issue Editors

Guest Editor
Prof. Dr. Shahram Latifi

Department of Electrical & Computer Engineering, University of Nevada, Las Vegas, NV, USA
Interests: image processing; data and image compression; gaming and statistics; information coding; sensor networks; reliability; applied graph theory; biometrics; bio-surveillance; computer networks; fault tolerant computing; parallel processing; interconnection networks
Guest Editor
Assist. Prof. Dr. Doina Bein

Department of Computer Science, California State University, Fullerton, CA, USA
Interests: automatic dynamic decision-making; computational sensing; distributed algorithms; energy-efficient wireless networks; fault tolerant data structures; fault tolerant network coverage; graph embedding; multi-modal sensor fusion; randomized algorithms; routing and broadcasting in wireless networks; secure network communication; self-stabilizing algorithms; self-organizing ad-hoc networks; supervised machine learning; urban sensor networks; wireless sensor networks

Special Issue Information

Dear Colleagues,

The journal Information proposes a Special Issue on “Information Technology: New Generations” (ITNG). Contributors are invited to submit original papers dealing with state-of-the-art technologies pertaining to digital information and communications for publication in this Special Issue of the journal. Papers should be submitted to the Guest Editor by email: dbein@fullerton.edu. Please follow the Instructions for Authors regarding the number of pages and the page formatting. The research papers should reach us no later than June 30, 2017.

Dr. Shahram Latifi
Dr. Doina Bein
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 850 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Networking and wireless communications
  • Internet of Things (IoT)
  • Software-defined networking
  • Cyber-physical systems
  • Machine learning
  • Robotics
  • High-performance computing
  • Software engineering and testing
  • Cybersecurity and privacy
  • Big data
  • Cryptography
  • E-health
  • Sensor networks
  • Algorithms
  • Education

Published Papers (4 papers)


Research


Open Access Article: Experimental Analysis of Stemming on Jurisprudential Documents Retrieval
Information 2018, 9(2), 28; doi:10.3390/info9020028
Received: 3 January 2018 / Revised: 24 January 2018 / Accepted: 25 January 2018 / Published: 27 January 2018
Abstract
Stemming algorithms are commonly used during the textual preprocessing phase in order to reduce data dimensionality. However, this reduction presents different levels of efficacy depending on the domain to which it is applied. For instance, there are reports in the literature that show the effect of stemming when applied to dictionaries or to news text collections. On the other hand, we have not found any studies analyzing the impact of stemming on Brazilian judicial jurisprudence, composed of decisions handed down by the judiciary and a fundamental instrument for legal professionals to perform their work. Thus, this work presents two complete experiments, showing the results obtained through the analysis and evaluation of stemmers applied to real jurisprudential documents originating from the Court of Justice of the State of Sergipe. In the first experiment, the results showed that, among the analyzed algorithms, RSLP (Removedor de Sufixos da Lingua Portuguesa) had the greatest capacity for dimensionality reduction of the data. In the second, through the evaluation of the stemming algorithms on legal document retrieval, RSLP-S (Removedor de Sufixos da Lingua Portuguesa Singular) and UniNE (University of Neuchâtel), the less aggressive stemmers, presented the best cost-benefit ratio, since they reduced the dimensionality of the data and increased the effectiveness of the information retrieval evaluation metrics in one of the analyzed collections.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
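For readers unfamiliar with RSLP, NLTK ships an implementation of this stemmer. The snippet below is a minimal sketch of the dimensionality-reduction effect the abstract refers to, measured as the drop in vocabulary size on a toy Portuguese text; it is an illustration of the general idea only, not the experimental pipeline used in the paper, and the example sentences are invented.

    # Minimal, self-contained sketch: stemming collapses inflected forms,
    # shrinking the vocabulary of a bag-of-words representation.
    # Illustration only, not the paper's pipeline; example text is invented.
    import nltk
    from nltk.stem import RSLPStemmer

    nltk.download("rslp", quiet=True)   # RSLP stemming rules for Portuguese
    nltk.download("punkt", quiet=True)  # tokenizer models

    # Toy Portuguese, legal-style sentences (hypothetical text).
    docs = [
        "Os recursos foram julgados improcedentes pelo tribunal de justiça",
        "O recurso julgado procedente anulou a decisão recorrida",
    ]

    stemmer = RSLPStemmer()
    tokens = [t.lower()
              for d in docs
              for t in nltk.word_tokenize(d, language="portuguese")]

    vocab = set(tokens)                               # index terms before stemming
    stemmed_vocab = {stemmer.stem(t) for t in vocab}  # index terms after stemming

    print("terms before stemming:", len(vocab))
    print("terms after stemming: ", len(stemmed_vocab))
    # Fewer distinct terms after stemming means a lower-dimensional
    # term-document matrix for the retrieval system.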

Open Access Article: Usability as the Key Factor to the Design of a Web Server for the CReF Protein Structure Predictor: The wCReF
Information 2018, 9(1), 20; doi:10.3390/info9010020
Received: 20 December 2017 / Revised: 11 January 2018 / Accepted: 13 January 2018 / Published: 17 January 2018
Abstract
Protein structure prediction servers use various computational methods to predict the three-dimensional structure of proteins from their amino acid sequence. Predicted models are used to infer protein function and guide experimental efforts. This can contribute to solving the problem of predicting tertiary protein structures, one of the main unsolved problems in bioinformatics. The challenge is to understand the relationship between the amino acid sequence of a protein and its three-dimensional structure, which is related to the function of these macromolecules. This article is an extended version of the article “wCReF: The Web Server for the Central Residue Fragment-based Method (CReF) Protein Structure Predictor”, published in the 14th International Conference on Information Technology: New Generations. In the first version, we presented wCReF, a protein structure prediction server for the central residue fragment-based method. The wCReF interface was developed with a focus on usability and user interaction. With this tool, users can enter the amino acid sequence of their target protein and obtain its approximate 3D structure without having to install the multitude of necessary tools. In this extended version, we present the design process of the prediction server in detail, which includes: (A) identification of user needs, aiming at understanding the features of a protein structure prediction server, the end-user profiles and the commonly performed tasks; (B) server usability inspection: in order to define wCReF’s requirements and features, we used heuristic evaluation, guided by experts in both the human-computer interaction and bioinformatics domains, applied to the protein structure prediction servers I-TASSER, QUARK and Robetta; as a result, violations were found for all heuristics, amounting to 89 usability problems; (C) software requirements document and prototype: the assessment results guided the key features that wCReF must have, compiled in a software requirements document, from which prototyping was carried out; (D) wCReF usability analysis: the detection of new usability problems with end users by adapting the Ssemugabi satisfaction questionnaire, in which the users’ evaluation yielded 80% positive feedback; (E) finally, some specific guidelines for interface design are presented, which may contribute to the design of interactive computational resources for the field of bioinformatics. In addition to the results of the original article, we present the methodology used in wCReF’s design and evaluation process (sample, procedures, evaluation tools) and the results obtained.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))

Open Access Feature Paper Article: SeMiner: A Flexible Sequence Miner Method to Forecast Solar Time Series
Information 2018, 9(1), 8; doi:10.3390/info9010008
Received: 12 December 2017 / Revised: 29 December 2017 / Accepted: 2 January 2018 / Published: 4 January 2018
Abstract
X-rays emitted by the Sun can damage the electronic devices of spaceships, satellites, positioning systems and electricity distribution grids. Thus, forecasting of solar X-rays is needed to warn organizations and mitigate undesirable effects. Traditional mining classification methods categorize observations into labels, and we aim to extend this approach to predict future X-ray levels. Therefore, we developed the “SeMiner” method, which allows the prediction of future events. “SeMiner” processes X-rays into sequences employing a new algorithm called “Series-to-Sequence” (SS), which uses a sliding-window approach configured by a specialist. Then, the sequences are submitted to a classifier to generate a model that predicts X-ray levels. An optimized version of “SS” was also developed using parallelization techniques and Graphical Processing Units, in order to speed up the entire forecasting process. The obtained results indicate that “SeMiner” is well suited to predicting solar X-rays and solar flares within the defined time range. It reached more than 90% accuracy for a 2-day forecast, and more than 80% True Positive Rate (TPR) and True Negative Rate (TNR) when predicting X-ray levels. It also reached an accuracy of 72.7%, with a TPR of 70.9% and a TNR of 79.7%, when predicting solar flares. Moreover, the optimized version of “SS” proved to be 4.36 times faster than its initial version.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
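To make the “Series-to-Sequence” idea concrete, the sketch below shows a generic sliding-window transformation that turns a univariate series into (past-window, future-label) pairs that a standard classifier can learn from. The window length, forecast horizon and flux thresholds are hypothetical placeholders, not the paper’s configuration, and the code illustrates the general technique rather than the authors’ SS algorithm.

    # Sketch of a sliding-window "series-to-sequence" transformation:
    # each window of past flux values becomes a training example whose label
    # is the discretised flux level observed `horizon` steps after the window.
    # Window length, horizon and thresholds are hypothetical, not the paper's.
    from typing import List, Tuple

    def flux_level(flux: float) -> str:
        """Discretise an X-ray flux value into a coarse level (hypothetical thresholds)."""
        if flux >= 1e-5:
            return "high"
        if flux >= 1e-6:
            return "medium"
        return "low"

    def series_to_sequences(series: List[float], window: int, horizon: int
                            ) -> List[Tuple[List[float], str]]:
        """Slide a window over the series; pair each window with the level
        observed `horizon` steps after the window ends."""
        pairs = []
        for start in range(len(series) - window - horizon + 1):
            past = series[start:start + window]
            future = series[start + window + horizon - 1]
            pairs.append((past, flux_level(future)))
        return pairs

    # Usage: the (features, label) pairs can be fed to any standard classifier.
    flux = [3e-7, 5e-7, 9e-7, 2e-6, 8e-6, 3e-5, 1e-5, 4e-6, 9e-7, 2e-7]
    for features, label in series_to_sequences(flux, window=3, horizon=2):
        print(features, "->", label)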

Review


Open Access Review: Frequent Releases in Open Source Software: A Systematic Review
Information 2017, 8(3), 109; doi:10.3390/info8030109
Received: 26 June 2017 / Revised: 12 August 2017 / Accepted: 31 August 2017 / Published: 5 September 2017
Abstract
Context: The need to accelerate software delivery, support a faster time-to-market, and obtain frequent feedback from the developer/user community has led to relevant changes in software development practices. One example is the adoption of Rapid Release (RR) by several Open Source Software (OSS) projects. This raises the need to know how these projects deal with software release approaches. Goal: Identify the main characteristics of software release initiatives in OSS projects, the motivations behind their adoption, the strategies applied, as well as the advantages and difficulties found. Method: We conducted a Systematic Literature Review (SLR) to reach the stated goal. Results: The SLR includes 33 publications from January 2006 to July 2016 and reveals nine advantages that characterize software release approaches in OSS projects; four challenges; three implementation possibilities and two main motivations towards the adoption of RR; and, finally, four main strategies to implement it. Conclusion: This study provides an up-to-date and structured understanding of software release approaches in the context of OSS projects, based on findings systematically collected from a list of relevant references in the last decade.
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2017))
