Special Issue "Applications of Information Theory to Software Engineering"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (20 November 2020).

Special Issue Editors

Dr. David Clark
Guest Editor
Department of Computer Science, University College London, London, UK
Interests: software testing; information flow control; program analysis; malware detection; applications of information theory to software engineering
Dr. Earl T. Barr
Guest Editor
Department of Computer Science, University College London, London, UK
Interests: machine learning for software engineering; malware detection; debugging; program testing and analysis; applications of information theory to software engineering

Special Issue Information

Dear Colleagues,

Software engineering is fast becoming a fundamental industrial activity for our civilization, just as software increasingly underpins all other activities. Software engineering needs to measure many things: the diversity of a test set, the similarity between software and its refactoring, the leakage of secrets from a supposedly secure software system, and the convergence of pattern recognition when training a deep neural net, to name a few.

Software engineering measures of diversity and its twin, similarity, measures of redundancy, of convergence, and measures of flow in software are ultimately rooted in information theory. Significant waypoints in understanding this connection include Denning’s observation that Shannon entropy could be used to measure leaks in secure software, Feldt’s use of Kolmogorov complexity to measure cognitive diversity in test sets for software, Clark’s use of entropy loss to model error masking in software testing, and Hindle’s modelling of software redundancy.

This Special Issue seeks to recognise, extend, and deepen this fundamental connection between software engineering and information theory. We solicit papers that contain novel applications of information theory to problems in software engineering. Of particular interest are papers in this context that introduce novel results in information theory.

Dr. David Clark
Dr. Earl T. Barr
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • test oracle problems
  • test set diversity
  • failed error propagation
  • coincidental correctness
  • software similarity
  • software testing
  • software redundancy
  • software robustness
  • software naturalness
  • software metrics
  • Kolmogorov complexity
  • leakage of secrets
  • Renyi entropy
  • Shannon entropy
  • algorithmic information theory

Published Papers (1 paper)

Research

Open Access Article
Entropy-Based Approach in Selection Exact String-Matching Algorithms
Entropy 2021, 23(1), 31; https://doi.org/10.3390/e23010031 - 28 Dec 2020
Abstract
The string-matching paradigm is applied in every branch of computer science, and in science in general. The existence of a plethora of string-matching algorithms makes it hard to choose the best one for any particular case. Expressing, measuring, and testing algorithm efficiency is a challenging task with many potential pitfalls. Algorithm efficiency can be measured by the usage of different resources. In software engineering, algorithmic productivity is a property of an algorithm's execution identified with the computational resources the algorithm consumes; for maximum efficiency, the goal is to minimize resource usage. We are guided by the fact that standard measures of algorithm efficiency, such as execution time, directly depend on the number of executed actions. Setting aside power consumption and memory usage, which also depend on the algorithm type and the techniques used in its development, we have developed a methodology that enables researchers to choose an efficient algorithm for a specific domain. The efficiency of string-searching algorithms is usually assessed independently of the domain texts being searched. This paper presents the idea that algorithm efficiency depends on the properties of the searched string and on the properties of the texts being searched, accompanied by a theoretical analysis of the proposed approach. In the proposed methodology, algorithm efficiency is expressed through a character comparison count metric. This metric is a formal quantitative measure independent of algorithm implementation subtleties and computer platform differences. The model is developed for a particular problem domain by using appropriate domain data (patterns and texts) and, for that domain, provides a ranking of algorithms according to the patterns' entropy.
The proposed approach is limited to online exact string-matching problems and is based on the information entropy of the search pattern. Meticulous empirical testing illustrates the implementation of the methodology and supports its soundness. Full article
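The two ingredients of the abstract's methodology — the Shannon entropy of a search pattern and an implementation-independent character comparison count — can be sketched in a few lines of Python. This is a minimal illustration of the general idea, not the authors' implementation; the function names and the choice of a naive matcher are assumptions for demonstration purposes.

```python
import math
from collections import Counter

def pattern_entropy(pattern: str) -> float:
    """Shannon entropy (bits per character) of a search pattern,
    computed from its empirical character distribution."""
    counts = Counter(pattern)
    n = len(pattern)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def naive_search_comparisons(text: str, pattern: str) -> int:
    """Character comparison count for a naive exact string matcher:
    a platform-independent proxy for the work the algorithm performs."""
    comparisons = 0
    m, n = len(pattern), len(text)
    for i in range(n - m + 1):
        for j in range(m):
            comparisons += 1          # one character comparison
            if text[i + j] != pattern[j]:
                break                 # mismatch: slide the window
    return comparisons

# A low-entropy pattern over a repetitive text forces many comparisons;
# comparing counts across algorithms, bucketed by pattern entropy,
# is the kind of ranking the methodology describes.
print(pattern_entropy("aaaa"))                      # 0.0 bits per character
print(naive_search_comparisons("aaaa", "aa"))       # 6 comparisons
```

Running the same comparison count against several matchers (e.g. Boyer–Moore or Knuth–Morris–Pratt variants) on domain texts, and grouping results by pattern entropy, would reproduce the style of ranking the paper proposes.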
(This article belongs to the Special Issue Applications of Information Theory to Software Engineering)
