Code Generation, Analysis and Quality Testing

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (30 September 2019) | Viewed by 19050

Special Issue Editors


Guest Editor
2Ai, School of Technology, Polytechnic Institute of Cávado e Ave, 4750-810 Barcelos, Portugal
Interests: natural language processing; programming languages; compilers; computer programming education

Guest Editor
Department of Informatics, Media Arts and Design School, Polytechnic of Porto, 4200-465 Porto, Portugal
Interests: computer programming education; gamification; knowledge management systems; e-learning

Special Issue Information

Dear Colleagues,

Programming is still a mainly manual process. The amount of detail a programmer needs to consider while developing an algorithm can be quite large. To help programmers, there are three main areas of research:

1) Code Generation: The development of tools that generate code from domain-specific languages. Classic examples are compiler generators such as flex and yacc; more recent tools, like Microsoft's Entity Framework or the VDMTools specification language, follow the same idea. Nevertheless, there is still room for improvement and automation (a toy sketch of the idea follows this list);

2) Code Analysis: A huge amount of code runs on a daily basis around the world. While some of it is quite recent, a large amount of legacy software is still in operation. Whether to keep legacy software running smoothly or to help find bugs in recently developed software, there are tools and companies dedicated to analyzing code and finding its flaws. Unfortunately, most of these tools perform only static analysis and still have limitations when it comes to detecting some critical situations (a minimal static check is sketched after this list);

3) Code Testing: Closely related to code analysis is software quality testing. In the last few years, a large number of programmers have adopted agile techniques that foster test-driven development, feature-driven development, and continuous integration and deployment. The main drawback is that tests are written by the same programmers who develop the application, which slows development down and can result in biased unit tests (a small unit-test example is sketched after this list).
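
As a toy illustration of area (1), the sketch below generates Python source code from a small, invented field specification. The spec format, class name, and fields are assumptions made purely for illustration; real generators such as yacc or Entity Framework start from far richer grammars and models.

```python
# Toy code generator: emits a Python class from a tiny, invented field spec.
# The spec format ("name", "type" pairs) is purely illustrative.

SPEC = {
    "class": "Point",
    "fields": [("x", "float"), ("y", "float")],
}

def generate_class(spec):
    """Render Python source code for a simple data-holding class."""
    lines = [f"class {spec['class']}:"]
    args = ", ".join(f"{name}: {typ}" for name, typ in spec["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for name, _ in spec["fields"]:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_class(SPEC))
```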
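
For area (2), here is a minimal static check built on Python's standard ast module; it flags bare except: handlers, one small example of the kind of flaw static analyzers look for. Production tools go much further, with data-flow and inter-procedural analysis.

```python
import ast

SOURCE = """
try:
    risky()
except:          # a bare except silently swallows every error
    pass
"""

def find_bare_excepts(source: str):
    """Return the line numbers of bare 'except:' handlers in the source."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

if __name__ == "__main__":
    for line in find_bare_excepts(SOURCE):
        print(f"warning: bare except at line {line}")
```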
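
For area (3), a minimal unittest example in the test-first spirit described above. The slugify helper and its expected behavior are hypothetical, chosen only to show the shape of a unit test.

```python
import unittest

def slugify(text: str) -> str:
    """Hypothetical helper: lower-case the text and join words with dashes."""
    return "-".join(text.lower().split())

class TestSlugify(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Code Quality Testing"), "code-quality-testing")

    def test_already_clean_input_is_unchanged(self):
        self.assertEqual(slugify("testing"), "testing")

if __name__ == "__main__":
    unittest.main()
```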

This Special Issue’s main focus is the development of tools and practices to help developers in these three aspects.

Prof. Alberto Simões
Prof. Ricardo Queirós
Prof. Mário Pinto
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • automated development
  • code generation
  • quality assessment
  • static code analysis
  • unit testing
  • automatic code tests
  • code quality metrics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (4 papers)


Research

21 pages, 1584 KiB  
Article
Extract Class Refactoring Based on Cohesion and Coupling: A Greedy Approach
by Musaad Alzahrani
Computers 2022, 11(8), 123; https://doi.org/10.3390/computers11080123 - 16 Aug 2022
Cited by 3 | Viewed by 2553
Abstract
A large class with many responsibilities is a design flaw that commonly occurs in real-world object-oriented systems during their lifespan. Such a class tends to be more difficult to comprehend, test, and change. Extract class refactoring (ECR) is the technique that is used to address this design flaw by trying to extract a set of smaller classes with better quality from the large class. Unfortunately, ECR is a costly process that takes great time and effort when it is conducted completely by hand. Thus, many approaches have been introduced in the literature that tried to automatically suggest the best set of classes that can be extracted from a large class. However, most of these approaches focus on improving the cohesion of the extracted classes yet neglect the coupling between them which can lead to the extraction of highly coupled classes. Therefore, this paper proposes a novel approach that considers the combination of the cohesion and coupling to identify the set of classes that can be extracted from a large class. The proposed approach was empirically evaluated based on real-world Blobs taken from two open-source object-oriented systems. The results of the empirical evaluation revealed that the proposed approach is potentially useful and leads to improvement in the overall quality. Full article
(This article belongs to the Special Issue Code Generation, Analysis and Quality Testing)
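
As a rough, generic illustration of the idea (and not the algorithm proposed in the paper), the sketch below greedily groups a Blob's methods by shared attribute usage. The class model, the Jaccard-style cohesion proxy, and the threshold are invented for illustration, and the coupling term that the paper balances against cohesion is not modelled at all.

```python
# Illustrative only: a naive greedy grouping of methods by shared attribute
# usage. This is NOT the approach from the paper; model, measure, and
# threshold are all invented for illustration.

BLOB = {  # method name -> attributes it reads or writes
    "get_name":    {"name"},
    "set_name":    {"name"},
    "total_price": {"items", "tax"},
    "add_item":    {"items"},
}

def similarity(a, b):
    """Jaccard similarity of two attribute sets (a crude cohesion proxy)."""
    return len(a & b) / len(a | b)

def greedy_groups(methods, threshold=0.3):
    """Assign each method to the first group it is sufficiently similar to."""
    groups = []  # each group: (set of attributes, list of method names)
    for name, attrs in methods.items():
        for g_attrs, g_names in groups:
            if similarity(attrs, g_attrs) >= threshold:
                g_attrs |= attrs
                g_names.append(name)
                break
        else:
            groups.append((set(attrs), [name]))
    return [names for _, names in groups]

if __name__ == "__main__":
    # Prints [['get_name', 'set_name'], ['total_price', 'add_item']]
    print(greedy_groups(BLOB))
```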

29 pages, 11989 KiB  
Article
Accidental Choices—How JVM Choice and Associated Build Tools Affect Interpreter Performance
by Jonathan Lambert, Rosemary Monahan and Kevin Casey
Computers 2022, 11(6), 96; https://doi.org/10.3390/computers11060096 - 14 Jun 2022
Cited by 1 | Viewed by 3149
Abstract
Considering the large number of optimisation techniques that have been integrated into the design of the Java Virtual Machine (JVM) over the last three decades, the Java interpreter continues to persist as a significant bottleneck in the performance of bytecode execution. This paper examines the relationship between Java Runtime Environment (JRE) performance concerning the interpreted execution of Java bytecode and the effect modern compiler selection and integration within the JRE build toolchain has on that performance. We undertook this evaluation relative to a contemporary benchmark suite of application workloads, the Renaissance Benchmark Suite. Our results show that the choice of GNU GCC compiler version used within the JRE build toolchain statistically significantly affects runtime performance. More importantly, not all OpenJDK releases and JRE JVM interpreters are equal. Our results show that OpenJDK JVM interpreter performance is associated with benchmark workload. In addition, in some cases, rolling back to an earlier OpenJDK version and using a more recent GNU GCC compiler within the build toolchain of the JRE can significantly positively impact JRE performance. Full article
(This article belongs to the Special Issue Code Generation, Analysis and Quality Testing)
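
To give a flavour of how interpreter-only measurements can be taken, the sketch below times a benchmark JAR under HotSpot's -Xint flag (which disables the JIT compiler) across several JDK installations. The JDK paths and JAR name are placeholders, and this naive wall-clock timing is far simpler than the Renaissance harness and statistical methodology used in the paper.

```python
import subprocess
import time

# Placeholder paths: point these at real JDK installations and a real benchmark JAR.
JDKS = ["/opt/jdk-11/bin/java", "/opt/jdk-17/bin/java"]
BENCHMARK_JAR = "benchmark.jar"

def time_interpreted_run(java_binary, jar, runs=3):
    """Average wall-clock time of running the JAR with the JIT disabled (-Xint)."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([java_binary, "-Xint", "-jar", jar], check=True)
        total += time.perf_counter() - start
    return total / runs

if __name__ == "__main__":
    for java in JDKS:
        print(java, f"{time_interpreted_run(java, BENCHMARK_JAR):.2f}s")
```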

16 pages, 760 KiB  
Article
Design and Implementation of SFCI: A Tool for Security Focused Continuous Integration
by Michael Lescisin, Qusay H. Mahmoud and Anca Cioraca
Computers 2019, 8(4), 80; https://doi.org/10.3390/computers8040080 - 1 Nov 2019
Viewed by 6391
Abstract
Software security is a component of software development that should be integrated throughout its entire development lifecycle, and not simply as an afterthought. If security vulnerabilities are caught early in development, they can be fixed before the software is released in production environments. Furthermore, finding a software vulnerability early in development will warn the programmer and lessen the likelihood of this type of programming error being repeated in other parts of the software project. Using Continuous Integration (CI) for checking for security vulnerabilities every time new code is committed to a repository can alert developers of security flaws almost immediately after they are introduced. Finally, continuous integration tests for security give software developers the option of making the test results public so that users or potential users are given assurance that the software is well tested for security flaws. While there already exist general-purpose continuous integration tools such as Jenkins-CI and GitLab-CI, our tool is primarily focused on integrating third party security testing programs and generating reports on classes of vulnerabilities found in a software project. Our tool performs all tests in a snapshot (stateless) virtual machine to be able to have reproducible tests in an environment similar to the deployment environment. This paper introduces the design and implementation of a tool for security-focused continuous integration. The test cases used demonstrate the ability of the tool to effectively uncover security vulnerabilities even in open source software products such as ImageMagick and a smart grid application, Emoncms. Full article
(This article belongs to the Special Issue Code Generation, Analysis and Quality Testing)
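
SFCI itself is not reproduced here; as a generic illustration of security-focused CI, the sketch below runs an off-the-shelf scanner (Bandit, a Python security linter) and fails the build when high-severity findings appear. The JSON field names reflect Bandit's report format as best as can be assumed here.

```python
import json
import subprocess
import sys

def run_bandit(path="."):
    """Run Bandit recursively over the project and return its JSON report.

    Bandit exits non-zero when it finds issues, so check=True is not used.
    """
    proc = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout)

def high_severity_issues(report):
    """Keep only high-severity findings (field names assumed from Bandit's JSON output)."""
    return [r for r in report.get("results", [])
            if r.get("issue_severity") == "HIGH"]

if __name__ == "__main__":
    issues = high_severity_issues(run_bandit())
    for issue in issues:
        print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
    sys.exit(1 if issues else 0)  # a non-zero exit fails the CI job
```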

18 pages, 2560 KiB  
Article
A Complexity Metrics Suite for Cascading Style Sheets
by Adewole Adewumi, Sanjay Misra and Robertas Damaševičius
Computers 2019, 8(3), 54; https://doi.org/10.3390/computers8030054 - 10 Jul 2019
Cited by 2 | Viewed by 5637
Abstract
We perform a theoretical and empirical analysis of a set of Cascading Style Sheets (CSS) document complexity metrics. The metrics are validated using a practical framework that demonstrates their viability. The theoretical analysis is performed using Weyuker's properties, a widely adopted approach to conducting empirical validations of metrics proposals. The empirical analysis is conducted using visual and statistical analysis of the distribution of metric values, Cliff's delta, Chi-square and Lilliefors statistical normality tests, and correlation analysis on our own dataset of CSS documents. The results show that five out of the nine metrics (56%) satisfy Weyuker's properties except for the Number of Attributes Defined per Rule Block (NADRB) metric, which satisfies six out of nine (67%) properties. In addition, the results from the statistical analysis show good statistical distribution characteristics (only the Number of Extended Rule Blocks (NERB) metric exceeds the rule-of-thumb threshold value of Cliff's delta). The correlation between the metric values and the size of the CSS documents is insignificant, suggesting that the presented metrics are indeed complexity rather than size metrics. The practical application of the presented CSS complexity metric suite is to assess the risk of CSS documents. The proposed CSS complexity metrics suite allows identification of CSS files that require immediate attention of software maintenance personnel. Full article
(This article belongs to the Special Issue Code Generation, Analysis and Quality Testing)
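
These are not the metric definitions from the paper; the sketch below only shows how naive structural counts over a stylesheet might be taken. A regex-based approach like this ignores at-rules, comments, and nesting, which a real implementation would handle with a proper CSS parser.

```python
import re

CSS = """
h1 { color: navy; font-size: 2em; }
.note { color: gray; }
"""

def rule_block_stats(css):
    """Count rule blocks and the average number of declarations per block.

    Naive regex-based approximation: ignores @media blocks, comments, and nesting.
    """
    blocks = re.findall(r"\{([^}]*)\}", css)
    decls_per_block = [len([d for d in body.split(";") if d.strip()]) for body in blocks]
    average = sum(decls_per_block) / len(decls_per_block) if decls_per_block else 0.0
    return len(blocks), average

if __name__ == "__main__":
    count, avg = rule_block_stats(CSS)
    print(f"{count} rule blocks, {avg:.1f} declarations per block on average")
```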
