
Search Results (1)

Search Parameters:
Keywords = NASA MDP

28 pages, 3224 KiB  
Article
Making More with Less: Improving Software Testing Outcomes Using a Cross-Project and Cross-Language ML Classifier Based on Cost-Sensitive Training
by Alexandre M. Nascimento, Gabriel Kenji G. Shimanuki and Luiz Alberto V. Dias
Appl. Sci. 2024, 14(11), 4880; https://doi.org/10.3390/app14114880 - 4 Jun 2024
Viewed by 2189
Abstract
As digitalization expands across all sectors, the economic toll of software defects on the U.S. economy reaches up to $2.41 trillion annually. High-profile incidents like the Boeing 737 MAX 8 crashes have shown the devastating potential of these defects, highlighting the critical importance of software testing within quality assurance frameworks. However, because comprehensive testing is complex and resource-intensive, exhaustive testing often exceeds budget constraints. This research uses a machine learning (ML) model to improve software testing decisions by pinpointing the areas most susceptible to defects and optimizing the allocation of scarce resources. Previous studies have shown promising results using cost-sensitive training to refine ML models, improving predictive accuracy by addressing class imbalance in defect prediction datasets and thereby reducing false negatives. This approach enables more targeted and effective testing effort. Nevertheless, the in-company generalizability of these models across different projects (cross-project) and programming languages (cross-language) remained untested. This study validates the approach's applicability across diverse development environments by integrating datasets from distinct projects into a unified dataset, using a more interpretable ML technique. The results demonstrate that ML can support software testing decisions, enabling teams to identify up to 7× more defective modules than the benchmark with the same testing effort.
(This article belongs to the Special Issue Soft Computing Methods and Applications for Decision Making)
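The abstract above names cost-sensitive training as the mechanism for handling class imbalance in defect prediction. As a rough illustration of that general idea, and not a reconstruction of the paper's actual pipeline, the Python sketch below weights the minority (defective) class more heavily during training so that missed defects are penalized harder. The synthetic dataset, the decision-tree choice (picked here as one interpretable ML technique), and the 10:1 weight ratio are all illustrative assumptions, not values taken from the study.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Hypothetical unified dataset: rows = software modules, columns = static
# code metrics (e.g., size or complexity measures); label 1 = defective.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.1).astype(int)  # ~10% defective: class imbalance

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Cost-sensitive training: weight the defective class more heavily so the
# learner trades extra false positives for fewer false negatives.
# The 10:1 ratio is an illustrative choice, not the paper's setting.
clf = DecisionTreeClassifier(class_weight={0: 1, 1: 10}, random_state=0)
clf.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"missed defects (false negatives): {fn}, caught defects: {tp}")

Raising the defective-class weight typically shifts errors from false negatives toward false positives, which matches the abstract's emphasis on catching more defective modules for the same testing effort.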