Time to First Decision: 29 days

Software

Software is an international, peer-reviewed, open access journal on all aspects of software engineering published quarterly online by MDPI.

All Articles (116)

Software repositories such as Git are significant sources of metadata about software projects, containing information such as modified files, change authors, and often commentary describing the change. An emerging approach to support software change impact analysis is to exploit this metadata to determine which files are linked by co-committal, i.e., when two files are frequently updated together within the same Git commit. Such information can serve as an indicator for identifying potential change-impact sets in future development activities. The aim of this study is to determine whether co-committal is a reliable indicator of links between software artifacts stored in Git and, if so, whether these links persist as the artifacts evolve—thereby offering a potentially valuable dimension for change impact analysis. To investigate this, we mined the metadata of five large Git repositories comprising over 14K commits and extracted co-change sets from the resulting data. The results show that: (1) co-committal links between artifacts vary widely in both strength and frequency, with these variations strongly influenced by the development style and activity levels of the contributing developers, and (2) although co-committal can serve as an indicator of evolutionary coupling in certain scenarios, its usefulness depends on project-specific development practices and observable patterns of developer behavior.

1 March 2026

Number of authors for all repositories and activity threshold values from 0 to 1.
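The paper's mining pipeline is not reproduced here, but its core idea, counting how often file pairs co-occur in commits and relating that to each file's overall change count, can be sketched in a few lines. The function names and the Jaccard-style strength measure below are our own illustration, not the authors' implementation:

```python
from collections import Counter
from itertools import combinations

def co_change_pairs(commits):
    """Count how often each pair of files is modified in the same commit.

    `commits` is a list of commits, each given as an iterable of file
    paths touched by that commit (a simplification of real Git metadata).
    """
    pairs = Counter()
    for files in commits:
        # Sort so that (a, b) and (b, a) count as the same link.
        for pair in combinations(sorted(set(files)), 2):
            pairs[pair] += 1
    return pairs

def change_counts(commits):
    """Count how many commits touched each individual file."""
    counts = Counter()
    for files in commits:
        for f in set(files):
            counts[f] += 1
    return counts

def co_change_strength(pairs, counts, a, b):
    """Fraction of commits touching either file in which both changed
    together (a Jaccard-style coupling strength, our assumption)."""
    together = pairs[tuple(sorted((a, b)))]
    either = counts[a] + counts[b] - together
    return together / either if either else 0.0
```

On a toy history `[["a.py", "b.py"], ["a.py", "b.py", "c.py"], ["c.py"]]`, the pair `("a.py", "b.py")` co-changes in 2 of 2 relevant commits (strength 1.0), while `("a.py", "c.py")` co-changes in 1 of 3 (strength 1/3), illustrating how link strength varies across pairs.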

Machine learning (ML) engineering increasingly incorporates principles from software and requirements engineering to improve development rigor; however, key non-functional requirements (NFRs) such as interpretability and explainability remain difficult to specify and verify using traditional requirements practices. Although prior work defines these qualities conceptually, their lack of measurable criteria prevents systematic verification. This paper presents a novel provenance-driven approach that decomposes ML interpretability and explainability NFRs into verifiable functional requirements (FRs) by leveraging model and data provenance to make model behavior transparent. The approach identifies the specific provenance artifacts required to validate each FR and demonstrates how their verification collectively establishes compliance with interpretability and explainability NFRs. The results show that ML provenance can operationalize otherwise abstract NFRs, transforming interpretability and explainability into quantifiable, testable properties and enabling more rigorous, requirements-based ML engineering.

14 February 2026

The three “Starting Point” classes and some of their subclasses in PROV-O [50].
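As a rough illustration of the decomposition idea, verifying an interpretability NFR can reduce to checking that each derived FR's required provenance artifacts are present. The FR names, artifact labels, and data shapes below are hypothetical and not taken from the paper:

```python
# Hypothetical decomposition: each functional requirement (FR) names the
# provenance artifacts whose presence makes it mechanically checkable.
REQUIRED_ARTIFACTS = {
    "FR-1 report feature importances": {"training_data_hash", "feature_importances"},
    "FR-2 trace prediction to model version": {"model_version", "inference_log"},
}

def verify_nfr(provenance, requirements=REQUIRED_ARTIFACTS):
    """Return per-FR verdicts; the NFR holds only if every FR holds.

    `provenance` maps artifact names to captured artifacts, so an FR is
    satisfied when all of its required artifact names are recorded.
    """
    verdicts = {fr: needed.issubset(provenance)
                for fr, needed in requirements.items()}
    return verdicts, all(verdicts.values())
```

The point of the sketch is the shape of the argument: once the abstract NFR is split into FRs with named evidence, compliance becomes a testable property of the provenance record rather than a matter of judgment.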

The increasing decentralization of industrial processes in Industry 4.0 necessitates the distribution and coordination of resources such as machines, materials, expertise, and knowledge across organizations in a value chain. Effective operations in such distributed environments require digitized processes and resources, interconnectedness, and a scalable management approach. This paper addresses these challenges through the knowledge-based production planning (KPP) system, which was originally developed as a monolithic prototype. We argue that the KPP system must evolve toward a service-oriented architecture (SOA) to align with distributed and interoperable Industry 4.0 requirements. The paper provides a comprehensive overview of the motivation and background of KPP, identifies the key research questions to be addressed, and presents a conceptual design for transitioning KPP into an SOA. A notable feature of the approach is its compatibility with the Arrowhead Framework (AF), which is intended to ensure interoperability with smart production environments. The contribution of this work is the first architectural concept demonstrating how KPP components can be encapsulated as services and integrated into local cloud environments, laying the foundation for adaptive, ontology-based process planning in distributed manufacturing. Beyond the conceptual architecture, a first implementation phase validates the proposed approach: the realization and evaluation of the mediator-based service layer, which operationalizes the transformation of planning data into semantic function blocks (FBs) and enables the interaction of distributed services within the envisioned SO-KPP architecture. The implementation demonstrates the feasibility of the service-oriented transformation and provides a functional proof of concept for ontology-based integration in future adaptive production planning systems.

12 February 2026

Conceptual architecture of the proposed SO-KPP system. The figure illustrates the separation of dynamic, core, manager, and Arrowhead services, as well as the ontology layer. The blue-highlighted mediator services (SPP, ECPP, and OPP) are the components implemented and evaluated in this work.
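As a loose sketch of what a mediator service in such a layer might do, the following maps a raw planning-data record onto a semantic function block by resolving its operation against a capability ontology. The `FunctionBlock` fields, dictionary keys, and ontology stub are all hypothetical and not taken from the KPP implementation:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionBlock:
    """Hypothetical semantic function block (FB): a planning step plus
    the ontology term describing the capability it requires."""
    name: str
    capability: str
    parameters: dict = field(default_factory=dict)

def mediate(planning_step, capability_ontology):
    """Transform one planning-data record into an FB.

    `capability_ontology` stands in for an ontology lookup, mapping raw
    operation names to semantic capability terms.
    """
    operation = planning_step["operation"]
    if operation not in capability_ontology:
        raise KeyError(f"no capability mapped for operation {operation!r}")
    return FunctionBlock(
        name=planning_step["id"],
        capability=capability_ontology[operation],
        parameters=planning_step.get("parameters", {}),
    )
```

Keeping this transformation in a dedicated mediator is what lets the planning core and the distributed services evolve independently: each side only agrees on the FB shape, not on each other's internals.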

The stream pipeline idiom provides a fluent and composable way to express computations over collections. It gained widespread popularity after its introduction in .NET in 2005, later influencing many platforms, including Java in 2014 with the introduction of Java Streams, and continues to be adopted in contemporary languages such as Kotlin. However, the set of operations available in standard libraries is limited, and developers often need to introduce operations that are not provided out of the box. Two options typically arise: implementing custom operations using the standard API or adopting a third-party collections library that offers a richer suite of operations. In this article, we show that both approaches may incur performance overhead, and that the former can also suffer from verbosity and reduced readability. We propose an alternative approach that remains faithful to the stream-pipeline pattern: developers implement the unit operations of the pipeline from scratch using a functional yield-based traversal pattern. We demonstrate that this approach requires low programming effort, eliminates the performance overheads of existing alternatives, and preserves the key qualities of a stream pipeline. Our experimental results show up to a 3× speedup over the use of native yield in custom extensions.

10 February 2026

Class diagram of Advancer and Yield types.
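The article's Java implementation relies on the Advancer and Yield types shown in the figure; as a language-neutral analogue, the yield-based traversal pattern it describes can be illustrated with Python generators. The `collapse` operation (drop consecutive duplicates, a common example of an operation missing from standard stream APIs) and all names here are ours, not the article's:

```python
def collapse(src):
    """Custom unit operation written from scratch in the yield-based
    traversal style: drop consecutive duplicate elements."""
    prev = object()  # sentinel that never equals a real element
    for item in src:
        if item != prev:
            yield item
            prev = item

def filter_op(pred, src):
    """A standard filter stage, written in the same style for composition."""
    for item in src:
        if pred(item):
            yield item

# Stages compose like a stream pipeline: each one pulls lazily from the
# previous, so no intermediate collections are materialized.
result = list(filter_op(lambda n: n % 2 == 1,
                        collapse([7, 7, 8, 9, 9, 11, 7])))
# result == [7, 9, 11, 7]
```

This keeps the fluent, lazily evaluated character of a stream pipeline while letting the developer define the missing unit operation directly, which is the programming model the article evaluates against standard-API workarounds and third-party collections libraries.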

Software - ISSN 2674-113X