  • Time to First Decision: 29 days

Software

Software is an international, peer-reviewed, open access journal on all aspects of software engineering published quarterly online by MDPI.

All Articles (113)

The increasing decentralization of industrial processes in Industry 4.0 necessitates the distribution and coordination of resources such as machines, materials, expertise, and knowledge across organizations in a value chain. To facilitate effective operations in such distributed environments, it is essential to digitize processes and resources, establish interconnectedness, and implement a scalable management approach. The present paper addresses these challenges through the knowledge-based production planning (KPP) system, which was originally developed as a monolithic prototype. It is argued that the KPP system must evolve towards a service-oriented architecture (SOA) in order to align with distributed and interoperable Industry 4.0 requirements. The paper provides a comprehensive overview of the motivation and background of KPP, identifies the key research questions to be addressed, and presents a conceptual design for transitioning KPP into an SOA. A distinguishing feature of the approach is its compatibility with the Arrowhead Framework (AF), which is intended to ensure interoperability with smart production environments. The contribution of this work is the first architectural concept that demonstrates how KPP components can be encapsulated as services and integrated into local cloud environments, thus laying the foundation for adaptive, ontology-based process planning in distributed manufacturing. In addition to the conceptual architecture, a first implementation phase has been conducted to validate the proposed approach. This includes the realization and evaluation of the mediator-based service layer, which operationalizes the transformation of planning data into semantic function blocks (FBs) and enables the interaction of distributed services within the envisioned SO-KPP architecture. The implementation demonstrates the feasibility of the service-oriented transformation and provides a functional proof of concept for ontology-based integration in future adaptive production planning systems.
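To make the mediator idea concrete, the following sketch shows how a planning step might be wrapped as a semantic function block by a mediator service. It is a minimal illustration only: the types MediatorService, PlanningStep, and SemanticFunctionBlock are assumptions made for exposition and do not correspond to the paper's SO-KPP code or to the Arrowhead Framework API.

```java
// Minimal, hypothetical sketch: a mediator service that wraps planning data
// as a semantic function block (FB) so it can be offered as a service in a
// local cloud. None of these types are the actual SO-KPP or Arrowhead code.

import java.util.Map;
import java.util.UUID;

// Hypothetical ontology-backed function block derived from planning data.
record SemanticFunctionBlock(UUID id, String capability, Map<String, String> parameters) {}

// Hypothetical planning input produced by the (formerly monolithic) KPP planner.
record PlanningStep(String operation, Map<String, String> resourceRequirements) {}

// The mediator encapsulates the transformation from planning data to FBs,
// the role played by the SPP, ECPP, and OPP mediator services in the paper.
interface MediatorService {
    SemanticFunctionBlock toFunctionBlock(PlanningStep step);
}

class SimpleMediator implements MediatorService {
    @Override
    public SemanticFunctionBlock toFunctionBlock(PlanningStep step) {
        // Map the planning operation onto a capability term that an ontology
        // or a service registry in the local cloud can resolve to a provider.
        return new SemanticFunctionBlock(UUID.randomUUID(), step.operation(), step.resourceRequirements());
    }
}

class MediatorDemo {
    public static void main(String[] args) {
        MediatorService mediator = new SimpleMediator();
        SemanticFunctionBlock fb = mediator.toFunctionBlock(
                new PlanningStep("drilling", Map.of("tool", "HSS-6mm", "machine", "CNC-A")));
        System.out.println("Registered FB " + fb.id() + " for capability " + fb.capability());
    }
}
```

In a service-oriented deployment, each such mediator would itself be registered and discovered as a service, which is what allows the planning components to be encapsulated and integrated into local cloud environments.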

12 February 2026

Conceptual architecture of the proposed SO-KPP system. The figure illustrates the separation of dynamic, core, manager, and Arrowhead services, as well as the ontology layer. The blue-highlighted mediator services (SPP, ECPP, and OPP) indicate the components that are implemented and evaluated in this work.

The stream pipeline idiom provides a fluent and composable way to express computations over collections. It gained widespread popularity after its introduction in .NET in 2005, later influencing many platforms, including Java in 2014 with the introduction of Java Streams, and continues to be adopted in contemporary languages such as Kotlin. However, the set of operations available in standard libraries is limited, and developers often need to introduce operations that are not provided out of the box. Two options typically arise: implementing custom operations using the standard API or adopting a third-party collections library that offers a richer suite of operations. In this article, we show that both approaches may incur performance overhead, and that the former can also suffer from verbosity and reduced readability. We propose an alternative approach that remains faithful to the stream-pipeline pattern: developers implement the unit operations of the pipeline from scratch using a functional yield-based traversal pattern. We demonstrate that this approach requires low programming effort, eliminates the performance overheads of existing alternatives, and preserves the key qualities of a stream pipeline. Our experimental results show up to a 3× speedup over the use of native yield in custom extensions.
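As a rough illustration of the yield-based approach, the sketch below defines hypothetical Yield and Traversable interfaces and implements a custom unit operation from scratch; the paper's own Advancer and Yield types and signatures may differ, so this is a sketch of the general pattern under stated assumptions, not the authors' API.

```java
// Hypothetical yield-based traversal: elements are pushed into a downstream
// consumer, and a custom unit operation is just a function from one
// traversal to another. Not the paper's actual Advancer/Yield API.

import java.util.List;
import java.util.function.Consumer;

// A yield is simply a consumer of the elements pushed by a traversal.
interface Yield<T> extends Consumer<T> {}

// A traversable source pushes all of its elements into a given Yield.
interface Traversable<T> {
    void traverse(Yield<T> downstream);

    // A custom unit operation not offered by the standard API:
    // keep every other element, using a small stateful closure.
    default Traversable<T> everyOther() {
        return downstream -> {
            int[] index = {0};
            traverse(item -> {
                if (index[0]++ % 2 == 0) {
                    downstream.accept(item);
                }
            });
        };
    }

    static <T> Traversable<T> of(List<T> source) {
        return downstream -> source.forEach(downstream);
    }
}

class YieldDemo {
    public static void main(String[] args) {
        Traversable.of(List.of("a", "b", "c", "d", "e"))
                   .everyOther()              // yields "a", "c", "e"
                   .traverse(System.out::println);
    }
}
```

Because each operation composes plain lambdas over the source, no generator machinery is involved, which is presumably part of the intuition behind the paper's claim that the approach avoids the overheads of the existing alternatives.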

10 February 2026

Class diagram of Advancer and Yield types.

Modern DevSecOps environments face a persistent tension between accelerating deployment velocity and maintaining verifiable compliance with regulatory, security, and internal governance standards. Traditional snapshot-in-time audits and fragmented compliance tooling struggle to capture the dynamic nature of containerized continuous delivery, often resulting in compliance drift and delayed remediation. This paper introduces the Continuous Compliance Framework (CCF), a data-centric reference architecture that embeds compliance validation directly into CI/CD pipelines. The framework treats compliance as a first-class, computable system property by combining declarative policies-as-code, standardized evidence collection, and cryptographically verifiable attestations. Central to the approach is a Compliance Data Lakehouse that transforms heterogeneous pipeline artifacts into a queryable, time-indexed compliance data product, enabling audit-ready evidence generation and continuous assurance. The proposed architecture is validated through an end-to-end synthetic microservice implementation. Experimental results demonstrate full policy lifecycle enforcement with minimal pipeline overhead and sub-second policy evaluation latency. These findings indicate that compliance can be shifted from a post hoc audit activity to an intrinsic, verifiable property of the software delivery process without materially degrading deployment velocity.
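The sketch below gives a minimal, hypothetical picture of the validation-gate step described above: declarative policies are evaluated against evidence collected from the pipeline, and the gate returns an allow/deny decision. All names (Evidence, Policy, ValidationGate) are assumptions for illustration, not the CCF reference implementation or its data model.

```java
// Hypothetical validation gate: evaluates declarative policies against
// collected pipeline evidence and returns an allow/deny decision.
// Illustrative only; not the CCF reference implementation.

import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// One piece of evidence emitted by a pipeline stage (e.g., a scan result),
// keyed by attributes such as "criticalVulnerabilities" or "signedImage".
record Evidence(String source, Map<String, String> attributes) {}

// A declarative policy: an identifier plus a predicate over the evidence set.
record Policy(String id, Predicate<List<Evidence>> check) {}

record Decision(boolean allowed, List<String> violatedPolicies) {}

class ValidationGate {
    // Evaluate every policy; deny the deployment if any policy is violated.
    static Decision evaluate(List<Policy> policies, List<Evidence> evidence) {
        List<String> violations = policies.stream()
                .filter(p -> !p.check().test(evidence))
                .map(Policy::id)
                .toList();
        return new Decision(violations.isEmpty(), violations);
    }

    public static void main(String[] args) {
        List<Evidence> evidence = List.of(
                new Evidence("image-scanner", Map.of("criticalVulnerabilities", "0")),
                new Evidence("signer", Map.of("signedImage", "true")));
        List<Policy> policies = List.of(
                new Policy("no-critical-cves", ev -> ev.stream().anyMatch(
                        e -> "0".equals(e.attributes().get("criticalVulnerabilities")))),
                new Policy("signed-artifacts", ev -> ev.stream().anyMatch(
                        e -> "true".equals(e.attributes().get("signedImage")))));
        System.out.println(evaluate(policies, evidence)); // allowed=true, no violations
    }
}
```

In the framework itself, the evidence would be drawn from the time-indexed compliance data product and the decision accompanied by a cryptographically verifiable attestation; the sketch only shows the shape of the query-and-decide interaction.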

10 February 2026

Logical architecture of the Continuous Compliance Framework (CCF). The figure illustrates the framework's three conceptual planes, showing the interaction between the Policy Definition Plane, Instrumentation Plane, and Validation Plane, with cryptographic attestation feedback loops. This view emphasizes functional separation of concerns and governance boundaries, independent of specific tool implementations or execution order. The green dashed arrow represents the validation gate's interaction with the policy engine, where the gate queries evidence and policies and receives an allow/deny decision.

Data-Centric Serverless Computing with LambdaStore

  • Kai Mast,
  • Suyan Qu,
  • Remzi Arpaci-Dusseau
  • + 2 authors

LambdaStore is a data-centric serverless platform that removes the split between stateless functions and external storage found in classic cloud computing platforms. By scheduling serverless invocations near data instead of pulling data to compute, LambdaStore substantially reduces the state access cost that dominates today’s serverless workloads. Leveraging its transactional storage engine, LambdaStore delivers serializable guarantees and exactly-once semantics across chains of lambda invocations, a capability missing in current Function-as-a-Service offerings. We make three key contributions: (1) an object-oriented programming model that ties function invocations to their data; (2) a transaction layer with adaptive lock granularity and an optimistic concurrency control protocol designed for serverless workloads to keep contention low while preserving serializability; and (3) an elastic storage system that preserves the elasticity of the serverless paradigm while lambda functions run close to their data. Under read-heavy workloads, LambdaStore lifts throughput by orders of magnitude over existing serverless platforms while holding end-to-end latency below 20 ms.
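The hypothetical sketch below illustrates the data-centric idea of tying an invocation to the object that owns the state, executing it where the data lives rather than pulling the data into a stateless function. The types here (DataCentricRuntime, Counter) are invented for exposition and are not LambdaStore's programming model or API; in particular, the real system would wrap such an invocation in a serializable, exactly-once transaction.

```java
// Hypothetical data-centric invocation: the runtime holds the object's state
// and runs the function next to it. Not LambdaStore's actual API.

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Durable object state that lives inside a worker's local storage engine.
class Counter implements Serializable {
    long value;
}

// Stand-in for the platform: it owns the objects and executes invocations
// locally, so state access does not cross the network in the common case.
class DataCentricRuntime {
    private final Map<String, Object> store = new HashMap<>();

    void put(String objectId, Object state) {
        store.put(objectId, state);
    }

    // In a real system this would run as a serializable, exactly-once
    // transaction on the worker storing the object; here it runs in place.
    @SuppressWarnings("unchecked")
    <T, R> R invoke(String objectId, Function<T, R> method) {
        T state = (T) store.get(objectId);
        return method.apply(state);
    }
}

class DataCentricDemo {
    public static void main(String[] args) {
        DataCentricRuntime runtime = new DataCentricRuntime();
        runtime.put("counter/42", new Counter());

        // The invocation is shipped to the data; only the small result returns.
        long result = runtime.<Counter, Long>invoke("counter/42", c -> ++c.value);
        System.out.println("counter/42 = " + result); // counter/42 = 1
    }
}
```

Chaining several such invocations under transactional guarantees is what would give a platform of this kind exactly-once semantics across lambda chains.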

21 January 2026

LambdaStore employs a shared-nothing architecture. Lambda requests are sent directly to the primary worker, which launches serverless runtimes to execute the request locally or delegates it to a secondary replica. Each worker has a Transaction Manager (Section 4.3.2), a Chain Replication module, Entry Sets (Section 4.3.1), and a local storage engine (LevelDB [37]). LambdaStore has a centralized coordinating service that manages shard metadata such as object placement and shard membership. It monitors worker status and handles reconfiguration. It also makes object migration (Section 4.4.3) and light replication (Section 4.4.2) decisions to provide elasticity. Notably, the coordinating service does not participate during most lambda invocations.

Software - ISSN 2674-113X