Article

Uncovering the Implicit: A Comparative Evaluation of Modeling Approaches for Environmental Assumptions

by
Mounifah Alenazi
Department of Computer Science and Engineering, College of Computer Science and Engineering, University of Hafr Al Batin, Hafar Al Batin 39524, Saudi Arabia
Appl. Sci. 2025, 15(19), 10345; https://doi.org/10.3390/app151910345
Submission received: 5 September 2025 / Revised: 17 September 2025 / Accepted: 22 September 2025 / Published: 24 September 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

This paper presents an extended investigation into the role of environmental assumptions in the emergence and management of Requirements Technical Debt (RTD). Building on earlier work that identified environmental assumptions as a critical yet often overlooked source of RTD, we provide a comparative evaluation of seven representative modeling frameworks: KAOS, i*, Obstacle Analysis, Failure Frames, Claims, SysML, and RDAL. The analysis is structured around five dimensions: modeling focus, tracing support, validation capability, integration readiness, and assumption evolution. The results show substantial variation in support for assumption management. Among the frameworks, KAOS and Obstacle Analysis stand out for explicitly modeling assumptions and their violations. SysML, on the other hand, is particularly strong in its ability to integrate with industrial toolchains. RDAL demonstrates its greatest strength in tracing, where assumptions, requirements, and verification conditions are systematically linked, though its explicit modeling support remains more limited compared to goal-oriented approaches. i* and Claims capture assumptions more implicitly, with weaker validation and evolution capabilities, while Failure Frames focus on assumption violations but lack integration with broader MBSE workflows. The paper’s main contribution is a synthesis of trade-offs across assumption-aware modeling frameworks, highlighting both their strengths and remaining gaps. This provides actionable insights for researchers and practitioners in selecting, combining, or extending modeling approaches to better manage environmental assumptions and mitigate assumption-related technical debt.

1. Introduction

The requirements engineering (RE) phase plays a crucial role in shaping the quality and reliability of software systems [1]. It is during this early stage that developers define what the system must achieve and under what conditions it should operate. Among the many factors that influence this process, environmental assumptions, i.e., expectations about how the system’s external context behaves, are especially important [2]. These assumptions can involve a wide range of environmental elements, including physical conditions (e.g., temperature, terrain), human interactions (e.g., operator behavior), hardware performance (e.g., sensor precision, battery life), and integration with external systems or infrastructures.
Although often treated as background details or left implicit, such assumptions strongly influence the structure and feasibility of system requirements. They can affect system boundaries, design trade-offs, fault tolerance strategies, and even safety constraints. When such assumptions are incomplete, incorrect, or undocumented, they create blind spots that may not surface until much later in the development lifecycle, often during system deployment or operation [3,4].
These blind spots often manifest as Requirements Technical Debt (RTD), i.e., hidden issues or incomplete decisions rooted in early misunderstandings of the system’s context [5,6,7]. In practice, RTD can lead to extensive rework or unexpected failures that deviate from what stakeholders expect. Historical examples illustrate the severity of such outcomes. In the Ariane 5 rocket failure [8], engineers reused software that contained assumptions about velocity ranges appropriate for Ariane 4, which caused an overflow error and led to the rocket’s destruction. In another case, the Therac-25 radiation therapy machine [9] suffered fatal consequences due to invalid assumptions about hardware safety interlocks, the reliability of reused software, and the ability of operators to respond to unclear system messages. These incidents illustrate how neglected assumptions in requirements engineering can result in costly and even catastrophic outcomes.
Our prior work introduced the concept of assumption-driven RTD and emphasized the importance of identifying and managing environmental assumptions early in the requirements engineering process [3]. When left implicit or unmanaged, these assumptions can accumulate as hidden debt, often surfacing only when systems behave unexpectedly or fail in critical situations. Proactively modeling, tracing, and evolving these assumptions is therefore essential to improving system dependability and reducing costly late-stage rework.
To address this gap, we present a structured comparative analysis of seven representative modeling frameworks: KAOS [10], i* [11], Obstacle Analysis [12], Failure Frames [13], Claims [14], SysML [15], and RDAL [16]. These frameworks were identified through a prior systematic mapping study [3]. Each approach is evaluated across five dimensions: modeling focus, tracing support, validation capability, integration readiness, and assumption evolution. Instead of proposing a new tool, we synthesize insights from existing approaches to clarify their strengths, trade-offs, and limitations. The goal is to support researchers and practitioners in selecting, adapting, or combining modeling techniques that can better manage environmental assumptions. In doing so, the paper contributes to a clearer understanding of the current modeling landscape and promotes more resilient and assumption-aware system development practices.
To the best of our knowledge, this is the first study to systematically compare established requirements engineering frameworks in the context of environmental assumptions, and to position this comparison as a foundation for integrating RTD with assumption-aware modeling.
The rest of the paper is structured as follows. Section 2 presents background and related work. Section 3 outlines the methodology used for the comparative evaluation. Section 4 presents and analyzes the evaluation results. Section 5 discusses the key findings and their implications. Section 6 addresses threats to validity. Finally, Section 7 concludes the paper and outlines directions for future research.

2. Background and Related Work

2.1. Environmental Assumptions and RTD

Jackson’s seminal work [1,2] introduces a fundamental distinction between the software system being constructed, known as the machine, and its surrounding operational context, known as the environment. The environment includes conditions, entities, and phenomena existing independently of the software, while the machine encompasses only the software artifact explicitly built to fulfill specific goals. Interaction between these two domains occurs through shared phenomena, i.e., observable events through which the machine perceives and responds to its environment. Central to this perspective are environmental assumptions, which explicitly state conditions or behaviors expected to hold true independently of the machine’s presence. These assumptions guide the specification of system behaviors in response to environmental events. Kang [17] emphasizes that environmental assumptions should be treated as first-class entities in system modeling, since deviations between assumed and actual environmental behavior are a major source of system failures and must be explicitly managed throughout the development lifecycle.
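In this tradition, the relationship between machine, environment, and requirements is commonly summarized by the requirement satisfaction argument, restated here in simplified form as a reference point for the rest of the paper:

W, S ⊢ R

where W denotes the environmental (world) assumptions, S the machine specification, and R the requirements. The entailment guarantees that R is satisfied only as long as W holds; if an assumption in W fails in the actual environment, a correctly implemented specification S no longer implies the requirements, which is precisely the failure mode underlying assumption-driven RTD discussed next.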
Building on this, environmental assumptions can become major contributors to RTD, understood as the long-term costs caused by incomplete, incorrect, or implicit decisions in the requirements engineering phase [7]. These assumptions describe expected properties of the system’s operational context, such as hardware reliability, user behavior, external systems, or physical conditions. Misalignment occurs when environmental assumptions no longer reflect changes or uncertainties in the operational context, requiring adjustments and refactoring to preserve system correctness. If assumptions remain implicit, they can lead to costly rework or system failures. Thus, explicitly documenting, validating, and continuously revisiting environmental assumptions is crucial for minimizing RTD and ensuring sustainable system evolution [3,18].
Given this challenge, a range of techniques in requirements engineering have attempted to address the impact of environmental uncertainty. Uncertainty in system environments is widely recognized as a central challenge in requirements engineering and dependable system design [19,20,21,22,23]. Various studies have addressed how uncertainty affects the specification, verification, and evolution of requirements, particularly in adaptive systems and open contexts. Whittle et al. [24] introduced RELAX, a fuzzy logic–based language for specifying self-adaptive requirements under environmental uncertainty, while Cheng et al. [25] extended this work by embedding uncertainty into goal models and adding techniques for threat modeling and goal relaxation. FLAGS [26] further builds on RELAX by applying possibility theory to define fuzzy and crisp goals with built-in adaptation strategies. Although these methods emphasize flexibility and tolerance for incomplete information, they typically lack structured mechanisms to identify and manage the underlying environmental assumptions that may later become invalid. In contrast, our work focuses specifically on the explicit modeling of environmental assumptions as first-class entities, with the goal of reducing the accumulation of RTD due to implicit or outdated environmental assumptions.
Formal verification techniques, such as model checking, have long been employed to ensure system correctness under well-defined environmental assumptions [27]. In safety-critical domains like aerospace and avionics, as exemplified by NASA’s assurance practices [28,29,30,31], environmental assumptions are often encoded as part of the system specification and rigorously verified using mathematical models. Approaches such as assume-guarantee reasoning [32], temporal logic [33], and exhaustive state space exploration [34] offer strong guarantees that a system will behave correctly, as long as its environmental assumptions hold. However, these techniques typically demand highly precise, formal specifications and are often resource-intensive, which limits their applicability during the early stages of requirements engineering or in iterative development contexts [31]. This study, however, focuses on modeling approaches that are more broadly applicable within standard requirements and systems engineering workflows. While model checking is well suited for verification under fixed assumptions, our interest lies in how assumptions are initially documented, maintained, and adapted over time, both before and after formal verification activities. This shift in focus supports a more proactive approach to managing evolving environmental assumptions and mitigating the risk of assumption-driven RTD.
In our prior work [3], we conducted a systematic mapping study to identify modeling approaches that explicitly support the representation of environmental assumptions. The seven identified approaches span a range of paradigms, including goal-oriented modeling, argumentation-based structures, and model-based systems engineering. Each offers varying degrees of support for capturing, tracing, and evolving assumptions throughout the system development lifecycle. However, no structured comparison has yet been undertaken to assess how effectively these approaches address key aspects of environmental assumption management. This paper addresses this gap by presenting a comparative analysis of seven representative frameworks, evaluated across five critical dimensions: modeling focus, tracing support, validation capability, integration readiness, and assumption evolution. Each framework is briefly described below, with emphasis on how it models environmental assumptions within its respective paradigm. Section 3 outlines the evaluation criteria and approach used to perform a detailed comparative analysis.

2.2. Modeling Approaches Supporting Environmental Assumptions

The seven modeling frameworks differ in their theoretical foundations. Below, we summarize each framework:
  • Knowledge Acquisition in automated Specification (KAOS) [10] is a goal-oriented requirements engineering framework that supports the structured elicitation, refinement, and analysis of system goals. It allows engineers to model system objectives hierarchically, from high-level strategic goals down to operational requirements. Environmental assumptions in KAOS are typically captured as domain properties. These domain properties are treated as first-class modeling entities and can be explicitly linked to system goals, agents, and obstacles.
  • i* (i-star) [11] is a goal-oriented modeling framework designed to capture the social and intentional relationships among actors within a system. It focuses on modeling stakeholders’ goals, tasks, dependencies, and softgoals, particularly in early requirements analysis. In i*, environmental assumptions are usually represented indirectly through actor dependencies and contextual constraints. For example, assumptions about how external agents (e.g., users or external systems) will behave or fulfill delegated tasks are embedded within Strategic Dependency and Strategic Rationale models. Although i* lacks a dedicated construct for explicitly modeling environmental assumptions, its focus on agent interactions allows such assumptions to surface through analysis of dependencies and vulnerabilities.
  • Obstacle Analysis [12] is a threat modeling technique used primarily in conjunction with goal-oriented frameworks like KAOS to systematically identify and address potential hindrances to goal satisfaction. In this context, obstacles are conditions, often arising from the environment, that can prevent the system from achieving its intended goals. Environmental assumptions are modeled by analyzing what must hold true in the environment for a goal to remain achievable. When such assumptions fail, corresponding obstacles emerge. This makes Obstacle Analysis effective for uncovering latent or implicit assumptions that might otherwise be overlooked. Although not a standalone modeling language, Obstacle Analysis strengthens assumption management by making environmental threats explicit and supporting traceability between assumptions, goals, and mitigations.
  • Failure Frames [13] is a problem-oriented modeling approach developed to support the explicit analysis of system failures by documenting the assumptions that, when violated, contribute to those failures. Unlike goal-oriented methods that focus on desired outcomes, Failure Frames emphasize what can go wrong and why, especially in safety-critical or high-dependability domains. Environmental assumptions are treated as primary contributors to potential failures and are captured as part of the structured “frame,” which includes the context, failure type, and the violated assumption. This makes Failure Frames particularly useful for distinguishing failures caused by internal faults from those stemming from incorrect or unstable environmental assumptions.
  • Claims [14] modeling is an argumentation-based approach used to document design rationale and support assurance cases, particularly in systems that must demonstrate dependability, safety, or compliance. In this framework, a claim represents a statement or assertion about a system property, e.g., safety, correctness, which is supported by evidence and underlying assumptions. Environmental assumptions are incorporated as part of the rationale that justifies the claim, often forming the unstated conditions under which the claim holds. While assumptions are not always treated as first-class modeling entities, they are critical to the integrity of the argument structure.
  • Systems Modeling Language (SysML) [15] is a general-purpose modeling language widely used in systems engineering to represent requirements, structure, behavior, and parametrics. SysML does not provide dedicated constructs for modeling environmental assumptions, but it offers flexibility through stereotypes, constraints, and custom profiles that allow assumptions to be annotated and linked to system elements. Environmental assumptions are often embedded within requirement diagrams or documented as constraint blocks, informal notes, or auxiliary artifacts such as rationale fields. This makes them accessible but not explicitly treated as first-class modeling elements.
  • Requirements Definition and Analysis Language (RDAL) [16] is a domain-specific modeling language aimed at improving the precision, traceability, and analyzability of requirements in embedded and safety-critical systems. RDAL provides dedicated constructs for requirements, their refinement, verification conditions, and related metadata, and allows engineers to document assumptions as structured annotations linked to requirements. Despite its focused support for modeling assumptions, RDAL has not achieved the same level of visibility or adoption as more established frameworks like SysML. One reason is its relatively recent emergence. In contrast, SysML is a standardized, general-purpose modeling language widely adopted across industries such as aerospace, automotive, and healthcare. Backed by the Object Management Group (OMG) and supported by mature tools like Cameo and Enterprise Architect, SysML benefits from extensive documentation and integration into established engineering workflows. RDAL, while offering precise modeling constructs and assumption-handling capabilities, remains less accessible due to limited tool support, documentation, and community adoption.
In Table 1, we summarize the seven modeling approaches identified in our prior mapping study, along with their modeling paradigms, example tools or environments, and corresponding references. The overview highlights the diversity in tool support, ranging from well-established industrial platforms such as Cameo Systems Modeler for SysML, and Objectiver for KAOS, to academic or research-oriented prototypes like OpenOME, SEURAT, and RDAL’s Papyrus UML implementation. Some approaches, such as Failure Frames, remain primarily documented in the literature with limited prototype support, while others like RDAL have seen experimental extensions into Capella. This diversity illustrates that although environmental assumptions can be represented across a variety of paradigms, the extent and maturity of tool support remains uneven, motivating the need for a comparative evaluation in the subsequent sections.

3. Methodology

This study uses a structured, document-based comparative analysis to evaluate how existing modeling frameworks support the explicit treatment of environmental assumptions. The goal is to assess the capabilities of seven representative approaches in helping engineers model, trace, validate, and evolve environmental assumptions that influence system requirements. This work builds on a prior systematic mapping study [3] in which the selected frameworks were identified as having potential support for assumption modeling. In that study, we applied a structured search protocol across IEEE Xplore, ACM Digital Library, Scopus, and Web of Science using combinations of the terms “environmental assumptions”, “requirements”, “model*”, “tool*”, “represent”, and “user stories”. Inclusion required relevance to requirements/MBSE, publication in a peer-reviewed venue, and conceptual clarity regarding environmental assumptions; we excluded non-peer-reviewed items and papers focused solely on mathematical validation or verification (for example, model checking) rather than modeling or representation. From the screened set, we retained approaches that provide constructs or practices to model environmental assumptions. This procedure resulted in seven frameworks: KAOS, i*, Obstacle Analysis, Failure Frames, Claims, SysML, and RDAL.

3.1. Evaluation Dimensions

The evaluation of modeling approaches for handling environmental assumptions is organized around five key dimensions, adapted from prior literature on requirements modeling [11,41,42], assumption management and requirements volatility [5,43], and integration into model-based systems engineering workflows [44,45,46]. These dimensions balance rigor with practicality, making them suitable for a consistent comparison across diverse frameworks. Grounded in this literature, the five dimensions are defined as follows:
  • Modeling Focus
    • This dimension examines how clearly and explicitly a modeling approach represents environmental assumptions. Some frameworks treat assumptions as distinct constructs, e.g., domain properties, obstacles, or failure frames, while others capture them only indirectly within goals, dependencies, or requirement statements. High modeling focus ensures assumptions remain visible and analyzable throughout system development.
  • Tracing Support
    • Assumptions rarely exist in isolation. This dimension evaluates the extent to which assumptions can be linked to related artifacts such as goals, requirements, design components, or external actors. Strong traceability enables impact assessment, dependency analysis, and better alignment between assumptions and the system elements they influence.
  • Validation Capability
    • Since assumptions may or may not hold in practice, this dimension captures the support provided for reasoning about their validity. Approaches differ in whether they enable formal verification, e.g., through logic or model checking, informal reasoning, e.g., argumentation or structured claims, or scenario-based analysis, e.g., failure and obstacle propagation. Validation capability is crucial for reducing risks stemming from unrealistic or fragile assumptions.
  • Integration Readiness
    • Assumption modeling should not be an isolated activity. This dimension considers how each framework integrates into broader requirements engineering and system design workflows. Integration readiness is reflected in the ability to connect assumptions to requirements specifications, architectural models, verification artifacts, and industry practices such as Model-Based Systems Engineering (MBSE).
  • Support for Assumption Evolution
    • Because environments change, assumptions may also evolve or fail. This dimension assesses whether and how frameworks support assumption evolution, e.g., capturing uncertainty, versioning assumptions, monitoring them at runtime, or propagating changes through related artifacts. Strong support for evolution enables resilience in the face of uncertainty and system adaptation.

3.2. Scoring Method

Each modeling approach is scored on a 4-point ordinal scale for each dimension, defined as follows:
  • 0—No Support: The dimension is not addressed at all in the framework.
  • 1—Limited Support: The dimension is only weakly supported, or support is ad hoc and not systematic.
  • 2—Moderate Support: The dimension is supported in a well-defined but possibly indirect or limited manner.
  • 3—Strong Support: The framework provides explicit, integrated, and systematic support for the dimension.
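To make the rubric unambiguous when it is applied repeatedly across frameworks, it can also be recorded in a small machine-readable form. The sketch below is purely illustrative: the enum and field names are our own, and the example record is a placeholder whose authoritative values appear in the tables of Section 4.

```python
from enum import IntEnum

class Support(IntEnum):
    """4-point ordinal scale applied to each evaluation dimension."""
    NONE = 0      # dimension not addressed at all
    LIMITED = 1   # weak or ad hoc, not systematic
    MODERATE = 2  # well-defined but possibly indirect or limited
    STRONG = 3    # explicit, integrated, and systematic

DIMENSIONS = [
    "modeling_focus",
    "tracing_support",
    "validation_capability",
    "integration_readiness",
    "assumption_evolution",
]

# Placeholder record for one framework; the actual scores are reported in Section 4.
example_record = {"framework": "KAOS",
                  "scores": {d: Support.NONE for d in DIMENSIONS}}
```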

3.3. Data Collection and Analysis

The evaluation is based on an in-depth review of the original literature for each framework, supplemented by official documentation and, where available, example models and use cases. For each modeling approach, representative scenarios were analyzed to assess how environmental assumptions are represented and managed in practice.
Scores were assigned through qualitative comparative analysis across the frameworks, using a consistent rubric applied to all five evaluation dimensions. Each rating is supported by evidence drawn from published sources or documented examples, and is accompanied by a descriptive justification. These justifications are intended to make the basis of each score transparent and to help readers understand how assumption-related features manifest across different modeling approaches. To mitigate subjectivity, scores were cross-checked against multiple sources where possible, and consistency was verified by applying the rubric iteratively across all frameworks.
As this is a solo-authored study, all evaluations were performed by the author. To reduce bias, the rubric was applied systematically according to the predefined criteria, and descriptive justifications are provided for each score to ensure transparency and reproducibility.

3.4. Case Study Scenario

To ensure a fair and consistent comparison across all seven modeling frameworks, a single reference scenario was selected and applied uniformly. The scenario involves the operational context of a small uncrewed aerial system (sUAS) operating autonomously in variable environmental conditions [4]. The sUAS’s mission requires reliable navigation, stable flight control, and safe landing, all of which depend critically on environmental assumptions. Four representative categories of assumptions were considered:
  • Infrastructure availability: e.g., “GNSS signal is available during flight.”
  • External system dependency: e.g., “Weather service provides accurate forecasts.”
  • Physical condition constraints: e.g., “Wind speed remains below 30 knots during mission.”
  • Human interaction reliability: e.g., “Operator correctly interprets warning alerts.”
These categories were chosen because they represent diverse environmental assumptions that are critical to both mission success and safety. The sUAS domain is particularly well suited for this analysis: it is safety-critical, highly sensitive to environmental variability, and relevant to both civilian and defense applications. Moreover, sUAS operations inherently combine technical, environmental, and human factors, providing a balanced and challenging basis for evaluating assumption management.
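To keep these four categories concrete when they are applied to each framework, they can also be written down in a lightweight structured form. The sketch below is only an illustration; the class and field names are our own and do not belong to any of the surveyed frameworks.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentalAssumption:
    """One environmental assumption from the sUAS reference scenario."""
    category: str   # one of the four categories listed above
    statement: str  # the assumption in natural language

SUAS_ASSUMPTIONS = [
    EnvironmentalAssumption("infrastructure availability",
                            "GNSS signal is available during flight."),
    EnvironmentalAssumption("external system dependency",
                            "Weather service provides accurate forecasts."),
    EnvironmentalAssumption("physical condition constraint",
                            "Wind speed remains below 30 knots during mission."),
    EnvironmentalAssumption("human interaction reliability",
                            "Operator correctly interprets warning alerts."),
]
```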
To illustrate this scenario in practice, we provide an example using KAOS. KAOS was selected because it is one of the most established goal-oriented modeling approaches in requirements engineering and provides explicit constructs for representing assumptions. Figure 1 shows a KAOS diagram snippet of the sUAS case study. The top-level requirement to maintain safe separation distance is refined into derived goals, e.g., geolocation accuracy. These goals are linked to explicit domain properties such as GNSS availability, communication timing, and system latency, as well as to potential obstacles, e.g., GPS error. This example demonstrates how environmental assumptions can be represented as domain properties in KAOS and how their violations obstruct safety-related goals.
Applying a single consistent scenario across all frameworks ensures that differences in evaluation outcomes arise from the modeling approaches themselves, rather than from variation in problem context. All four assumption categories could be represented in each framework, though with differing levels of explicitness, which enables a controlled and fair comparison. This design allows a direct evaluation of how each framework represents assumptions, traces them to requirements or goals, supports their validation, integrates with engineering workflows, and manages their evolution over time.
We acknowledge that relying on a single domain may limit external validity. However, the sUAS scenario is considered representative because it is a safety-critical, mission-oriented, and environment-sensitive system that combines technical, environmental, and human factors. These characteristics are common to many other domains, including medical, financial, and industrial systems, which also depend on critical environmental assumptions. While we focus on the sUAS case here for consistency, extending this analysis to additional domains remains an important direction for future work to strengthen generalizability.

4. Results and Analysis

This section presents the results of a comparative evaluation of seven modeling approaches with respect to their support for representing and managing environmental assumptions. The evaluation is structured around five dimensions introduced earlier, i.e., Modeling Focus, Tracing Support, Validation Capability, Integration Readiness, and Support for Assumption Evolution, and applied consistently across all frameworks using the sUAS case study scenario.
For each framework, we provide a descriptive analysis of how it addresses the five dimensions, supported by evidence from the literature and example models. A scoring scheme on a four-point scale (0 = no support, 1 = limited, 2 = moderate, 3 = strong) summarizes each framework’s performance. Following the individual evaluations, a consolidated comparative table highlights cross-framework patterns and trade-offs, which are then discussed in a synthesis subsection.

4.1. KAOS

Environmental assumptions in KAOS are explicitly modeled as domain properties that represent invariant facts about the system’s operational context [47,48]. These assumptions are treated as first-class entities and systematically linked to the goals they support. In the sUAS scenario, for example, the domain property “GNSS signal is available during flight” can be associated with the goal “Maintain navigation accuracy,” enabling explicit traceability between the assumption and the goal’s fulfillment conditions. When such assumptions become questionable, due to signal degradation or environmental interference, KAOS supports goal revision through alternative strategies, such as introducing a fallback goal like “Switch to inertial navigation.”
Traceability is maintained throughout the goal refinement hierarchy, allowing domain properties to propagate their influence across sub-goals, operational requirements, and responsible agents. This makes assumption-related impact analysis feasible and systematic. KAOS also provides strong validation capabilities through its formal semantics. Domain properties and goals can be expressed in temporal logic, and model checking can be used to verify whether goal satisfaction is achievable under the specified environmental conditions [49,50]. However, applying these techniques requires significant formalization effort and specialized tooling.
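As an illustration of what such a formalization can look like, the navigation goal from the scenario can be written in temporal logic together with its supporting domain property and the requirement placed on the machine. This is our own simplified sketch rather than a formalization taken from the cited KAOS sources:

Goal Maintain[NavigationAccuracy]: □ NavAccuracyWithin2m
Domain property (assumption): □ GNSSAvailable
Requirement on the machine: □ (GNSSAvailable → NavAccuracyWithin2m)

The goal is entailed by the conjunction of the domain property and the requirement, so a model checker can confirm that the refinement holds. Weakening the domain property, for instance to intermittent availability (□◇ GNSSAvailable), breaks the entailment and motivates the fallback goal of switching to inertial navigation.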
Tool support is provided through Objectiver [35], which enables structured modeling but lacks native integration with standard MBSE environments. This limits the framework’s readiness for industrial workflows that require seamless interoperability with design and simulation tools. With respect to assumption evolution, KAOS offers reactive mechanisms: analysts can revise domain properties and adapt affected goals manually, but the framework lacks proactive support such as runtime assumption monitoring and automated change propagation. The assigned scores and justifications for each of the five evaluation dimensions are detailed in Table 2.

4.2. i* (i-star)

Environmental assumptions in i* are represented indirectly through actor dependencies, softgoals, and beliefs within the Strategic Dependency (SD) and Strategic Rationale (SR) models [11]. Rather than treating assumptions as first-class constructs, i* embeds them in the relationships between actors. In the sUAS scenario, for example, the UAV system may depend on a Weather Service actor to provide accurate forecasts. The environmental assumption “Weather service provides accurate forecast data” is reflected in the dependency link between the sUAS actor and the external actor, but the assumption itself is not modeled as a standalone element.
This approach results in limited modeling focus, as assumptions must be inferred from actor interactions and task dependencies rather than explicitly specified. Tracing support is moderate since assumptions can be linked to goals through dependencies and rationales, but these connections are not always systematically defined and often require manual documentation to maintain traceability [51]. In terms of validation capability, i* lacks native mechanisms for reasoning about whether assumptions hold or for verifying their impact on goal satisfaction [52]. External tools or extensions are needed for formal analysis.
Tool support for i* is provided through platforms like OpenOME [36], which offer graphical modeling environments but limited integration with MBSE workflows or system-level design tools. Integration is possible via custom mapping to SysML or other representations, but it is not standardized. Regarding assumption evolution, i* provides minimal support as updates to assumptions must be reflected manually in dependency models, and there is no built-in capability to monitor or manage assumption change over time. A summary of the scoring and rationale across the five evaluation dimensions for i* is provided in Table 3.

4.3. Obstacle Analysis

In Obstacle Analysis, environmental assumptions are captured primarily through the identification of obstacles, i.e., conditions that may prevent goals from being satisfied [37]. Assumptions are therefore modeled negatively, in terms of their possible violations, rather than as explicit positive invariants. In the sUAS scenario, for example, the assumption “GNSS signal is available during flight” would be reflected indirectly through the obstacle “Severe ionospheric disturbance blocks GNSS signal.” This modeling focus provides systematic means of identifying vulnerabilities in goals but makes it less straightforward to represent assumptions that hold under normal conditions.
Tracing support in Obstacle Analysis is strong, as obstacles are explicitly linked to the goals they obstruct, and their propagation through goal refinement structures can be analyzed systematically [41]. This enables clear impact analysis when assumptions fail, since every obstacle can be traced to the goals and requirements it threatens. Validation capability is also a key strength: obstacles can be formalized and analyzed through temporal logic, theorem proving, and model checking, allowing rigorous reasoning about system robustness under assumption failures [53]. However, the need for detailed formalization can impose high modeling effort.
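For reference, the obstruction conditions used in this line of work can be stated in simplified form: an assertion O is an obstacle to a goal G under domain properties Dom if

{O, Dom} ⊨ ¬G (obstruction: the obstacle defeats the goal)
Dom ⊭ ¬O (feasibility: the obstacle is consistent with the domain)

In the sUAS example, taking G as “Maintain navigation accuracy” and O as “GNSS signal unavailable during flight”, the first condition captures that losing GNSS obstructs the goal, while the second requires that GNSS loss is actually possible in the operating environment.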
Integration readiness is limited, as Obstacle Analysis has been mainly supported through research prototypes and extensions of KAOS-based tools such as Objectiver [48]. Interoperability with MBSE toolchains remains underdeveloped, making it challenging to connect obstacle models directly with system architectures or simulation environments. With respect to assumption evolution, Obstacle Analysis is primarily reactive since new or changing assumptions are handled by introducing corresponding obstacles and adapting the mitigation strategies linked to them [41]. While this provides a systematic process for dealing with assumption violations at design time, there is no built-in support for runtime monitoring or automated evolution. A summary of the scores and justifications for Obstacle Analysis across the five evaluation dimensions is presented in Table 4.

4.4. Failure Frames

In the Failure Frames approach, environmental assumptions are modeled explicitly through failure conditions that describe ways in which the system’s operating context may deviate from expectations [54,55]. Unlike obstacle analysis, which models the violation of goals, failure frames emphasize the interaction between assumptions about the environment and the system’s responsibilities, framing requirements in terms of how the system should respond to environmental failures. In the sUAS scenario, for example, the assumption “GNSS signal is available during flight” can be expressed as a failure frame when it is violated, requiring the system to adopt fallback behavior such as switching to inertial navigation.
The modeling focus is strong, as failure frames make assumptions explicit and systematically associate them with specific failure classes, e.g., omission, commission, timing, value. This classification provides a structured way to anticipate assumption breakdowns and ensures that environmental conditions are consistently represented. Tracing support is also strong, since failure frames connect environmental assumptions directly to goals, requirements, and mitigation behaviors, allowing systematic analysis of the impact of assumption violations. Validation capability is rated as moderate: while failure frames enable structured reasoning about the completeness of failure handling, the absence of formal semantics or automated verification limits them compared to approaches such as KAOS or Obstacle Analysis. As such, validation relies on qualitative argumentation and systematic coverage checks rather than model checking.
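A schematic rendering of such a frame as a structured record is shown below; the field names mirror the elements discussed above (context, violated assumption, failure class, and required response) but are our own choice, not an official schema of the Failure Frames method.

```python
from dataclasses import dataclass

@dataclass
class FailureFrame:
    """Schematic record of one failure frame (illustrative field names)."""
    context: str              # operational context in which the failure may occur
    violated_assumption: str  # environmental assumption whose violation is analyzed
    failure_class: str        # e.g., "omission", "commission", "timing", "value"
    system_response: str      # fallback behavior required of the system

gnss_loss_frame = FailureFrame(
    context="Autonomous sUAS flight segment",
    violated_assumption="GNSS signal is available during flight",
    failure_class="omission",
    system_response="Switch to inertial navigation",
)
```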
Integration readiness is limited. Failure frames have been demonstrated in safety-critical domains such as avionics, but tool support is ad hoc and not well integrated into MBSE toolchains. Finally, support for assumption evolution is largely reactive: analysts can update or extend failure frames when new assumptions or failure modes are identified, but the method does not provide mechanisms for runtime monitoring or automated updates. The evaluation scores and justifications for Failure Frames across the five comparison dimensions are summarized in Table 5.

4.5. Claims

The Claims approach models environmental assumptions as part of an argumentation structure that justifies specific system properties [14]. Claims are supported by evidence and connected to underlying assumptions that must hold for the claim to remain valid.
In the sUAS scenario, an assurance case might include the following elements:
  • Claim: “The sUAS battery will not overheat during the mission.”
  • Assumption (context): “Ambient temperature remains below 35 °C.”
  • Argument: “Battery safety depends on thermal margins tested under operational conditions.”
  • Evidence: “Laboratory test data demonstrating battery performance below 35 °C.”
Here, the environmental assumption about ambient temperature is treated as a contextual condition that underpins the claim. If the assumption fails (e.g., during a heatwave), the validity of the claim may be undermined. Unlike Failure Frames, which categorize failures explicitly, Claims analysis emphasizes reasoning about why a requirement is believed to be satisfied and recording the assumptions that underpin this reasoning.
Table 6 presents the evaluation of Claims against the five comparison dimensions. This approach scores moderately on modeling focus, as assumptions are captured as contextual warrants rather than as first-class modeling constructs. Tracing support is strong, as claims, arguments, evidence, and their assumptions are systematically linked. Validation capability is moderate: argument structures enable qualitative validation, but formal rigor depends on integration with external verification methods. Integration readiness is also moderate: while assurance case tools such as SEURAT and ASCE provide strong support within safety analysis and certification, integration with broader MBSE workflows remains limited. Support for assumption evolution is weak, as there are no built-in mechanisms for updating or versioning assumptions when environmental conditions change. Updates are possible only through manual revisions to the assurance case.

4.6. SysML

In SysML, environmental assumptions are not modeled as first-class constructs but are usually represented informally in requirement diagrams, parametric constraints, or textual annotations [15,44,56]. This makes the modeling focus only moderate, since assumptions are not native constructs and must be represented indirectly, e.g., via requirements, constraint blocks, or annotations. While this is less explicit than in goal-oriented approaches, SysML’s extensibility through stereotypes and profiles still allows assumptions to be captured systematically. In the sUAS scenario, for example, the assumption “GNSS signal is available during flight” may appear as a requirement note or a constraint on navigation accuracy, but SysML lacks a dedicated construct for differentiating assumptions from requirements or design specifications.
Tracing support in SysML is moderate, as requirements diagrams, allocation relationships, and parametric links allow some connections to be established between assumptions, requirements, and system elements. However, because assumptions are not explicit, traceability requires disciplined modeling practices and often depends on conventions or extensions defined by the project team. Validation capability is also moderate, as parametric diagrams can support consistency checking and integration with simulation tools, but formal semantics for verifying environmental assumptions are limited [15]. This restricts the depth of reasoning compared to goal-oriented approaches.
SysML scores higher on integration readiness. It is natively aligned with MBSE practices, widely supported by industrial tools such as Cameo Systems Modeler and IBM Rhapsody, and can interoperate with requirement management (DOORS), simulation (Simulink), and safety analysis environments [57]. However, support for assumption evolution is weak. While assumptions can be updated through model edits or requirement changes, SysML does not offer systematic mechanisms for proactive monitoring or automated propagation of assumption changes across models. The evaluation scores and justifications for SysML across the five dimensions are summarized in Table 7.

4.7. RDAL

RDAL was introduced to strengthen requirements specification in MBSE environments by providing a structured language for defining requirements and linking them to system models [16]. Environmental assumptions are expressed as requirements, constraints, or contextual conditions but are not first-class constructs within the language. In the sUAS scenario, for example, an assumption and its associated requirement could be specified in RDAL as follows:
  • Assumption: “GNSS signal is available during flight.”
  • Linked Requirement: “The sUAS shall maintain navigation accuracy within 2 m.”
  • Risk Annotation: High
  • Confidence Level: Medium
This illustrates how RDAL allows engineers to capture assumptions in requirement form, link them to functional requirements, and enrich them with annotations such as risk or confidence levels. These extensions make assumptions more visible and analyzable, though they remain embedded in requirement constructs rather than being treated as first-class entities.
The modeling focus is therefore moderate, as RDAL allows explicit documentation of assumptions through requirement statements but lacks native constructs to distinguish them from system requirements. Tracing support is stronger: RDAL enables links between requirements, system model elements, and verification activities, so assumptions expressed as requirements can be systematically connected to design artifacts and test cases. Validation capability is also moderate: RDAL specifications can be linked to verification methods, enabling checks against system behavior, but the language does not provide formal semantics for reasoning about assumptions.
Integration readiness is one of RDAL’s strengths, since it was designed as an Eclipse-based profile and integrates with SysML and MBSE toolchains. This makes it possible to manage assumptions alongside requirements, architectures, and simulations within a consistent environment. With respect to assumption evolution, RDAL supports traceable updates when requirements change, but this process is reactive and design-time only. There is no explicit support for runtime monitoring of assumptions or proactive adaptation when assumptions evolve. The evaluation scores and justifications for RDAL across the five comparison dimensions are presented in Table 8.
Table 9 provides a comparative overview of how each of the seven modeling approaches represents environmental assumptions in practice. While all frameworks include some mechanism to encode assumptions, they differ significantly in modeling constructs, emphasis, and explicitness. For example, KAOS and Obstacle Analysis support systematic treatment of environmental constraints through domain properties and obstacle conditions, respectively, and provide structured reasoning about their potential violations. In contrast, frameworks like i* and Claims capture assumptions more implicitly, often through actor dependencies or justificatory elements in argumentation models, with validation relying heavily on external evidence or tools. SysML provides moderate support for explicit representation by allowing assumptions to be defined through user-defined stereotypes or profile extensions within standard requirement constructs, while RDAL offers dedicated modeling elements with targeted annotations that link assumptions directly to requirements and risks, and allow metadata such as confidence levels to be specified. At a glance, KAOS and Obstacle Analysis provide the strongest support for explicit assumption modeling, i* and Claims remain weakest, and RDAL and SysML occupy a middle ground. Their relative positioning, however, shifts once additional dimensions such as traceability, validation, integration, and assumption evolution are considered.
Table 10 and Figure 2 synthesize the evaluation results across the seven frameworks. To complement this comparison, Figure 3 provides a radar-style visualization of the same results, where each axis corresponds to an evaluation dimension and each polygon represents a framework. As an illustration of how the scoring was applied, SysML was assigned a score of 1 for assumption evolution because, while assumptions can be captured via parametric constraints and analyzed in simulations, explicit constructs for runtime adaptation are absent.
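For readers who wish to reproduce the radar-style visualization of Figure 3 from the tabulated scores, a minimal matplotlib sketch is given below. The two score vectors shown are placeholders consistent with the qualitative descriptions in the text; the full set of values should be taken from Table 10.

```python
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["Modeling focus", "Tracing", "Validation", "Integration", "Evolution"]
# Placeholder scores on the 0-3 scale; replace with the values from Table 10.
frameworks = {
    "KAOS":  [3, 3, 3, 1, 1],
    "SysML": [2, 2, 2, 3, 1],
}

angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close each polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, scores in frameworks.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_yticks([0, 1, 2, 3])
ax.legend(loc="upper right")
plt.show()
```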
KAOS, Obstacle Analysis, and RDAL emerge as the strongest overall, though with different emphases. KAOS integrates assumptions as domain properties and obstacles, structurally tied to goals and agents. Its formal semantics enable rigorous validation, and its traceability to requirements is strong, although integration into MBSE workflows remains limited. Obstacle Analysis supports explicit modeling of conditions that obstruct goals and provides systematic resolution tactics, but its integration with broader toolchains and lifecycle support is weaker. RDAL provides native constructs for assumptions, complete with metadata such as risk and confidence levels, and explicit links to requirements and rationale. It also supports structured updating and versioning of assumptions, though this remains largely manual, and its industrial adoption and toolchain integration are weaker than SysML.
SysML scores highest in integration readiness, reflecting its widespread use in MBSE toolchains. Assumptions can be captured through stereotypes, tagged values, and parametric constraints, enabling traceability across requirements and design. However, effective validation typically requires significant customization or external analysis tools, and its evolution support is weak. Failure Frames emphasize distinguishing between system and environmental assumption violations, providing structured within-frame traceability and qualitative validation support, but they remain poorly integrated with mainstream MBSE environments. The i* framework and Claims approaches provide the weakest support for explicit assumption handling. i* embeds assumptions indirectly in actor dependencies and softgoals, offering moderate traceability but little validation or evolution support. Claims frameworks capture assumptions as contextual warrants in assurance arguments, achieving strong traceability across claims, arguments, and evidence, but offering only qualitative validation and minimal evolution support.
Overall, no single framework excels across all dimensions. KAOS, Obstacle Analysis, and RDAL provide the most comprehensive modeling and analysis capabilities, SysML offers the strongest MBSE integration, while i* and Claims remain limited in explicitness but valuable for early-phase socio-technical analysis and assurance reasoning. These findings suggest that combining complementary frameworks or extending existing ones may be necessary to ensure assumptions remain visible, analyzable, and evolvable in safety-critical systems.
The comparative results summarized in Table 10 and visualized in Figure 3 reveal a diverse landscape of strengths and weaknesses across the seven modeling frameworks. KAOS and Obstacle Analysis provide the strongest explicit constructs and systematic reasoning for handling environmental assumptions, while RDAL offers dedicated modeling elements with traceability and metadata, but less industrial adoption and weaker support for evolution. By contrast, i* and Claims handle assumptions more implicitly and provide limited support for validation and lifecycle management. SysML stands out in integration readiness, reflecting its central role in MBSE, and can capture assumptions through stereotypes or profiles, though explicit representation and evolution support remain limited without extensions. These findings suggest that framework selection should consider both the criticality of environmental assumptions and the engineering context. In particular, safety-critical domains may benefit from combining the explicit modeling strengths of goal- or requirements-oriented frameworks with the integration advantages of SysML.

5. Discussion

5.1. Implications for Requirements Engineering Practice

The comparative evaluation shows that while environmental assumptions are critical to system dependability, they are handled unevenly across modeling frameworks. In practice, this uneven treatment leads to trade-offs that requirements engineers must weigh when selecting or combining approaches. Goal-oriented methods such as KAOS and Obstacle Analysis make assumptions explicit through domain properties or obstacles, facilitating systematic reasoning and early identification of risks in safety-critical domains. In contrast, i* and Claims embed assumptions more implicitly in actor dependencies or justificatory structures. While useful for capturing stakeholder intentions and assurance arguments, this implicit treatment can reduce visibility and make assumption violations harder to trace. SysML occupies an intermediate position. As an industry-standard MBSE language, it integrates well with design and verification workflows but requires customization via stereotypes or profiles to represent assumptions, leading to potential inconsistency across projects. These findings suggest that effective assumption management cannot rely on a single framework; instead, engineers must balance the benefits of explicit assumption modeling with the realities of tool support and workflow integration.

5.2. Assumption Evolution and Technical Debt

A key finding is that most frameworks provide weak support for assumption evolution. In practice, assumptions such as signal availability, weather accuracy, or operator behavior rarely remain static. If not systematically tracked or updated, assumptions risk becoming sources of RTD as models continue to rely on conditions that no longer hold. This type of “assumption debt” has been noted in prior work, which advocates for autonomic mechanisms to align models with reality as assumptions change [58].
Recent work reinforces this point. For example, Letier and van Lamsweerde [59] revisit obstacle analysis and emphasize its role in identifying and monitoring violations of environmental assumptions, extending to runtime monitoring and operational design domains for autonomous systems. Similarly, recent studies [60,61] propose new approaches for assumption-based monitoring under partial observability and for runtime verification under imperfect information. These contributions highlight ongoing research efforts to systematically address assumption drift, aligning closely with the evolution challenges identified in our comparative study.
The surveyed frameworks vary in their handling of evolution. KAOS and Obstacle Analysis support reasoning about assumption violations at design time, but revisions rely on manual updates. Failure Frames explicitly categorize assumption-based failures, improving documentation, yet lack mechanisms for systematic lifecycle management. Claims allow updates to assurance cases with new evidence, but this process is ad hoc. SysML offers only manual updates through stereotypes or requirements, with no built-in monitoring. RDAL provides more direct support via versioning and metadata such as risk or confidence levels, but still falls short of automated propagation or runtime monitoring. Overall, the absence of lifecycle-driven mechanisms means assumptions risk becoming outdated and undermining model integrity. Extending frameworks with runtime monitoring, automated propagation of changes, and systematic versioning represents a critical research opportunity for mitigating assumption-related RTD.

5.3. Hybrid and Complementary Approaches

Because no single framework provides comprehensive support across all evaluation dimensions, hybrid or complementary use of frameworks may be the most practical strategy in safety- and mission-critical settings. Each approach offers distinctive strengths that, when combined, can offset the limitations of others. For instance, KAOS or Obstacle Analysis can be employed during early analysis to make assumptions explicit and identify threats to goal satisfaction, with these results linked to system-level models in SysML to maintain traceability across design and verification. RDAL’s ability to annotate assumptions with risk and confidence metadata can complement Claims frameworks, tightening the connection between explicit assumption models and assurance cases used in certification.
Such hybridization also points to opportunities for tool interoperability. Model transformations could allow assumptions captured in KAOS or RDAL to be exported into SysML requirement diagrams, preserving traceability in MBSE toolchains. Likewise, annotations could be fed into assurance case environments, reducing the risk of hidden or unverified assumptions undermining safety arguments. Progress in this direction will depend on both methodological guidance for aligning frameworks in practice and technical support for linking their artifacts across tools and modeling environments.
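As a toy illustration of what such a transformation could produce, the sketch below maps an assumption record from a goal-oriented or RDAL-style model to a SysML-like requirement entry carrying an assumption stereotype and tagged values. The element names and structure are our own assumptions for illustration, not those of any standard profile or tool.

```python
def assumption_to_requirement_entry(assumption_id: str, statement: str,
                                    risk: str, confidence: str) -> dict:
    """Map an assumption record to a SysML-style requirement entry (illustrative)."""
    return {
        "stereotype": "assumption",        # hypothetical profile stereotype
        "id": f"REQ-ASM-{assumption_id}",
        "text": statement,
        "taggedValues": {"risk": risk, "confidence": confidence},
    }

entry = assumption_to_requirement_entry(
    "GNSS-01", "GNSS signal is available during flight.", "High", "Medium")
```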
The evaluation highlights several opportunities for advancing assumption-aware modeling in both research and practice. First, the limited support for assumption evolution across existing frameworks suggests a need for new methods and tools that treat assumptions as dynamic, lifecycle-managed entities. Future research could explore integrating runtime monitoring, automated consistency checking, and change propagation mechanisms, ensuring that models remain aligned with evolving environmental assumptions. Embedding these capabilities within goal-oriented and requirements-driven approaches would help mitigate assumption-related technical debt and strengthen the resilience of safety-critical systems.

6. Threats to Validity

As with any comparative study, this work is subject to several threats to validity. We discuss these in terms of internal, external, and conclusion validity.
Internal Validity. Scoring of the modeling approaches was grounded in a review of the literature, examination of example models, and analysis of tool documentation, supplemented by the author’s judgment. The qualitative nature of this assessment inevitably introduces some subjectivity, but efforts were made to minimize bias by cross-checking findings against multiple sources wherever possible. Although the evaluation is qualitative, these measures enhance internal validity and provide a sound basis for the comparative analysis.
External Validity. The seven modeling approaches were chosen from a prior systematic mapping study to represent a diverse range of paradigms, e.g., goal-oriented, argumentation-based, MBSE. While these are representative of the field, there may be other relevant frameworks not included in this analysis. Additionally, the evaluation focuses on assumption modeling rather than the broader use of these tools in systems engineering, which may limit generalizability. External validity may also be affected by the fact that the scoring was performed by a single author. While explicit criteria were applied systematically to reduce bias, no inter-rater validation was performed and no practitioners were involved in this study. This limits the extent to which the results can be generalized across evaluators. Future work should address this by incorporating multiple independent raters, assessing agreement using inter-rater reliability measures such as Cohen’s kappa, and involving practitioners to validate the findings in real-world settings.
Conclusion Validity. The comparative results are intended to guide decision-making, not provide definitive rankings. The relative strengths of frameworks may vary depending on project constraints, domain-specific requirements, or integration environments. The radar chart and tables should be read as heuristics for guiding model selection and combination, not as absolute judgments.
In future work, we plan to validate and refine these findings through empirical studies involving practitioners, including structured interviews and modeling tasks using real-world case studies. This will help assess more directly the practical impact and usability of these modeling approaches in supporting assumption-aware requirements engineering.

7. Conclusions

This paper presented a comparative evaluation of seven modeling frameworks, namely KAOS, i*, Obstacle Analysis, Failure Frames, Claims, SysML, and RDAL, with respect to their support for representing and managing environmental assumptions. Using five evaluation dimensions (modeling focus, tracing support, validation capability, integration readiness, and assumption evolution), we highlighted how each framework addresses assumption management and where limitations persist.
The analysis showed that goal-oriented frameworks such as KAOS and Obstacle Analysis provide strong modeling constructs but only moderate integration and evolution support. SysML excels in integration readiness but relies on stereotypes for assumption representation. RDAL achieves its strongest capability in tracing assumptions to requirements and verification conditions, while its modeling support remains more limited. i* and Claims capture assumptions implicitly and offer weaker validation and evolution capabilities, and Failure Frames directly model how assumptions can fail but lack broad tool support.
Overall, the findings confirm that no single framework provides comprehensive coverage of assumption-aware requirements engineering. Each framework exhibits distinct strengths and weaknesses, suggesting that effective practice will require either combining complementary approaches or extending existing methods to close the identified gaps. By clarifying these trade-offs, the study shows how the complementary strengths of the seven frameworks can be leveraged to mitigate assumption-related technical debt. Future work will focus on empirical validation with practitioners and real-world case studies to assess the practical impact of assumption-aware modeling, as well as on tool support and integration strategies for assumption evolution across the system lifecycle.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The evaluation was based on published literature, tool documentation, and representative modeling examples cited within the paper. All sources are publicly available through the referenced publications.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RE: Requirements Engineering
RTD: Requirements Technical Debt
KAOS: Knowledge Acquisition in automated Specification
SysML: Systems Modeling Language
RDAL: Requirements Definition and Analysis Language
MBSE: Model-Based Systems Engineering
sUAS: small Uncrewed Aerial System
GNSS: Global Navigation Satellite System

References

  1. Jackson, M. The Meaning of Requirements. Ann. Softw. Eng. 1997, 3, 5–21. [Google Scholar] [CrossRef]
  2. Jackson, M. Problems and requirements (software development). In Proceedings of the Second IEEE International Symposium on Requirements Engineering, York, UK, 27–29 March 1995; IEEE: New York, NY, USA, 1995; pp. 2–9. [Google Scholar]
  3. Alenazi, M. Requirements Technical Debt Through the Lens of Environment Assumptions. In Proceedings of the 2025 IEEE/ACM International Conference on Technical Debt (TechDebt), Ottawa, ON, Canada, 27–28 April 2025; IEEE: New York, NY, USA, 2025; pp. 40–46. [Google Scholar]
  4. Granadeno, P.A.A.; Bernal, A.M.R.; Al Islam, M.N.; Cleland-Huang, J. An Environmentally Complex Requirement for Safe Separation Distance Between UAVs. In Proceedings of the 2024 IEEE 32nd International Requirements Engineering Conference Workshops (REW), Reykjavik, Iceland, 24–28 June 2024; IEEE: New York, NY, USA, 2024; pp. 166–175. [Google Scholar]
  5. Ernst, N.A. On the role of requirements in understanding and managing technical debt. In Proceedings of the 2012 Third International Workshop on Managing Technical Debt (MTD), Zurich, Switzerland, 5 June 2012; IEEE: New York, NY, USA, 2012; pp. 61–64. [Google Scholar]
  6. Brown, N.; Cai, Y.; Guo, Y.; Kazman, R.; Kim, M.; Kruchten, P.; Lim, E.; MacCormack, A.; Nord, R.; Ozkaya, I.; et al. Managing technical debt in software-reliant systems. In Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research, Santa Fe, NM, USA, 7–8 November 2010; pp. 47–52. [Google Scholar]
  7. Perera, J.; Tempero, E.; Tu, Y.C.; Blincoe, K. Modelling the quantification of requirements technical debt. Requir. Eng. 2024, 29, 421–458. [Google Scholar] [CrossRef]
  8. Lann, G.L. An analysis of the Ariane 5 flight 501 failure-a system engineering perspective. In Proceedings of the 1997 Workshop on Engineering of Computer-Based Systems (ECBS ’97), Monterey, CA, USA, 24–28 March 1997; IEEE: New York, NY, USA, 1997; pp. 246–339. [Google Scholar]
  9. Leveson, N.G.; Turner, C.S. An investigation of the Therac-25 accidents. Computer 1993, 26, 18–41. [Google Scholar] [CrossRef]
  10. Cailliau, A.; Damas, C.; Lambeau, B.; van Lamsweerde, A. Modeling car crash management with KAOS. In Proceedings of the 2013 International Comparing Requirements Modeling Approaches Workshop (CMA@ RE), Rio de Janeiro, Brazil, 15–19 July 2013; IEEE: New York, NY, USA, 2013; pp. 19–24. [Google Scholar]
  11. Yu, E.S. Towards modelling and reasoning support for early-phase requirements engineering. In Proceedings of the ISRE’97: 3rd IEEE International Symposium on Requirements Engineering, Annapolis, MD, USA, 6–10 January 1997; IEEE: New York, NY, USA, 1997; pp. 226–235. [Google Scholar]
  12. Cailliau, A.; van Lamsweerde, A. Runtime Monitoring and Resolution of Probabilistic Obstacles to System Goals. In Proceedings of the IEEE/ACM 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), Buenos Aires, Argentina, 22–23 May 2017; IEEE: New York, NY, USA, 2017; pp. 1–11. [Google Scholar]
  13. Tun, T.; Lutz, R.; Nakayama, B.; Yu, Y.; Mathur, D.; Nuseibeh, B. The role of environmental assumptions in failures of DNA nanosystems. In Proceedings of the IEEE/ACM 1st International Workshop on Complex Faults and Failures in Large Software Systems (COUFLESS), Florence, Italy, 23 May 2015; IEEE: New York, NY, USA, 2015; pp. 27–33. [Google Scholar]
  14. Welsh, K.; Bencomo, N.; Sawyer, P. Tracing requirements for adaptive systems using claims. In Proceedings of the 6th International Workshop on Traceability in Emerging Forms of Software Engineering, Honolulu, HI, USA, 23 May 2011; pp. 38–41. [Google Scholar]
  15. Friedenthal, S.; Moore, A.; Steiner, R. A Practical Guide to SysML: The Systems Modeling Language; Morgan Kaufmann: Burlington, MA, USA, 2014. [Google Scholar]
  16. Blouin, D.; Senn, E.; Turki, S. Defining an annex language to the architecture analysis and design language for requirements engineering activities support. In Proceedings of the Model-Driven Requirements Engineering Workshop, Trento, Italy, 29 August 2011; IEEE: New York, NY, USA, 2011; pp. 11–20. [Google Scholar]
  17. Kang, E. The role of environmental deviations in engineering robust systems. In Proceedings of the 2021 IEEE 29th International Requirements Engineering Conference Workshops (REW), Online, 20–24 September 2021; IEEE: New York, NY, USA, 2021; pp. 435–438. [Google Scholar]
  18. Alenazi, M. The Role of Environmental Assumptions in Shaping Requirements Technical Debt. Appl. Sci. 2025, 15, 8028. [Google Scholar] [CrossRef]
  19. Cailliau, A.; Van Lamsweerde, A. Handling knowledge uncertainty in risk-based requirements engineering. In Proceedings of the IEEE 23rd International Requirements Engineering Conference (RE), Ottawa, ON, Canada, 24–28 August 2015; IEEE: New York, NY, USA, 2015; pp. 106–115. [Google Scholar]
  20. Lewis, G.A.; Mahatham, T.; Wrage, L. Assumptions Management in Software Development; Technical Report; Carnegie Mellon University: Pittsburgh, PA, USA, 2004. [Google Scholar]
  21. Samin, H.; Walton, D.; Bencomo, N. Surprise! Surprise! Learn and Adapt. In Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’25), Richland, SC, USA, 19–23 May 2025; pp. 1821–1829. [Google Scholar]
  22. Sawyer, P.; Bencomo, N.; Whittle, J.; Letier, E.; Finkelstein, A. Requirements-Aware Systems: A Research Agenda for RE for Self-adaptive Systems. In Proceedings of the RE 2010, 18th IEEE International Requirements Engineering Conference, Sydney, NSW, Australia, 27 September–1 October 2010; IEEE: New York, NY, USA, 2010; pp. 95–103. [Google Scholar]
  23. Ali, S.; Lu, H.; Wang, S.; Yue, T.; Zhang, M. Uncertainty-Wise Testing of Cyber-Physical Systems. Adv. Comput. 2017, 106, 23–94. [Google Scholar]
  24. Whittle, J.; Sawyer, P.; Bencomo, N.; Cheng, B.H.; Bruel, J.M. Relax: Incorporating uncertainty into the specification of self-adaptive systems. In Proceedings of the 2009 17th IEEE International Requirements Engineering Conference, Atlanta, GA, USA, 30 August–4 September 2009; IEEE: New York, NY, USA, 2009; pp. 79–88. [Google Scholar]
  25. Cheng, B.H.; Sawyer, P.; Bencomo, N.; Whittle, J. A goal-based modeling approach to develop requirements of an adaptive system with environmental uncertainty. In Proceedings of the International Conference on Model Driven Engineering Languages and Systems, Denver, CO, USA, 4–9 October 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 468–483. [Google Scholar]
  26. Baresi, L.; Pasquale, L.; Spoletini, P. Fuzzy goals for requirements-driven adaptation. In Proceedings of the 2010 18th IEEE International Requirements Engineering Conference, Sydney, NSW, Australia, 27 September–1 October 2010; IEEE: New York, NY, USA, 2010; pp. 125–134. [Google Scholar]
  27. Clarke, E.M. Model checking. In Proceedings of the International Conference on Foundations of Software Technology and Theoretical Computer Science, Kharagpur, India, 18–20 December 1997; Springer: Berlin/Heidelberg, Germany, 1997; pp. 54–56. [Google Scholar]
  28. Gluck, P.R.; Holzmann, G.J. Using SPIN model checking for flight software verification. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 9–16 March 2002; IEEE: New York, NY, USA, 2002; Volume 1, p. 1. [Google Scholar]
  29. Gario, M.; Cimatti, A.; Mattarei, C.; Tonetta, S.; Rozier, K.Y. Model checking at scale: Automated air traffic control design space exploration. In Proceedings of the International Conference on Computer Aided Verification, Toronto, ON, Canada, 17–23 July 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 3–22. [Google Scholar]
  30. Fifarek, A.W.; Wagner, L.G.; Hoffman, J.A.; Rodes, B.D.; Aiello, M.A.; Davis, J.A. SpeAR v2.0: Formalized past LTL specification and analysis of requirements. In Proceedings of the NASA Formal Methods: 9th International Symposium, NFM 2017, Moffett Field, CA, USA, 16–18 May 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 420–426. [Google Scholar]
  31. Chan, W.; Anderson, R.J.; Beame, P.; Burns, S.; Modugno, F.; Notkin, D.; Reese, J.D. Model checking large software specifications. IEEE Trans. Softw. Eng. 1998, 24, 498–520. [Google Scholar] [CrossRef]
  32. Gheorghiu Bobaru, M.; Păsăreanu, C.S.; Giannakopoulou, D. Automated assume-guarantee reasoning by abstraction refinement. In Proceedings of the International Conference on Computer Aided Verification, Princeton, NJ, USA, 7–14 July 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 135–148. [Google Scholar]
  33. Pnueli, A. The temporal logic of programs. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), Providence, RI, USA, 30 September–31 October 1977; IEEE: New York, NY, USA, 1977; pp. 46–57. [Google Scholar]
  34. Holzmann, G.J. The model checker SPIN. IEEE Trans. Softw. Eng. 1997, 23, 279–295. [Google Scholar] [CrossRef]
  35. Objectiver. Available online: https://www.objectiver.com/index.php?id=25 (accessed on 1 August 2025).
  36. OpenOME, an Open-Source Requirements Engineering Tool. Available online: https://www.cs.toronto.edu/km/openome/ (accessed on 1 August 2025).
  37. van Lamsweerde, A.; Letier, E. Handling Obstacles in Goal-Oriented Requirements Engineering. IEEE Trans. Softw. Eng. 2000, 26, 978–1005. [Google Scholar] [CrossRef]
  38. ZOOM4PF. Available online: https://github.com/Wsfff-lf/ZOOM4PF/tree/main (accessed on 1 August 2025).
  39. SEURAT. Available online: https://github.com/burgeje/SEURAT (accessed on 1 August 2025).
  40. No Magic. Cameo Systems Modeler MagicDraw. Available online: https://www.nomagic.com/products/cameo-systems-modeler (accessed on 1 August 2025).
  41. Van Lamsweerde, A. Goal-oriented requirements engineering: A guided tour. In Proceedings of the Fifth IEEE International Symposium on Requirements Engineering, Toronto, ON, Canada, 27–31 August 2001; IEEE: New York, NY, USA, 2001; pp. 249–262. [Google Scholar]
  42. Bresciani, P.; Perini, A.; Giorgini, P.; Giunchiglia, F.; Mylopoulos, J. Tropos: An agent-oriented software development methodology. Auton. Agents Multi-Agent Syst. 2004, 8, 203–236. [Google Scholar] [CrossRef]
  43. Jureta, I.J.; Borgida, A.; Ernst, N.A.; Mylopoulos, J. Techne: Towards a new generation of requirements modeling languages with goals, preferences, and inconsistency handling. In Proceedings of the 2010 18th IEEE International Requirements Engineering Conference, Sydney, NSW, Australia, 27 September–1 October 2010; IEEE: New York, NY, USA, 2010; pp. 115–124. [Google Scholar]
  44. Object Management Group. Systems Modeling Language (SysML). Available online: http://www.omgsysml.org (accessed on 1 August 2025).
  45. Estefan, J.A. Survey of model-based systems engineering (MBSE) methodologies. Incose MBSE Focus Group 2007, 25, 1–12. [Google Scholar]
  46. Holt, J.; Perry, S. SysML for Systems Engineering; IET: London, UK, 2008; Volume 7. [Google Scholar]
  47. van Lamsweerde, A. KAOS tutorial. Cediti, 5 September 2003. [Google Scholar]
  48. van Lamsweerde, A. Requirements Engineering: From System Goals to UML Models to Software Specifications; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2009. [Google Scholar]
  49. Alrajeh, D.; Kramer, J.; van Lamsweerde, A.; Russo, A.; Uchitel, S. Generating obstacle conditions for requirements completeness. In Proceedings of the 34th International Conference on Software Engineering, ICSE 2012, Zurich, Switzerland, 2–9 June 2012; IEEE: New York, NY, USA, 2012; pp. 705–715. [Google Scholar]
  50. Alrajeh, D.; van Lamsweerde, A.; Kramer, J.; Russo, A.; Uchitel, S. Risk-driven revision of requirements models. In Proceedings of the 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), Austin, TX, USA, 14–22 May 2016; IEEE: New York, NY, USA, 2016; pp. 855–865. [Google Scholar]
  51. Ernst, N.A.; Mylopoulos, J.; Wang, Y. Requirements evolution and what (research) to do about it. In Proceedings of the 2011 International Conference on Software Engineering, Honolulu, HI, USA, 21–28 May 2011; pp. 11–20. [Google Scholar]
  52. Wang, Q.; Yu, Y.; Mylopoulos, J. Formalizing and reasoning about goal models: A journey toward practical formal goal modeling. Requir. Eng. 2009, 14, 219–241. [Google Scholar]
  53. van Lamsweerde, A.; Letier, E. Integrating Obstacles in Goal-Driven Requirements Engineering. In Proceedings of the 1998 International Conference on Software Engineering, ICSE 98, Kyoto, Japan, 19–25 April 1998; IEEE: New York, NY, USA, 1998; pp. 53–62. [Google Scholar]
  54. Hall, J.G.; Rapanotti, L. A reference model for requirements engineering. In Proceedings of the 11th IEEE International Requirements Engineering Conference, 8–12 September 2003; IEEE: New York, NY, USA, 2003; pp. 181–187. [Google Scholar]
  55. Hall, J.G.; Rapanotti, L. Problem frames for sociotechnical systems. In Human Computer Interaction: Concepts, Methodologies, Tools, and Applications; IGI Global Scientific Publishing: Hershey, PA, USA, 2009; pp. 713–731. [Google Scholar]
  56. Hart, L.E. Introduction to model-based system engineering (MBSE) and sysml. In Proceedings of the Delaware Valley INCOSE Chapter Meeting, Ramblewood Country Club, Mount Laurel, NJ, USA, 30 July 2015. [Google Scholar]
  57. Weilkiens, T. Systems Engineering with SysML/UML: Modeling, Analysis, Design; Elsevier: Amsterdam, The Netherlands, 2011. [Google Scholar]
  58. Ali, R.; Dalpiaz, F.; Giorgini, P.; Souza, V.E.S. Requirements Evolution: From Assumptions to Reality. In Proceedings of the Enterprise, Business-Process and Information Systems Modeling—12th International Conference, BPMDS 2011, and 16th International Conference, EMMSAD 2011, held at CAiSE 2011, London, UK, 20–21 June 2011; Springer: Berlin/Heidelberg, Germany, 2011. Lecture Notes in Business Information Processing. Volume 81, pp. 372–382. [Google Scholar]
  59. Letier, E.; van Lamsweerde, A. Obstacle Analysis in Requirements Engineering: Retrospective and Emerging Challenges. IEEE Trans. Softw. Eng. 2025, 51, 795–801. [Google Scholar] [CrossRef]
  60. Cimatti, A.; Grosen, T.M.; Larsen, K.G.; Tonetta, S.; Zimmermann, M. Exploiting Assumptions for Effective Monitoring of Real-Time Properties Under Partial Observability. In Proceedings of the Software Engineering and Formal Methods, Aveiro, Portugal, 6–8 November 2024; Madeira, A., Knapp, A., Eds.; Springer: Cham, Switzerland, 2025; pp. 70–88. [Google Scholar]
  61. Ferrando, A.; Malvone, V. Runtime Verification via Rational Monitor with Imperfect Information. ACM Trans. Softw. Eng. Methodol. 2025. [Google Scholar] [CrossRef]
Figure 1. KAOS diagram snippet of the sUAS case study.
Figure 2. Column-based comparison of the seven frameworks across the evaluation dimensions.
Figure 3. Radar chart comparing the seven modeling frameworks.
Table 1. Overview of Assumption-Aware Modeling Approaches, with Example Tools and References.

Modeling Approach | Paradigm | Example Tool or Environment | Tool Reference
KAOS | Goal-oriented modeling | Objectiver (requirements modeling tool supporting KAOS) | [35]
i* | Goal-oriented modeling | OpenOME (Eclipse-based tool) | [36]
Obstacle Analysis | Risk/goal analysis technique | Integrated with KAOS-based tools (e.g., Objectiver) | [37]
Failure Frames | Problem-oriented modeling | Experimental prototypes (e.g., Zoom4PF) | [38]
Claims | Argumentation-based assurance | SEURAT (Eclipse plug-in) | [39]
SysML | Model-based systems engineering | Cameo Systems Modeler (formerly MagicDraw) | [40]
RDAL | Domain-specific language for requirements analysis | Papyrus UML prototype; later Capella viewpoint extensions | [16]
Table 2. Evaluation of KAOS for Modeling Environmental Assumptions in sUAS Scenario.

Dimension | Score (0–3) | Justification
Modeling focus | 3 | Assumptions are first-class as domain properties and obstacles, directly supporting explicit modeling.
Tracing support | 3 | Strong goal-to-assumption and obstacle-to-goal links; propagation through the refinement hierarchy is also well supported.
Validation capability | 3 | Formal semantics support validation via temporal logic and model checking, but require significant formalization effort.
Integration readiness | 2 | Limited out-of-the-box integration beyond Objectiver; exporting to other MBSE environments requires adaptation.
Assumption evolution | 2 | Provides obstacle resolution tactics (prevention, mitigation, restoration) but mainly manual; no native lifecycle or runtime monitoring.
Table 3. Evaluation of i* for Modeling Environmental Assumptions in sUAS Scenario.

Dimension | Score (0–3) | Justification
Modeling focus | 1 | Assumptions are not first-class elements; instead, they are embedded in dependencies, softgoals, or beliefs, which makes them less explicit.
Tracing support | 2 | Goal and dependency links provide moderate traceability, but linking assumptions to requirements often requires additional documentation or integration with other tools.
Validation capability | 1 | Limited native support for validating environmental assumptions; requires external analysis or tool extensions.
Integration readiness | 2 | i* is supported by tools (e.g., OpenOME) that can integrate with other artifacts, but mapping to requirements/design models is largely manual; integration with MBSE standards such as SysML is weak.
Assumption evolution | 1 | No native mechanism for tracking changes in dependencies over time; evolution must be modeled manually.
Table 4. Evaluation of Obstacle Analysis for Modeling Environmental Assumptions in sUAS Scenario.

Dimension | Score (0–3) | Justification
Modeling focus | 3 | Directly targets environmental threats and their impact on goals; strong fit for assumption-related risk identification.
Tracing support | 3 | Obstacles are explicitly linked to goals, allowing traceability of assumption failures.
Validation capability | 3 | Supports reasoning about likelihood and mitigation but relies on external methods (e.g., probabilistic models) for formal validation.
Integration readiness | 2 | Strong in requirements analysis, but limited integration with MBSE toolchains.
Assumption evolution | 2 | Assumptions can be revisited when new obstacles are identified, but updates are manual and not lifecycle-driven.
Table 5. Evaluation of Failure Frames for Modeling Environmental Assumptions in sUAS Scenario.

Dimension | Score (0–3) | Justification
Modeling focus | 3 | Explicitly models environmental assumptions and links their violations to specific failures.
Tracing support | 2 | Assumptions and failures are linked within frames, but tracing to requirements or design artifacts requires additional mapping beyond the frame structure.
Validation capability | 2 | Provides structured reasoning, but lacks formal/automated validation.
Integration readiness | 1 | Limited dedicated tooling; typically implemented through custom diagrams or adapted modeling tools.
Assumption evolution | 2 | Moderate—assumptions can be updated iteratively as new failures are identified, but no formal lifecycle or runtime mechanisms exist.
Table 6. Evaluation of Claims for Modeling Environmental Assumptions in sUAS Scenario.

Dimension | Score (0–3) | Justification
Modeling focus | 1 | Environmental assumptions are recorded as warrants or justifications but are not treated as standalone modeling entities.
Tracing support | 3 | Assumptions are directly tied to claims about requirements or system properties.
Validation capability | 2 | Validation depends on external evidence and analysis, with no native automated checking of assumptions.
Integration readiness | 2 | Can be implemented using tools such as SEURAT, but integration with broader MBSE workflows is limited.
Assumption evolution | 1 | No built-in mechanism for updating assumptions when environmental conditions change.
Table 7. Evaluation of SysML for Modeling Environmental Assumptions in sUAS Scenario.

Dimension | Score (0–3) | Justification
Modeling focus | 2 | Assumptions can be explicitly represented via stereotypes, tagged values, or custom profiles, but SysML lacks native constructs, making support less consistent than in goal-oriented approaches.
Tracing support | 3 | Assumptions embedded in requirements can be linked to design and verification artifacts through «satisfy» and «verify» relations.
Validation capability | 2 | Parametric diagrams and constraint blocks allow quantitative analysis and simulation, but SysML lacks dedicated semantics for assumption validity.
Integration readiness | 3 | SysML is the de facto MBSE standard, widely supported by industrial tools (MagicDraw, Cameo, Papyrus) and integrated into systems engineering workflows.
Assumption evolution | 1 | Assumptions can be updated manually within requirements or profiles, but no native mechanisms exist for lifecycle management or runtime monitoring.
Table 8. Evaluation of RDAL for Modeling Environmental Assumptions in sUAS Scenario.

Dimension | Score (0–3) | Justification
Modeling focus | 2 | Provides constructs for requirements, refinements, and verification conditions; assumptions can be captured through annotations.
Tracing support | 3 | Strong traceability support: requirements, refinements, verification conditions, and assumptions can be linked explicitly.
Validation capability | 2 | RDAL supports consistency and rationale checking, but deeper validation requires integration with external tools (e.g., model checking, simulation).
Integration readiness | 2 | Some tool support exists in research/European safety-critical contexts, but industrial uptake is limited compared to SysML.
Assumption evolution | 2 | Assumptions can be updated and versioned with metadata, but there is no explicit lifecycle or runtime monitoring support.
Table 9. Modeling of Environmental Assumptions Across Frameworks.

Framework | Modeling Constructs | Example Representation
KAOS | Domain properties, obstacles, softgoals | Domain Property: "GPS signal is always available." Obstacle: "Cloud cover blocks GPS signal."
i* (i-star) | Actor dependencies, softgoals, beliefs | Dependency: "Weather service provides accurate forecast." Softgoal: "Forecast must be timely and precise."
Obstacle Analysis | Obstacles tied to environmental conditions (threats) | Goal: "Maintain stable flight." Obstacle: "Wind gusts greater than 30 knots."
Failure Frames | Environment faults; assumption-violation links (problems) | Assumption: "Operator understands UI alerts." Failure: "Misses critical warning leading to crash."
Claims | Argumentation nodes; assumptions as warrants/justifications | Claim: "Battery will not overheat." Assumption: "Ambient temperature is below 35 °C."
SysML | Requirements with stereotypes (e.g., «assumption»), tagged notes | Requirement: "sUAS shall operate reliably." «assumption» "Temperature remains in range."
RDAL | Annotated assumptions with trace links to requirements and risks | Assumption: "GNSS is available." Risk: High; Confidence: Medium; linked to landing requirement.
Table 10. Comparative evaluation of modeling frameworks for environmental assumptions.

Framework | Modeling Focus | Tracing Support | Validation Capability | Integration Readiness | Assumption Evolution
KAOS | 3 | 3 | 3 | 2 | 2
i* | 1 | 2 | 1 | 2 | 1
Obstacle Analysis | 3 | 3 | 3 | 2 | 2
Failure Frames | 3 | 2 | 2 | 1 | 2
Claims | 1 | 3 | 2 | 2 | 1
SysML | 2 | 3 | 2 | 3 | 1
RDAL | 2 | 3 | 2 | 2 | 2
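For readers who wish to reproduce a comparison similar to the radar chart in Figure 3, the following sketch plots the Table 10 scores with matplotlib. The styling, axis layout, and output file name are assumptions for illustration, not a description of how the published figure was produced.

```python
import math
import matplotlib.pyplot as plt

dimensions = ["Modeling focus", "Tracing support", "Validation capability",
              "Integration readiness", "Assumption evolution"]

# Scores from Table 10 (0-3 scale).
scores = {
    "KAOS":              [3, 3, 3, 2, 2],
    "i*":                [1, 2, 1, 2, 1],
    "Obstacle Analysis": [3, 3, 3, 2, 2],
    "Failure Frames":    [3, 2, 2, 1, 2],
    "Claims":            [1, 3, 2, 2, 1],
    "SysML":             [2, 3, 2, 3, 1],
    "RDAL":              [2, 3, 2, 2, 2],
}

# One angle per dimension; repeat the first angle to close each polygon.
angles = [2 * math.pi * i / len(dimensions) for i in range(len(dimensions))]
angles.append(angles[0])

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for framework, values in scores.items():
    ax.plot(angles, values + values[:1], label=framework, linewidth=1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions, fontsize=8)
ax.set_ylim(0, 3)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1), fontsize=7)
plt.tight_layout()
plt.savefig("framework_radar.png", dpi=200)  # hypothetical output file name
```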
