Article

The Role of Environmental Assumptions in Shaping Requirements Technical Debt

by
Mounifah Alenazi
Department of Science and Technology, University Colleges at Alkhafji, University of Hafr Al Batin, Alkafji 31971, Saudi Arabia
Appl. Sci. 2025, 15(14), 8028; https://doi.org/10.3390/app15148028
Submission received: 21 June 2025 / Revised: 14 July 2025 / Accepted: 17 July 2025 / Published: 18 July 2025

Abstract

Environmental assumptions, which are expectations about a system’s operating context, play a critical yet often underexplored role in the emergence of requirements technical debt (RTD). When these assumptions are incorrect, incomplete, or evolve over time, they can compromise the validity of system requirements and lead to costly rework in later stages of development. This paper investigates how environmental assumptions influence the identification of RTD through the analysis of a real-world case study in the domain of small uncrewed aerial systems (sUASs). A structured qualitative analysis of safety-related requirements and their associated assumptions was conducted to examine how deviations in these assumptions can introduce various forms of RTD. This work addresses a gap in the literature by explicitly focusing on the role of environmental assumptions in RTD identification. A classification framework is proposed, highlighting five distinct types of assumption-driven RTD. This framework serves as a foundation for supporting early detection of debt and improving the sustainability and resilience of software-intensive systems.

1. Introduction

Technical debt refers to the long-term consequences of making quick decisions during software development that prioritize short-term goals, such as fast delivery or tight deadlines, over long-term system quality and maintainability [1]. While these decisions may yield immediate benefits, they often incur “interest” in the form of maintenance overhead, quality degradation, or missed opportunities for architectural improvement [2,3]. Historically, research in technical debt has concentrated on its manifestations in code quality, testing, and architecture [4]. More recently, however, there has been a growing awareness that technical debt can originate earlier in the development process, particularly during the requirements engineering phase [2,4,5,6,7,8].
In real-world settings, teams can postpone or make approximate decisions on requirements in order to satisfy scheduling pressures or defer premature commitments in the presence of uncertainty. While such decisions may appear rational or even necessary, they often defer critical clarity and completeness, which must eventually be reconciled. This creates what is now known as requirements technical debt (RTD). Ernst [4] describes RTD as the gap between the ideal solution to a requirements problem and the solution that is actually implemented within a particular decision-making context. That “distance” can accumulate through vague requirements or the deferral of necessary but complex decisions. Moreover, failure to track requirement changes systematically over time can make it difficult to identify when and how debt has accumulated, as it often only becomes apparent when the system encounters failures or requires extensive revisions.
A major source of RTD stems from environmental assumptions [9], which can be defined as the implicit or explicit beliefs developers hold about the system’s operating context [10]. These assumptions may concern hardware limitations, sensor fidelity, communication latency, environmental variability (e.g., wind, temperature), user interaction models, or system interoperability. Yang et al. [11] note that many of these assumptions are embedded in requirements artifacts, which makes their early identification and management crucial. Yet in most real-world projects, assumptions are rarely treated as first-class citizens. They may be informally discussed, remain undocumented, or evolve alongside development without clear documentation or linkage to requirements. This creates significant risks. Developers may proceed under outdated or inaccurate assumptions that later undermine the correctness or applicability of the system behavior.
Jackson’s foundational distinction between the machine and the environment [10] offers a rigorous basis for reasoning about system behavior in context. In this model, the machine denotes the software system under construction, while the environment denotes the external conditions and entities with which the system interacts. Environmental assumptions, in this view, are propositions taken to be true about the real world, independently of whether the system is ever realized. If these assumptions are inaccurate, change over time, or remain unverified, then even a system that is correctly implemented according to its specification may fail to operate as intended in its deployed setting. Misunderstanding or neglecting environmental assumptions has historically led to costly failures. A well-known case is the Ariane 5 rocket failure [12], where assumptions from Ariane 4 about velocity thresholds were reused without validation. This led to an unhandled overflow and ultimately mission failure [13]. Such examples underscore how environmental mismatches can directly lead to technical debt by forcing costly redesigns or causing systems to fail once deployed in contexts that do not match design-time assumptions.
The literature increasingly emphasizes the importance of explicitly managing assumptions during requirements engineering, particularly in environments characterized by uncertainty and change. Steingruebl et al. [14] found that when developers face incomplete or ambiguous requirements, they frequently introduce implicit assumptions to fill knowledge gaps. However, these assumptions often remain undocumented and unvalidated, which contributes to rework and misalignment during later development stages. Samin et al. [15] emphasize the role of models at runtime in managing evolving environmental uncertainty in self-adaptive systems. The authors suggest that systems should continuously monitor and revise their assumptions to remain aligned with operational realities. Similarly, Sawyer et al. [16] and Lewis [17] argue that uncertainty stemming from assumptions must be addressed during development, as deferring resolution increases the risk of downstream failures and costly rework. Expanding this view, Yang et al. [18] provide a comprehensive mapping of challenges associated with architectural assumptions, showing that unmanaged assumptions often lead to flawed requirements and safety failures. In cyber-physical systems, where software interacts tightly with sensors, actuators, and physical processes, invalid or outdated assumptions about sensor accuracy, timing behavior, or hardware reliability can undermine system correctness and safety [19,20,21]. This body of work underscores that failure to document and manage assumptions early in the requirements phase not only increases the likelihood of design inconsistencies and integration issues but also leads to frequent requirements changes and costly rework as systems evolve.
This work examines how environmental assumptions contribute to the emergence of RTD, extending earlier work introduced at the TechDebt conference [9]. Building on that foundation, this article presents an in-depth analysis of a real-world case study and offers the following contributions:
  • A case study from the field of small uncrewed aerial systems (sUASs) is analyzed based on safety requirements and their associated environmental assumptions.
  • This analysis illustrates how missing or incorrect environmental assumptions lead to different types of RTD.
  • A new classification of RTD is proposed based on environmental assumptions. Specifically, various types of RTD that emerge due to incomplete, incorrect, or evolving environmental assumptions are highlighted.
  • An initial attempt is made to relate these RTD types to the RTD quantification model (RTDQM), in order to show how environmental assumptions can influence measurable dimensions of debt.
The remainder of the paper is organized as follows. Section 2 provides background information on environmental assumptions and RTD, including related work. Section 3 presents the methodology. The results and analysis are reported in Section 4. The discussion can be found in Section 5. Section 6 discusses threats to validity. Finally, Section 7 concludes the paper and outlines future work.

2. Background and Related Work

This section presents the necessary background information on environmental assumptions and RTD, as well as related work.

2.1. Environmental Assumptions

Michael Jackson’s foundational model [22] offers a conceptual distinction between the system to be built, i.e., the machine, and its surrounding context, i.e., the environment. This model introduces the problem domain, which encapsulates real-world entities and behaviors, and the machine domain, which refers to the software artifact under development. The key idea is to differentiate between what already exists independently of the system and what needs to be constructed to meet specific goals. Environmental assumptions, in this framework, are beliefs about the system’s operational context, i.e., conditions assumed to hold regardless of the machine’s presence. In contrast, the machine refers solely to the software system created to meet these goals.
The interaction between the environment and the machine occurs through shared observable events or phenomena. Figure 1 illustrates this interplay: Requirements (denoted as R) and environmental assumptions (E) reside in the problem domain, while implementation-specific components, i.e., the program (P) and the computing infrastructure (C), reside within the machine domain. The interface between these domains is governed by a specification (S) that defines how the machine must behave in response to environmental stimuli.
Requirements specify desired conditions over environmental events and states that the machine must ensure. To fulfill these requirements, the machine must conform to a defined specification (S). This specification outlines constraints solely over the shared phenomena that form the interface between the machine and the environment. The satisfaction of requirements is expressed as follows:
E, S ⊢ R
This indicates that, given the environment behaves according to E and the machine behaves according to S, then the requirements R will be satisfied. Figure 1 also illustrates that the program (P) and the computing platform (C) belong exclusively to the machine domain. Here, P represents the software implementation of the specification, while C refers to the underlying hardware or execution platform. The correctness of the program with respect to the specification is given by [10,23]:
P, C ⊢ S
Environmental assumptions have been a focus of extensive research across areas such as model checking [24,25,26,27,28], formal modeling of requirements [29,30,31], and testing approaches [32,33]. Recent efforts have emphasized the challenges of managing and validating these assumptions in increasingly dynamic, distributed, and uncertain systems [34]. A significant issue arises when environmental assumptions remain implicit or undocumented, which is a common source of system failures [23,32,35]. For instance, Granadeno et al. [34] propose the concept of environmentally complex requirements, where a single requirement depends on multiple interrelated environmental assumptions. Their work underscores the importance of explicitly addressing these assumptions in development processes that rely more on natural language specifications, iterative design, and empirical testing, rather than purely formal techniques.

2.2. Requirements Technical Debt

Technical debt, a concept introduced by Cunningham in 1992 [36], describes the long-term costs associated with short-term decisions in software development that prioritize speed over quality [37]. These decisions, e.g., bypassing design rigor or compromising code quality to meet a deadline, result in “debt” that must be addressed later to avoid system degradation [3]. Although the term originally referred to issues at the implementation level, it has since been broadened to include other development artifacts such as requirements. Technical debt at the requirements level, like its code-level counterpart, may arise intentionally, when decisions are consciously deferred or simplified, or unintentionally, due to evolving needs or limited expertise [6]. However, RTD differs in its origin, manifestation, and overall impact on the project, particularly because it occurs early in the software lifecycle and can influence foundational design and alignment with stakeholder goals [38].
To illustrate how requirements-related debt compares with implementation-level debt, Table 1 presents a summary of their distinguishing characteristics. For example, technical debt in the requirements phase can be attributed to the fact that requirements are not clear, complete, or well-defined. It usually occurs when teams rush through the planning phase or do not clearly understand the business needs [7]. This debt might also occur because of a lack of alignment between stakeholders and developers or because of delayed decisions about important system features. In contrast, technical debt in the implementation phase is directly attributed to code quality, system architecture, or technical decisions during development. This type of debt often comes from taking shortcuts in coding, not refactoring code, insufficient testing, or choosing quick but inefficient solutions [1,2,3].
While these types of debt arise at different stages, their long-term consequences can be equally significant. Technical debt at the requirements level often results in fundamental system misalignments. These may include missing features, misinterpretation of business goals, or failure to anticipate future needs. As a result, entire sections of the system may need to be redesigned later. In contrast, technical debt originating in the implementation phase primarily affects code quality and maintainability. It makes the system more prone to bugs and more difficult to scale or extend. Although it may not immediately affect the design of the core system, it often leads to higher maintenance costs and frequent bug fixes.
RTD has gained recognition as a unique category within the broader technical debt landscape, though it remains less extensively explored than code-related debt [6,38]. Initially conceptualized by Ernst [4], RTD refers to the discrepancy between the ideal solution to a requirements problem and the solution that is actually implemented. Ernst characterizes technical debt in this context as the result of trade-offs developers make when determining which requirements to fulfill. He also introduces a lightweight modeling tool, RE-KOMBINE, which enables comparisons between an existing implementation and alternative solutions, allowing the identification of technical debt as the gap between current implementations and revised requirement sets [4].
Building on this foundation, Lenarduzzi and Fucci [40] propose a classification of RTD into three types: Type 0, Type 1, and Type 2, based on factors such as unfulfilled user needs, indicators of poor requirements quality (i.e., smells), and inconsistencies between requirements and implementation. This taxonomy provides a basis for organizations to detect and address RTD in a structured way. In a broader effort to map the field, Melo et al. [6] present a systematic literature review that investigates the root causes of RTD, current mitigation approaches, and techniques for measuring it. Perera et al. [38] frame RTD as the outcome of decisions during requirements engineering that fall short of optimality. Their study surveys the existing body of RTD research, identifies knowledge gaps, and introduces the RTD Quantification Model [7], which evaluates RTD in terms of its cost implications, required effort, and potential system impact.
While previous studies have broadened our understanding of RTD, including our prior work [9], the role of environmental assumptions in shaping RTD remains largely unexplored. This paper extends that initial investigation by systematically analyzing how such assumptions contribute to the emergence of RTD and offering a classification framework based on their impact.

3. Methodology

The primary objective of this study is to explore how environmental assumptions contribute to the emergence of RTD. To achieve this, a qualitative analysis is conducted using a well-documented case study from the DroneResponse project [34]. The approach involves a structured review and interpretation of safety-related requirements and their associated environmental assumptions. Specifically, we assess the consequences of omitting, altering, or misrepresenting these assumptions and examine how such misalignments lead to various forms of RTD.

3.1. Case Study Context

Our analysis is based on the DroneResponse case study presented by Granadeno et al. [34]. The DroneResponse Project is a real-world initiative aimed at developing a robust system for managing sUASs operating in shared airspace. The project focuses on enabling drones to autonomously compute and maintain safe minimum separation distances while accounting for dynamic environmental factors. Unlike traditional centralized airspace management systems that enforce fixed, large separation distances, DroneResponse seeks to implement a more adaptive and real-time approach where each drone is responsible for its own separation calculations. The project integrates multiple technologies, including PX4 autopilot firmware, MAVROS communication protocols, and Jetson Xavier NX onboard computing, to facilitate autonomous flight operations. Drones exchange real-time status messages via an MQTT-based mesh radio network, which allows them to assess environmental conditions such as GPS accuracy, wind speed, and braking distance.
Each drone periodically broadcasts its state and receives updates from its neighbors. These updates help estimate external conditions such as GPS accuracy, wind forces, and braking capacity, which are critical for real-time collision avoidance. The system’s success depends on correctly identifying and validating key environmental assumptions, such as geolocation precision, communication latency, and the impact of wind on drone movement. These factors directly influence how drones determine their safe separation distances. In the case study, the authors break down the high-level requirement, i.e., “When in flight, drone d shall continually compute the minimum separation distance with each neighboring drone d within its region of interest,” into several derived requirements and systematically validate the related environmental assumptions using simulations, field data, and empirical models to improve the system’s reliability and safety.
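To make the periodic status broadcast concrete, the following minimal Python sketch shows one way such a state message could be structured and serialized for publication over an MQTT topic. The field names, units, and values are illustrative assumptions for this sketch; the actual DroneResponse message schema is not specified here.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DroneState:
    """One periodic status broadcast (field names are illustrative)."""
    drone_id: str
    lat: float             # latitude, degrees
    lon: float             # longitude, degrees
    alt_m: float           # altitude above ground, metres
    vx: float              # velocity east component, m/s
    vy: float              # velocity north component, m/s
    gps_accuracy_m: float  # estimated horizontal position error, metres
    timestamp: float       # seconds since epoch

def encode_state(state: DroneState) -> str:
    """Serialize the state as JSON for publication on an MQTT topic."""
    return json.dumps(asdict(state))

# Example broadcast from a hypothetical drone "d1".
msg = encode_state(DroneState("d1", 41.70, -86.24, 30.0, 4.0, 0.0, 1.2, 0.0))
```

Neighboring drones would decode such messages (e.g., with `json.loads`) to estimate the external conditions, such as GPS accuracy and closing speed, that feed their separation calculations.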

3.2. Relevance to Environmental Assumptions

This case study is particularly well-suited for our analysis for several reasons. First, the requirement to maintain safe separation distances is inherently tied to environmental assumptions such as wind conditions, geolocation accuracy, and communication latency. This dependency makes the case highly relevant for exploring how unvalidated or incorrect assumptions can influence requirement fulfillment and lead to the accumulation of technical debt. Second, this case study offers explicit mapping between environmental assumptions (A5–A16) and derived requirements (DR1–DR5). This makes it easier to analyze the impact of omitting or introducing incorrect assumptions, as well as to classify the resulting RTD. Third, it spans multiple technical domains, including aerodynamics, communication systems, and real-time computing, which underscores the interdisciplinary complexity of validating assumptions and maintaining robust requirements across diverse operational contexts. Finally, although the case centers on sUASs, the principles and challenges discussed, such as managing environmental assumptions, validating requirements, and addressing technical debt, apply to other cyber-physical systems (CPS) and safety-critical systems. This makes the case study a valuable reference for broader research and practice [34]. To support our analysis, Table 2 provides a summary of the key functional requirements (DR1–DR5) and their associated environmental assumptions (A5–A16) identified in the DroneResponse case study [34]. We omit the environmental assumptions (A1–A4) because they serve as system-wide operational assumptions rather than being directly tied to a specific derived requirement (DR1–DR5).

3.3. Analytical Procedure

Building on the case context described earlier, we conducted a structured, theory-informed analysis of the requirements and their environmental assumptions. The steps below outline how we examined the potential for assumption-driven RTD:
1.
Document Review: A thorough examination of the DroneResponse case study presented by Granadeno et al. [34] was conducted, with particular attention to how environmental assumptions were linked to derived safety requirements.
2.
Scenario Manipulation: For each requirement–assumption pair, hypothetical scenarios were explored in which assumptions were omitted, altered, or allowed to evolve. These variations reflect plausible conditions such as increased communication latency, sensor inaccuracies, or changes in wind dynamics.
3.
RTD Impact Evaluation: Using well-established concepts from the RTD literature [4,6,7], the potential consequences of each assumption breakdown were analyzed. The evaluation focused on how these assumption failures could introduce issues into the requirement specifications, such as ambiguity, misalignment, or degradation over time.
4.
Interpretive Analysis: The analysis leveraged domain knowledge in cyber-physical systems and requirements engineering to interpret how assumption failures contribute to RTD formation.
This methodology supports a structured investigation into the influence of environmental assumptions on RTD formation. By grounding the analysis in an existing case study and synthesizing theoretical and practical knowledge, this study offers a foundation for deeper empirical and automated techniques in future work.

4. Results and Analysis

In order to illustrate how environmental assumptions contribute to the emergence of RTD, we analyze the assumptions provided in Table 2, by intentionally omitting them or introducing incorrect ones.

4.1. Geolocation Uncertainty (DR1)

The first requirement states that drones must determine their geolocation and neighboring drones’ positions with at least 99% confidence within a 3D region. This requirement is based on three environmental assumptions, i.e., A5–A7. One key assumption is that the extended Kalman filter (EKF) provides accurate geolocation estimates. EKF is a state estimation algorithm that fuses data from multiple sensors, such as GPS and IMU, to estimate position and velocity. The assumption implies that EKF provides sufficiently reliable and accurate estimates under expected operational conditions.
If this assumption is omitted or incorrect (e.g., due to sensor noise or GPS errors), then the requirement itself becomes unrealistic because it is difficult to guarantee 99% confidence in geolocation accuracy. The RTD that could emerge in this scenario is that the requirement relies on faulty assumptions. Consequently, the system may miscalculate separation distances, which would increase collision risks. If identified earlier, this would require modifying the requirement (DR1) to either accept a lower confidence level or impose constraints on operating conditions, for example, that drones can only operate in clear weather or in areas with strong GPS coverage. However, real-world conditions may not consistently support these constraints, which makes assumption validation essential before including them in the requirement. If these assumptions are not properly tested earlier, issues may not surface until later stages of testing or post-deployment, when the assumptions are exercised under real conditions. This form of RTD necessitates late-stage requirements revisions and incurs consequences commonly described as “interest”, such as additional testing cost and increased processing overhead. Developers may also need to modify algorithms, increase safety margins, and introduce backup localization methods.
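As an illustration of what checking assumptions A5–A7 against DR1 might involve, the sketch below tests whether an EKF-reported position standard deviation supports the 99% confidence bound, assuming a circular 2-D Gaussian error model (so that the radial error follows a Rayleigh distribution). Both the error model and the function name are illustrative assumptions, not part of the case study.

```python
import math

# For a circular 2-D Gaussian position error, the radius containing 99%
# of the probability mass is sigma * sqrt(-2 ln(1 - 0.99)) ~= 3.035 sigma
# (from inverting the Rayleigh CDF 1 - exp(-r^2 / (2 sigma^2))).
K_99 = math.sqrt(-2.0 * math.log(1.0 - 0.99))

def meets_dr1(sigma_m: float, required_radius_m: float) -> bool:
    """True if the EKF's reported per-axis position std-dev (metres)
    supports 99% confidence that the drone lies within
    required_radius_m of its estimated position."""
    return K_99 * sigma_m <= required_radius_m
```

Under this model, a 0.5 m standard deviation satisfies a 2 m confidence radius, while a 1.0 m standard deviation does not; degraded GPS coverage that inflates sigma can therefore silently invalidate DR1.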
RTD: Failure to validate these assumptions before encoding them into the requirement (e.g., by designing and executing a series of field tests with physical drones) could lead to an incorrect or unrealistic requirement (DR1). This type of debt is classified as unvalidated assumption debt, which refers to requirements that are based on unverified assumptions.

4.2. Stopping Distance Without Wind (DR2)

The second derived requirement states that each drone must compute its maximum stopping distance under non-windy conditions, considering its velocity. This requirement is based on three environmental assumptions, namely A8 to A10. The first assumption states: “All drones operate autonomously, eliminating human reaction delays.” This implies that drones are capable of making real-time decisions independently, without relying on manual input. For instance, if a drone detects a nearby vehicle, it is expected to adjust its flight path immediately without human intervention.
The requirement presumes predictable braking responses. However, if the assumption about instant reactivity is inaccurate, the requirement becomes flawed because it neglects possible delays introduced by sensor processing time, actuator latency, or variations in drone payload. Repaying the resulting RTD requires modifying the requirement to account for real-world delays and to explicitly define an acceptable latency threshold, both of which are addressed by environmental assumptions A9 and A10.
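The effect of relaxing the instant-reaction assumption can be illustrated with a simple kinematic sketch: a nonzero reaction latency adds the distance covered before braking begins to the classical v²/(2a) braking term. The constant-deceleration model and parameter values are illustrative simplifications, not figures from the case study.

```python
def stopping_distance(v: float, decel: float, latency_s: float = 0.0) -> float:
    """Maximum stopping distance (m) for ground speed v (m/s) and
    constant braking deceleration decel (m/s^2). Under the instant-
    reaction assumption (A8), latency_s is 0; a nonzero latency adds
    the distance v * latency_s travelled before braking begins."""
    if decel <= 0:
        raise ValueError("deceleration must be positive")
    return v * latency_s + v * v / (2.0 * decel)
```

For example, at 10 m/s with 5 m/s² of braking, the ideal stopping distance is 10 m, but a 0.2 s combined sensor-and-actuator delay lengthens it to 12 m, a 20% error that a requirement written under A8 alone would never surface.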
RTD: When these assumptions are not properly validated, the requirement may remain incomplete, potentially leading to incorrect stopping distance calculations. This type of debt is classified as incomplete assumption debt, referring to technical debt caused by the absence or incompleteness of essential environmental assumptions. Although this issue is also partially related to the unvalidated assumption debt category, the more appropriate classification is incomplete assumption debt, since the initial requirement fails to account for critical factors such as delay in response time, sensor lag, or actuator performance.

4.3. Impact of Wind on Stopping Distance (DR3)

This requirement states that drones must account for wind effects when computing stopping distances and is associated with four environmental assumptions, i.e., A11–A14. Because wind conditions are inherently variable and can change rapidly, requirements grounded in these assumptions must be continuously validated and updated to remain accurate.
The first assumption (A11) states that operators do not launch drones in unsafe wind conditions. If this assumption is omitted or turns out to be incorrect, drones may be deployed in high winds, leading to unstable flight dynamics that violate safe separation requirements. To address this, developers must either equip drones with onboard wind sensors capable of detecting unsafe conditions in real-time or implement strict pre-flight checks to enforce launch constraints.
Assumptions A12 and A13 concern the drone’s lateral drift due to wind. These assumptions imply that any sideways displacement will remain within controllable margins. However, if actual drift exceeds these thresholds, due to stronger-than-expected crosswinds or aerodynamic instability, the separation requirements must be revised to prevent mid-air collisions. For instance, if wind patterns grow more extreme over time due to broader environmental changes, original assumptions may no longer hold, and the requirements may become outdated.
Assumption A14 states that the Gazebo simulation environment accurately reflects drone performance under windy conditions. If this assumption is either invalid or missing, then any requirement derived from simulation data becomes unreliable. A mismatch between simulated and real-world drone behavior introduces significant RTD, as the requirements will not reflect actual performance.
Because wind conditions, drift tolerances, and stopping distances can evolve due to environmental variability, the requirements informed by A11–A14 are inherently unstable. Without regular reassessment and validation of these assumptions, drones may behave unpredictably in the field, which requires urgent and costly design revisions.
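A crude illustration of why assumptions A11–A13 matter: if wind is modeled, simplistically, as an additive shift in the ground speed that the brakes must cancel, the stopping distance grows quadratically with the tailwind. This first-order model is an assumption made only for illustration; real aerodynamic effects are far more complex, which is precisely why the simulation fidelity asserted by A14 is critical.

```python
def wind_adjusted_stopping(v: float, decel: float, tailwind: float,
                           latency_s: float = 0.0) -> float:
    """Stopping distance (m) when a tailwind (m/s, positive = pushing
    the drone forward) raises the effective ground speed the brakes
    must cancel. A deliberately crude first-order model: wind is
    treated as a velocity offset, not as a force on the airframe."""
    if decel <= 0:
        raise ValueError("deceleration must be positive")
    v_eff = max(v + tailwind, 0.0)  # headwinds cannot make distance negative
    return v_eff * latency_s + v_eff ** 2 / (2.0 * decel)
```

Even in this simplified model, a 2 m/s tailwind stretches the 10 m stopping distance of the previous example to 14.4 m, so separation requirements calibrated for calm air can be violated well before winds reach obviously unsafe levels.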
RTD: This type of debt is classified as outdated assumption debt. It refers to requirements that become obsolete or misaligned due to changes in the environment over time. If these assumptions are not actively monitored and updated, the system may fail to adapt to real-world operating conditions, leading to late-stage corrections and increased maintenance overhead.

4.4. Projected Distance During Communication Delays (DR4)

This requirement ensures that drones estimate their projected travel distance between communication updates to maintain safe separation. It relies on a key environmental assumption (A15), which must be validated to prevent miscalculations that could compromise operational safety.
Assumption A15 states that each drone is aware of its physical dimensions and communicates this information at startup. If this assumption is missing or incorrect, drones may lack accurate awareness of their own size, leading to miscalculations in separation distances. This can result in drones flying closer than intended, thereby increasing the risk of mid-air collisions. To mitigate this, developers may need to implement periodic rebroadcasting of size information or design automated validation routines to ensure that each drone has access to reliable size data throughout the mission.
Additionally, the requirement implicitly assumes predictable communication latency. In real-world deployments, however, communication delays can vary significantly due to network interference, congestion, or hardware limitations. If the system assumes minimal and constant latency, drones may base their projections on outdated positional data, potentially violating safe separation thresholds. In such cases, the requirement must be revised to incorporate latency bounds and adaptive error correction mechanisms that adjust projected travel distances dynamically.
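Treating latency as a bounded variable rather than a fixed constant, as the revision just described would require, can be sketched as a worst-case projection of a neighbor’s travel between state updates. The parameterization below is illustrative and not drawn from the case study.

```python
def projected_travel(v: float, update_interval_s: float,
                     latency_bound_s: float) -> float:
    """Worst-case distance (m) a neighbouring drone moving at speed v
    (m/s) may have travelled since its last received state message.
    Using an explicit latency *bound* instead of an assumed fixed
    latency keeps the projection conservative when delivery delays
    vary due to interference or congestion."""
    if update_interval_s < 0 or latency_bound_s < 0:
        raise ValueError("intervals must be non-negative")
    return v * (update_interval_s + latency_bound_s)
```

With a 1 s broadcast interval and a 0.5 s latency bound, a neighbor at 10 m/s must be assumed up to 15 m from its last reported position; a requirement that budgets only for the 10 m nominal case carries exactly the misconstrained debt described above.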
RTD: If these assumptions are not validated or regularly reassessed, the requirement may become based on flawed operational premises, resulting in misalignment with actual drone behavior. This type of debt is classified as misconstrained assumption debt, as the requirement imposes fixed constraints that do not hold under variable real-world conditions such as communication lag or missing dimension data. This can lead to late-stage reengineering to correct unsafe behavior or redefine system boundaries.

4.5. Safe Minimum Separation (DR5)

This requirement ensures that drones maintain a safe minimum separation distance at all times. It is based on a single environmental assumption (A16), which states that the built-in tolerances for separation distance calculations are sufficiently large to accommodate uncertainties and nonlinear interactions.
If this assumption is omitted or incorrect, the safety margins may be too narrow to handle real-world variability such as geolocation errors, wind-induced drift, or communication latency. In such cases, drones may unintentionally violate separation constraints, thereby increasing the risk of mid-air collisions.
To mitigate these risks, developers may need to adapt separation tolerances dynamically based on real-time environmental conditions. A fixed safety margin, although easier to implement, may become insufficient as environmental uncertainties evolve. For example, under turbulent conditions or high latency, the original separation parameters may no longer ensure safety. If the system assumes static tolerances that cannot be adjusted, the requirement must be revised to support dynamic safety margin computations.
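A dynamic safety margin of the kind described above might look like the following sketch. The base margin and scaling gains are assumed values for illustration, not parameters from the case study.

```python
# Sketch of a dynamic safety margin: the fixed tolerance of assumption A16 is
# replaced by a margin that grows with measured wind speed and communication
# latency (all constants are illustrative assumptions).

BASE_MARGIN_M = 5.0          # nominal separation tolerance
WIND_GAIN_M_PER_MPS = 0.5    # extra metres of margin per m/s of wind (assumed)
LATENCY_GAIN_M_PER_S = 20.0  # extra metres per second of latency (assumed)

def dynamic_margin(wind_mps: float, latency_s: float) -> float:
    """Inflate the base tolerance as environmental uncertainty grows, so the
    requirement does not silently rely on calm, low-latency conditions."""
    return (BASE_MARGIN_M
            + WIND_GAIN_M_PER_MPS * max(wind_mps, 0.0)
            + LATENCY_GAIN_M_PER_S * max(latency_s, 0.0))
```

Under calm, low-latency conditions this reduces to the static margin, while degraded conditions automatically widen the safety envelope.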
RTD: If the assumption of fixed tolerances is not validated early or does not account for dynamic factors, the requirement becomes rigid and misaligned with operational needs. This leads to a form of technical debt that requires late-stage revisions to increase flexibility or re-engineer safety logic. It is classified as overconstrained assumption debt: debt arising from requirements rendered inflexible by rigid assumptions, which reduces adaptability to real-world variability.

4.6. RTD Classification Summary

The RTD observed in the DroneResponse case study stems from the dependencies between environmental assumptions (A5–A16) and derived requirements (DR1–DR5). Based on the qualitative analysis conducted, requirements technical debt caused by environmental assumptions is classified into five categories:
  • Unvalidated assumption debt: This type of debt refers to requirements grounded in assumptions that were never verified or tested during development.
  • Incomplete assumption debt: This type arises from the omission of necessary environmental assumptions during the formulation of requirements.
  • Outdated assumption debt: This type refers to the technical debt that accumulates when requirements are not updated to reflect changes in environmental assumptions.
  • Misconstrained assumption debt: This type refers to technical debt arising from requirements that are based on environmental assumptions imposing incorrect or overly rigid constraints.
  • Overconstrained assumption debt: This type refers to requirements made overly rigid because they rest on inflexible assumptions, leaving no room to adapt to changing conditions.
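The five categories above can be read as a decision procedure over an assumption's recorded status. The following sketch is illustrative only (it is not a tool from the study) and makes that ordering explicit:

```python
# Illustrative sketch: map an assumption's status flags to the category of
# requirements technical debt it induces. The flags and their ordering are an
# interpretation of the five categories, not an artifact from the paper.

def classify_rtd(documented: bool, validated: bool,
                 still_current: bool, correct: bool, adaptable: bool) -> str:
    """Return the assumption-driven RTD category implied by the flags."""
    if not documented:
        return "incomplete assumption debt"      # assumption omitted entirely
    if not validated:
        return "unvalidated assumption debt"     # stated but never verified
    if not still_current:
        return "outdated assumption debt"        # once valid; environment moved on
    if not correct:
        return "misconstrained assumption debt"  # imposes wrong constraints
    if not adaptable:
        return "overconstrained assumption debt" # rigid; no room to vary
    return "no assumption-driven debt"
```

Encoding the taxonomy this way also suggests how assumption registries could flag candidate debt items automatically during requirements reviews.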
This classification establishes a structured framework for better understanding RTD and implementing more effective strategies to identify and mitigate its risks. Table 3 summarizes the five categories of assumption-driven RTD, highlighting their core conditions, how they differ, and examples from the case study. To strengthen the theoretical foundation of this work, we relate the proposed classification of RTD, driven by environmental assumptions, to the RTDQM developed by Perera et al. [38]. RTDQM is a conceptual framework that enables the quantification of RTD along three main dimensions: cost, effort, and system impact.
Whereas RTDQM focuses on quantifying observable consequences, this classification highlights the causal origins rooted in assumption failures. For example, unvalidated assumptions may lead to increased verification costs and rework, while evolving assumptions can cause system instability, thus impacting all three RTDQM dimensions. Figure 2 visualizes this relationship, showing how these five assumption-driven RTD categories intersect with RTDQM’s dimensions through representative RTD items.
This integration shows that the proposed classification not only extends RTD literature in a novel direction but also provides a practical foundation for enhancing quantification models like RTDQM. Future work will focus on mapping specific instances of the proposed classification to quantitative metrics in RTDQM, enabling more nuanced measurement and prioritization strategies for managing RTD in safety-critical systems.

5. Discussion and Implications

The findings reveal that environmental assumptions play a crucial role in the emergence of RTD. The classification framework proposed in this study provides a new lens through which RTD can be identified and mitigated early in the software lifecycle. This work also complements the RTDQM model proposed by Perera et al. [38]. While RTDQM focuses on quantifying RTD through measurable dimensions such as cost, effort, and system impact, this classification offers a causal perspective by tracing the origins of RTD to flawed, evolving, or missing environmental assumptions. Integrating our classification into RTDQM can support more nuanced assessments of RTD severity and help prioritize mitigation efforts based on their environmental root causes.
This classification also complements existing RTD taxonomies, such as the Type 0–2 categorization by Lenarduzzi and Fucci [40], by focusing on the underlying environmental assumptions involved in system modeling. While previous taxonomies emphasized smells, user needs, or mismatches, this work shows how the source of RTD may lie in unvalidated or evolving environmental assumptions.
Although this classification was developed through a case study in the sUAS domain, we believe it applies to a wide range of cyber-physical and safety-critical systems, such as autonomous vehicles, industrial IoT, and medical devices. These domains similarly depend on assumptions about dynamic, uncertain, and evolving environments, which make them susceptible to the same forms of assumption-driven RTD.
For example, in the medical domain, the control software of a personal insulin pump [41], which helps regulate blood glucose levels in diabetic patients, often relies on several implicit environmental assumptions about its operational environment. One requirement states that insulin should be delivered only when glucose levels are rising and the rate of increase is accelerating. This requirement depends on several key environmental assumptions, as follows: (1) the blood glucose sensor maintains high accuracy under varying conditions, (2) the insulin pump reliably actuates each time a dosage is triggered, and (3) sensor performance remains consistent over time without significant drift or degradation [41]. When such assumptions prove incorrect, are not adequately validated, or gradually break down over time, different forms of RTD can accumulate.
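The delivery rule described above can be sketched as a check on three consecutive glucose readings: insulin is delivered only when the level is rising and the rate of increase is itself increasing (a positive second difference). This is a simplified sketch in the spirit of Sommerville's example [41], not the actual pump software.

```python
# Sketch of the delivery rule: dose only when glucose is rising AND the rate
# of increase is accelerating. r0, r1, r2 are three consecutive readings,
# oldest first (a simplification of Sommerville's insulin pump example).

def should_deliver(r0: float, r1: float, r2: float) -> bool:
    """True when the level is rising and the rise itself is accelerating."""
    rising = r2 > r1
    accelerating = (r2 - r1) > (r1 - r0)
    return rising and accelerating
```

Note that the rule's correctness depends entirely on the three environmental assumptions listed above: if the sensor readings feeding r0–r2 are inaccurate, the logic is sound but the behavior is not.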
Take Assumption 1, for example. If the accuracy of the glucose sensor is not properly verified, perhaps due to sensor wear, contamination, or calibration drift, the system may either falsely detect a rise in glucose or fail to catch a critical spike. In such cases, the requirement becomes effectively invalid, as it no longer reflects the behavior of the system in real-world conditions. Addressing this kind of issue may require the introduction of recalibration procedures or the adjustment of threshold parameters in the requirements. This situation represents unvalidated assumption debt, where the reliance on unverified environmental assumptions leads to costly and delayed requirement corrections.
A similar issue arises from Assumption 2: if it is incomplete or not explicitly specified, the requirement implicitly assumes flawless hardware performance. If mechanical faults disrupt actuation, insulin may not be delivered even when triggered, creating a silent failure. This is an instance of incomplete assumption debt, where the omission of important environmental assumptions creates hidden vulnerabilities. In fact, failures of this nature have led to the recall of medical devices in the past [42].
Finally, Assumption 3 concerns the long-term stability of sensor performance. Over time, sensors may degrade due to environmental exposure, for example, yet the original requirement may continue to assume stable operation. As the mismatch between assumption and reality grows, system behavior becomes increasingly unreliable. This introduces outdated assumption debt, where obsolete assumptions persist in the specification, requiring changes such as updating the requirement to include periodic sensor recalibration or fallback mechanisms in case of degraded accuracy. These examples show how even well-defined, safety-critical requirements can accumulate technical debt when environmental assumptions are not explicitly documented, validated, or adjusted over time.
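A periodic drift check of the kind suggested above might be sketched as follows; the tolerance value is an assumed illustration, not a clinical specification.

```python
# Hypothetical sketch of a drift check that would surface the decay of
# Assumption 3: compare the sensor against a reference measurement and flag
# recalibration once the discrepancy exceeds a tolerance (assumed value).

DRIFT_TOLERANCE = 0.8  # mmol/L; illustrative, not a clinical threshold

def needs_recalibration(sensor_value: float, reference_value: float,
                        tolerance: float = DRIFT_TOLERANCE) -> bool:
    """True when the sensor has drifted beyond the tolerated error band."""
    return abs(sensor_value - reference_value) > tolerance
```

Making the check explicit turns the implicit stability assumption into a monitored, falsifiable condition, which is precisely what prevents outdated assumption debt from accumulating silently.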
The proposed classification categories capture fundamental patterns of misalignment between requirements and real-world conditions, regardless of the specific application. As such, this framework offers a generalizable lens for identifying and managing RTD in diverse engineering contexts.
A key practical insight from this study is that documenting and continuously validating environmental assumptions is not just a best practice; it is essential for sustainable requirements engineering. Several well-known system failures illustrate the risks of overlooking environmental assumptions during system development. The Ariane 5 rocket explosion [12] occurred due to the reuse of software from Ariane 4, which assumed lower horizontal velocity values; this led to a fatal overflow error [43]. The Uber self-driving car incident in 2018 [44] involved a fatal crash caused by a combination of system design flaws and incorrect environmental assumptions. The system failed to accurately and consistently classify a pedestrian walking a bicycle across the road, leading to delayed recognition of the need to brake. It also relied on a safety driver for emergency intervention, based on the flawed assumption that the driver would be fully attentive. In the case of the Therac-25 radiation therapy machine [45], several flawed and unvalidated assumptions contributed to fatal overdoses. These included the belief that hardware-safety interlocks were present and functional, that reused software would behave reliably in the new system, and that operators could correctly interpret and respond to ambiguous system error messages. These incidents underscore how unverified, outdated, or implicit environmental assumptions can contribute to system failures and lead to technical debt that originates as early as the requirements engineering phase. When such assumptions are not explicitly captured, validated, or updated, they can persist throughout the development lifecycle, creating long-term risks and hidden costs.
Overall, our findings suggest that environmental assumptions, if left undocumented or invalidated, can significantly contribute to the accumulation of RTD. This study contributes to the sustainability of software-intensive systems by exposing how unvalidated or evolving environmental assumptions introduce hidden forms of RTD, ultimately affecting maintainability and adaptability in safety-critical domains. By understanding these dynamics, requirements engineers can better anticipate long-term consequences and build more resilient and future-proof systems.

6. Threats to Validity

Several threats to validity may influence the findings of this study. In terms of construct validity, the classification of RTD types is based on the interpretation of environmental assumptions drawn from a specific case study; although bias was mitigated by grounding the analysis in well-established literature, subjective interpretation remains a potential limitation. Internal validity may be affected by dependence on the accuracy and completeness of the DroneResponse case study; any overlooked or misrepresented assumption within the original documentation could lead to misclassification or missed instances of RTD. External validity is a lesser concern, as our study, while focused on the sUAS domain, is supported by detailed analysis that also includes an illustrative example from the medical domain (the insulin pump). This cross-domain perspective increases the plausibility that our classification can generalize to other safety-critical systems. However, further validation in domains such as automotive or industrial IoT would strengthen confidence in its broader applicability. Lastly, reliability is addressed through the systematic and traceable methodology applied in our analysis; however, variations in how different analysts interpret assumptions and RTD manifestations may affect reproducibility. These threats highlight the need for continued evaluation of the framework in varied contexts and with broader expert participation.

7. Conclusions

This study sheds light on the critical role of environmental assumptions in the emergence of RTD. By analyzing a real-world case study from the DroneResponse project, the analysis demonstrates how unvalidated, incorrect, evolving, or rigid environmental assumptions can lead to various forms of RTD, ultimately impacting system reliability, safety, and maintainability. We propose a novel classification framework that identifies five distinct types of RTD rooted in environmental assumptions. This classification offers a structured approach to understanding how assumptions, when left unchecked, contribute to technical debt early in the software lifecycle. Moreover, this study contributes to the sustainability of software-intensive systems by revealing how flawed or evolving assumptions introduce latent forms of RTD that impact long-term maintainability and system adaptability.
Future work will focus on validating this classification across diverse domains such as automotive systems and industrial IoT, where environmental assumptions are similarly complex and volatile. In particular, we plan to empirically validate the classification of assumption-driven RTDs through controlled experiments and real-world case studies. We will conduct simulation-based experiments using cyber-physical testbeds (e.g., drone simulators like Gazebo or AirSim) to systematically vary environmental assumptions, measure their effects on system behavior, and categorize RTDs. We also plan to perform quantitative analyses (e.g., in autonomous vehicles, medical devices, and industrial automation) to test the generalizability of the framework. These studies will demonstrate the practical relevance and robustness of the classification for use in safety-critical engineering contexts.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The case study analyzed in this work is based on the publicly available documentation of the DroneResponse system [34]. All data used for analysis, including system requirements and environmental assumptions, are derived from open-source materials or previously published literature. No proprietary or confidential data were used in this study.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RTD       Requirements Technical Debt
sUAS      small Uncrewed Aerial System
RTDQM     Requirements Technical Debt Quantification Model
DR        Derived Requirement
RE        Requirements Engineering
CPS       Cyber-Physical Systems
EKF       Extended Kalman Filter
GPS       Global Positioning System
IMU       Inertial Measurement Unit
IoT       Internet of Things

References

  1. McConnell, S. Managing Technical Debt; Technical Report; Construx Software Builders: Bellevue, WA, USA, 2008. [Google Scholar]
  2. Brown, N.; Cai, Y.; Guo, Y.; Kazman, R.; Kim, M.; Kruchten, P.; Lim, E.; MacCormack, A.; Nord, R.; Ozkaya, I.; et al. Managing technical debt in software-reliant systems. In Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research (FoSER ’10), Santa Fe, NM, USA, 7–8 November 2010; pp. 47–52. [Google Scholar] [CrossRef]
  3. Kruchten, P.; Nord, R.L.; Ozkaya, I. Technical Debt: From Metaphor to Theory and Practice. IEEE Softw. 2012, 29, 18–21. [Google Scholar] [CrossRef]
  4. Ernst, N.A. On the role of requirements in understanding and managing technical debt. In Proceedings of the 2012 Third International Workshop on Managing Technical Debt (MTD), Zurich, Switzerland, 5 June 2012; pp. 61–64. [Google Scholar]
  5. Abad, Z.S.H.; Ruhe, G. Using real options to manage technical debt in requirements engineering. In Proceedings of the 2015 IEEE 23rd International Requirements Engineering Conference (RE), Ottawa, ON, Canada, 24–28 August 2015; pp. 230–235. [Google Scholar]
  6. Melo, A.; Fagundes, R.; Lenarduzzi, V.; Santos, W.B. Identification and measurement of Requirements Technical Debt in software development: A systematic literature review. J. Syst. Softw. 2022, 194, 111483. [Google Scholar] [CrossRef]
  7. Perera, J.; Tempero, E.; Tu, Y.C.; Blincoe, K. Modelling the quantification of requirements technical debt. Requir. Eng. 2024, 29, 421–458. [Google Scholar] [CrossRef]
  8. Robiolo, G.; Scott, E.; Matalonga, S.; Felderer, M. Technical debt and waste in non-functional requirements documentation: An exploratory study. In Proceedings of the Product-Focused Software Process Improvement: 20th International Conference, PROFES 2019, Barcelona, Spain, 27–29 November 2019; Springer: Cham, Switzerland, 2019; pp. 220–235. [Google Scholar]
  9. Alenazi, M. Requirements Technical Debt Through the Lens of Environment Assumptions. In Proceedings of the 2025 IEEE/ACM International Conference on Technical Debt (TechDebt), Ottawa, ON, Canada, 27–28 April 2025; pp. 40–46. [Google Scholar]
  10. Jackson, M. The Meaning of Requirements. Ann. Softw. Eng. 1997, 3, 5–21. [Google Scholar] [CrossRef]
  11. Yang, C.; Liang, P.; Avgeriou, P. Assumptions and their management in software development: A systematic mapping study. Inf. Softw. Technol. 2018, 94, 82–110. [Google Scholar] [CrossRef]
  12. Lann, G.L. An analysis of the Ariane 5 flight 501 failure-a system engineering perspective. In Proceedings of the 1997 Workshop on Engineering of Computer-Based Systems (ECBS ’97), Monterey, CA, USA, 24–28 March 1997; pp. 339–346. [Google Scholar]
  13. Knight, J. Safety critical systems: Challenges and directions. In Proceedings of the 24th International Conference on Software Engineering (ICSE 2002), Orlando, FL, USA, 25 May 2002; pp. 547–550. [Google Scholar]
  14. Steingruebl, A.; Peterson, G. Software Assumptions Lead to Preventable Errors. IEEE Secur. Priv. 2009, 7, 84–87. [Google Scholar] [CrossRef]
  15. Samin, H.; Walton, D.; Bencomo, N. Surprise! Surprise! Learn and Adapt. In Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’25), Detroit, MI, USA, 19–23 May 2025; pp. 1821–1829. [Google Scholar]
  16. Sawyer, P.; Bencomo, N.; Whittle, J.; Letier, E.; Finkelstein, A. Requirements-Aware Systems: A Research Agenda for RE for Self-adaptive Systems. In Proceedings of the RE 2010, 18th IEEE International Requirements Engineering Conference, Sydney, NSW, Australia, 27 September–1 October 2010; pp. 95–103. [Google Scholar]
  17. Lewis, G.A.; Mahatham, T.; Wrage, L. Assumptions Management in Software Development; Technical Report; Software Engineering Institute, Carnegie Mellon University: Pittsburgh, PA, USA, 2004. [Google Scholar]
  18. Yang, C.; Liang, P.; Avgeriou, P.; Eliasson, U.; Heldal, R.; Pelliccione, P. Architectural Assumptions and Their Management in Industry—An Exploratory Study. In Proceedings of the Software Architecture—11th European Conference (ECSA 2017), Canterbury, UK, 11–15 September 2017; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2017; Volume 10475, pp. 191–207. [Google Scholar]
  19. Ali, S.; Lu, H.; Wang, S.; Yue, T.; Zhang, M. Uncertainty-Wise Testing of Cyber-Physical Systems. Adv. Comput. 2017, 106, 23–94. [Google Scholar]
  20. Zhang, M.; Selic, B.; Ali, S.; Yue, T.; Okariz, O.; Norgren, R. Understanding Uncertainty in Cyber-Physical Systems: A Conceptual Model. In Proceedings of the European Conference on Modelling Foundations and Applications (ECMFA), Vienna, Austria, 6–7 July 2016; pp. 247–264. [Google Scholar]
  21. Liu, C.; Zhang, W.; Zhao, H.; Jin, Z. Analyzing Early Requirements of Cyber-physical Systems through Structure and Goal Modeling. In Proceedings of the 20th Asia-Pacific Software Engineering Conference (APSEC 2013), Bangkok, Thailand, 2–5 December 2013; Volume 1, pp. 140–147. [Google Scholar]
  22. Jackson, M. Problems and requirements (software development). In Proceedings of the Second IEEE International Symposium on Requirements Engineering, York, UK, 27–29 March 1995; pp. 2–9. [Google Scholar]
  23. Peng, Z.; Rathod, P.; Niu, N.; Bhowmik, T.; Liu, H.; Shi, L.; Jin, Z. Testing software’s changing features with environment-driven abstraction identification. Requir. Eng. 2022, 27, 405–427. [Google Scholar] [CrossRef] [PubMed]
  24. Ghezzi, C.; Sharifloo, A.M. Quantitative verification of non-functional requirements with uncertainty. In Proceedings of the Sixth International Conference on Dependability and Computer Systems DepCoS-RELCOMEX 2011, Wroclaw, Poland, 27 June–11 July 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 47–62. [Google Scholar]
  25. Bloem, R.; Chockler, H.; Ebrahimi, M.; Strichman, O. Specifiable robustness in reactive synthesis. Form. Methods Syst. Des. 2022, 60, 259–276. [Google Scholar] [CrossRef]
  26. Mohammadinejad, S.; Deshmukh, J.V.; Puranic, A.G. Mining environment assumptions for cyber-physical system models. In Proceedings of the 2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS), Sydney, NSW, Australia, 21–25 April 2020; pp. 87–97. [Google Scholar]
  27. Fifarek, A.W.; Wagner, L.G.; Hoffman, J.A.; Rodes, B.D.; Aiello, M.A.; Davis, J.A. SpeAR v2.0: Formalized past LTL specification and analysis of requirements. In Proceedings of the NASA Formal Methods: 9th International Symposium (NFM 2017), Moffett Field, CA, USA, 16–18 May 2017; Springer: Cham, Switzerland, 2017; pp. 420–426. [Google Scholar]
  28. Yang, X.; Chen, X.; Wang, J. A Model Checking Based Software Requirements Specification Approach for Embedded Systems. In Proceedings of the 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW), Hannover, Germany, 4–5 September 2023; pp. 184–191. [Google Scholar]
  29. van Lamsweerde, A.; Letier, E. Integrating Obstacles in Goal-Driven Requirements Engineering. In Proceedings of the 1998 International Conference on Software Engineering (ICSE 98), Kyoto, Japan, 19–25 April 1998; pp. 53–62. [Google Scholar]
  30. Alrajeh, D.; Kramer, J.; van Lamsweerde, A.; Russo, A.; Uchitel, S. Generating obstacle conditions for requirements completeness. In Proceedings of the 34th International Conference on Software Engineering (ICSE 2012), Zurich, Switzerland, 2–9 June 2012; pp. 705–715. [Google Scholar]
  31. Alrajeh, D.; Cailliau, A.; van Lamsweerde, A. Adapting requirements models to varying environments. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering (ICSE ’20), Seoul, Republic of Korea, 5–11 October 2020; pp. 50–61. [Google Scholar]
  32. Sturmer, S.; Niu, N.; Bhowmik, T.; Savolainen, J. Eliciting environmental opposites for requirements-based testing. In Proceedings of the 2022 IEEE 30th International Requirements Engineering Conference Workshops (REW), Melbourne, VIC, Australia, 15–19 August 2022; pp. 10–13. [Google Scholar]
  33. Amin, M.R.; Bhowmik, T.; Niu, N.; Savolainen, J. Environmental Variations of Software Features: A Logical Test Cases’ Perspective. In Proceedings of the 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW), Hannover, Germany, 4–5 September 2023; pp. 192–198. [Google Scholar]
  34. Granadeno, P.A.A.; Bernal, A.M.R.; Al Islam, M.N.; Cleland-Huang, J. An Environmentally Complex Requirement for Safe Separation Distance Between UAVs. In Proceedings of the 2024 IEEE 32nd International Requirements Engineering Conference Workshops (REW), Reykjavik, Iceland, 24–25 June 2024; pp. 166–175. [Google Scholar]
  35. van Lamsweerde, A. Requirements Engineering: From System Goals to UML Models to Software Specifications, 1st ed.; Wiley Publishing: Hoboken, NJ, USA, 2009. [Google Scholar]
  36. Cunningham, W. The WyCash portfolio management system. ACM SIGPLAN OOPS Messenger 1992, 4, 29–30. [Google Scholar] [CrossRef]
  37. Rios, N.; Spínola, R.O.; Mendonça, M.; Seaman, C. The most common causes and effects of technical debt: First results from a global family of industrial surveys. In Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, Oulu, Finland, 11–12 October 2018; pp. 1–10. [Google Scholar]
  38. Perera, J.; Tempero, E.; Tu, Y.C.; Blincoe, K. Quantifying requirements technical debt: A systematic mapping study and a conceptual model. In Proceedings of the 2023 IEEE 31st International Requirements Engineering Conference (RE), Hannover, Germany, 4–8 September 2023; pp. 123–133. [Google Scholar]
  39. Fowler, M. Technical Debt Quadrant. 2009. Available online: https://martinfowler.com/bliki/TechnicalDebtQuadrant.html (accessed on 17 July 2025).
  40. Lenarduzzi, V.; Fucci, D. Towards a holistic definition of requirements debt. In Proceedings of the 2019 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), Porto de Galinhas, Brazil, 19–20 September 2019; pp. 1–5. [Google Scholar]
  41. Sommerville, I. Software Engineering, 9th ed.; Pearson Education Inc.: London, UK, 2011. [Google Scholar]
  42. AACE. RECALL: Medtronic MiniMed Insulin Pumps Recalled for Incorrect Insulin Dosing. 2020. Available online: https://pro.aace.com/recent-news-and-updates/recall-medtronic-minimed-insulin-pumps-recalled-incorrect-insulin-dosing (accessed on 17 July 2025).
  43. Tirumala, A.S. An Assumptions Management Framework for Systems Software. Ph.D. Dissertation, University of Illinois at Urbana-Champaign, Champaign, IL, USA, 2006. [Google Scholar]
  44. Kohli, P.; Chadha, A. Enabling pedestrian safety using computer vision techniques: A case study of the 2018 Uber Inc. self-driving car crash. In Proceedings of the Future of Information and Communication Conference, San Francisco, CA, USA, 14–15 March 2019; Springer: Cham, Switzerland, 2019; pp. 261–279. [Google Scholar]
  45. Leveson, N.G.; Turner, C.S. An investigation of the Therac-25 accidents. Computer 1993, 26, 18–41. [Google Scholar] [CrossRef]
Figure 1. The Machine and the Environment, where R and E are private to the environmental domain, P and C are private to the machine domain, and S resides in the shared phenomena; (adapted from [10,23]).
Figure 2. Assumption-driven RTD categories mapped to RTDQM dimensions through RTD items.
Table 1. Key differences between technical debt in the requirements phase and the implementation phase [1,2,3,7,38,39].
Aspect | Requirement Phase | Implementation Phase
Origin | Incomplete, vague, or misunderstood requirements; missing or incorrect assumptions; neglected user needs; failure to capture user feedback. | Poor coding practices; shortcuts; lack of refactoring, testing, or architecture planning; outdated dependencies; insufficient modularization.
Impact on Development | Incomplete functionality, safety risks, rework, misalignment with business goals, or large-scale redesigns. | Poor code quality, increased maintenance, localized bugs, and higher refactoring costs.
Visibility | Often unnoticed until late-stage testing or post-deployment; typically emerges during integration. | Usually detected during late development, testing, or feature addition.
Cost of Resolution | High, as it may require redesign, extensive rework, or addressing regulatory compliance. | Moderate, as it is typically resolved through targeted refactoring or added tests.
Risk Level | Strategic risk (e.g., functional failure, safety, or compliance issues). | Technical or operational risk (e.g., performance degradation, maintainability issues).
Table 2. Summary of the requirements (DR1–DR5) and their associated environmental assumptions (A5–A16) identified in the DroneResponse project, adapted from [34].
DR1: Geolocation Accuracy
Description: Drones must determine their geolocation and neighboring drones’ positions with at least 99% confidence within a 3D region.
Environmental Assumptions:
  - A5: The extended Kalman filter (EKF) correctly estimates the drone’s latitude/longitude with a 68% confidence interval.
  - A6: The EKF estimates altitude with similar accuracy.
  - A7: Geolocation errors follow a normal distribution and can be scaled to 99% confidence.
Type: Adjacent System

DR2: Stopping Distance Without Wind
Description: Each drone must compute its maximum stopping distance under non-windy conditions, considering its velocity.
Environmental Assumptions:
  - A8: All drones operate autonomously, eliminating human reaction delays.
  - A9: Communication latency over the Doodle Labs Smart Radio is between 6 and 60 ms.
  - A10: Environmental factors affecting communication delays are negligible.
Type: Operational environment (A8); Adjacent System (A9, A10)

DR3: Impact of Wind on Stopping Distance
Description: Drones must account for wind effects when computing stopping distance.
Environmental Assumptions:
  - A11: Operators do not launch drones in unsafe wind conditions.
  - A12: Sideways drift is minimal and remains within acceptable limits.
  - A13: Tailwinds have a greater impact on stopping distance than other wind directions.
  - A14: The Gazebo simulation model accurately represents drone behavior in wind conditions.
Type: Process (A11); Physical environment (A12, A13); Process (A14)

DR4: Projected Distance During Communication Delays
Description: Drones must estimate the distance they and their neighbors will travel between status updates.
Environmental Assumptions:
  - A15: Every drone knows its physical dimensions, broadcasts this information upon startup, and notifies new drones upon their entry to the airspace.
Type: Operational environment

DR5: Maintaining Safe Minimum Separation
Description: Drones must maintain a safe minimum separation distance at all times.
Environmental Assumptions:
  - A16: The tolerances in separation distance calculations are large enough to account for uncertainties and non-linear interactions.
Type: Operational environment
Table 3. Summary of Assumption-Driven RTD Categories.
Assumption-Driven RTD | Defining Condition | Distinguishing Feature | Example from Case Study
Unvalidated Assumption Debt | The assumption is stated but not verified or tested under real-world or expected conditions. | The assumption is explicit, but its correctness is uncertain. | Assuming GPS accuracy is always within 2 m without field validation.
Incomplete Assumption Debt | Key environmental factors are omitted entirely from the requirement specification. | The assumption is absent or only partially captured. | Not specifying wind variability in the separation distance requirements.
Outdated Assumption Debt | The assumption was once valid but has changed due to external/system evolution. | Reflects temporal decay or shifts in operational conditions. | Assuming reliable cloud connectivity that degrades as the deployment context changes.
Misconstrained Assumption Debt | The assumption is explicitly stated but factually wrong. | The assumption is invalid in context or leads to faulty behavior. | Assuming fixed communication latency when network delays fluctuate.
Overconstrained Assumption Debt | The system lacks the flexibility to adapt when assumptions break or vary. | No alternative or adaptive mechanism is provided. | Assuming static geofence boundaries, with no support for dynamic airspace constraints.
