Article

Empirical Validation of Software Engineering Deadpoints: An Expert Practitioner Survey

by
Abdullah A. H. Alzahrani
Computers Department, Engineering and Computers College—Alqunfuda, Umm Al Qura University, Makkah 24382, Saudi Arabia
Information 2026, 17(3), 291; https://doi.org/10.3390/info17030291
Submission received: 22 February 2026 / Revised: 12 March 2026 / Accepted: 14 March 2026 / Published: 17 March 2026

Abstract

A state of terminal stagnation is often reached by software projects despite the presence of advanced tools, and these occurrences are defined within this study as software engineering deadpoints, where the cost of system recovery is frequently found to be higher than the actual value of the software. While many factors are seen to lead toward project failure, it is suggested by the evidence that technical debt is the main cause of such failures. A significant proportion (23.5%) of these fatal issues arises during the early architectural phases of development, and it is noted that these problems often remain hidden until they become unrecoverable. The data collected during this research show that projects facing technical obstacles (Recovery Score: 4.24) are much harder to save than those suffering from process obstacles (Recovery Score: 5.38). It was also observed that a steady reluctance to refactor old logic and an excessive number of code revisions are the most reliable signs that a project is approaching a point of no return. Because these warning signs are often overlooked by management, the eventual failure of the system is frequently viewed as an unexpected event rather than a predictable outcome of poor early choices. By defining these terminal states, this work provides those in leadership roles with a method to differentiate between minor delays and total failure, thereby assisting teams in avoiding the heavy economic losses associated with unproductive development paths.

1. Introduction

1.1. Overview

Current software development environments are characterized by a pace and level of intricacy that frequently outstrip existing management capabilities. Although sophisticated frameworks such as the Project Management Body of Knowledge (PMBOK) [1,2] are widely used, the inability to prevent project failure continues to be a persistent challenge for the industry. Historically, academic study has interpreted project results through a simple binary of success or failure [3,4]; however, individuals working within the software field recognize that the actual status of a project is rarely so clearly divided. Professional developers frequently observe a ‘gray zone’ defined by terminal stagnation, in which initiatives continue to absorb financial and human resources even when technical advancement has stopped. Within this study, these conditions are described as software engineering deadpoints, which are reached when a system becomes functionally restricted or when the total expense required for restoration outweighs the actual utility of the software. This view expands upon Lehman’s laws of software evolution, with a particular focus on the law of increasing complexity, and proposes that a deadpoint marks the specific limit at which structural instability becomes irreversible [5,6,7].

1.2. Motivation

The motivation for this investigation was generated from the enormous financial losses attributed to poor software quality, an issue that was valued at approximately $2.08 trillion in the United States in 2020 [8]. Although the topics of technical debt [9,10] and requirement unpredictability [11,12,13,14,15,16,17] have been explored by a wide body of research, there is a clear absence of practical techniques for identifying the exact time or event at which a software project becomes unmaintainable. Professionals in the field have observed that existing studies fail to clarify the shift from a struggling effort to a non-feasible one, even though unused code and technical errors are frequently discussed [18].
The establishment of these boundaries is viewed as essential for industry because of the high failure rates connected to structural instability and the dangers of restricted knowledge sharing [19]. These restraints are intended to protect organizations from the extended and unproductive work cycles that are commonly seen before a failing project is finally abandoned [20]. By defining these thresholds, a more formal understanding of project feasibility is provided to assist in the prevention of wasted resources.

1.3. Research Questions

The investigation is organized around three central questions to examine these conditions. RQ1 aims to classify the specific traits of project stagnation: What elements form the classification of software engineering deadpoints as observed by experienced professionals in the software engineering field?
The underlying causes of these terminal conditions are analyzed through RQ2: In what ways do influences such as the unpredictability of requirements and the accumulation of technical debt relate to the emergence of a terminal deadpoint?
RQ3 highlights the importance of early detection to allow timely adjustments: Which observable warning signs, such as a reluctance to restructure code or high rates of revision, function as the most consistent indicators of an approaching deadpoint? This part of the inquiry is intended to identify reliable signals that warn those responsible for a project of a coming failure before the financial requirements for a recovery grow too large to be justified.

1.4. Research Contributions

This paper adds to the field primarily by defining specific limits for project failure. To start, it creates and verifies a unified three-tier classification of software engineering deadpoints that consists of technical, process, and organizational categories. This structure is applied consistently throughout the study to provide a clear framework for identifying project failure [21]. This work offers those in the field a consistent set of terms to help identify the instant an initiative transitions from being difficult to maintain to being functionally impossible to save.
In addition, a structured ranking of causes is presented by evaluating the relative influence of various pressures on the overall stability of a project. Frequent changes in requirements and the buildup of technical debt are identified as the main causes of permanent stagnation, which gives management a clear order of risks to address. This part of the work moves scholarly conversation away from vague reasons for failure and toward a more detailed grasp of the specific forces that result in a total breakdown.
Additionally, a method for forecasting future states is established through the assessment of stability indicators, which are used to pinpoint where the accumulation of technical debt begins to hinder technical advancement. Reliable early signs of failure are identified by examining events such as frequent code revisions and the noticeable tendency for developers to avoid certain parts of the system [22]. The adoption of a preventative maintenance strategy into project management is supported by this observation, ensuring that necessary changes occur while the possibility of fixing the system still falls within a technically and financially practical range. It is suggested that the early detection of these habits will allow for the correction of structural flaws before the project enters a deadpoint state.
The understanding of how projects withstand challenges is improved through the study of recovery possibilities, and a clear line is drawn between process-related obstacles that can be fixed and technical failures that are likely terminal by measuring the difficulty of reversing different project conditions [23]. This evidence-based approach helps those in charge of making decisions determine whether attempting to restore a system is a sensible use of funds or if it is more appropriate to officially close the project.

1.5. Paper Outline

The following four sections are organized to present a thorough examination of terminal project states by beginning with a theoretical grounding before moving into the specific results of the study. Within Section 2, the underlying theory for this work is established by integrating previous scholarly work regarding software entropy and the buildup of technical debt in order to explain how structural problems accumulate over time. Section 3 outlines the practical research design by describing how participants were chosen and how the survey of experienced professionals was conducted to ensure a wide range of industry perspectives. In Section 4, a detailed evaluation of the information collected is presented to verify the categories within the deadpoint classification while assessing the strength of various warning signals that indicate a project is nearing failure. Lastly, the broader importance of this work is summarized in Section 5, where the specific findings are brought together to offer a final perspective on the project outcomes and to suggest how these results can be used to improve future management practices.

2. Background and Related Work

Research into software project outcomes, specifically failures [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42], has largely focused on binary classifications of success or failure states and causes. However, the identification of deadpoints indicates a non-linear sequence of systemic decline that remains unaddressed by these models. In order to provide a theoretical framework for this concept, this section integrates literature on the economic costs of software quality, the principles of software entropy, and the specific pressures created by technical debt and requirement volatility.

2.1. The Economic and Structural Impact of Software Quality

The economic impact resulting from poor software engineering practices [43,44,45] is seen as significant. Recent estimates indicate that poor software quality cost the U.S. economy approximately $2.08 trillion in 2020 [8]. This problem is not merely attributed to post-release defects, but it is instead largely driven by failed projects and legacy system debt, and this decline is often framed structurally by the laws of software evolution as proposed by Lehman, specifically the law of increasing complexity. It is suggested by this principle that the complexity of a software system rises naturally over time in the absence of efforts toward improvement [4]. If this growth outpaces the developers’ cognitive capacity, the system enters a state of entropy, leading to an exponential increase in remediation costs.
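As a rough illustration only (the growth rate and dollar figures below are hypothetical, not drawn from the cited reports), the entropy argument can be sketched as a compounding remediation cost: each release shipped without corrective refactoring multiplies the cost of a later structural fix.

```python
def remediation_cost(initial_cost, growth_rate, releases):
    """Hypothetical exponential model of remediation cost under entropy:
    each release without corrective refactoring compounds the cost of a
    later structural fix by a fixed factor."""
    return initial_cost * (1 + growth_rate) ** releases

# At an assumed 10% compounding per release, a $1,000 fix roughly
# doubles in cost within seven releases.
print(round(remediation_cost(1_000, 0.10, 7), 2))  # → 1948.72
```

The exponential shape, not the specific rate, is the point: deferring improvement work makes recovery cost grow faster than linearly, which is what eventually pushes a system past economic repairability.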

2.2. Technical Debt and the “Point of No Return”

Technical debt [46,47,48,49,50,51,52,53,54,55] can be defined as the future cost of additional rework imposed by choosing an easier short-term solution instead of a more difficult approach that would be better in the long term [9]. While technical debt is often considered manageable during its initial stages, a “tipping point” is identified in the literature where the interest on this debt consumes the entirety of development velocity.
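To make the tipping-point notion concrete, the following minimal sketch (assuming, purely for illustration, that debt “interest” grows by a fixed number of story points per sprint) estimates when effective velocity reaches zero:

```python
def sprints_to_tipping_point(raw_velocity, debt_interest_per_sprint):
    """Estimate the sprint at which accumulated technical-debt 'interest'
    consumes the team's entire raw velocity (effective velocity <= 0).

    Hypothetical linear model: interest grows by a fixed amount each
    sprint; real debt growth is rarely this regular."""
    sprint, interest = 0, 0.0
    while interest < raw_velocity:
        sprint += 1
        interest += debt_interest_per_sprint
    return sprint

# With 20 story points of raw velocity and interest growing by
# 2 points per sprint, velocity is fully consumed at sprint 10.
print(sprints_to_tipping_point(20, 2))  # → 10
```

Under this toy model, the tipping point is simply where cumulative interest equals raw capacity; in practice, the survey findings below suggest it is signaled behaviorally (fearful refactoring, churn) well before it is visible in velocity charts.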
In addition, research regarding “fearful refactoring” suggests that architectural fragility is signaled when developers actively avoid the modification of specific modules due to an elevated risk of regression [56,57]. Such avoidance is posited as a primary indicator that a potential deadpoint has been reached.

2.3. Requirement Volatility and Process Stagnation

Beyond technical constraints, requirement volatility is regarded as a primary reason for project failure. Requirement volatility can be regarded as the degree of change in software requirements during the development lifecycle, and it is frequently observed that high volatility leads to requirement circularity, a condition in which the constant change of objectives prevents the establishment of a stable architectural baseline [11,12,13,14,15,16,17].
It is indicated in various studies that when requirement volatility is coupled with high turnover or “knowledge silos”, the resulting loss of project vision creates a process deadpoint. This is characterized as a state where resources continue to be consumed without the project moving closer to a functional release [58,59].

2.4. Identifying the Literature Gap

While the causes of failure are covered extensively in the existing literature, a notable deficit is observed in formal frameworks that identify the specific “point of no return” for a project. Most research has focused on the identification of “code smells” or general mismanagement [18], yet the transition from a recoverable state of debt to a terminal deadpoint remains inadequately categorized. This study addresses this gap by the validation of a taxonomy that distinguishes between different types of terminal stagnation and provides empirical markers for their identification.

3. Methodology

The primary objective of this research was to empirically validate the deadpoint taxonomy and investigate the causal relationships between project stressors and terminal stagnation. To achieve this, a descriptive research design was adopted, in which an expert practitioner survey was used to obtain data from senior professionals within the software engineering industry. This approach was selected so that the theoretical framework of deadpoints is grounded in the lived experience of those managing high-complexity systems, and it is intended that the collective knowledge of these individuals be used to define the boundaries of project feasibility. Because the insights of these experts are considered highly valuable, the validity of the study is increased by the inclusion of diverse industrial perspectives.

3.1. Survey Design and Instrumentation

The survey instrument was developed so that software engineering concepts are translated into measurable indicators. Structure was provided through four distinct sections: professional demographics, validation of the unified three-tier taxonomy (technical, process, and organizational), causal factor ranking, and predictive signal assessment. To ensure construct validity, deadpoints were defined for participants as project states that are functionally irreversible or where the cost of recovery exceeds the system value. The survey asked participants to consider projects where the recovery cost was higher than the system value. In this research, recovery cost was defined as the total engineering hours and resources required to fix structural issues, whereas system value was defined as the utility and financial gain the software provides to the user. In addition, we relied on expert judgment to find the point where the effort to fix a project is no longer worth the return, as these amounts of resources vary. Moreover, while this approach is descriptive by nature, it accounts for how recovery levels vary across different project scales and organizational contexts, providing a grounded criterion for more rigorous mathematical validation in future studies. The survey used five-point scales to measure how much the experts agreed with each point and how often they observed these events. In order to lower the risk of central bias, a forced-ranking system was also used to determine the order of causal factors. This method makes the data more reliable by requiring participants to choose a clear order for the risks they see in their work.

3.2. Participant Selection and Professional Profile

A purposive sampling strategy was applied to target expert practitioners, and these individuals were defined for this study as those holding senior technical or leadership roles with significant industry experience. This criterion was considered essential to ensure that sufficient exposure to the long-term lifecycle of software projects and the stalling phenomena under investigation had been gained by the respondents. A diverse cross-section of the industry was represented by the cohort, and positions such as Software Architects, Technical Leads, CTOs, and Project Managers were included in the survey group. The majority of participants reported over a decade of professional experience, ensuring familiarity with long-term project lifecycles.

3.3. Data Collection and Analysis Procedures

Data collection was conducted via Google Forms, a secure and anonymous digital platform, which encouraged participants to report project failures and organizational challenges candidly. In addition, as detailed in Appendix A, the survey was structured into four thematic sections: professional demographics, taxonomy validation, causal factor ranking, and predictive signal assessment, allowing for a systematic translation of qualitative professional experiences into measurable quantitative data. Quantitative data were analyzed using descriptive statistics to determine the mean agreement scores for the taxonomy and the frequency of deadpoint encounters, and causal factors were analyzed using mean rank analysis so that a definitive hierarchy of contributors could be established. Additionally, a comparative analysis was made on the recovery feasibility scores for technical versus process deadpoints to quantify the perceived irreversibility of these states.
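The two analysis steps can be reproduced with ordinary descriptive statistics; the sketch below uses hypothetical responses (not the study's raw data) to show the mean-agreement and mean-rank computations:

```python
from statistics import mean, stdev

# Hypothetical five-point Likert responses for one taxonomy item,
# and forced rankings (1 = top cause) for two causal factors.
likert_responses = [3, 2, 4, 2, 3, 1, 4, 3]
rankings = {
    "requirement_volatility": [1, 3, 2, 2, 3, 4],
    "technical_debt":         [2, 1, 3, 4, 3, 3],
}

agreement_mean = mean(likert_responses)   # mean agreement score
agreement_sd = stdev(likert_responses)    # sample standard deviation

# Mean rank analysis: a lower mean rank indicates a stronger
# perceived cause, yielding the causal hierarchy.
hierarchy = sorted(rankings, key=lambda f: mean(rankings[f]))

print(round(agreement_mean, 2), round(agreement_sd, 2))
print(hierarchy)
```

With these made-up numbers, requirement volatility (mean rank 2.50) outranks technical debt (mean rank ≈ 2.67), mirroring the shape, though not the exact values, of the ordering reported in Section 4.3.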

4. Results and Discussion

The empirical findings derived from the practitioner survey are presented and discussed in this section. A quantitative basis for the deadpoint concept is provided by the data, and the specific triggers and indicators characterizing terminal project stagnation are highlighted by these statistics. Additionally, by reviewing these findings, a more formal method for the assessment of project health is established for use by organizations.

4.1. Participant Profile and Expertise

A diverse cross-section of the software engineering industry was represented by the group of 34 individuals (n = 34) who participated in this research. As shown in Table 1, the largest category of participants comprised software architects and technical leads at 47.1 percent, while project or product managers represented 26.5 percent of the total. This distribution is worth noting, as the results are influenced by the viewpoints of both technical authorities and organizational leadership.
A notable finding concerned the participants’ years of experience. While 55.9% of the respondents held between 5 and 10 years of experience, 44.1% were senior experts who had spent more than 11 years within the software industry. This group was further divided into those with 11 to 15 years of background (23.5%) and those with over 16 years of work history (20.6%), which ensured that the results are shaped by deeply rooted knowledge and long-term involvement in the field. Significant weight was lent to the identification of project distress signals by this concentration of long-term experience, as multiple project lifecycles and death march scenarios have been navigated by these practitioners.

4.2. Expert Perceptions of Deadpoint Scenarios

Five distinct scenarios were evaluated by the 34 participants (n = 34) to see whether they aligned with the deadpoint idea. As shown in Table 2, an outdated underlying stack (mean: 2.76, SD: 1.13) and a velocity drop (mean: 2.68, SD: 1.09) were the top signs. While these scores were near the neutral mark on a five-point scale, they showed which events were seen as the most likely to lead to a state where a project could not be saved. In contrast, codebase complexity and the critical bus factor shared a lower score (mean: 2.47). Furthermore, the lowest score, for requirement circularity (mean: 2.15, SD: 0.96), suggests that staff did not see this as a terminal state on its own. These data serve as an initial exploration of how experts perceive these risks; while they provide a foundation for the deadpoint concept, they are intended as a cornerstone for formal mathematical validation rather than a final proof of the model’s predictive accuracy. Finally, a gap in the current research regarding the shift from a stalled project to a terminal state was noted by participants (mean: 2.85, SD: 1.13), a finding that invites further academic work to define the “deadpoint” idea more clearly and fill this perceived void in the field.

4.3. Causal Hierarchy and SDLC Impact

The ranking of factors contributing to the onset of a project deadpoint revealed a critical emphasis on the early stages of the software development lifecycle. As shown in Table 3, the 34 participants (n = 34) ranked five primary factors using a system where 1 is the top cause and 5 is the lowest. This forced-ranking method helped create a clear list of risks for leaders to handle. The resulting mean ranks identified requirement volatility (mean rank = 2.50) and technical debt (mean rank = 2.65) as the main causes of terminal deadpoints, meaning that these are the top reasons for a project reaching a point where it cannot be saved.
In addition, the results allow a variety of perspectives to be examined regarding the factors that contribute to project failure. For instance, knowledge silos (mean rank = 2.82) were recognized as a notable threat tied to instability in the definition of the product, and the accumulation of poor architectural choices was viewed by the experts as a definitive reason for the lack of success. Moreover, the data indicate that deadpoints are frequently established long before the maintenance phase, as they can exist as foundational errors rather than software defects. This finding aligns with Lehman’s law of increasing complexity, where the lack of early care leads to a state of entropy. When these structural errors build up during the early stages, the system reaches a “tipping point” where the cost to fix the logic exceeds the value of the software. Linking practitioner views to these laws of software evolution shows that terminal stagnation is a result of poor early choices rather than random defects, suggesting a need for more careful consideration by management.
Additionally, as shown in Figure 1, design/architecture was identified by 23.5% of practitioners as the most frequent terminal point, which was followed closely by requirements/analysis (20.6%) and implementation/coding (20.6%). By the time a project reaches maintenance/evolution (17.6%), the system is often rendered functionally irreversible by accumulating structural and process-related damage, yet the necessity of early-stage intervention is a point that is often missed by the typical observer. This distribution highlights the need to formalize architectural health checks to prevent the ongoing deterioration of the system.

4.4. Early Warning Signals and Recovery Potential

A collection of specific indicators was evaluated by the expert participants (n = 34) to predict the appearance of a deadpoint. These indicators were strictly classified into the three unified dimensions of the taxonomy: technical, process, and organizational. By aligning these signals with the primary categories, the framework remains reusable for practitioners seeking to identify specific types of project stagnation. The ability of these signals to forecast future states was evaluated using a Likert scale where 1 represents the lowest and 5 the highest predictive power. As shown in Table 4, frequent code revisions (code churn; mean = 3.00) were identified as the primary early signal of future deadpoints. This observation was followed by fearful refactoring and documentation lag (both mean = 2.68), which were highlighted by the participants as the main signs used to evaluate the ongoing deterioration of the system.
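As an illustration of how the code-churn signal might be operationalized in practice (the helper function, threshold, and file history below are all hypothetical, e.g. built from parsed `git log --numstat` output), a simple hotspot filter could look like:

```python
from collections import Counter

def churn_hotspots(commits, threshold):
    """Flag files whose total churn (lines added + deleted) exceeds a
    threshold — a rough proxy for the 'frequent code revisions' signal.

    `commits` is an iterable of (filename, added, deleted) tuples."""
    churn = Counter()
    for path, added, deleted in commits:
        churn[path] += added + deleted
    return [path for path, total in churn.most_common() if total > threshold]

# Illustrative history: one repeatedly rewritten module, one stable file.
history = [
    ("core/billing.py", 120, 95),
    ("core/billing.py", 80, 60),
    ("util/format.py", 10, 2),
]
print(churn_hotspots(history, threshold=200))  # → ['core/billing.py']
```

Cross-referencing such hotspots with modules developers avoid touching would combine the two strongest signals reported here, churn and fearful refactoring, into a single watchlist.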
In addition, the low rating of requirement circularity (mean = 2.09) was considered a finding of great importance within this analysis. Although requirement volatility was ranked as the primary causal factor in Section 4.3, the specific signal of circularity, characterized by the repetitive discussion of identical requirements across multiple sprints without definitive sign-off, was rated as the weakest predictor. A temporal distinction in the process of project collapse is suggested by these observations: while the root cause of failure is often found in requirement volatility, more definitive evidence that the threshold into a deadpoint has already been crossed is provided by technical symptoms such as fearful refactoring and code churn.
Finally, the irreversibility of project states was investigated by quantifying the likelihood of successful intervention once a deadpoint is reached. This was measured on a scale where 1 represents not recoverable and 10 represents highly recoverable, as shown in Table 5. The results indicate that technical deadpoints (mean = 4.24) are perceived as significantly more terminal and difficult to revive than process deadpoints (mean = 5.38). This difference in recovery scores supports the view that once a project reaches a technical deadpoint, it is perceived as functionally dead, with a recovery possibility well below the midpoint. In contrast to the final nature of technical failure, process deadpoints are regarded as holding a greater degree of flexibility, because many participants assumed that even though progress is paused by such obstacles, they represent fixable challenges.

5. Conclusions and Future Work

In conclusion, the formalization of the software deadpoints concept was achieved via this study, providing a basis for observing how a high maintenance burden from technical debt shifts into a state of total project stagnation. In addition, a varied classification of irreversibility was developed through the inclusion of software engineering expert viewpoints, which moved the discussion beyond the simple binary models of success or failure.

5.1. Conclusions

Three main research questions were addressed and discussed. In relation to RQ1, a framework for the identification and prediction of terminal project states was designed, together with a taxonomy consisting of technical, process, and organizational deadpoints. The participants’ responses to the research questionnaire support the view that technical deadpoints are the most critical, representing specific thresholds of irreversibility rather than standard development challenges. Such deadpoints often result from structural obsolescence and outdated technology stacks, which frequently lead to a state of functional failure, as reflected in a low recovery feasibility score of 4.24.
Furthermore, with regard to RQ2, a hierarchy was established in which requirement volatility and technical debt were determined to be the major causes of reaching a deadpoint. In addition, the findings indicate that these failures are not mere defects but are instead foundational, as they were often found to originate during the design/architecture phase of the development life cycle. This stage was identified by 23.5% of respondents as the most common origin of terminal failure. Moreover, the findings suggest that terminal stagnation is a structural outcome of early lifecycle instability, as the survey indicated that damage is often already present by the time a project reaches the maintenance phase.
Finally, with regard to RQ3, reliable early signals were recognized in order to warn those supervising a project when a state of irreversible failure is forthcoming. The most consistent signs were frequent code revisions along with an evident reluctance among developers to refactor the code, which yielded mean values of 3.00 and 2.68, respectively. It is widely understood that these behaviors indicate a loss of system control. In addition, a foundation for making proactive adjustments was established through these interpretations, which allows those in leadership roles to differentiate between temporary challenges and a total breakdown. Consequently, this work offers the software engineering field the specific terminology and evaluative tools required to choose between expensive attempts at system recovery and the formal closing of a project.

5.2. Future Work

Although a cross-sectional basis for deadpoints was established through this investigation, several paths remain open for future examination. One such path involves a longitudinal study that monitors software initiatives in real time, which would allow for the refinement of velocity drop limits and the identification of more exact mathematical triggers for each specific project state. In addition, a deeper understanding of whether modern technical solutions can return a project from a terminal status to a recoverable one might be gained by examining how automated restructuring tools and newer code generation technologies influence the viability of restoring a system. Finally, the group of individuals participating in future studies could be broadened to encompass a wider variety of global locations and business fields, a move that would help determine whether the way these terminal states are perceived changes when shifting between different professional traditions or legal frameworks.
In addition, future work should focus on expanding the sample size and exploring different types of software projects. In order to transition from descriptive perceptions to reproducible empirical validation, future research should implement objective measurement frameworks. For example, tracking the ‘maintenance-to-innovation ratio’ provides a quantifiable trigger for identifying a deadpoint, because the volume of effort dedicated to sustaining legacy code can be compared against the delivery of new features. In addition, establishing these mathematical levels will allow for the cross-project benchmarking necessary to move the field toward a predictive approach to project failure. By tracking these data points alongside technical debt levels, teams can find a more exact way to spot an approaching deadpoint before the project reaches a terminal state.
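A minimal sketch of the proposed maintenance-to-innovation trigger follows; the threshold of 3.0 is purely illustrative, chosen here for demonstration, as no specific value is proposed by the study:

```python
def maintenance_to_innovation_ratio(maintenance_hours, feature_hours):
    """Hypothetical deadpoint trigger: ratio of effort spent sustaining
    legacy code to effort spent delivering new features."""
    if feature_hours == 0:
        return float("inf")  # all effort is maintenance — pure stagnation
    return maintenance_hours / feature_hours

def approaching_deadpoint(ratio, threshold=3.0):
    # Illustrative rule: flag when maintenance consumes more than three
    # times the feature effort. The appropriate threshold would need to
    # be calibrated per organization via the benchmarking described above.
    return ratio >= threshold

r = maintenance_to_innovation_ratio(320, 80)  # 4.0
print(r, approaching_deadpoint(r))  # → 4.0 True
```

Trending this ratio per release, rather than reading it at a single point, would be the natural way to operationalize the longitudinal monitoring the study calls for.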

Funding

This research received no external funding.

Institutional Review Board Statement

This research was conducted using a fully anonymous survey where no personally identifiable information was collected, as specified in the study’s ‘Confidentiality and Participation’ section. Consequently, the study is exempt from formal ethical review under Article 14.1 of the Implementing Regulations of the Law of Ethics of Research on Living Creatures (Royal Decree No. M/59) and complies with the Saudi Personal Data Protection Law (PDPL) regarding anonymized datasets.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

[Appendix A images: Information 17 00291 i001–i007]

Figure 1. SDLC stages where deadpoints become terminal.
Table 1. Professional role distribution (n = 34).

Professional Role                      Percentage (%)
Software Architect/Technical Lead      47.1
Project/Product Manager                26.5
CTO/Engineering Director               14.7
Senior Software Engineer               11.7
Table 2. Mean agreement scores for deadpoint scenarios (n = 34).

Deadpoint Scenario                        Mean Agreement (1–5)   Standard Deviation (SD)
Outdated Underlying Stack                 2.76                   1.13
Velocity Drop (Bureaucracy/Indecision)    2.68                   1.09
Codebase Complexity (Regression Risk)     2.47                   1.08
Critical “Bus Factor”                     2.47                   0.99
Requirement Circularity                   2.15                   0.96
Table 3. Causal factors ranking (n = 34; 1 = highest contributor, 5 = lowest contributor).

Causal Factor                        Mean Rank (Lower = Higher Impact)
Requirement Volatility               2.50
Technical Debt                       2.65
Knowledge Silos                      2.82
Tooling/Infrastructure Inadequacy    2.94
Business/Technical Misalignment      3.18
Table 4. Predictive strength of distress signals (n = 34).

Predictor Signal           Mean Strength (1–5)   Primary Dimension
Code Churn                 3.00                  Technical
“Fearful” Refactoring      2.68                  Technical
Documentation Lag          2.68                  Process/Technical
Turnover Intention         2.50                  Organizational
Meeting Density            2.24                  Process
Requirement Circularity    2.09                  Process
Table 5. Perceived recovery feasibility by deadpoint classification (n = 34).

Deadpoint Type         Mean Recovery Score (1–10)   Perception of State
Technical Deadpoint    4.24                         Terminal/Near-Irreversible
Process Deadpoint      5.38                         Moderately Recoverable
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

