Towards Software Architecture as an Auditable Practice
Abstract
1. Introduction
- What are the essential elements that need to be considered when evaluating a software architecture?
- What are the discernible progression states of these essential elements that can be used to assess the progression of the evaluation of software architectures?
2. Background and Related Work
2.1. Software Architecture Evaluation
2.2. The SEMAT Kernel and the Essence Standard
3. Research Method
- Criticality of the system for the organization: the authors believe that if a system is critical for the organization, the experience of evaluating its architecture is closer to real-world situations. In the authors' experience, stakeholders take a more serious stance in architecture evaluation if the system is critical to the organization. Both systems evaluated and reported in this article are considered critical to their organizations and were therefore chosen for presentation.
- Size of the software system: the size of a software system is correlated with the size and complexity of its architecture. A bigger architecture gives more room for evaluation activities, and therefore the authors expected more issues to be observed and learned from when evaluating bigger architectures. Both architecture evaluations presented in the article are considered big in terms of subsystems (both systems exhibit more than 15 subsystems, most of which are software systems in themselves).
- Diversity of stakeholders: evaluating a software architecture is a stakeholder-intensive endeavor. The authors believe that a greater diversity of stakeholders in an evaluation is closer to the context where the proposal is expected to be used (i.e., with many people not being experts in architecture evaluation). Both experiences presented in the article exhibit this characteristic: a very diverse set of stakeholders.
4. The Essentials of Software Architecture Evaluations
4.1. SEMAT Kernel Definitions
4.2. An Argument in Favor of the SEMAT Kernel
- A software architecture evaluation is an endeavor, that is, a conscious and concerted effort devoted to assessing a software architecture's fitness for the purpose the system was conceived for. This is the approach embraced by the SEMAT Kernel [24,25], that is, the representation of software engineering practices that run as endeavors.
4.3. Identifying the Essential Elements of a Software Architecture Evaluation
- The essentials are represented as Alphas, that is, key elements with which software practitioners work in a software architecture evaluation effort.
- These Alphas comprise a set of predefined States that represent the progression path that a healthy software architecture evaluation should follow.
- The graphical notation for the Alphas and their States is adopted. Figure 4 shows the Alphas proposed in this work using the SEMAT’s graphical notation. The black diamond indicates the subordination of the proposed Alphas to the existing Alphas in the SEMAT Kernel.
4.4. Alpha: Quality Attributes
- Identified: Reaching this state means that quality attributes are explicitly identified and recorded. In practice, this might imply that the organization signs off on the acceptance and use of a specific quality model. Quality attributes are characterized as requirements, scenarios, goals, or via other approaches.
- Tradeoffs understood: Reaching this state means that quality attributes have been prioritized (i.e., their relative importance in regard to the software system whose architecture is being evaluated). This prioritization has been used to understand tradeoffs between quality attributes.
- Addressed: At this state, prioritized quality attributes have been considered in the architecture evaluation. Tradeoffs have been explained and recorded.
4.5. Alpha: Architecture Decisions
- Tacit identification: Reaching this state means that architecture decisions are identified at least in a tacit form, that is, no explicit definition or recording is observed. In this state, some knowledge about the rationale behind those decisions is expected (some of them very old as in the case of legacy systems). Some knowledge of who made one or more of these decisions is also expected.
- Explicit identification: At this state, architecture decisions are observed in explicit form. This means that decisions are characterized, although not necessarily recorded, to facilitate clear and concise communication. Some decisions are being (or have been) evaluated according to the evaluation approach. There are observable efforts to record decisions, although this is still done in an ad hoc manner. In this state, it is also clear who made the decisions and what the rationale behind those decisions is. Risks related to design decisions are also being observed as a result of the evaluation, and their impact on quality attributes is understood. In this state, relationships between decisions are also expected to be identified.
- Recorded and addressed: In this state, design decisions have been recorded in a format specific to this purpose (for example, the Architecture Decision Record or ADR (“Documenting Architecture Decisions”, Michael Nygard, 2011, accessed on 8 March 2026, https://www.cognitect.com/blog/2011/11/15/documenting-architecture-decisions)). Decisions have been reviewed according to the evaluation approach, and the reporting of such reviews presents risk identification and the impact on quality attributes. Consequently, at this state, it is highly expected that some decisions are added, deprecated, or improved; the use of a version control system, ideally supported by a purpose-specific tool, is also expected.
4.6. Alpha: Architecture Description
- General overview: At this state, a general overview of the architecture exists. This overview might be composed of one or more diagrams that adapt the abstraction to the stakeholders’ concerns. Typically, this overview depicts a high-level view of the architecture components and their deployments. For this state to be reached, a specific configuration of components and connectors is not required.
- Models developed: Reaching this state implies that the architecture description now encompasses a set of several models presenting more specific system configurations (components–connectors). The description includes views and viewpoints, and they are justified for evaluation purposes and the concerns of stakeholders. In this state, the architecture description is expected to be expressed informally (e.g., diagrams on a whiteboard complemented with natural language).
- Completed: To reach this state, the architecture description is considered complete for the software system whose architecture is under evaluation. At this state, a description in a specific language such as ArchiMate, UML, SysML, or an ADL, among other alternatives, is expected. The use of such languages allows the evaluation team to adopt a modeling tool, which in turn enables the adoption of more systematic practices such as configuration management and version control.
4.7. Alpha: Business Goals
- Identified: At this state, the organization’s business goals (or mission goals) are explicitly identified. This means that the business goals are characterized in an appropriate form so that they can be used for the architecture analysis. The business drivers for the architecture are also identified.
- Principles established: At this point, statements about the definition of the software architecture are clear and explicitly identified. Again, this means that these definitions are expressed and characterized appropriately for architecture analysis purposes. The principles that guide the architecture being evaluated are also identified, and they are being used to evaluate the adequacy for the business (or mission) goals.
- Addressed: Reaching this state means that the architecture evaluation has answered how the evaluated software architecture aids in and/or risks meeting the business (or mission) goals that justify the software existence. In addition, a discussion of the tradeoffs between the business (or mission) goals in relation to the reviewed architecture is expected.
4.8. Alpha: Evaluation Adoption
- Method or approach chosen: Reaching this state means that the evaluation methods or approaches have been reviewed, the criteria for the selection of one method or approach have been agreed upon, and as a consequence, the method or approach has been chosen. The authors have observed that not all stakeholders understand the benefits of running an architecture review, and thus, in this state, it is also expected that stakeholders have agreed that the architecture evaluation is justified.
- Method or approach integrated: At this state, it is expected that the method or approach has been explained to the evaluation stakeholders. Steps, activities, roles, work products, and expected responsibilities and effort are understood and agreed upon. If for any reason one or more stakeholders find that the method or approach should be re-assessed in terms of its suitability for the case, then it is expected that at this point a discussion has already occurred.
- Working well: Reaching this state means that deviations between planned and actual activities have been corrected and the evaluation team and stakeholders are working in a seamless way. The results being delivered (partial or final) are expected to be consistent. The use of sound tools (e.g., software modelers, configuration management software) is expected, and stakeholders agree that the evaluation goals are being met.
- Organizational memory accrued: At this point, the tasks inherent to the architecture evaluation are finished. Reaching this state means that the organization has accrued, in some form, the significant knowledge learned from the evaluation experience. The organization might have started accruing evaluation experience knowledge long before reaching this state (for example, in Scrum, it is reasonable that this “knowledge accruement” started as soon as the team began reflecting on the process in the retrospectives). For this state to be reached, the focus is on architecture evaluation process-related knowledge, rather than architecture-related knowledge (e.g., architecture decisions, a matter of the Alpha “Architecture Decisions”). Knowledge might be non-articulated (i.e., tacit knowledge) or articulated and recorded. The Alpha in this state does not prescribe any concrete mechanism for knowledge management, as this is organization-dependent. What is important in this state is to understand that process-related memory is an important element for improving organization performance [96].
4.9. Alphas and Relationships
- “Business Goals” drive “Quality Attributes.”
- “Architecture Description” expresses “Architecture Decisions.”
- “Architecture Decisions” influence “Quality Attributes.”
- “Evaluation Adoption” reviews “Architecture Decisions.”
- “Evaluation Adoption” improves “Architecture Description.”
- “Evaluation Adoption” examines “Business Goals” and “Quality Attributes.”
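As an aside, the relationships listed above form a small labeled directed graph. A minimal sketch (our own encoding for illustration, not part of the Essence standard or the authors' proposal) is:

```python
# Alpha relationships encoded as (source, label, target) triples.
# The triples mirror the relationship list in Section 4.9.
RELATIONSHIPS = [
    ("Business Goals", "drive", "Quality Attributes"),
    ("Architecture Description", "expresses", "Architecture Decisions"),
    ("Architecture Decisions", "influence", "Quality Attributes"),
    ("Evaluation Adoption", "reviews", "Architecture Decisions"),
    ("Evaluation Adoption", "improves", "Architecture Description"),
    ("Evaluation Adoption", "examines", "Business Goals"),
    ("Evaluation Adoption", "examines", "Quality Attributes"),
]

def related_to(alpha):
    """Return (label, target) pairs for the Alphas that `alpha` points to."""
    return [(label, target) for source, label, target in RELATIONSHIPS
            if source == alpha]
```

Such an encoding makes it easy, for instance, to list everything the "Evaluation Adoption" Alpha acts upon when preparing an evaluation plan.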
4.10. A How-To Discussion
- How Alpha State Cards are used.
- What about Stakeholders (and other “essentials” in software engineering).
4.10.1. How to Use the Alpha State Cards to Guide and Assess a Software Architecture Evaluation
4.10.2. What About Stakeholders?
5. Case Studies
5.1. Case Study Design Aspects
- Define the case using a real-world problem: A key recommendation for the design of case studies is to avoid the use of “toy programs” or “toy examples” [53,55,97]. In both case studies, the presented case is defined as the use of the proposed Alphas to guide a real-world software product architecture evaluation.
- Use more than one data collection approach: Both case studies relied on:
  – Observation: this is a first-degree data collection strategy [97] that implies careful watching of the progression of the architecture evaluation. In both cases, the emphasis was put on “events”. For this work, an “event” is a fact that occurs in the context of the evaluation endeavor that effects a change in an invariant condition of the evaluation. An example of an observable event is a “disagreement between participants in the way the evaluation tasks are run”. This event changes an invariant condition, for example, “scenario brainstorming” changing to “scenario brainstorming stopped”. Avoiding losing focus on the research approach is critical, and thus only events that have a potential effect on the evaluation effort are considered. For example, given the previous definition, a “phone call” is also an event. However, unless the “phone call” has an effect of research interest, it will not be considered.
  – Retrospectives: Brief retrospectives (15–20 min) were used as needed as the formal instance to discuss the progression of the evaluation, as well as impediments that could hinder the evaluation objectives. The retrospectives followed an unstructured approach [97] to allow the evaluation leader to develop the conversation based on the research interest. Nevertheless, the following issues (or issue questions) were always on the mind of the evaluation leader when running the conversation:
    * Are the evaluation participants understanding the current state in the progression of the architecture evaluation?
    * Are the evaluation participants clear on what has already been done in the architecture evaluation?
    * Are the evaluation participants clear on what activities are expected next in the evaluation effort?
    * Does the evaluation team have an overview of the expected work products in the evaluation effort?
In some retrospectives, interesting questions arose from the stakeholders. For example, in one of the retrospectives in Case Study 1 (i.e., in the military institution), an interesting question was asked by one of the military leaders: How much of the discovered architecture issues and risks were already known by the software and infrastructure teams? Although such questions relate more to the evaluation itself than to the use of the proposed Alphas, the discussion is always illuminating for understanding the case. The retrospectives were also used to discuss the classic “retrospective questions”, such as what the identified impediments to doing the work are and what the team plans to do in the next days.
  – Archival data: A third data collection approach is the use of archival data [97]. Archival data are especially necessary in an architecture evaluation because they allow the evaluators to define a starting point for eventual analysis in light of potential new architecture decisions. As noted in [97], the main problem with archival data is that, despite being critical for the case, the data are already created and thus there is no control over how they were generated.
- The use of triangulation to improve the precision of case study research: triangulation, that is, broadening the viewpoints for approaching the studied object [97,98], is suggested to increase the precision of empirical case study research. In this work, two kinds of triangulation were used:
  – Source triangulation: this kind of triangulation promotes the use of more than one data source [97], where the same insights and issues are collected in different contexts. In this paper, two data sources are presented as Case Study 1 and Case Study 2.
  – Methodological triangulation: According to [97], this triangulation focuses on diversifying the data collection methods. In this work, observation, retrospectives, and archival data contributed to diversifying data collection methods.
5.2. Case Study 1: Human Resources Software System in a Military Institution
- Present the method: the method is presented to the evaluation team.
- Present the business drivers: the business drivers that explain the opportunity that justifies a software system are presented.
- Present the architecture: a first architecture description is presented, where the emphasis is on explaining how the current architecture follows business drivers.
- Identify the architectural approaches: current architectural decisions are identified.
- Generate the quality attribute tree: the quality attribute tree is an artifact that traces quality attributes to stakeholders’ concerns and then to concrete scenarios. The quality attribute tree is prioritized.
- Analyze architectural approaches: the identified architectural approaches are discussed in light of the quality attribute tree. In this step, architectural decisions are studied to determine if they favor or hinder achieving high-priority scenarios.
- Brainstorm and prioritize scenarios: more stakeholders join the architecture evaluation, and scenario elicitation is open to a wider discussion.
- Analyze architectural approaches: the architectural approaches are again analyzed, now considering the new scenarios elicited.
- Present results: the execution of the method ends with a presentation of the results of the architecture evaluation.
- Evaluation method/approach selection: ATAM assumes that the evaluation method has already been chosen. In turn, the Alpha “Evaluation Adoption” begins by asking stakeholders to analyze evaluation methods/approaches before choosing one (State 1 in Alpha “Evaluation Adoption”). In this case, this item of the checklist motivated us to better understand the rationale behind the selection of ATAM.
- Concrete check points for determining the current status of an architecture evaluation endeavor: while ATAM prescribes steps that can be used to globally understand the progression, the Alphas provide, by means of their States, concrete items to check for progression and auditing purposes. The team used these checkpoints for auditing progression.
- Concrete expected results for each State: the checklists are defined in a way that specifies concrete results that are needed before advancing to the next State in the progression of the evaluation. For example, the Alpha “Architecture Description” explicitly asks for specific models when advancing to State 2. The ATAM only requires one to present the architecture as a step to be conducted, leaving evaluators to rely on their experience to decide on the kind of architecture description to elaborate.
- The introduction of the Architecture Decisions Records (ADRs), making the team aware of this technique for recording decisions.
- The introduction of modeling software.
- Present the method:
- Related Alpha/State: Evaluation Adoption (achieving State 1).
- Main Value Added: Concrete guidance: find arguments for justifying the use of ATAM.
- Evidence: Recorded criteria for the use of ATAM: the main recorded criterion is the availability of reported uses of ATAM in military institutions. The availability of examples of the use of ATAM was also noted as an important aspect motivating the selection of ATAM.
- Present the business drivers:
- Related Alpha/State: Business Goals (State 1 to State 2).
- Main Value Added: No specific value added: State 1 was achieved by preparing a business goals and architecture drivers presentation, which ATAM already calls for. Although a recommendation was made by the evaluation leader, this recommendation is not attributable to the use of the Alpha.
- Evidence: N/A.
- Present the architecture:
- Related Alpha/State: Architecture Description (State 1).
- Main Value Added: Concrete guidance: leverage (already prepared) deployment diagrams (at service-level), avoiding specific configurations.
- Evidence: Two deployment diagrams (no specific modeling language used), presented in PowerPoint slides.
- Identify the architectural approaches:
- Related Alpha/State: Architecture Decisions (State 1 to State 2) and Architecture Description (State 1 to State 2).
- Main Value Added: Concrete guidance: articulate/model specific configurations and consider appropriate views.
- Evidence: First specific configuration modeled: staff career system modules and database with views for components/connectors and interoperability with other systems.
- Generate the quality attribute tree:
- Related Alpha/State: Quality Attributes (achieving State 1) and Business Goals (State 1 to State 2).
- Main Value Added: Concrete guidance: adopt a quality model to avoid both under- and over-representation of quality attributes. Business Goals advancement is considered an effect of the discussion prompted by the first State in Alpha Quality Attributes.
- Evidence: ISO 25010:2011 quality model adopted.
- Analyze architectural approaches:
- Related Alpha/State: Evaluation Adoption (State 2 to State 3).
- Main Value Added: Concrete guidance: enforce the delivery of partial results; enforce the use of tools for tracking progress.
- Evidence: Articulation of Quality Attributes into concrete requirements delivered to the officer in command; use of spreadsheets to track the progress of the evaluation.
- Brainstorm and prioritize scenarios:
- Related Alpha/State: Quality Attributes (State 2) and Evaluation Adoption (State 3).
- Main Value Added: Concrete checklists to audit progression and to determine (1) what has been achieved and (2) what is left for the activity (related Alpha: Quality Attributes).
- Evidence: Spreadsheet with four columns: Alpha, State, Checklist, and Done (per checklist item). When activities were carried out to achieve the checklist items, the achieved item was marked as Done.
- Analyze architectural approaches:
- Related Alpha/State: Architecture Decisions (State 2 to State 3).
- Main Value Added: Concrete guidance: recording structured decisions subject to version control.
- Evidence: Git-based repository containing ADRs (two folders: one for approved decisions and another for rejected/deprecated decisions).
- Present results:
- Related Alpha/State: All Alphas and primarily the final States of each Alpha.
- Main Value Added: Reports structured following the argument given by the progression of each Alpha, with special emphasis on the last States of each Alpha (the last States represent the desired status of the results of each Alpha).
- Evidence: Both presentation and written report with the following sections: introducing the approach, describing the architecture, describing the business and quality goals, and the approved, revised, or rejected architecture decisions and the rationale behind their status.
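The four-column tracking spreadsheet cited as evidence above (Alpha, State, Checklist, Done) can be sketched as a small data structure. The rows shown and the completion rule below are illustrative assumptions, not a reproduction of the original artifact:

```python
from dataclasses import dataclass

@dataclass
class ChecklistRow:
    """One row of the tracking spreadsheet: Alpha, State, Checklist item, Done."""
    alpha: str
    state: str
    item: str       # checklist item code, following Appendix B (e.g., "QA.S1.1")
    done: bool = False

# Illustrative rows for the "Identified" State of Alpha "Quality Attributes".
sheet = [
    ChecklistRow("Quality Attributes", "Identified", "QA.S1.1"),
    ChecklistRow("Quality Attributes", "Identified", "QA.S1.2"),
    ChecklistRow("Quality Attributes", "Identified", "QA.S1.3"),
]

def state_reached(sheet, alpha, state):
    """A State is considered reached when every checklist row for it is Done."""
    rows = [r for r in sheet if r.alpha == alpha and r.state == state]
    return bool(rows) and all(r.done for r in rows)
```

In the case studies, marking items as Done as activities completed is what allowed the team to audit progression at a glance; the sketch simply makes that reading explicit.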
5.3. Case Study 2: Health Record System in a Public Cancer-Treatment Oriented Hospital
6. Case Studies Discussion and Key Insights
- Alpha: Evaluation Adoption
  – State 1: Discussion about the evaluation approach carried out, and agreement reached on the approach to be used.
  – State 2: The approach has already been presented to all participants, and partial results from the evaluation execution are being delivered (especially to high-management stakeholders).
  – State 3: Stakeholders agreed that the evaluation met their expectations. Tools for writing down partial results and archiving artifacts are shown to be consistent. When a critical deviation from the planned activities appeared, actions were taken to correct the course of the evaluation.
  – State 4: Retrospectives carried out after each in-person evaluation meeting. Tips for the next architecture evaluation written down.
- Alpha: Quality Attributes
  – State 1: First set of quality attributes elicited. The quality model in use is agreed upon.
  – State 2: Quality attributes prioritized, articulated into explicit requirements, and tradeoffs determined.
  – State 3: Priority quality attributes and their respective requirements analyzed, considering their support by the architecture.
- Alpha: Architecture Description
  – State 1: First architecture overview presented. No details of the components are shown.
  – State 2: Relevant architecture configuration views created and stored in the version control system.
  – State 3: Complete description (for evaluation purposes) created. Models created using UML and stored in the version control system. In addition, the models created are being used to document the system being evaluated.
- Alpha: Architecture Decisions
  – State 1: Architecture decisions identified, some rationale identified. Written on the blackboard.
  – State 2: Architecture decisions explicitly identified and written down in the version control system.
  – State 3: Architecture Decision Records created for each decision, also maintained in the version control system.
- Alpha: Business Goals
  – State 1: Business goals and architecture drivers determined and written on the whiteboard.
  – State 2: Business goals articulated into concrete quality goals and used in the evaluation.
  – State 3: Stakeholders agreed that the architecture supports the business goals to a high degree.
- Observed effect: Actionable guidance and progression auditing
- Involved Alpha(s): Evaluation Adoption, State 3, especially items EA.S3.1 and EA.S3.2.
- Evidence Type: Creation and adoption of a simple spreadsheet with four columns (Alpha, State, Checklist Item, and Done).
- Timing: In both cases, State 3 was achieved in session 4 of the architecture evaluation (this is dependent on the particular architecture evaluation).
- Observed effect: Common ground for reporting
- Involved Alpha(s): Quality Attributes (especially State 3, item QA.S3.2), Architecture Decisions (especially State 3, item AD.S3.1), Architecture Description (especially State 3, item ADS.S3.2), Business Goals (State 3, item BG.S3.2), and Evaluation Adoption (especially State 1, items EA.S1.1 and EA.S1.2, and State 3, item EA.S3.1).
- Evidence Type: Written reports and presentation (in Case 1) structured in the following form: introducing the approach (explaining activities done and justification for the evaluation method/approach), describing the architecture (a selection of the models developed and related decisions), describing the business and quality goals (prioritized quality and business goals with tradeoff analysis result and how the architecture supports them), and the approved, revised, or rejected architecture decisions and the rationale behind their status. Each section revolved around the specific items of the involved Alphas.
- Timing: The written reports and the presentation in Case 1 were iterated from the early stages of the architecture evaluation, as the proposed mechanism instructs to deliver partial results to keep stakeholders engaged. However, the reports were finalized in the last days of the architecture evaluations. In Case 1, this work can be considered part of the evaluation (Step 9 in ATAM); in Case 2, this work can be considered outside the evaluation, as the Alphas do not explicitly mandate a “final report”.
- Observed effect: Regain focus in scenario brainstorming (only observed in Case 1)
- Involved Alpha(s): Quality Attributes (especially States 2 and 3, items QA.S2.3 and QA.S3.1) and Evaluation Adoption (especially State 2, item EA.S2.2).
- Evidence Type: The most important artifact is the spreadsheet that was used to check the progression regarding the Alpha “Quality Attributes”.
- Timing: In Case 1, this happened in the scenario brainstorming ATAM step, between sessions 4 and 5 of the architecture evaluation.
7. Threats to Validity
7.1. Conclusion Validity (Reliability)
7.2. Internal Validity
7.3. Construct Validity
7.4. External Validity
8. Conclusions
8.1. Better Understand Guidance of Software Architecture Evaluations in Agile Environments
8.2. Define Concrete Essentials for the Architectural Technical Debt Identification
8.3. Identification and Definition of Indicators to Assess How Well an Architecture Evaluation Ends
8.4. Explore the Suitability of the Proposal for Teaching and Learning About Software Architecture Evaluations
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Appendix A.1. The Alpha State Cards
Appendix A.2. Architecture Decision Records
- Title: the name of the architecture decision written as a noun phrase.
- Context: a brief itemized explanation of the context in which the architecture decision was made. Decision forces can be mentioned here.
- Decision: a brief itemized explanation of the architecture decision itself.
- Status: an indicator that tells if the architecture decision is implemented, deprecated, or subject to improvement.
- Consequences: an itemized explanation of the consequences of the architecture decision.
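As an illustration only, the five ADR fields listed above might be modeled as follows; the markdown rendering and the example values used below are assumptions, not prescribed by this appendix:

```python
from dataclasses import dataclass

@dataclass
class ADR:
    """The five ADR fields from Appendix A.2."""
    title: str               # noun phrase naming the decision
    context: list[str]       # itemized context; decision forces may appear here
    decision: list[str]      # itemized explanation of the decision itself
    status: str              # e.g., "implemented", "deprecated", "subject to improvement"
    consequences: list[str]  # itemized consequences of the decision

    def to_markdown(self) -> str:
        """Render the record as a markdown file suitable for version control."""
        def items(xs):
            return "\n".join(f"- {x}" for x in xs)
        return (f"# {self.title}\n\n"
                f"## Context\n{items(self.context)}\n\n"
                f"## Decision\n{items(self.decision)}\n\n"
                f"## Status\n{self.status}\n\n"
                f"## Consequences\n{items(self.consequences)}\n")
```

Storing one such rendered file per decision, in approved/rejected folders of a Git repository, matches the evidence reported for Case Study 1.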
Appendix A.3. Architecture Evaluation Report Structure
- An Introduction or Preamble section, describing the circumstances in which the architecture evaluation takes place, the motivation for evaluating the architecture, the approach or approaches used, and the main findings.
- A Business or Mission Motivation section, describing the business or mission context in which the software product has been conceived. This section also describes the goals the organization has with the software system (for example, if the organization wants to reduce costs from a database system contract).
- A Requirements/Decisions mapping section, presenting the qualities (e.g., from a quality model), their prioritization, and their relation to the scenarios or architecturally significant requirements.
- A Decisions section, presenting the decisions analyzed in relation to the software qualities, including their tradeoffs, the decision status, and the main related risks. In this section, it is key to argue how such requirements are supported or inhibited by the architecture.
- A final section summarizing the findings and presenting the main conclusions of the evaluation.
Appendix B
Appendix B.1. Alpha: Quality Attributes
Appendix B.1.1. State: Identified
- QA.S1.1—Quality attributes are explicitly identified.
- QA.S1.2—Some form of quality attributes operationalization is observed (e.g., scenarios).
- QA.S1.3—Quality attributes are recorded.
Appendix B.1.2. State: Tradeoffs Understood
- QA.S2.1—The relative importance or priority of quality attributes is established.
- QA.S2.2—The determined prioritization is used to focus on key attributes.
- QA.S2.3—Tradeoffs between quality attributes are analyzed and understood.
Appendix B.1.3. State: Addressed
- QA.S3.1—Prioritized quality attributes were taken into account in the evaluation.
- QA.S3.2—Tradeoffs recorded and explained.
Appendix B.2. Alpha: Architecture Decisions
Appendix B.2.1. State: Tacit Identification
- AD.S1.1—Decisions are identified in tacit form.
- AD.S1.2—Decisions are not required to be recorded or explicitly defined.
- AD.S1.3—There is some knowledge about the rationale of the decision.
- AD.S1.4—There is some knowledge about who made the decision.
Appendix B.2.2. State: Explicit Identification
- AD.S2.1—Decisions are characterized and communicated in a clear and explicit way.
- AD.S2.2—There are efforts to record decisions, although records can be ad-hoc.
- AD.S2.3—It is clear both who made the decision and the rationale behind it.
- AD.S2.4—The relationships between decisions are identified.
Appendix B.2.3. State: Recorded and Addressed
- AD.S3.1—Decisions are recorded in a specific format such as ADRs.
- AD.S3.2—Decisions are subject to version control/configuration management.
- AD.S3.3—Key decisions have been reviewed according to the chosen evaluation approach.
Appendix B.3. Alpha: Architecture Description
Appendix B.3.1. State: General Overview
- ADS.S1.1—An architecture description exists in the form of a general overview (most of the time, this overview shows the component deployment).
- ADS.S1.2—The architecture description does not present any specific architecture component-connector configuration.
Appendix B.3.2. State: Models Developed
- ADS.S2.1—The architecture description presents specific (ad-hoc) configurations for the system.
- ADS.S2.2—The description includes important views, and the viewpoints are justified.
- ADS.S2.3—The description can be expressed by means of informal language (e.g., whiteboard diagrams).
Appendix B.3.3. State: Completed
- ADS.S3.1—The architecture description has reached a state that can be considered complete for the system.
- ADS.S3.2—The description is expected to be expressed using description-oriented languages such as ArchiMate, UML, SysML, or other ADLs.
Appendix B.4. Alpha: Business Goals
Appendix B.4.1. State: Identified
- BG.S1.1—The business or mission goals are explicitly identified.
- BG.S1.2—Architecture drivers (both business and mission) are identified.
Appendix B.4.2. State: Principles Established
- BG.S2.1—Statements for the definition of the architecture are clear and explicitly identified.
- BG.S2.2—The principles are used to analyze part or all of the architecture.
Appendix B.4.3. State: Addressed
- BG.S3.1—Business goals are used in the evaluation of the architecture.
- BG.S3.2—The results explain the justification for how the architecture would help meet the goals.
Appendix B.5. Alpha: Evaluation Adoption
Appendix B.5.1. State: Method or Approach Chosen
- EA.S1.1—Evaluation methods have been reviewed.
- EA.S1.2—The criteria for choosing an evaluation method have been signed off.
- EA.S1.3—The selection of an evaluation method has been signed off.
Appendix B.5.2. State: Method or Approach Integrated
- EA.S2.1—The method chosen has been explained to the evaluation stakeholders.
- EA.S2.2—Activities or steps defined by the evaluation method, including any tailored activities if required, are being executed according to the plan.
- EA.S2.3—Partial results are being delivered to stakeholders.
Appendix B.5.3. State: Working Well
- EA.S3.1—The planned activities of the evaluation are carried out with controlled deviation from the plan.
- EA.S3.2—Tools are used consistently.
- EA.S3.3—Consistent results from the evaluation initiative are being delivered.
- EA.S3.4—Stakeholders agree that the evaluation goals are being met.
Appendix B.5.4. State: Organizational Memory Accrued
- EA.S4.1—A retrospective analysis has been performed.
- EA.S4.2—Effort, practices, and what worked well have been recorded.
- EA.S4.3—The organizational memory has been updated.
References
- Taylor, R.N.; Medvidovic, N.; Dashofy, E.M. Software Architecture: Foundations, Theory and Practice; Addison-Wesley: Boston, MA, USA, 2007. [Google Scholar]
- Bass, L.; Clements, P.; Kazman, R. Software Architecture in Practice, 4th ed.; SEI Series in Software Engineering; Addison-Wesley Professional: Boston, MA, USA, 2021. [Google Scholar]
- Sievi-Korte, O.; Beecham, S.; Richardson, I. Challenges and recommended practices for software architecting in global software development. Inf. Softw. Technol. 2019, 106, 234–253. [Google Scholar] [CrossRef]
- Rozanski, N.; Woods, E. Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives, 2nd ed.; Addison Wesley: Boston, MA, USA, 2011. [Google Scholar]
- Bosch, J. Design and Use of Software Architectures: Adopting and Evolving a Product-Line Approach; ACM Press: New York, NY, USA; Addison-Wesley Publishing Co.: Boston, MA, USA, 2000. [Google Scholar]
- Clements, P.; Kazman, R.; Klein, M. Evaluating Software Architectures: Methods and Case Studies; SEI Series in Software Engineering; Addison-Wesley: Boston, MA, USA, 2001. [Google Scholar]
- Klotins, E.; Gorschek, T.; Wilson, M. Continuous Software Engineering: Introducing an Industry Readiness Model. IEEE Softw. 2023, 40, 77–87. [Google Scholar] [CrossRef]
- Ciceri, C.; Farley, D.; Ford, N.; Harmel-Law, A.; Keeling, M.; Lilienthal, C.; Rosa, J.; von Zitzewitz, A.; Weiss, R.; Woods, E. Software Architecture Metrics; O’Reilly Media: Sebastopol, CA, USA, 2022. [Google Scholar]
- Fitzgerald, B.; Stol, K.J. Continuous software engineering: A roadmap and agenda. J. Syst. Softw. 2017, 123, 176–189. [Google Scholar] [CrossRef]
- Soares, R.C.; Capilla, R.; dos Santos, V.; Nakagawa, E.Y. Trends in continuous evaluation of software architectures. Computing 2023, 105, 1957–1980. [Google Scholar] [CrossRef]
- Soares, R.C.; Santos, V.d.; Nakagawa, E.Y. Continuous evaluation of software architectures: An overview of the state of the art. In SAC ’22: Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1425–1431. [Google Scholar] [CrossRef]
- Dobrica, L.; Niemela, E. A survey on software architecture analysis methods. IEEE Trans. Softw. Eng. 2002, 28, 638–653. [Google Scholar] [CrossRef]
- Hofmeister, C.; Kruchten, P.; Nord, R.L.; Obbink, H.; Ran, A.; America, P. A general model of software architecture design derived from five industrial approaches. J. Syst. Softw. 2007, 80, 106–126. [Google Scholar] [CrossRef]
- Taylor, R.N.; van der Hoek, A. Software Design and Architecture: The once and future focus of software engineering. In FOSE ’07: 2007 Future of Software Engineering; IEEE Computer Society: Washington, DC, USA, 2007; pp. 226–243. [Google Scholar] [CrossRef]
- Cervantes, H.; Kazman, R. Designing Software Architectures: A Practical Approach, 2nd ed.; Addison-Wesley Professional: Boston, MA, USA, 2024. [Google Scholar]
- Madison, J. Agile Architecture Interactions. IEEE Softw. 2010, 27, 41–48. [Google Scholar] [CrossRef]
- Bellomo, S.; Gorton, I.; Kazman, R. Toward Agile Architecture: Insights from 15 Years of ATAM Data. IEEE Softw. 2015, 32, 38–45. [Google Scholar] [CrossRef]
- Cruz, P.; Astudillo, H.; Hilliard, R.; Collado, M. Assessing Migration of a 20-Year-Old System to a Micro-Service Platform Using ATAM. In Proceedings of the 2019 IEEE International Conference on Software Architecture Companion (ICSA-C); IEEE Computer Society: Washington, DC, USA, 2019; pp. 174–181. [Google Scholar] [CrossRef]
- Cruz, P.; Salinas, L.; Astudillo, H. Quick Evaluation of a Software Architecture Using the Decision-Centric Architecture Review Method: An Experience Report. In Proceedings of the Software Architecture; Jansen, A., Malavolta, I., Muccini, H., Ozkaya, I., Zimmermann, O., Eds.; Springer: Cham, Switzerland, 2020; pp. 281–295. [Google Scholar]
- Cruz, P.; Ulloa, G.; Martin, D.S.; Veloz, A. Software Architecture Evaluation of a Machine Learning Enabled System: A Case Study. In Proceedings of the 2023 42nd IEEE International Conference of the Chilean Computer Science Society (SCCC), Concepcion, Chile, 23–26 October 2023; pp. 1–8. [Google Scholar] [CrossRef]
- Cruz, P.; Astudillo, H. Positive Side-Effects of Evaluating a Software Architecture. In Proceedings of the Software Architecture. ECSA 2024 Tracks and Workshops; Ampatzoglou, A., Pérez, J., Buhnova, B., Lenarduzzi, V., Venters, C.C., Zdun, U., Drira, K., Rebelo, L., Di Pompeo, D., Tucci, M., et al., Eds.; Springer: Cham, Switzerland, 2024; pp. 167–177. [Google Scholar]
- Schön, D. The Reflective Practitioner: How Professionals Think in Action; Temple Smith: London, UK, 1983. [Google Scholar]
- Jacobson, I.; Ng, P.W.; McMahon, P.; Spence, I.; Lidman, S. The Essence of Software Engineering: The SEMAT Kernel: A thinking framework in the form of an actionable kernel. Queue 2012, 10, 40–51. [Google Scholar] [CrossRef]
- Object Management Group, OMG. Essence—Kernel and Language for Software Engineering Methods, version 1.2; Object Management Group (OMG): Milford, MA, USA, 2018.
- Jacobson, I.; Lawson, H.B.; Ng, P.W.; McMahon, P.E.; Goedicke, M. The Essentials of Modern Software Engineering: Free the Practices from the Method Prisons! Association for Computing Machinery and Morgan & Claypool: New York, NY, USA, 2019. [Google Scholar]
- Cruz, P.; Astudillo, H.; Zapata-Jaramillo, C.M. Extending the SEMAT Kernel to Represent and Assess Software Architecture Evaluations. In Proceedings of the 2023 XLIX Latin American Computer Conference (CLEI), La Paz, Bolivia, 16–20 October 2023; pp. 1–8. [Google Scholar] [CrossRef]
- Ernst, N.A.; Klein, J.; Bartolini, M.; Coles, J.; Rees, N. Architecting complex, long-lived scientific software. J. Syst. Softw. 2023, 204, 111732. [Google Scholar] [CrossRef]
- Kazman, R.; Abowd, G.; Bass, L.; Clements, P. Scenario-based analysis of software architecture. IEEE Softw. 1996, 13, 47–55. [Google Scholar] [CrossRef]
- Cámara, J.; de Lemos, R.; Vieira, M.; Almeida, R.; Ventura, R. Architecture-based resilience evaluation for self-adaptive systems. Computing 2013, 95, 689–722. [Google Scholar] [CrossRef]
- Kazman, R.; Klein, M.; Clements, P. ATAM: Method for Architecture Evaluation. Technical Report CMU/SEI-2000-TR-004. 2000. Available online: https://www.sei.cmu.edu/library/atam-method-for-architecture-evaluation/ (accessed on 25 April 2024).
- Bengtsson, P.; Bosch, J. Scenario-based software architecture reengineering. In Proceedings of the Fifth International Conference on Software Reuse (Cat. No.98TB100203), Victoria, BC, Canada, 5 June 1998; pp. 308–317. [Google Scholar] [CrossRef]
- van Heesch, U.; Eloranta, V.P.; Avgeriou, P.; Koskimies, K.; Harrison, N. Decision-Centric Architecture Reviews. IEEE Softw. 2014, 31, 69–76. [Google Scholar] [CrossRef]
- Harrison, N.; Avgeriou, P. Pattern-Based Architecture Reviews. IEEE Softw. 2011, 28, 66–71. [Google Scholar] [CrossRef]
- Reijonen, V.; Koskinen, J.; Haikala, I. Experiences from Scenario-Based Architecture Evaluations with ATAM. In Proceedings of the Software Architecture; Babar, M.A., Gorton, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 214–229. [Google Scholar]
- Ford, N.; Richards, M.; Sadalage, P.; Dehghani, Z. Software Architecture: The Hard Parts; O’Reilly Media: Sebastopol, CA, USA, 2021. [Google Scholar]
- Becker, S.; Trifu, M.; Reussner, R. Towards supporting evolution of service-oriented architectures through quality impact prediction. In Proceedings of the 2008 23rd IEEE/ACM International Conference on Automated Software Engineering—Workshops, L’Aquila, Italy, 15–16 September 2008; pp. 77–81. [Google Scholar] [CrossRef]
- Knodel, J.; Naab, M. Software Architecture Evaluation in Practice: Retrospective on More Than 50 Architecture Evaluations in Industry. In Proceedings of the 2014 IEEE/IFIP Conference on Software Architecture, Sydney, NSW, Australia, 7–11 April 2014; pp. 115–124. [Google Scholar] [CrossRef]
- Maranzano, J.; Rozsypal, S.; Zimmerman, G.; Warnken, G.; Wirth, P.; Weiss, D. Architecture reviews: Practice and experience. IEEE Softw. 2005, 22, 34–43. [Google Scholar] [CrossRef]
- Kazman, R.; Bass, L. Making architecture reviews work in the real world. IEEE Softw. 2002, 19, 67–73. [Google Scholar] [CrossRef]
- Ferber, S.; Heidl, P.; Lutz, P. Reviewing Product Line Architectures: Experience Report of ATAM in an Automotive Context. In Proceedings of the Software Product-Family Engineering; van der Linden, F., Ed.; Springer: Berlin/Heidelberg, Germany, 2002; pp. 364–382. [Google Scholar]
- Ali Babar, M.; Bass, L.; Gorton, I. Factors Influencing Industrial Practices of Software Architecture Evaluation: An Empirical Investigation. In Proceedings of the Software Architectures, Components, and Applications; Overhage, S., Szyperski, C.A., Reussner, R., Stafford, J.A., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 90–107. [Google Scholar]
- Knodel, J.; Naab, M. Pragmatic Evaluation of Software Architectures, 1st ed.; Springer Publishing Company, Incorporated: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
- Obbink, H.; Kruchten, P.; Kozaczynski, W.; Postema, H.; Ran, A.; Dominick, L.; Kazman, R.; Hilliard, R.; Tracz, W.; Kahane, E. Software Architecture Review and Assessment (SARA) Report, Version 1.0. 2002. Available online: https://philippe.kruchten.com/wp-content/uploads/2011/09/sarav1.pdf (accessed on 8 March 2026).
- ISO/IEC/IEEE 42030:2019; Software, Systems and Enterprise—Architecture Evaluation Framework. ISO/IEC/IEEE: Geneva, Switzerland, 2019.
- ISO/IEC/IEEE 42020:2019; Software, Systems and Enterprise—Architecture Processes. ISO/IEC/IEEE: Geneva, Switzerland, 2019.
- Jacobson, I.; Spence, I.; Johnson, P.; Kajko-Mattsson, M. Re-founding software engineering—SEMAT at the age of three (keynote abstract). In Proceedings of the 2012 27th IEEE/ACM International Conference on Automated Software Engineering, Essen, Germany, 3–7 September 2012; pp. 15–19. [Google Scholar] [CrossRef]
- Päivärinta, T.; Smolander, K. Theorizing about software development practices. Sci. Comput. Program. 2015, 101, 124–135. [Google Scholar] [CrossRef]
- Jacobson, I.; Huang, S.; Kajko-Mattsson, M.; Mcmahon, P.; Seymour, E. Semat–Three Year Vision. Program. Comput. Softw. 2012, 38, 1–12. [Google Scholar] [CrossRef]
- Ray, P.; Pal, P. Extending the SEMAT Kernel for the Practice of Designing and Implementing Microservice-Based Applications using Domain Driven Design. In Proceedings of the 2020 IEEE 32nd Conference on Software Engineering Education and Training (CSEE&T), Munich, Germany, 9–12 November 2020; pp. 1–4. [Google Scholar] [CrossRef]
- Savić, V.; Varga, E. Extending the SEMAT Kernel with the TDD practice. IET Softw. 2018, 12, 85–95. [Google Scholar] [CrossRef]
- Nørbjerg, J.; Dittrich, Y. The never-ending story–How companies transition to and sustain continuous software engineering practices. J. Syst. Softw. 2024, 213, 112056. [Google Scholar] [CrossRef]
- Stake, R. Multiple Case Study Analysis; Guilford Publications: New York, NY, USA, 2013. [Google Scholar]
- Wohlin, C.; Runeson, P.; Höst, M.; Ohlsson, M.C.; Regnell, B.; Wesslén, A. Experimentation in Software Engineering; Springer Publishing Company, Incorporated: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
- Abma, T.A.; Stake, R.E. Science of the Particular: An Advocacy of Naturalistic Case Study in Health Research. Qual. Health Res. 2014, 24, 1150–1161. [Google Scholar] [CrossRef] [PubMed]
- Wohlin, C.; Rainer, A. Is it a case study?—A critical analysis and guidance. J. Syst. Softw. 2022, 192, 111395. [Google Scholar] [CrossRef]
- Wallace, D.R.; Zelkowitz, M.V. Experimental Models for Validating Technology. Computer 1998, 31, 23–31. [Google Scholar] [CrossRef]
- Yin, R. Case Study Research and Applications: Design and Methods; SAGE Publications: Thousand Oaks, CA, USA, 2017. [Google Scholar]
- Robson, C. Real World Research; Wiley: Hoboken, NJ, USA, 2024. [Google Scholar]
- Silverman, D. Doing Qualitative Research: A Practical Handbook, 3rd ed.; SAGE: London, UK, 2010. [Google Scholar]
- Shull, F. Sharing Your Story. IEEE Softw. 2013, 30, 4–7. [Google Scholar] [CrossRef]
- CMMI Product Team. CMMI for Development, version 1.3. Technical Report CMU/SEI-2010-TR-033. SEI/CMU: Pittsburgh, PA, USA, 2010.
- Herbsleb, J.; Zubrow, D.; Goldenson, D.; Hayes, W.; Paulk, M. Software quality and the Capability Maturity Model. Commun. ACM 1997, 40, 30–40. [Google Scholar] [CrossRef]
- van Lamsweerde, A. Goal-oriented requirements engineering: A guided tour. In Proceedings of the Fifth IEEE International Symposium on Requirements Engineering, Toronto, ON, Canada, 27–31 August 2001; pp. 249–262. [Google Scholar] [CrossRef]
- Harrison, N.B.; Avgeriou, P. Leveraging Architecture Patterns to Satisfy Quality Attributes. In Proceedings of the Software Architecture; Oquendo, F., Ed.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 263–270. [Google Scholar]
- Gorton, I. Essential Software Architecture, 2nd ed.; Springer Publishing Company, Incorporated: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
- ISO/IEC 25010:2023; Systems and Software Engineering—Systems and Software Quality Requirements and Evaluation (SQuaRE)—Product Quality Model. ISO/IEC: Geneva, Switzerland, 2023.
- Axelsson, J.; Skoglund, M. Quality assurance in software ecosystems: A systematic literature mapping and research agenda. J. Syst. Softw. 2016, 114, 69–81. [Google Scholar] [CrossRef]
- Bachmann, F.; Bass, L.; Klein, M.; Shelton, C. Designing software architectures to achieve quality attribute requirements. IEE Proc. Softw. 2005, 152, 153–165. [Google Scholar] [CrossRef]
- Dutoit, A.H.; McCall, R.; Mistrik, I.; Paech, B. Rationale Management in Software Engineering; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
- Perry, D.E.; Wolf, A.L. Foundations for the study of software architecture. SIGSOFT Softw. Eng. Notes 1992, 17, 40–52. [Google Scholar] [CrossRef]
- Bosch, J. Software Architecture: The Next Step. In Proceedings of the Software Architecture; Oquendo, F., Warboys, B.C., Morrison, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 194–199. [Google Scholar]
- Che, M.; Perry, D.E.; Yang, G. Evaluating Architectural Design Decision Paradigms in Global Software Development. Int. J. Softw. Eng. Knowl. Eng. 2015, 25, 1677–1692. [Google Scholar] [CrossRef]
- Me, G.; Calero, C.; Lago, P. Architectural Patterns and Quality Attributes Interaction. In Proceedings of the 2016 Qualitative Reasoning about Software Architectures (QRASA), Venice, Italy, 5–8 April 2016; pp. 27–36. [Google Scholar] [CrossRef]
- Tang, A.; Avgeriou, P.; Jansen, A.; Capilla, R.; Ali Babar, M. A comparative study of architecture knowledge management tools. J. Syst. Softw. 2010, 83, 352–370. [Google Scholar] [CrossRef]
- Jansen, A.; Bosch, J. Software Architecture as a Set of Architectural Design Decisions. In Proceedings of the 5th Working IEEE/IFIP Conference on Software Architecture (WICSA’05), Pittsburgh, PA, USA, 6–10 November 2005; pp. 109–120. [Google Scholar] [CrossRef]
- Capilla, R.; Jansen, A.; Tang, A.; Avgeriou, P.; Babar, M.A. 10 years of software architecture knowledge management: Practice and future. J. Syst. Softw. 2016, 116, 191–205. [Google Scholar] [CrossRef]
- Babar, M.A.; Dingsyr, T.; Lago, P.; van Vliet, H. Software Architecture Knowledge Management: Theory and Practice, 1st ed.; Springer Publishing Company, Incorporated: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
- Tofan, D. Tacit architectural knowledge. In ECSA ’10: Proceedings of the Fourth European Conference on Software Architecture: Companion Volume; Association for Computing Machinery: New York, NY, USA, 2010; pp. 9–11. [Google Scholar] [CrossRef]
- Borrego, G.; Morán, A.L.; Palacio, R.R.; Vizcaíno, A.; García, F.O. Towards a reduction in architectural knowledge vaporization during agile global software development. Inf. Softw. Technol. 2019, 112, 68–82. [Google Scholar] [CrossRef]
- Nonaka, I.; Takeuchi, H. The Knowledge-Creating Company; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
- Eilon, S. What Is a Decision? Manag. Sci. 1969, 16, B172–B189. [Google Scholar] [CrossRef]
- ISO/IEC/IEEE 42010:2022; Software, Systems and Enterprise—Architecture Description. ISO/IEC/IEEE: Geneva, Switzerland, 2022.
- Salger, F. Software Architecture Evaluation in Global Software Development Projects. In Proceedings of the on the Move to Meaningful Internet Systems: OTM 2009 Workshops; Meersman, R., Herrero, P., Dillon, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 391–400. [Google Scholar]
- Verdecchia, R.; Kruchten, P.; Lago, P.; Malavolta, I. Building and evaluating a theory of architectural technical debt in software-intensive systems. J. Syst. Softw. 2021, 176, 110925. [Google Scholar] [CrossRef]
- Woods, E.; Rozanski, N. Unifying software architecture with its implementation. In ECSA ’10: Proceedings of the Fourth European Conference on Software Architecture: Companion Volume; Association for Computing Machinery: New York, NY, USA, 2010; pp. 55–58. [CrossRef]
- Fairbanks, G. Software Architecture is a Set of Abstractions. IEEE Softw. 2023, 40, 110–113. [Google Scholar] [CrossRef]
- Han, E. Setting Business Goals & Objectives: 4 Considerations. 2023. Available online: https://online.hbs.edu/blog/post/business-goals-and-objectives (accessed on 20 March 2025).
- Kazman, R.; Bass, L. Categorizing Business Goals for Software Architectures. Technical Report CMU/SEI-2005-TR-021. 2005. Available online: https://www.sei.cmu.edu/library/categorizing-business-goals-for-software-architectures/ (accessed on 31 July 2024).
- Bass, L.; Clements, P. Business Goals and Architecture. In Relating Software Requirements and Architectures; Avgeriou, P., Grundy, J., Hall, J.G., Lago, P., Mistrík, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 183–195. [Google Scholar] [CrossRef]
- Clements, P.; Bass, L. Business goals as architectural knowledge. In SHARK ’10: Proceedings of the 2010 ICSE Workshop on Sharing and Reusing Architectural Knowledge; Association for Computing Machinery: New York, NY, USA, 2010; pp. 9–12. [Google Scholar] [CrossRef]
- Bass, L.; Clements, P. The Business Goals Viewpoint. IEEE Softw. 2010, 27, 38–45. [Google Scholar] [CrossRef]
- Gross, D.; Yu, E. Evolving system architecture to meet changing business goals: An agent and goal-oriented approach. In Proceedings of the Fifth IEEE International Symposium on Requirements Engineering, Toronto, ON, Canada, 27–31 August 2001; pp. 316–317. [Google Scholar] [CrossRef]
- Clements, P.; Bass, L. Relating Business Goals to Architecturally Significant Requirements for Software Systems. Technical Report CMU/SEI-2010-TN-018. 2010. Available online: https://www.sei.cmu.edu/library/relating-business-goals-to-architecturally-significant-requirements-for-software-systems/ (accessed on 31 July 2024).
- Chung, L.; Nixon, B.A.; Yu, E.; Mylopoulos, J. The NFR Framework in Action. In Non-Functional Requirements in Software Engineering; Springer: Boston, MA, USA, 2000; pp. 15–45. [Google Scholar] [CrossRef]
- Sletholt, M.T.; Hannay, J.E.; Pfahl, D.; Langtangen, H.P. What Do We Know about Scientific Software Development’s Agile Practices? Comput. Sci. Eng. 2012, 14, 24–37. [Google Scholar] [CrossRef]
- Morey, D.; Maybury, M.T.; Thuraisingham, B. Knowledge Management: Classic and Contemporary Works; The MIT Press: Cambridge, MA, USA, 2000. [Google Scholar] [CrossRef]
- Runeson, P.; Höst, M. Guidelines for conducting and reporting case study research in software engineering. Empir. Softw. Eng. 2009, 14, 131–164. [Google Scholar] [CrossRef]
- Stake, R.E. The Art of Case Study Research; Sage Publications, Inc.: Thousand Oaks, CA, USA, 1995; p. 175. [Google Scholar]
- Hussain, S.M.; Bhatti, S.N.; Ur Rasool, M.F. Legacy system and ways of its evolution. In Proceedings of the 2017 International Conference on Communication Technologies (ComTech), Rawalpindi, Pakistan, 19–21 April 2017; pp. 56–59. [Google Scholar] [CrossRef]
- Wagner, C. Model-Driven Software Migration: A Methodology Reengineering, Recovery and Modernization of Legacy Systems; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
- Abdellatif, M.; Shatnawi, A.; Mili, H.; Moha, N.; Boussaidi, G.E.; Hecht, G.; Privat, J.; Guéhéneuc, Y.G. A taxonomy of service identification approaches for legacy software systems modernization. J. Syst. Softw. 2021, 173, 110868. [Google Scholar] [CrossRef]
- Bisbal, J.; Lawless, D.; Wu, B.; Grimson, J. Legacy information systems: Issues and directions. IEEE Softw. 1999, 16, 103–111. [Google Scholar] [CrossRef]
- Bennett, K. Legacy systems: Coping with success. IEEE Softw. 1995, 12, 19–23. [Google Scholar] [CrossRef]
- Yarza, I.; Azkarate-askatsua, M.; Onaindia, P.; Grüttner, K.; Ittershagen, P.; Nebel, W. Legacy software migration based on timing contract aware real-time execution environments. J. Syst. Softw. 2021, 172, 110849. [Google Scholar] [CrossRef]
- Barbacci, M.; Wood, W. Architecture Tradeoff Analyses of C4ISR Products. Technical Report CMU/SEI-99-TR-014. 1999. Available online: https://www.sei.cmu.edu/documents/1207/1999_005_001_16760.pdf (accessed on 2 August 2024).
- ISO/IEC 25010:2011; Systems and Software Engineering—Systems and Software Quality Requirements and Evaluation (SQuaRE)—Product Quality Model. ISO/IEC: Geneva, Switzerland, 2011.
- Cruz, P.; Astudillo, H. Assessing development teams and organizations with the SEMAT Kernel: Lessons learned from a real-world experience. In Proceedings of the 2017 36th International Conference of the Chilean Computer Science Society (SCCC), Arica, Chile, 16–20 October 2017; pp. 1–4. [Google Scholar] [CrossRef]
- Cruz, P.; Astudillo, H. Using the SEMAT Kernel for Software Process Assessment and Practices Implementation in a Software Process Improvement Initiative. In Proceedings of the 2018 37th International Conference of the Chilean Computer Science Society (SCCC), Santiago, Chile, 5–9 November 2018; pp. 1–7. [Google Scholar] [CrossRef]
- Shadish, W.; Cook, T.; Campbell, D. Experimental and Quasi-Experimental Designs for Generalized Causal Inference; Houghton Mifflin: Boston, MA, USA, 2002. [Google Scholar]
- Maxwell, J.A. Qualitative Research Design: An Interactive Approach; Sage Publications, Inc.: Thousand Oaks, CA, USA, 1996. [Google Scholar]
| Design Aspect | Explanation |
|---|---|
| Goal | |
| Cases | |
| Data Gathering Strategies | |
| Kinds of Triangulation | |

| Observed Effect | Related Alpha | Evidence Observed | Action | Consequence |
|---|---|---|---|---|
| Progress tracking | | | | |
| Reporting | | | | |
| Decision handling | | | | |
| Stakeholder participation | | | | |
| Organizational memory | | | | |

| Observed Effect | Related Alpha | Evidence Observed | Action | Consequence |
|---|---|---|---|---|
| Progress tracking | | | | |
| Reporting | | | | |
| Decision handling | | | | |
| Stakeholder participation | | | | |
| Organizational memory | | | | |
| Characteristic | Case Study 1 | Case Study 2 |
|---|---|---|
| Type of institution | Military. | Public (i.e., state-run) cancer-treatment-oriented hospital. |
| Main function | Human Resources Management System. | Electronic Health Record System. |
| Type of system | Legacy, still in use and critical. | Legacy, still in use and critical. |
| Method or approach | Use of the Alphas and the ATAM (Architecture Tradeoff Analysis Method) for guiding and assessing progression. | Use of the Alphas for guiding and assessing progression with a decision-challenging focus. |
| Number of stakeholders involved | ∼60 people, including a permanent evaluation team of 7 people, with one evaluation leader (external to the institution). | ∼15 people, including an evaluation leader and evaluation supporters (both external to the Hospital). |
| Main observations (specific to one case) | | |
| Main observations (in both cases) | | |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Cruz, P.; Solar, M.; Astudillo, H. Towards Software Architecture as an Auditable Practice. Appl. Sci. 2026, 16, 3020. https://doi.org/10.3390/app16063020