DS PRO-S: A Success Assessment Model and Methodology for Data Science Projects
Abstract
1. Introduction
2. Background
3. Methodology
3.1. DSRM Activity 1: Problem Identification
3.2. DSRM Activity 2: Objectives of the Solution
3.3. DSRM Activity 3: Design and Development
3.4. DSRM Activity 4 and Activity 5: Demonstration and Evaluation
4. DS PRO-S: Data Science Project Success Assessment Model
4.1. DS PRO-S Meta-Model Components
4.2. Integrated Lifecycle View
4.3. Success Constructs, Assessment Levels, and Dimensions
4.4. Modules of DS PRO-S
4.4.1. Project Success Structure
4.4.2. Phase Success Structure
- Deliverable Success: The main phase deliverables meet their stated requirements and specifications, as well as the relevant stakeholders’ expectations (e.g., validated data sets, engineered features, trained models).
- Phase Management Success: The phase is completed within its planned scope, schedule, and cost. In phases where baselines are not yet applicable (e.g., pre-kickoff phases), management success is evidenced through process adherence and governance (e.g., decision cadence, risk/assumption logging, required reviews and sign-offs).
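To make this aggregation concrete, the following minimal Python sketch combines normalized scores for the two categories above into a phase-level score. The linear weighted-sum rule and the example figures are illustrative assumptions (the weights mirror those used in the Case A phase success structure), not a prescribed implementation.

```python
# Minimal sketch: weighted aggregation of phase success categories.
# The linear weighted-sum rule and the example scores/weights are
# assumptions; DS PRO-S lets practitioners configure both.

def phase_success_score(category_scores: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Aggregate normalized category scores (0-100%) into a phase score."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("Category weights must sum to 1.")
    return sum(weights[c] * category_scores[c] for c in weights)

# Hypothetical phase assessment: deliverables on track, management lagging.
scores = {"Deliverable Success": 80.0, "Phase Management Success": 60.0}
weights = {"Deliverable Success": 0.67, "Phase Management Success": 0.33}
print(f"Phase Success Score: {phase_success_score(scores, weights):.1f}%")
# -> Phase Success Score: 73.4%
```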

4.4.3. Phase Health Structure
4.4.4. Project Health Structure
4.4.5. Measurement Information Models (MIMs) as Measurement and Evaluation Specifications
4.5. DS PRO-S Operational Methodology
- Be formally reviewed and approved by designated stakeholders (e.g., senior management, sponsor);
- Be incrementally versioned (e.g., DS PRO-S Baseline v1, v2);
- Be justified and documented, with a rationale linked to the evaluation results, project changes, or stakeholder input.
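As a minimal illustration of this re-baselining discipline, the sketch below records a baseline version together with its approval and rationale; the field names and schema are hypothetical, since DS PRO-S prescribes the rules above rather than a data structure.

```python
# Minimal sketch of a re-baselining record (field names are hypothetical;
# DS PRO-S prescribes approval, versioning, and rationale, not a schema).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BaselineVersion:
    version: str                 # e.g., "DS PRO-S Baseline v2"
    approved_by: str             # designated stakeholder (e.g., sponsor)
    approval_date: date
    rationale: str               # link to evaluation results or changes
    changed_elements: list[str] = field(default_factory=list)

v2 = BaselineVersion(
    version="DS PRO-S Baseline v2",
    approved_by="Project Sponsor",
    approval_date=date(2025, 3, 1),
    rationale="Re-weighted success categories after a scope change.",
    changed_elements=["Category weights", "One success criterion target"],
)
```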
5. Multiple Case Studies
5.1. Objective and Research Questions
5.2. Case Selection and Unit of Analysis
- Unit of Analysis: The unit of analysis is a single data science project.
- Case Selection Rationale:
- Case A (Development Context): A predictive analytics project within a large energy enterprise (Company A) in Türkiye. Selected to validate the application of DS PRO-S in a high-uncertainty, internal environment.
- Case B (Operations Context): A GenAI (LLM) project delivered by a vendor (Company B) in Türkiye. Selected to validate the application of DS PRO-S in a high-stability, commercial environment.
| Characteristic | Case A (Internal/Predictive) | Case B (Vendor/GenAI) |
|---|---|---|
| Organization Size | Large enterprise (>5000 employees) | SME/vendor (~100 employees) |
| Industry | Energy | Information technology (IT) and software services |
| Project Development by | In-house Data Analytics department, delivering to a business unit of the company. | The company is a solution provider (i.e., vendor) delivering to a client. |
| Technology | Predictive Analytics: Machine learning. Deterministic outputs focusing on numerical precision. | Generative AI: Large Language Model (RAG). Probabilistic outputs focusing on semantic relevance and user experience |
| Lifecycle Phase | Development: (two phases) data preparation and modeling | Operations and Maintenance: Focus on stability and SLA compliance. |
5.3. Case Study Protocol
- Materials and Instruments: The validation relied exclusively on the artifact’s components:
- User Guide: Provided to the practitioners to explain the core concepts (Health vs. Success) and the operational cycle.
- Instantiation Kit: A collection of global lists of model elements (such as CSFs, success criteria, phases, deliverables), templates, and derivation logic (e.g., GQM mappings) used to construct specific models.
- Reference Examples: Pre-worked scenarios used to demonstrate the logic of the model.
- The Execution Procedure: The protocol was structured as a three-stage process for each case, mirroring the operational methodology of DS PRO-S.
- Stage 1 (“Contextual Setup and Training”): The researcher introduced the DS PRO-S framework to ensure the project team understood the rationale behind the model and methodology, and the structural distinction between concepts such as CSFs (to define and measure health) and success criteria (to define and measure success).
- Stage 2 (“Define”, Instantiation): The practitioners were tasked with constructing their project-specific models (i.e., Project Success, Phase Success, Phase Health). For the Project Success and Phase Success modules, the practitioners mapped their project/phase goals to success categories and derived success criteria and specific metrics. For Phase Health, the practitioners elicited CSFs under related categories. They then presented their measurement and evaluation specifications for each module.
- Stage 3 (“Measure and Evaluate”): The practitioners executed the instantiated models using real project data (from the data sources and respondents specified in their models) via surveys or direct extracts from systems and reports, which yielded the measurement and evaluation results. They interpreted the calculated results (e.g., Health and Success Scores) and determined actions where necessary. Finally, they provided feedback on DS PRO-S based on their experience, as sketched below.
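The interpretation step of Stage 3 can be sketched with RAG-style decision criteria, one of the rating approaches listed in Appendix A; the thresholds below are illustrative assumptions, since DS PRO-S treats rating scales and decision criteria as configurable elements of the instantiated model.

```python
# Minimal sketch: applying RAG-style decision criteria to an assessment
# score. The thresholds are assumptions; DS PRO-S leaves rating scales
# and decision criteria configurable per instantiated model.

def rate(score: float, amber: float = 60.0, green: float = 80.0) -> str:
    """Map a normalized score (0-100%) to a RAG rating."""
    if score >= green:
        return "Green: on track, no action required"
    if score >= amber:
        return "Amber: monitor and plan corrective action"
    return "Red: immediate intervention required"

print(rate(73.4))  # -> Amber: monitor and plan corrective action
print(rate(38.0))  # -> Red: immediate intervention required
```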
5.4. Results of Case Study A
5.4.1. Case A—Project Success Assessment
5.4.2. Case A—Phase Success Assessment
5.4.3. Case A—Phase Health Assessment
5.4.4. Actions for Improvement in Case A
- Establish Alignment through Dedicated Meetings: To resolve the misalignment between the Analytics and Business Unit teams caused by the lack of formal tracking, weekly meetings will be used to establish a shared understanding of the completed work and the remaining schedule.
- Manage Expectations to Control Timeline Slippage: It is necessary to manage stakeholder expectations more effectively by increasing the frequency of feedback loops. This action is required to communicate the delays caused by the inclusion of external topics or “extra” tasks and ensure that both technical and business stakeholders remain aligned even as the schedule is adjusted.
- Standardize Iteration Tracking via Management Tools: Although data preparation is finalized, ongoing data quality concerns still pose potential risks for further iterations. Additionally, because projects start very quickly with limited Business Understanding (e.g., with few tags suggested by the Business Unit), many subsequent iterations and a high volume of “back-and-forth” information flow result. The team identified that a task management tool must be used to systematically track these technical inputs and manage these complex exchanges more efficiently.
5.5. Results of Case Study B
5.5.1. Case B—Project Success Assessment
5.5.2. Case B—Phase Success Assessment
5.5.3. Case B—Phase Health Assessment
5.5.4. Actions for Improvement in Case B
- Prioritize Software Compliance: To mitigate the risk of low software compliance, a new sprint will be conducted to meet the outstanding software feature requirements.
- Resolve System Robustness Issues Promptly: Although the target date for the stress tests has not yet arrived, there appears to be a planning flaw; therefore, the tests will be moved to an earlier date.
- Train the Client Sales Team on Prompting: When Phase Health and Phase Success Assessment results are evaluated together, the low Incident Management Efficiency Score (57%) is attributed to the Client’s Sales Teams’ poor prompting proficiency (CSF score: 38%). A targeted training course on prompting will be arranged in two weeks.
- Prepare the Configuration Management Process: Currently, Company B lacks a formal process for configuration management, as evidenced by the low CSF score (38%) for Effective Configuration Management. This process will be prepared, documented and put into practice in two weeks.
5.6. Cross-Case Study Analysis and Results
Applicability of DS PRO-S
- Instantiability:
  1. The Instantiation Kit (GQM templates, global lists) allowed for a context-specific model to be built without extensive external guidance (5).
  2. The logic for deriving metrics from higher-level success structures was clear and easy to operationalize (4.5).
- Completeness:
  3. DS PRO-S’ modular structure adequately reflects the multi-dimensional nature of data science project success (Project, Phase, Success, Health) (5).
  4. The global lists of CSFs and success criteria provide sufficient reference content for deriving your own project’s success and health structure (5).
- Operational Utility:
  5. The distinction between “Phase Health” (enablers) and “Phase Success” (goal achievement) provided a more accurate diagnosis of project status (5).
  6. The model outputs (assessment scores) provided actionable insights that improved decision-making compared to existing practices in your organization (5).
6. Discussion
- Theoretical Contribution (Meta-Model Architecture): DS PRO-S proposes a meta-model that can be instantiated for various data science projects instead of relying on a fixed checklist of success criteria or CSFs. This approach strikes a balance between standardization (providing a shared structure) and customization (adapting to specific project objectives and constraints). Furthermore, it separates Success (the achievement of objectives via success criteria) from Health (enablers via CSFs). In addition, DS PRO-S broadens success beyond the “iron triangle” (i.e., scope, time, and cost) by including outcomes, such as Business Value and Financial Value, alongside outputs such as Product Quality and Project Management.
- Methodological Contribution (Formal Measurement and Traceability): The approach keeps assessment at the metric level, with explicit metric types, normalization rules, and aggregation formulas, so scores can be traced back (“drilled down”) to base measures (see the sketch after this list). By aligning the measurement and evaluation with ISO/IEC 15939 (MIM), indicators, rating scales, and decision criteria become more traceable and repeatable.
- Practical Contribution (Modularity and Timely Interventions): DS PRO-S supports assessment during project execution rather than solely at project completion, which helps with early detection and intervention when issues arise. Its modular structure allows for partial adoption (e.g., focusing only on Phase Health during high-risk phases). Phase-level assessment reduces the risk that issues in earlier phases propagate and compound later. Finally, the provided toolkit, which includes global lists of configurable elements, templates, and method explanations, helps practitioners instantiate a valid model, even when their organizational process maturity is limited.
- DS PRO-S was not applied across the whole lifecycle of a data science project; therefore, the Project Health module, which requires measuring the success of phases from the beginning of the project, could not be evaluated in a real-world environment.
- As with any measurement, monitoring, or controlling system, the selection of indicators, their weights, and the target values against which measurements are made can affect the results. In DS PRO-S, this risk is mitigated by introducing a project sponsor or upper-management approval mechanism for the baselines and versions of the models instantiated for projects. In the case studies, this approval mechanism could not be executed because DS PRO-S was not part of the case study organizations’ formal project governance processes.
- Subjective measurements are enabled in DS PRO-S to accommodate the diverse maturity levels of organizations, which may affect the objectivity of the assessment results. Requiring at least two respondents for subjective measurements was a tactic to reduce the effects of this limitation.
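As referenced in the methodological contribution above, score-to-base-measure traceability can be illustrated with a minimal Python sketch; the tree layout, names, and scores below are illustrative assumptions rather than part of the DS PRO-S specification.

```python
# Minimal sketch of metric-level traceability ("drill-down"): each node
# keeps its children, so an aggregate score can be decomposed back to its
# base measures. Structure, names, and scores are illustrative only.

class Node:
    def __init__(self, name, weight=1.0, score=None, children=None):
        self.name, self.weight = name, weight
        self._score, self.children = score, children or []

    def score(self):
        if self._score is not None:          # leaf: a normalized base measure
            return self._score
        total = sum(c.weight for c in self.children)
        return sum(c.weight * c.score() for c in self.children) / total

    def explain(self, indent=0):
        print(" " * indent + f"{self.name}: {self.score():.1f}%")
        for c in self.children:
            c.explain(indent + 2)

project = Node("Project Success", children=[
    Node("Output Success", 0.44, children=[
        Node("Product Quality", 0.71, score=82.0),
        Node("Project Management", 0.29, score=55.0)]),
    Node("Outcome Success", 0.56, children=[
        Node("Business Value", 0.50, score=70.0),
        Node("Financial Value", 0.50, score=90.0)]),
])
project.explain()  # prints every level's score down to base measures
```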
Threats to Validity
- Construct Validity: The core concepts and early outputs (e.g., design principles) were grounded on established theories and the prior literature using systematic and multivocal reviews. Before the case studies, expert interviews were used to check the comprehensiveness, consistency, and comprehensibility of the model elements and the operational methodology.
- Internal Validity: During the case studies, assessment results were discussed with stakeholders to confirm that the output reflects the actual project situation. Scenario-based demonstrations were used to verify the internal logic and consistency of the measurement and evaluation calculations, and to remove potential errors before real-world use. Traceability from objectives and HLRs to design decisions and outputs also supported the controlled refinement of the model over iterations.
- External Validity: Two different case organizations were selected to explore the model’s applicability in diverse settings. The assessments were considered meaningful by the stakeholders in both cases, suggesting that the approach can be transferred beyond a single project type or organizational context.
- Reliability: Where possible, computations relied on objective measures; when subjective inputs were required, feedback from more than one stakeholder was collected to reduce single-person bias and to keep the evaluation more consistent.
7. Conclusions
- A longitudinal case study could be conducted to cover all phases of a data science project, from inception to completion.
- Pre-configured template models for key sub-domains, project types, or specific organizational settings in the data science domain can be produced with tailored instantiation kits to increase adoption.
- A decision support system can be developed to suggest CSFs and success criteria, based on organizational processes and project documentation.
- The DS PRO-S meta-model, instantiation kit, and methodology can be packaged as a software application so that organizations can adopt DS PRO-S more easily.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| DS PRO-S | Data Science Project Success Assessment Model |
| CSF | Critical Success Factor |
| DSR | Design Science Research |
| DSRM | Design Science Research Methodology |
| RQ | Research Question |
| MLR | Multivocal Literature Review |
| SLR | Systematic Literature Review |
| HLR | High-Level Requirements |
| P-DOTS | Project, Data, Organization, Technology, Strategy |
| MIM | Measurement Information Model |
| IS | Information Systems |
| PMO | Project Management Office(r) |
| GQM | Goal Question Metric |
Appendix A
| ID | High-Level Requirement | Objective | Reference |
|---|---|---|---|
| HLR-0 | Provide mechanisms for defining Project Success constructs. | O-0 | Basic Capability—Define success constructs |
| HLR-1 | Provide mechanisms and/or rules for measuring each defined success construct. | O-0 | Basic Capability—Measure success constructs |
| HLR-2 | Enable evaluation of measurement results to support decisions. | O-0 | Basic Capability—Evaluate results |
| HLR-3 | Provide a comprehensive definition of success constructs (i.e., beyond cost, time and scope) | O-1 | “The project success concept has more criteria than that of the iron triangle” [65] |
| HLR-4 | Enable both result-based and enabler-based assessment so that interventions can be timely and targeted. | O-2 | Fundamental purpose of the model |
| HLR-5 | Reflect data science project characteristics. | O-3 | “Each project should be measured in a context-specific way.” [48,51] |
| HLR-6 | Enable customization to project-specific circumstances. | O-5 | |
| HLR-7 | Provide a core backbone of the shared categories of success and evaluation flows for benchmarking and comparison. | O-6 | “There is a need for common elements that can be compared between projects including project success assessment methods, monitoring and controlling projects.” [52] |
| HLR-8 | Incorporate stakeholder perspectives into the success assessment process, where feasible. | O-1 | “Therefore, it is very important for project success to be measured by taking into consideration the perceptions or business values of the project from the viewpoint of those stakeholders who are possible beneficiaries of the project.” [46] |
| HLR-9 | Enable assessment throughout the lifecycle of projects. | O-4 | “In practice, model/methods for success evaluation should be defined considering not only the performance during the project but also the impacts post-project …” [70] |
| HLR-10 | Enable ex-post assessment of the project. | O-4 | |
| HLR-11 | Enable assessment of interim phases related to the project. | O-4 | “… the relative importance of various critical success factors are subject to change at different phases of the project implementation.” [43] |
| HLR-12 | Account for the influence of a phase’s success in the evaluation of its subsequent phases. | O-4 | “In each life-cycle phase, the influence of the success of the preceding phase is always significant and, in fact, far exceeds that of other success factors listed in the model.” [44] |
| HLR-13 | Specify concrete metrics or their derivation mechanisms to measure success constructs. | O-7 | “In many cases, even though the project owners have a fairly good knowledge of the project, they do not use such knowledge effectively in the definition of KPI and success indicators, just because they do not follow a proper methodology that provides guidelines for the definition and computation of KPI and success indicators.” [53] |
| HLR-14 | Provide mechanisms for adapting the assessment process to changing project conditions. | O-5 | “Moreover, reflection on project success can also change as time progresses, conditions change, and the project is viewed in longer retrospect.” [38] |
| HLR-15 | Provide a comprehensive set of implementation resources | O-8 | “… questions about “who,” “when” and “how” the evaluation of project success should be done, are not well answered yet …” [71] |
| ID | Context | Explanation |
|---|---|---|
| A-1 | Relationship with existing process, project or quality management models, methods or standards | The model shall not replace any process, project, or quality management frameworks. |
| E-1 | Success at the strategic level | Assessing project success in terms of its contribution to the long-term strategy of the organization is out of scope. |
| W-Questions | Answers |
|---|---|
| Why do we assess the success of data science projects? | To identify pain points before it is too late or too expensive to take corrective actions; to make informed decisions on whether to cancel unused, outdated, or no-longer-useful projects or to continue supporting them; to understand whether the targets of the project are achieved; to ensure targeted functionality or the correct utilization of project outputs (secondary); to compare performance across projects (secondary); to drive continuous improvement |
| What aspects of the data science projects do we assess? | Meeting project objectives; ensuring enabling conditions |
| Who conducts and acts on the assessments? | Conducted by: Project Manager, Project Lead, PMO Analyst, or any relevant role at the organization. Actioned by: Project/Portfolio Managers, Department Heads, Executives, Sponsors, Steering Committee, or any relevant role at the organization |
| Where in the lifecycle of data science projects or organizational processes do assessments belong? | Pre-selection processes; pre-deployment phases; post-deployment phases |
| When do we run assessments? | (Options) After a phase closes; at predefined intervals (e.g., every month); on milestone dates (e.g., release); triggered by threshold breaches (e.g., 100% consumption of budget) |
| No. | Design Focus | Related HLR | Design Principle | Design Elements (Meta-Model) | Design Elements (Instantiation) |
|---|---|---|---|---|---|
| DP-1 | Success Constructs | HLR-0 HLR-4 | Build the model on two basic success constructs: Success criteria (what defines success) CSF (what increases success likelihood) | Success Criterion CSF | |
| DP-2 | Project Component > Project Structure | HLR-0 HLR-9 | Represent each project with a goal and an ordered sequence of phases. | Project Goal Phase | |
| DP-3 | Model Components | HLR-0, HLR-1, HLR-2 | Group model elements under four core components: Project Success Measurement Evaluation | Project Component Success Component Measurement Component Evaluation Component | |
| DP-4 | Defining Success Constructs > Categories as Backbone | HLR-0 HLR-3 HLR-7 | For benchmarking and comparison, maintain a fixed backbone of categories while allowing success constructs to fit into that backbone | Categories for success criteria Categories for CSFs | |
| DP-5 | Defining Success Constructs > Prioritization | HLR-6 | Reflect on the context-specificity of success by enabling the prioritization of success constructs and categories | Weights Weighting Methods | |
| DP-6 | Defining Success Constructs > Goal and Success category connection | HLR-7 | Distribute goal into a set of objectives which map to success categories. Success criteria will be selected for each objective. | Objective Categories for success criteria Success Criterion | |
| DP-7 | Defining Success Constructs > Success Criteria Elicitation | HLR-0 HLR-6 HLR-13 | Facilitate the selection of success criteria and their associated metrics by providing examples or derive them using an applicable derivation method | Success criterion metric Derivation method of success criteria | Candidate success criterion/metric catalog, Template(s) for the derivation of success criteria |
| DP-8 | Defining Success Constructs > CSF Elicitation | HLR-0 HLR-6 | Facilitate the selection of CSFs by providing examples or derive them using an applicable derivation method | Critical success factor Derivation method of CSFs | Candidate CSF catalog Template(s) for the derivation of CSFs |
| DP-9 | Measuring Success Constructs | HLR-1 HLR-10 HLR-13 | Organize measurements and evaluation at both project and phase levels. | Phase-level assessments; project-level assessments | |
| DP-10 | Measuring Success Constructs | HLR-1 HLR-13 | Support both quantitative (defined metrics where data is available), and qualitative measurement (simple attainment or graded scales) | Measurement specification | |
| DP-11 | Measuring Success Constructs | HLR-2 HLR-12 HLR-13 | Aggregate leaf-level measurements into upper-level scores via a defined measurement framework and configurable weighting methods | Measurement specification Weighting method Weights | Weighting method options |
| DP-12 | Evaluating Results | HLR-2 HLR-12 HLR-13 | Provide interpretation guidelines to support decision-making | Rating scales and decision criteria | Templates for Rating-scales and Decision-criteria |
| DP-13 | Defining Success Constructs | HLR-12 | Allow for reflections on a phase’s success to affect the successive phase (e.g., via a CSF) | CSF or success criterion | |
| DP-14 | Assessment Frequency | HLR-9 HLR-10 HLR-11 | Establish a minimal managed frequency (e.g., once at the phase end) and allow for periodic plans to be defined (e.g., monthly) | Assessment schedule | |
| DP-15 | Stakeholder Perspectives Integration | HLR-8 | Capture stakeholder views for subjective measurements (e.g., for CSFs) | Measurement Specification | |
| DP-16 | Adaptiveness | HLR-14 | Support re-baselining of configured constructs and elements as project conditions evolve. | Re-baselining Versioning | |
| DP-17 | Implementation Resources | HLR-15 | Provide all necessary implementation processes, templates, and recommended methods so practitioners know how to execute each assessment step. | Process diagrams (BPMN) User Guide Template Library (Forms) | |
| DP-18 | Data Science Specificness | HLR-5 | Enable the model to meet data science project-specific characteristics and for candidate instantiations to be suggested for each core model element | To be determined (adjusted/new model elements, attributes or functionality) | Candidate model element (e.g., success criteria, metric, CSF, goal, phase, etc.) catalogs |
| Design Principle | Purpose | Method | For Model Usage | For Model Development (To Do) | For Model Development (Step) |
|---|---|---|---|---|---|
| DP-6 DP-7 | Success Criteria Elicitation | GQM | Practitioners can utilize this method to determine success criteria under each success category | Provide GQM templates as a guide in the Instantiation Toolkit | Prepare the Instantiation Toolkit |
| DP-8 | CSFs Elicitation | (Proposed) Phase-Challenge-CSF | Practitioners can utilize this method to determine CSFs for each phase by identifying tasks and challenges | Review and provide typical data science project challenges, project phases, and main tasks in the Instantiation Kit; provide a short description of this method | Investigate Domain Project Characteristics; Prepare the Instantiation Toolkit |
| DP-5 | Weighting Method | AHP; $100 Allocation; T-Shirt Sizing | Practitioners will select one of these methods (or use their organization’s own method) to determine the weights of success constructs or categories | Provide basic descriptions and links for the methods in the Instantiation Kit | Prepare the Instantiation Toolkit |
| DP-10 DP-11 | Measurement and Evaluation Method | Measurement Information Model (MIM) | | Construct the meta-model’s measurement and evaluation specifications (including entities, rules, formulations, etc.); develop MIMs for phase- and project-level measurements and evaluations; consider the developed MIMs in creating the operational methodology | Construct the Meta-Model as a Blueprint; Develop the Implementation Guidance |
| DP-12 | Rating Scales and Decision Criteria | RAG rating system; RAG + B rating system; rating scale in ISO/IEC 33020 [72] | | Suggest configurable rating scales with thresholds and decision criteria based on these approaches, and use them in developing MIMs | Construct the Meta-Model as a Blueprint |
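
For the AHP option listed above, the following minimal sketch derives construct weights from a pairwise-comparison matrix using the geometric-mean approximation of the principal eigenvector; the pairwise judgments are invented for illustration.

```python
# Minimal AHP sketch: derive weights from a pairwise-comparison matrix
# via the geometric-mean approximation. The judgments below are invented;
# in DS PRO-S, practitioners compare success constructs or categories.
import math

def ahp_weights(pairwise: list[list[float]]) -> list[float]:
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Example: "Deliverable Success is judged twice as important as
# Phase Management Success" (Saaty scale).
matrix = [[1.0, 2.0],
          [0.5, 1.0]]
print([round(w, 2) for w in ahp_weights(matrix)])  # -> [0.67, 0.33]
```
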
| Module | Base Measures (Input Data) | Derived Measures (Calculated/Normalized) | Indicators (for Decision Support) |
|---|---|---|---|
| Project Success | Measured value of success criterion metric m at time t; measurement timestamp | Normalized metric value (per metric, %); standard deviation of normalized subjective responses (per metric, percentage points); duration consumption rate (per metric, %); relative metric achievement (per metric, %); array of possible final normalized values (per metric, array of %) | Absolute Success Scores (%); Relative Success Scores (%); SC Perception Variability (percentage points); Weighted Evidence Coverage (data completeness) (%); Maturity Shares (%); Array of Possible Final Absolute Success Scores (for any level L) |
| Phase Success | Measured value of success criterion metric m at time t; measurement timestamp | Normalized metric value (per metric, %); standard deviation of normalized subjective responses (per metric, percentage points); duration consumption rate (per metric, %); relative metric achievement (per metric, %); array of possible final normalized values (per metric, array of %) | Absolute Success Scores (%); Relative Success Scores (%); SC Perception Variability (percentage points); Weighted Evidence Coverage (data completeness) (%); Maturity Shares (%); Array of Possible Final Absolute Success Scores (for any level L) |
| Project Health | Phase Success Score (%) available at time t; measurement timestamp | CSF achievement score (per CSF, %); duration consumption rate (per phase, %) | Project Health Score (%); Project Maturity Share (time to target) (%) |
| Phase Health | Raw evaluator (e) response to a question q of a critical success factor at time t; measurement timestamp | Normalized response value (per question, %); standard deviation of normalized responses (per question, percentage points); duration consumption rate (per phase, %) | CSF Achievement Score (%); CSF Achievement Perception Variability (percentage points); CSF Category Score (%); Phase Health Score (%); Phase Maturity Share (time to target) (%) |
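
Because the symbolic formulas behind these measures are defined in the instantiated MIMs and are not reproduced in this table, the following worked equations are only a minimal sketch under assumed linear normalization, weighted-sum aggregation, and standard-deviation variability:

```latex
% Minimal sketch under assumed forms; the instantiated MIMs define the
% authoritative formulas.
% Linear normalization of metric m against its target T_m at time t:
n_m(t) = \min\!\Bigl(100,\; 100 \cdot \tfrac{v_m(t)}{T_m}\Bigr)

% Duration consumption rate relative to the planned window:
DCR(t) = 100 \cdot \frac{t - t_{\mathrm{start}}}{t_{\mathrm{target}} - t_{\mathrm{start}}}

% Weighted aggregation into a level-L success (or health) score:
S_L(t) = \sum_{i \in L} w_i \, n_i(t), \qquad \sum_{i \in L} w_i = 1

% Perception variability across k respondents for a subjective metric:
\sigma_m(t) = \sqrt{\tfrac{1}{k} \sum_{e=1}^{k} \bigl(n_{m,e}(t) - \bar{n}_m(t)\bigr)^2}
```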
Appendix B
Appendix B.1
| Question No. | Question | Respondents | Data Source |
|---|---|---|---|
| Q1 | What are the daily absolute errors for the last month? | Data Scientist | Monthly records in Analytics Dashboard |
| Q2 | How satisfied are you with the solution’s prediction performance? | At least two Control Operators (BU) | Survey |
| Q3 | How much of the main work packages and outputs (deliverables) defined within the project scope has been completed? | Data Scientists (2) Process Engineer (BU) | Survey |
| Q4 | Is the overall progress status of the project in alignment with the targeted project schedule (timeline)? | Data Scientists (2) Process Engineer (BU) | Survey |
| Q5 | To what extent are the project management processes (encompassing scope, resource, and risk management) being effectively executed within this specific phase? | Data Scientists (2) Process Engineer (BU) | Survey |
| Q6 | How much did the daily charged material increase? | Data Scientist | Monthly records in Analytics Dashboard |
| Q7 | How much revenue was made due to the daily charged material increase? | Data Scientist | Monthly records in Analytics Dashboard |


| Question No. | Question | Respondents | Data Source |
|---|---|---|---|
| Q1 | What is the ratio of Lines without N/A to Total Lines of data? | Data Scientist | Local Database |
| Q2 | What proportion of the data preparation tasks (e.g., cleaning, transformation) planned for this phase has been successfully completed? | Data Scientists (2) Process Engineer (BU) | Survey |
| Q3 | Is the current completion level of this phase in accordance with the targeted schedule? | Data Scientists (2) Process Engineer (BU) | Survey |
| Q4 | To what extent are the phase management processes (encompassing scope, resource, and risk management) being effectively executed within this specific phase? | Data Scientists (2) Process Engineer (BU) | Survey |
| Question No. | Question | Respondents | Data Source |
|---|---|---|---|
| Q1 | What is the Mean Absolute Error (of test cycles)? | Data Scientist | Local Database |
| Q2 | What proportion of the modeling tasks (e.g., testing alternative models) planned within this phase has been successfully completed? | Data Scientists (2) Process Engineer (BU) | Survey |
| Q3 | Is the current completion level of this phase in accordance with the targeted schedule? | Data Scientists (2) Process Engineer (BU) | Survey |
| Q4 | To what extent are the phase management processes (encompassing scope, resource, and risk management) being effectively executed within this specific phase? | Data Scientists (2) Process Engineer (BU) | Survey |




| Question No. | Question (Statement for Likert Scale Answers) | Respondents |
|---|---|---|
| Q1 | The quality of the data used in this phase is sufficient to perform the targeted analyses | Data Scientists (2) |
| Q2 | Data governance policies (access, anonymization, GDPR, etc.) ensure access to the necessary data without slowing down the data preparation processes | Data Scientists (2) |
| Q3 | Senior management actively provides the necessary support (resources, approval, motivation) for the intensive data processing and transformation processes. | Data Scientists (2) Process Engineer (BU) |
| Q4 | The outputs obtained in the previous phase (Data Understanding) are clear and sufficient to carry out our work in this phase smoothly. | Data Scientists (2) |
| Q5 | The business unit is willing and participatory in providing the necessary domain knowledge for the work in this phase | Data Scientists (2) |
| Q6 | Communication within the project team (information flow, meeting efficiency, etc.) is open, timely, and effective | Data Scientists (2) Process Engineer (BU) |
| Q7 | The project team has the technical and business competence required to successfully complete the tasks in this phase. | Data Scientists (2) Process Engineer (BU) |
| Q8 | The project team demonstrates high effort and participation in carrying out the work in this phase. | Data Scientists (2) Process Engineer (BU) |
| Q9 | The project team clearly understands the strategic objectives of the project and the business problem to be solved. | Data Scientists (2) Process Engineer (BU) |
| Q10 | The technical infrastructure (pipeline/integrations) providing data flow is working uninterrupted; there are no technical issues or delays in data access. | Data Scientists (2) |
| Question No. | Question (Statement for Likert Scale Answers) | Respondents |
|---|---|---|
| Q1 | The project team clearly understands the project’s strategic objectives and the business problem it aims to solve. | Data Scientists (2) Process Engineer (BU) |
| Q2 | The project team demonstrates a high level of participation and effort in model development and improvement. | Data Scientists (2) Process Engineer (BU) |
| Q3 | The project team possesses the necessary technical expertise to implement the selected algorithms and conduct the modeling process | Data Scientists (2) Process Engineer (BU) |
| Q4 | The outputs obtained in the data preparation phase are clear and sufficient to ensure the smooth execution of our work in this phase. | Data Scientists (2) |
| Q5 | The business unit is willing and committed to reviewing the results and providing domain knowledge during the modeling phase. | Data Scientists (2) |
| Q6 | Intra-team and inter-stakeholder communication regarding technical decisions and results in the modeling process is open and effective. | Data Scientists (2) Process Engineer (BU) |
| Q7 | Senior management actively provides the necessary support (resources, approval, motivation) for the modeling phase. | Data Scientists (2) Process Engineer (BU) |
| Q8 | Tools supporting intra-team collaboration (e.g., Git, Dataiku, Azure DevOps, shared Notebook environments) are sufficient and accessible for the modeling process. | Data Scientists (2) |
| Q9 | Sufficient technical research is being conducted on different algorithms and approaches to select the most appropriate method for the problem. | Data Scientists (2) |
| Q10 | The necessary infrastructure and tools have been provided to automate repetitive modeling processes (training, testing, data flow). | Data Scientists (2) |
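
The Likert items above feed the CSF Achievement Score and its perception variability; the sketch below assumes 5-point responses normalized linearly to 0–100% and simple averaging, which the instantiated measurement specifications may refine, and enforces the at-least-two-respondents rule used for subjective measurements.

```python
# Minimal sketch: turning Likert responses (1-5) into a normalized CSF
# Achievement Score and its perception variability. Linear normalization
# and plain averaging are assumptions; the instantiated measurement
# specification defines the authoritative rules.
from statistics import mean, pstdev

def normalize_likert(response: int, low: int = 1, high: int = 5) -> float:
    """Map a Likert response onto 0-100%."""
    return 100.0 * (response - low) / (high - low)

def csf_achievement(responses: list[int]) -> tuple[float, float]:
    if len(responses) < 2:
        raise ValueError("DS PRO-S requires at least two respondents "
                         "for subjective measurements.")
    normalized = [normalize_likert(r) for r in responses]
    return mean(normalized), pstdev(normalized)  # (score %, variability pp)

# Hypothetical answers from two Data Scientists and one Process Engineer:
score, variability = csf_achievement([4, 5, 3])
print(f"CSF Achievement: {score:.1f}% (variability {variability:.1f} pp)")
# -> CSF Achievement: 75.0% (variability 20.4 pp)
```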
Appendix B.2
| No. | Question | Respondents | Data Source |
|---|---|---|---|
| Q1 | What is the relevance score? | Tech Lead | Eval. Reports |
| Q2 | What is the attribution ratio? | Tech Lead | RAG Logs |
| Q3 | What is the fluency score? | Tech Lead | Human Evaluation reports |
| Q4 | What is the coherence score? | Tech Lead | Human Evaluation reports |
| Q5 | What is the cosine similarity score? | Tech Lead | Evaluation Dataset |
| Q6 | What is the average end-to-end response latency recorded in system logs? | AI Team | System Logs |
| Q7 | What percentage of the software requirements are met? | Tech Lead | Technical Compliance Report |
| Q8 | What was the average robustness score recorded in the post-deployment adversarial stress test? | AI Team | Test Report |
| Q9 | What is the ratio of progress reports actually delivered vs. contractually required? | PMO | Email Archive |
| Q10 | What percentage of the work packages (WPs) have been delivered? | PMO | MS Project |
| Q11 | What is the variance (in days) between the planned and actual Go-Live date? | Tech Lead | Project Baseline |
| Q12 | What percentage of the total workforce uses the assistant daily? | Tech Lead | Database Logs |
| Q13 | What is the latest satisfaction rate by Client? | Tech Lead | Feedback Survey |
| Q14 | What percentage of the total contract value was successfully invoiced without deduction? | PMO | Invoices/ERP |


| No. | Question | Respondents | Data Source |
|---|---|---|---|
| Q1 | What was the total realized downtime (in hours) for the system last month? | AI Engineer | Azure Monitoring |
| Q2 | What percentage of the periodic quality checks (Relevance, Latency, Groundedness, etc.) remained within the acceptable thresholds last month? | AI Engineer | Customer Evaluation Forms |
| Q3 | How many required progress reports were successfully submitted to the client last month? | PMO | e-mail archives |
| Q4 | What was the average time to respond to critical incidents within the last month? | AI Engineer | Teams Log |


| No. | Question (Statement for Likert Scale Answers) |
|---|---|
| Q1 | User feedback and potential issues observed in the live environment are communicated to the technical team quickly, and the resolution process is handled with transparent communication. |
| Q2 | The system’s architectural health and operational performance (e.g., response time, availability) are monitored in real time and effectively using advanced monitoring tools. |
| Q3 | Frequently changing model versions, prompt templates, and configuration files are managed in a controlled and traceable way, so that they do not create unnecessary complexity. |
| Q4 | The Client (or business unit) has assigned a dedicated responsible person to follow the operational process and provide improvement suggestions. |
| Q5 | The team has the technical capability (e.g., cloud, artificial intelligence) required to deliver the project work packages and to produce customized solutions based on emerging customer needs. |
| Q6 | The data sources and document formats feeding the system (e.g., PDF, Excel) are stable and consistent enough not to disrupt the model’s operation. |
| Q7 | Users have the necessary prompt engineering skills to obtain correct outputs from the model, and they use the system effectively. |
| Q8 | This project is aligned with the (client) organization’s overall AI (artificial intelligence) adoption strategy and long-term vision. |
References
- Sharma, S. Data Is Essential to Digital Transformation. Available online: https://www.forbes.com/councils/forbestechcouncil/2020/12/03/data-is-essential-to-digital-transformation/ (accessed on 13 January 2026).
- VentureBeat. Why Do 87% of Data Science Projects Never Make It into Production? Available online: https://venturebeat.com/ai/why-do-87-of-data-science-projects-never-make-it-into-production/ (accessed on 15 July 2023).
- Challapally, A.; Pease, C.; Raskar, R.; Chari, P. State of AI in Business 2025; MIT NANDA: Cambridge, MA, USA, 2025. [Google Scholar]
- Gartner. Gartner Data & Analytics Summit 2024 London: Day 1 Highlights; Gartner: Stamford, CT, USA, 2024; Available online: https://www.gartner.com/en/newsroom/press-releases/2024-05-13-gartner-data-and-analytics-summit-london-2024-day-1-highlights (accessed on 12 February 2026).
- Saltz, J.S. The Need for New Processes, Methodologies and Tools to Support Big Data Teams and Improve Big Data Project Effectiveness. In Proceedings of the 2015 IEEE International Conference on Big Data, IEEE Big Data, Santa Clara, CA, USA, 29 October–1 November 2015; pp. 2066–2071. [Google Scholar] [CrossRef]
- Aho, T.; Sievi-Korte, O.; Kilamo, T.; Yaman, S.; Mikkonen, T. Demystifying Data Science Projects: A Look on the People and Process of Data Science Today. In Proceedings of the Product-Focused Software Process Improvement, Turin, Italy, 25–27 November 2020; Morisio, M., Torchiano, M., Jedlitschka, A., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 153–167. [Google Scholar]
- Midler, C.; Alochet, M. Understanding the Phoenix Phenomenon: Can a Project Be Both a Failure and a Success? Proj. Manag. J. 2024, 55, 187–204. [Google Scholar] [CrossRef]
- Ika, L.A.; Pinto, J.K. The “Re-Meaning” of Project Success: Updating and Recalibrating for a Modern Project Management. Int. J. Proj. Manag. 2022, 40, 835–848. [Google Scholar] [CrossRef]
- Baccarini, D. The Logical Framework Method for Defining Project Success. Proj. Manag. J. 1999, 30, 25–32. [Google Scholar] [CrossRef]
- Rode, A.L.G.; Svejvig, P.; Martinsuo, M. Developing a Multidimensional Conception of Project Evaluation to Improve Projects. Proj. Manag. J. 2022, 53, 416–432. [Google Scholar] [CrossRef]
- Takagi, N.; Varajão, J.; Ventura, T.; Ubialli, D.; Silva, T. Implementing Success Management and PRINCE2 in a BPM Public Project. In Proceedings of the ACIS 2021, Sydney, Australia, 6–10 December 2021; Available online: https://aisel.aisnet.org/acis2021/4/ (accessed on 26 December 2025).
- Takagi, N.; Varajão, J. Success Management and the Project Management Body of Knowledge (PMBOK): An Integrated Perspective. Int. Res. Workshop IT Proj. Manag. 2020, 6. [Google Scholar]
- Takagi, N.; Varajão, J. ISO 21500 and Success Management: An Integrated Model for Project Management. Int. J. Qual. Reliab. Manag. 2021, 39, 408–427. [Google Scholar] [CrossRef]
- Project Management Institute. PMI PMBOK Guide; Project Management Institute: Newtown Square, PA, USA, 2021. [Google Scholar]
- ISO 21502:2020; Project, Programme and Portfolio Management—Guidance on Project Management. ISO: Geneva, Switzerland, 2020.
- The Stationery Office. Managing Successful Projects with PRINCE2; The Stationery Office: Norwich, UK, 2017; ISBN 978-0-11-331533-8. [Google Scholar]
- Rigo, P.D.; Siluk, J.C.M.; Lacerda, D.P.; Rediske, G.; Rosa, C.B. A Model for Measuring the Success of Distributed Small-Scale Photovoltaic Systems Projects. Sol. Energy 2020, 205, 241–253. [Google Scholar] [CrossRef]
- Zavadskas, E.K.; Vilutienė, T.; Turskis, Z.; Šaparauskas, J. Multi-Criteria Analysis of Projects’ Performance in Construction. Arch. Civ. Mech. Eng. 2014, 14, 114–121. [Google Scholar] [CrossRef]
- Elkarmi, F.; Shikhah, N.A.; Alomari, Z.; Alkhatib, F. A Novel Methodology for Project Assessment and Evaluation. J. Serv. Sci. Manag. 2011, 4, 261–267. [Google Scholar] [CrossRef]
- Barclay, C.; Osei-Bryson, K.-M. Project Performance Development Framework: An Approach for Developing Performance Criteria & Measures for Information Systems (IS) Projects. Int. J. Prod. Econ. 2010, 124, 272–292. [Google Scholar] [CrossRef]
- McLeod, L.; Doolin, B.; MacDonell, S.G. A Perspective-Based Understanding of Project Success. Proj. Manag. J. 2012, 43, 68–86. [Google Scholar] [CrossRef]
- Joseph, N.; Marnewick, C. The Continuum of Information Systems Project Success: Reflecting on the Correlation between Project Success Dimensions. S. Afr. Comput. J. 2021, 33, 37–58. [Google Scholar] [CrossRef]
- Martinez, I.; Viles, E.; Olaizola, I.G. Data Science Methodologies: Current Challenges and Future Approaches. Big Data Res. 2021, 24, 100183. [Google Scholar] [CrossRef]
- Dukino, C.; Kutzias, D.; Link, M. Roles and Competences of Data Science Projects. In Proceedings of the International Conference on the Human Side of Service Engineering 2022, New York, NY, USA, 24–28 July 2022. [Google Scholar]
- Guo, J.X. Measuring Information System Project Success through a Software-Assisted Qualitative Content Analysis. Inf. Technol. Libr. 2019, 38, 53–70. [Google Scholar] [CrossRef]
- Gao, J.; Koronios, A.; Selle, S. Towards A Process View on Critical Success Factors in Big Data Analytics Projects. In Proceedings of the Twenty-First Americas Conference on Information Systems, Fajardo, Puerto Rico, 13–15 August 2015. [Google Scholar]
- Miller, G.J. Artificial Intelligence Project Success Factors—Beyond the Ethical Principles. In Proceedings of the Information Technology for Management: Business and Social Issues, Sofia, Bulgaria, 4–7 September 2022; Ziemba, E., Chmielarz, W., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 65–96. [Google Scholar]
- vom Brocke, J.; Hevner, A.; Maedche, A. Introduction to Design Science Research. In Design Science Research. Cases; vom Brocke, J., Hevner, A., Maedche, A., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 1–13. ISBN 978-3-030-46781-4. [Google Scholar]
- Peffers, K.; Tuunanen, T.; Rothenberger, M.A.; Chatterjee, S. A Design Science Research Methodology for Information Systems Research. J. Manag. Inf. Syst. 2007, 24, 45–77. [Google Scholar] [CrossRef]
- Gökay, G.T.; Nazlıel, K.; Şener, U.; Gökalp, E.; Gökalp, M.O.; Gençal, N.; Dağdaş, G.; Eren, P.E. What Drives Success in Data Science Projects: A Taxonomy of Antecedents. In Proceedings of the International Conference on Computing, Intelligence and Data Analytics (ICCIDA 2022), Kocaeli, Turkey, 16–17 September 2022; García Márquez, F.P., Jamil, A., Eken, S., Hameed, A.A., Eds.; Springer International Publishing: Cham, Switzerland, 2023; pp. 448–462. [Google Scholar]
- Tsoy, M.; Staples, D.S. What Are the Critical Success Factors for Agile Analytics Projects? Inf. Syst. Manag. 2021, 38, 324–341. [Google Scholar] [CrossRef]
- Demir, N.; Aysolmaz, B.; Özcan-Top, Ö. Critical Success Factors in Data Analytics Projects: Insights from a Systematic Literature Review. In Proceedings of the Disruptive Innovation in a Digitally Connected Healthy World, Heerlen, The Netherlands, 11–13 September 2024; van de Wetering, R., Helms, R., Roelens, B., Bagheri, S., Dwivedi, Y.K., Pappas, I.O., Mäntymäki, M., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 129–141. [Google Scholar]
- Gökay, G.T.; Gökalp, E.; Eren, P.E. Data Science Projects: A Systematic Literature Review on Characteristics, Implementation, and Challenges. In Proceedings of the International Conference on Information Technology and Applications, ICITA 2025, Oslo, Norway, 14–16 October 2025; Lecture Notes in Networks and Systems; Springer: Singapore, 2026. [Google Scholar]
- Kraut, N.; Transchel, F. On the Application of SCRUM in Data Science Projects. In Proceedings of the 2022 7th International Conference on Big Data Analytics (ICBDA), Guangzhou, China, 4 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–9. [Google Scholar]
- Saltz, J.; Shamshurin, I. Big Data Team Process Methodologies: A Literature Review and the Identification of Key Factors for a Project’s Success. In Proceedings of the 2016 IEEE International Conference on Big Data (Big Data), Washington, DC, USA, 5–8 December 2016; p. 2879. [Google Scholar]
- Bannerman, P. Defining Project Success: A Multi-Level Framework. In Proceedings of the Project Management Institute Research Conference, Warsaw, Poland, 13–16 July 2008; pp. 1–14. [Google Scholar]
- Shenhar, A.J.; Dvir, D. Reinventing Project Management: The Diamond Approach to Successful Growth and Innovation; Harvard Business Review Press: Brighton, MA, USA, 2007; ISBN 978-1-59139-800-4. [Google Scholar]
- Zwikael, O.; Meredith, J. Evaluating the Success of a Project and the Performance of Its Leaders. IEEE Trans. Eng. Manag. 2021, 68, 1745–1757. [Google Scholar] [CrossRef]
- DeLone, W.H.; McLean, E.R. Information Systems Success: The Quest for the Dependent Variable. Inf. Syst. Res. 1992, 3, 60–95. [Google Scholar] [CrossRef]
- DeLone, W.H.; McLean, E.R. The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar]
- Nelson, R.R. Project Retrospectives: Evaluating Project Success, Failure, and Everything In Between. MIS Q. Exec. 2008, 4, 5. [Google Scholar]
- Varajão, J. The Many Facets of Information Systems (+projects) Success. Int. J. Inf. Syst. Proj. Manag. 2018, 6, 5–13. [Google Scholar] [CrossRef]
- Pinto, J.K.; Prescott, J.E. Variations in Critical Success Factors Over the Stages in the Project Life Cycle. J. Manag. 1988, 14, 5–18. [Google Scholar] [CrossRef]
- Khang, D.B.; Moe, T.L. Success Criteria and Factors for International Development Projects: A Life-Cycle-Based Framework. Proj. Manag. J. 2008, 39, 72–84. [Google Scholar] [CrossRef]
- de Wit, A. Measurement of Project Success. Int. J. Proj. Manag. 1988, 6, 164–170. [Google Scholar] [CrossRef]
- Siddique, L.; Hussein, B.A. A Qualitative Study of Success Criteria in Norwegian Agile Software Projects from Suppliers’ Perspective. Int. J. Inf. Syst. Proj. Manag. 2022, 4, 63–79. [Google Scholar] [CrossRef]
- Dvir, D.; Lipovetsky, S.; Shenhar, A.; Tishler, A. In Search of Project Classification: A Non-Universal Approach to Project Success Factors. Res. Policy 1998, 27, 915–935. [Google Scholar] [CrossRef]
- Shenhar, A.J.; Dvir, D.; Lechler, T.; Poli, M. One Size Does Not Fit All: True for Projects, True for Frameworks. In Proceedings of the PMI Research Conference, Seattle, WA, USA, 14–17 July 2002; Project Management Institute: Newtown Square, PA, USA, 2002; pp. 14–17. [Google Scholar]
- Ahimbisibwe, A.; Daellenbach, U.; Cavana, R.Y. Empirical Comparison of Traditional Plan-Based and Agile Methodologies: Critical Success Factors for Outsourced Software Development Projects from Vendors’ Perspective. J. Enterp. Inf. Manag. 2017, 30, 400–453. [Google Scholar] [CrossRef]
- Crisan, E.L.; Dan, M.; Beleiu, I.N.; Ciocoiu, E.; Beudean, P. How Critical Success Factors Combine to Influence Success? A Configurational Theory Approach on Multiple Social Projects. Int. J. Manag. Proj. Bus. 2023, 16, 767–787. [Google Scholar] [CrossRef]
- Ika, L.A. Project Success as a Topic in Project Management Journals. Proj. Manag. J. 2009, 40, 6–19. [Google Scholar] [CrossRef]
- Castro, M.S.; Bahli, B.; Barcaui, A.; Figueiredo, R. Does One Project Success Measure Fit All? An Empirical Investigation of Brazilian Projects. Int. J. Manag. Proj. Bus. 2020, 14, 788–805. [Google Scholar] [CrossRef]
- Lavazza, L.; Frumento, E.; Mazza, R. Defining and Evaluating Software Project Success Indicators—A GQM-Based Case Study. In Proceedings of the 10th International Conference on Software Engineering and Applications, Colmar, France, 20–22 July 2015; SCITEPRESS—Science and Technology Publications: Setúbal, Portugal, 2015; pp. 105–116. [Google Scholar]
- Hevner, A.R.; March, S.T.; Park, J.; Ram, S. Design Science in Information Systems Research. MIS Q. 2004, 28, 75–105. [Google Scholar] [CrossRef]
- Venable, J.; Pries-Heje, J.; Baskerville, R. FEDS: A Framework for Evaluation in Design Science Research. Eur. J. Inf. Syst. 2016, 25, 77–89. [Google Scholar] [CrossRef]
- ISO 9001:2015; Quality Management Systems—Requirements. ISO: Geneva, Switzerland, 2015.
- ISO/IEC/IEEE 15939:2017; Systems and Software Engineering–Measurement Process. ISO: Geneva, Switzerland, 2017. [CrossRef]
- Saltz, J.; Shamshurin, I.; Connors, C. Predicting Data Science Sociotechnical Execution Challenges by Categorizing Data Science Projects. J. Assoc. Inf. Sci. Technol. 2017, 68, 2720–2728. [Google Scholar] [CrossRef]
- Kelleher, J.D.; Tierney, B. Data Science; MIT Press: Cambridge, MA, USA, 2018; ISBN 978-0-262-34703-7. [Google Scholar]
- Al-Debei, M.M. The Era of Business Analytics: Identifying and Ranking the Differences between Business Intelligence and Data Science from Practitioners’ Perspective Using the Delphi Method. J. Bus. Anal. 2024, 7, 94–119. [Google Scholar] [CrossRef]
- Bertalanffy, L.V. General System Theory: Foundations, Development, Applications; George Braziller Inc.: New York, NY, USA, 1968; ISBN 978-0-8076-0453-3. [Google Scholar]
- ISO/IEC 25010:2023; Product Quality Model. ISO: Geneva, Switzerland, 2023.
- Bannerman, P.L.; Thorogood, A. Celebrating IT Projects Success: A Multi-Domain Analysis. In Proceedings of the 2012 45th Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2012; pp. 4874–4883. [Google Scholar]
- Marques, A.; Varajão, J.; Sousa, J.; Peres, E. Project Management Success I-C-E Model—A Work in Progress. Procedia Technol. 2013, 9, 910–914. [Google Scholar] [CrossRef]
- Howsawi, E.; Eager, D.; Bagia, R.; Niebecker, K. The Four-Level Project Success Framework: Application and Assessment. Organ. Proj. Manag. 2014, 1, 1–15. [Google Scholar] [CrossRef]
- Zwikael, O.; Smyrk, J. A General Framework for Gauging the Performance of Initiatives to Enhance Organizational Value. Br. J. Manag. 2012, 23, S6–S22. [Google Scholar] [CrossRef]
- Badewi, A. The Impact of Project Management (PM) and Benefits Management (BM) Practices on Project Success: Towards Developing a Project Benefits Governance Framework. Int. J. Proj. Manag. 2016, 34, 761–778. [Google Scholar] [CrossRef]
- Yin, R.K. Case Study Research and Applications; SAGE Publications: Thousand Oaks, CA, USA, 2018; Volume 6. [Google Scholar]
- Patton, M.Q. Qualitative Research & Evaluation Methods: Integrating Theory and Practice; SAGE Publications: Thousand Oaks, CA, USA, 2014; ISBN 978-1-4833-0145-7. [Google Scholar]
- Varajão, J.; Lourenço, J.C.; Gomes, J. Models and Methods for Information Systems Project Success Evaluation—A Review and Directions for Research. Heliyon 2022, 8, e11977. [Google Scholar] [CrossRef]
- Teixeira, A.; Oliveira, T.; Varajão, J. Evaluation of Business Intelligence Projects Success—A Case Study. Bus. Syst. Res. Int. J. Soc. Adv. Innov. Res. Econ. 2019, 10, 1–12. [Google Scholar] [CrossRef]
- ISO/IEC 33020:2015; Information Technology—Process Assessment—Process Measurement Framework for Assessment of Process Capability. ISO: Geneva, Switzerland, 2015.

| No. | Objective | Explanation |
|---|---|---|
| O-0 | Provide Holistic Assessment Capability | As the core functionality and the reason for this model’s existence, the model will enable an end-to-end success assessment process, allowing practitioners to define success constructs, measure them with concrete metrics, and evaluate the results to allow for informed decisions to be made. |
| O-1 | Define Success Constructs Comprehensively | Moving beyond the traditional cost, time, and scope triangle, the model will define success constructs that address the multidimensional nature of data science value. |
| O-2 | Enable Dual-Faceted Assessment | The model will include an evaluation of both the fulfillment of goals and the conditions enabling success, allowing for timely and targeted interventions. |
| O-3 | Reflect Data Science Characteristics | The solution will explicitly account for the distinct properties and challenges of data science (e.g., uncertainty) that distinguish it from traditional software projects. |
| O-4 | Ensure End-to-End Lifecycle Coverage | The model will align evaluation activities with the distinct phases of a data science project, guaranteeing no critical transition or risk area is omitted. |
| O-5 | Allow for Flexible Adaptation | To ensure its applicability across any data-science initiative and any organizational setting, the model will be supported by configurable resources such as templates, guidelines, and mechanisms that can be adapted to the specific attributes of a given project. |
| O-6 | Facilitate Cross-Project Comparability | By providing a standardized evaluation framework, the model will allow for the fair and comparable measurement of different data science projects, which, in turn, will create a foundation for benchmarking, identifying best practices, and learning for the organization. |
| O-7 | Provide Element-Level Actionable Granularity | To avoid becoming merely theoretical and unused, the model will make abstract concepts measurable, explaining how to derive them and establishing rules for how to calculate them. |
| O-8 | Offer Directly Implementable Guidance | Beyond specifying metrics, the model will be accompanied by step-by-step instructions, ready-to-use templates, and worked examples to ensure practitioners can easily utilize the model in their own contexts. |
| ID | High-Level Requirement | Related Objective | Reference |
|---|---|---|---|
| HLR-7 | Provide a core backbone of shared categories of success and evaluation flows for benchmarking and comparison. | O-6 | “There is a need for common elements that can be compared between projects, including project success assessment methods, monitoring, and controlling projects” [52] |
| HLR-8 | Incorporate stakeholder perspectives into the success evaluation process, where feasible. | O-1 | “Therefore, it is very important for project success to be measured by taking into consideration the perceptions or business values of the project from the viewpoint of those stakeholders who are possible beneficiaries of the project.” [46] |
| W-Questions | Answers |
|---|---|
| What aspects of the data science project do we evaluate? | Meeting success objectives Ensuring enabling conditions |
| Who conducts and acts on the evaluations? | Conducted by: Project Manager, Project Lead, PMO Analyst, or any relevant role Actioned by: Project/Portfolio Managers, Department Heads, Executives, Sponsors, Steering Committee |
| No. | Design Focus | Related HLR | Design Principle | Design Elements (Meta-Model) | Design Elements (Instantiation) |
|---|---|---|---|---|---|
| DP-4 | Defining Success Constructs—Success Criteria Derivation | HLR-0, HLR-6, HLR-13 | Allow practitioners to select success criteria and their associated metrics or to derive them using an applicable derivation method. | Success criterion, metric, derivation method of success criteria | Candidate success criterion/metric catalogs, template(s) for the derivation method of success criteria |
| Property | Values | Logic |
|---|---|---|
| Evaluation Level | 1. Project Level; 2. Phase Level | The model can be instantiated to assess a single discrete phase or the project as a holistic system. |
| Evaluation Dimensions | 1. Success (Achievement) | Success measures the achievement of objectives via success criteria. |
| | 2. Health (Enablers) | Health measures the achievement of enabling conditions for success via CSFs. |
| Success Constructs | 1. Success Criteria | Success criteria are used specifically to measure the Success dimension. |
| | 2. Critical Success Factors (CSFs) | CSFs are used specifically to measure the Health dimension. |
| Implementation Modularity | 1. Independent (Standalone); 2. Integrated (Dependent) | Phase Success, Phase Health, and Project Success can be implemented as standalone modules. Project Health depends on Phase Success outputs and requires phases to be evaluated to function. |
| Application Type | 1. Self-Assessment (Default); 2. Third-Party (Scalable) | The default mode is self-assessment by the project team. However, the model is architected to support third-party assessment when scaled for audit or governance purposes. |
| Input Sources | 1. Objective Data (Reports/Extracts) | Objective data (e.g., budget variance). |
| | 2. Subjective Data (Surveys/Perceptions) | Subjective data (e.g., top management support). |
| Evaluation Timing | 1. Ad hoc (Snapshot) | Ad hoc: instant assessments. |
| | 2. Lifecycle (Longitudinal) | Lifecycle: continuous surveillance throughout the integrated lifecycle. |
| Primary Interface | Project Manager | The primary user is the Project Manager, though data may be ingested from organizational systems or stakeholders in automated settings. |
| Project Success | | | | |
|---|---|---|---|---|
| Level-1 Project Success Category | Output Success | | Outcome Success | |
| Weight | 0.44 | | 0.56 | |
| Level-2 Project Success Category | Product Quality Success (PQS) | Project Management Success (PMS) | Business Value Success (BVS) | Financial Value Success (FVS) |
| Project Objectives (Weight) | Develop a solution compliant with the desired requirements (0.71) | Developing the solution on time and within scope (0.29) | Maximizing the unit’s processing potential and operational efficiency (0.50) | Increase revenue (0.50) |
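
To show how the two weight levels above compose, a worked example with illustrative objective-level scores follows (the weighted-sum form is an assumed aggregation rule):

```latex
% Worked example with the Case A weights above and illustrative objective
% scores (80, 60, 70, 90, in %); the weighted-sum form is an assumed rule.
S_{\mathrm{Output}}  = 0.71 \times 80 + 0.29 \times 60 = 74.2
S_{\mathrm{Outcome}} = 0.50 \times 70 + 0.50 \times 90 = 80.0
S_{\mathrm{Project}} = 0.44 \times 74.2 + 0.56 \times 80.0 \approx 77.4
```
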
| Phase Success | ||
|---|---|---|
| Success Category | Deliverable Success | Phase Management Success |
| Data Preparation Phase Objectives (weight) | Extract specific process tags from the source system (PHD Historian) and integrate them into the master dataset, ensuring all new features are normalized and validated (0.67). | Execute the data preparation and feature engineering tasks within the defined scope and timeline (0.33). |
| Modeling Phase Objectives (weight) | Develop a predictive model that quantitatively outperforms the baseline performance established in the first modeling iteration (0.67). | Execute the second modeling iteration within the defined scope and timeline (0.33). |
| Project Success | | | | |
|---|---|---|---|---|
| Level-1 Project Success Category | Output Success | | Outcome Success | |
| Weight | 0.44 | | 0.56 | |
| Level-2 Project Success Category | Product Quality Success (PQS) | Project Management Success (PMS) | Business Value Success (BVS) | Financial Value Success (FVS) |
| Project Objectives (Weight) | Developing the solution in full conformance with contract requirements (0.56) | Delivering the solution within scope and schedule (0.44) | (Strategic Client Retention) Securing a strategic partnership and “referenceable” status by demonstrating the solution’s impact, increasing the potential for contract renewal and opening opportunities for upselling future projects (0.63) | Securing full contract value (0.38) |
| Phase Success | ||
|---|---|---|
| Phase Success Category | Deliverable Success | Phase Management Success |
| Operations and Maintenance Phase Objectives (weight) | To maintain high system availability and ensure the GenAI agent system delivers accurate, reliable responses throughout the period (0.63). | To ensure strict compliance with contractual reporting schedules and achieve rapid response times for all system incidents (0.38). |