Article

A Framework for Data Lifecycle Model Selection

by Mauro Iacono 1,*,†, Michele Mastroianni 2,*,†, Christian Riccio 1,† and Bruna Viscardi 3,†

1 Dipartimento di Matematica e Fisica, Università degli Studi della Campania “L. Vanvitelli”, 81100 Caserta, Italy
2 Dipartimento di Scienze Agrarie, Alimenti, Risorse Naturali e Ingegneria, Università degli Studi di Foggia, 71122 Foggia, Italy
3 Independent Researcher, 81100 Caserta, Italy
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Future Internet 2025, 17(9), 390; https://doi.org/10.3390/fi17090390
Submission received: 9 July 2025 / Revised: 31 July 2025 / Accepted: 26 August 2025 / Published: 28 August 2025

Abstract

The selection of Data Lifecycle Models (DLMs) in complex data management scenarios requires balancing quantitative and qualitative characteristics to ensure regulatory compliance, improve performance, and satisfy governance requirements. In this context, an interactive web application based on AHP-Express has been developed as a user-friendly tool to support DLM selection. The application supports customized decision matrices, combines multiple expert interviews with distinct weights, calculates local and global priorities, and delivers final DLM rankings by consolidating sub-criteria scores into weighted macro-category values, accompanied by graphical representations. Key functions include consistency checks, sensitivity analysis for macro-category weight variations, and visualizations (bar charts, radar maps, sensitivity charts) that highlight strengths, shortcomings, and the robustness of rankings. In a proposed application to sensor-based artifact monitoring at the Museo del Carbone, the tool quickly identified the most appropriate DLM, which exhibited consistent performance across diverse weight scenarios. The results of the Museo del Carbone case confirm that AHP-Express enables rapid, transparent, and reproducible DLM selection, reducing cognitive load while maintaining scientific rigor. The tool’s modular architecture and visualization features enable informed decision making for a variety of data management problems.

1. Introduction

Management of data in complex processes implies managing all phases upon which applications rely, as well as ensuring compliance with regulations concerning data storage and security. Regulations may vary according to the kind of data retained: e.g., data subject to the GDPR must be kept for the minimum time needed for processing and must not be used for purposes that have not been explicitly authorized by the data subjects. In addition, management should support the performance efficiency of all processes that run over data on the same information system. Finally, the scale of data is increasing, and the organization of data is becoming more diverse, now including relational and non-relational databases, data warehouses, data lakes, and other sources.
Decision making under such complex conditions is not trivial and requires tools to be performed in an informed way. Data Lifecycle Models [1] provide a reference that is, in some measure, validated in terms of best practices by the fact that they are successfully used in different domains and that a corpus of experience is available and can be leveraged in new projects. Data lifecycles have been studied and documented in the scientific and technical literature, but they generally exhibit numerous aspects to be evaluated, are composed of different sets of phases, and present different parameters, pros, and cons that must be taken into consideration in the decision processes related to the definition of a new system, the evolutionary maintenance of existing systems, and operations with respect to the evolution of the workload. Decisions have to be taken based not only on quantitative parameters but primarily on qualitative parameters, which require expertise and experience and may differ from phase to phase, or may pertain to a single phase or to the whole lifecycle.
The original contribution of this paper is a novel decision support approach to DLM choice and design, based on the Analytic Hierarchy Process (AHP) and supported by a tool that we developed for data managers, providing a proper methodological approach. The tool allows data managers to exploit the expertise of data lifecycle and domain experts to choose the most appropriate data lifecycle by comparing it with several reference data lifecycles from the literature. It supports the analysis by providing qualitative and quantitative references that condense the knowledge and experience of the panels of experts, and it can be extended and customized by adding or updating experts’ or specialists’ contributions. In fact, the tool fully supports the approach by allowing incremental evaluation in case of changes in the experts’ panels, evolutionary evaluation over time, weighting of evaluations according to the level of expertise of each panelist, and the addition of new data lifecycles to the set of reference lifecycles. The tool has been developed in the framework of a complete data lifecycle evaluation methodology defined by our research group, but it is meant for reuse both in different projects and for different purposes.
Our approach defines proper evaluation criteria and sub-criteria for each phase, which are weighted appropriately for each single case to synthesize a reference about quality, security, compliance, and general governance for the information system. The goal is to help non-specialists of the application domain decompose and systematize a plurality of complex judgments and to update the outcomes periodically, so the tool will be useful for data managers to choose the Data Lifecycle Model that best fits their particular data management problem.
That said, we designed the proposed tool to support decisions on the basis of abstract, high-level comparative criteria that differ for each phase of a data lifecycle, constituting a range of options related to data planning, assessment, and governance.
This paper is structured as follows: the next section provides the necessary background; Section 3 discusses the motivations that drive the approach and the related tool; Section 4 summarizes the basic ideas behind AHP; Section 5 describes the proposed approach; Section 6 presents the support tool; Section 7 shows the application of the method to a simple case by means of the tool; Section 8 discusses the impact of blockchain technologies on Data Lifecycle Management; and conclusions close the paper.

2. Background

The GDPR (General Data Protection Regulation), which became enforceable in May 2018, requires the implementation of a Data Management Plan (DMP), in particular for projects involving personal data. The DMP covers all aspects of data management, from collection and storage to sharing and access, and it characterizes the data management strategy of organizations (large or small, public or private). In this context, an essential instrument that can be adopted by companies or public administrations is the Data Lifecycle Model (DLM). A DLM can be considered a set of interrelated phases through which data flows and the processes used to transform data into knowledge can be identified. In principle, there are as many DLMs as there are organizations that need to use them, since the models can easily be adapted to different situations. In fact, the literature demonstrates the high degree of flexibility of these models.
In [2], a description is provided of a DLM with fourteen elementary phases that can be used to represent the basic properties of DLMs. Note that the fourteen steps described in [2] do not follow a strict order and, above all, do not all need to appear in each implementation of a DLM; moreover, increasing the complexity of the model tends to correspond to higher costs for organizations. Let us consider the DLM for a scientific dataset as an example. Depending on the nature of the application, not all phases are required in such a DLM: for example, if no personal data are involved, there is no need to include a privacy-related phase. It is certainly not forbidden to include every step in a model, but it is appropriate that the structural choices for the information system are integrated and economically sound. The sequence of DLM phases is not rigid but very flexible, and some phases described in [2] (Governance, Safety and Security, Quality) can even be considered “transversal”, so that they do not actually fit a proper sequence of phases. Consider, as an example, two DLMs designed to collect scientific data, USGS [3] and DataONE [4]. In the first case, there is a database that must be defined ex novo because, at the dawn of the information system, it is necessary to plan the data collection phase, as previous sources did not yet exist; setting up a database into which to pour data acquisitions requires different prior planning and management. In the case of DataONE, however, the scientific reference data already existed and came from different data sources. The problem to be tackled is not easier, because data could be heterogeneous, and this can be an issue, but the collection phase of DataONE provides a series of rules to be applied to integrate the various datasets.
The final phase may also not always be present in a data management strategy, but it is in general necessary to consider the concept of data waste [5], and it may be necessary to plan for data disposal or destruction. The data deletion process is implemented when the organization wishes to dispose of inactive or obsolete data. This approach has two advantages: it reduces both storage costs and the risks of non-compliance with prevailing regulations. Note that, in the case of sensitive data, physical backup media must also be securely destroyed. Examples are digital health DLMs and video surveillance models [6]. In these cases, freeing up space in the storage subsystems may become a routine operation to be foreseen in the DMP by planning the final deletion of data: an activity that is not mandatory for other DLMs, but that here recurs cyclically with high frequency. Given the adaptability and diversity of DLMs in many organizational scenarios, choosing, ranking, and tailoring the DLM stages to the unique requirements, limitations, and goals of each situation is a crucial task. In such contexts, determining the best DLM configuration becomes difficult and frequently calls for striking a balance between qualitative and quantitative factors.
Multi-Criteria Decision-Making (MCDM) approaches are increasingly used to assist in the assessment and selection of DLM components in order to address this complexity; among them, AHP has proven to be one of the most successful because of its domain neutrality and its capability of breaking down complex decision problems. By combining quantitative data and expert judgment, AHP makes it easier to compare various criteria and options. Recent studies have applied AHP in various decision-making and risk assessment contexts [7], demonstrating its effectiveness in guiding structured evaluations. For instance, ref. [8] presents an AHP-based Information Security Risk Assessment framework, ref. [9] applies AHP to identify vulnerable IoT components in healthcare systems, and [10] utilizes AHP to define appropriate countermeasures for protecting Personally Identifiable Information.
AHP is a well-established and widespread method in both industrial practice and academic research. In industry, its applications cover, for instance, supplier selection, strategic planning, project prioritization, and quality management, with documented case studies in sectors such as healthcare, manufacturing, and finance [11,12,13,14,15,16,17,18]. In academia, systematic reviews report many peer-reviewed studies and applications across domains such as human resource management, operations, and R & D evaluation, further demonstrating robust empirical validation and methodological maturity [19,20,21,22].
Moreover, in [23], AHP was adopted for the strategic assessment of healthcare agencies; in [18], the authors used AHP in mining engineering for mine-planning risk assessment, investment analysis, and qualitative decision making. Other works used AHP in urban and architectural strategic planning in order to evaluate different design solutions [24]. Human resources emerges as another domain in which the technique has been employed, for the recruitment of new employees [25]. Lastly, in [26], AHP was used as a model for selecting business processes for software management.
In our work, AHP is applied to support the configuration and evaluation of DLMs tailored to privacy-sensitive contexts.

3. Rationale

Management of information systems is a consolidated practice that can be considered one of the first management activities that assumed a specific and autonomous characterization, together with software engineering. Notwithstanding its long evolution, this discipline still provides methodological contributions to professionals because of the evolution of architectures, applications, needs, and cultural perspectives. As for software engineering, management of information systems is strongly rooted in the needs of the professional practice of engineering and has to cope with a variety of requirements and a variety of possible choices in the design of the hopefully best solutions for users’ problems, taking into account all practical constraints, such as technical limitations, cost viability, maintenance needs, performance requirements, and compliance. Some of these constraints require an early evaluation and enforcement, and consequently influence design choices in the very first phases of a design cycle, whichever is the chosen approach: it is, for example, the case of privacy requirements rooted in the GDPR, which prescribes privacy-by-design and privacy-by-default, or of performance requirements for systems, which are in general analyzed and verified in their consequences even before the definition of systems architecture, using the desired behaviors of the system emerging from functional specifications as a source for quantitative or qualitative models, such as Petri nets [27], queuing networks [27], fault trees [28], or even more complex conceptual design support tools such as multiformalism modeling approaches [29].
Ideas and methods from software engineering, of course, provide consistent and general support to design, especially as regards the organization and management of design processes.
The experience of performance evaluation, a peculiar part of software engineering, suggests intervening in the project at the very early stages. Here, however, it is not possible to leverage the system’s behaviors, i.e., the functional specifications, since the object of the project is the creation of a data management cycle on which different applications will be based over time, not a software product. The available specifications, which will generate the technological specifications for the system, are probably non-functional specifications relating to the quality of the lifecycle, to privacy constraints, and to performance: all specifications that are generally expressed at a high level and that have a mainly qualitative nature or can be chosen only on a comparative basis.
The comparison occurs by weighing different solutions against a plurality of quali–quantitative factors. However, the area in which the literature and professional practice provide the greatest support, the technological one, is in this case partially fixed and identical for all alternatives, since the platform often already exists, and cannot be defined a priori in detail for the remaining part, because it is a consequence of decisions that depend on the comparisons (and the same could be true for the available data, or part of them). This therefore makes it necessary to structure an approach to the choice and design of DLMs that can make use, from the early stages of the project cycle, of methods suitable for supporting informed choices on high-level aspects of the systems.
In this work, therefore, we propose to base the decision-making process on a literature solution known as AHP; other methods could be used as well, but they are outside the scope of this work and are currently under analysis for further developments.

4. The Analytic Hierarchy Process (AHP) Method

AHP is a modeling method belonging to the family of so-called multi-criteria decision models, originally proposed by Saaty [30,31,32]. Given a set of alternatives and criteria, it is used to obtain priorities (or weights) starting from the construction of a pairwise comparison matrix and calculating its principal eigenvector.
This modeling technique allows for approaching the problem as a structured, multi-level hierarchical decomposition via (i) general objective definition; (ii) definition of the alternatives that need to be compared and evaluated; and (iii) definition of the set of criteria (and possibly sub-criteria) that contribute to the final score of an alternative. Each criterion is compared with all others via a relative importance measure defined by Saaty’s scale.
Saaty also proposed a numeric scale to evaluate each criterion, composed of numbers ranging from 1 to 9, as defined in Table 1. The intermediate values (2, 4, 6, 8) can be used to express intermediate judgments when a factor is only slightly more important than another.
Pairwise comparisons allow for constructing the pairwise comparison matrix A = (a_ij) of order n × n, in which each a_ij indicates the relative importance of criterion C_i with respect to criterion C_j: a_ij = 1 if criterion C_i is as significant as criterion C_j; a_ij > 1 if C_i is more significant than C_j.
The matrix A is reciprocal, meaning that a_ji = 1/a_ij for all i, j. The priority vector w, in which each element represents the relative weight of each criterion, is drawn from the pairwise comparison matrix; this corresponds to solving the eigenvalue problem, as follows:
A w = λ_max w,
in which λ_max is the largest eigenvalue of matrix A.
Priority vector w is then obtained by normalizing the principal eigenvector associated with λ_max so that the sum of its elements equals 1. The normalized principal eigenvector represents the relative weights of the criteria.
In order to assess the reliability of the given pairwise judgments, a consistency check is performed through the calculation of the consistency index (CI), as follows:
CI = (λ_max − n) / (n − 1),
allowing for calculating the consistency ratio (CR) using the following formula:
CR = CI / RI,
in which RI is the random index for a matrix of order n. Accepted values are those for which CR is less than 0.1, meaning that the degree of inconsistency is within tolerable limits and the judgments are considered consistent [33].
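As an illustration (not part of the paper's tool), the priority-vector computation and the consistency check described above can be sketched in Python with NumPy; the function name ahp_priorities and the example matrix are ours:

```python
import numpy as np

# Saaty's random index (RI) values for matrix orders 1..10
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_priorities(A):
    """Return the normalized principal eigenvector (priority vector w)
    and the consistency ratio CR of a pairwise comparison matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # index of lambda_max
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                      # normalize so the weights sum to 1
    ci = (lam_max - n) / (n - 1)         # consistency index
    ri = RANDOM_INDEX[n - 1]
    cr = ci / ri if ri else 0.0          # consistency ratio (RI = 0 for n <= 2)
    return w, cr

# A 3x3 reciprocal matrix: criterion 1 judged 3x as important as
# criterion 2 and 5x as important as criterion 3
A = [[1, 3, 5],
     [1/3, 1, 3],
     [1/5, 1/3, 1]]
w, cr = ahp_priorities(A)
print(w.round(3), round(cr, 3))  # CR < 0.1 -> judgments acceptably consistent
```

The example matrix yields a CR well below the 0.1 threshold, so its judgments would be accepted.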
The AHP logic presented so far makes the decision process more structured and clearer to the decision maker. Nevertheless, in a context with a large number of criteria, n(n − 1)/2 comparisons are required, meaning that as n increases, the time and cognitive burden on the decision maker increase, making AHP less appealing. In order to tackle this problem, a simplified variant of AHP has been proposed, named AHP-Express [34]. It represents a lightweight version of the standard AHP because it reduces the number of pairwise comparisons required while still relying on the core idea of approaching the problem hierarchically. The difference resides in the choice of a criterion as a reference element, which is considered the most dominant among all the others; all comparisons are made against this element, which drastically decreases the number of pairwise comparisons to n − 1. The priority of each element j against the reference i is given by
pr_j = (1/a_ij) / (Σ_k 1/a_ik),
where
  • i is the index corresponding to the reference factor R;
  • j is the index of the non-reference factor;
  • a_ij > 0 denotes the user-assigned comparison value of R against j.
In case the comparisons are consistent, this priority corresponds exactly to the eigenvector of the full pairwise comparison matrix.
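The AHP-Express priority formula above can be sketched in a few lines of Python; this is an illustrative implementation of the formula, not the paper's tool (the function name and example values are ours):

```python
def ahp_express_priorities(a_ref):
    """AHP-Express priorities from comparisons against a single reference.

    a_ref[j] is the judgment a_ij of the reference element i versus
    element j (the entry for the reference itself is 1). Only n - 1
    judgments are elicited instead of n(n - 1)/2.
    """
    inv = [1.0 / a for a in a_ref]     # 1 / a_ij for every element
    total = sum(inv)                   # sum_k 1 / a_ik
    return [x / total for x in inv]    # pr_j = (1/a_ij) / sum_k (1/a_ik)

# Reference criterion judged 3x as important as the second
# criterion and 5x as important as the third
priorities = ahp_express_priorities([1, 3, 5])
print([round(p, 3) for p in priorities])
```

The resulting priorities sum to 1 and rank the reference highest, as expected from the formula.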
This highlights the following key benefits:
  • Reduction in the overall analysis time: compared with traditional AHP, which requires all pairwise comparisons, the express version speeds up the decision process, saving time and human resources, and allows for a more consistent evaluation by avoiding the attention fatigue caused by a large number of comparisons.
  • Increased acceptability and stakeholder involvement: a more straightforward method can certainly help its acceptability by avoiding the opinion that AHP is too time-consuming and/or too cumbersome.
  • Oriented iterative review: in data-management contexts, conditions (e.g., volumes, technologies, and regulations) often change rapidly, so the initial decision about a particular data lifecycle might need to be reviewed; for this purpose, AHP-Express’s ease of use and speed are crucial.

Performance Issues of AHP Method

Despite its popularity and widespread use in the scientific community as a reliable MCDM tool for selecting the best alternative, the AHP method suffers from several issues [30,31,32,35,36].
The main issue with the traditional AHP method is that it requires a large number of pairwise comparisons, especially in the presence of many criteria: n(n − 1)/2 comparisons, where n is the number of criteria. Moreover, the method requires eigenvector and geometric mean evaluations, so the computations needed grow quadratically with the number of criteria, with an average memory cost of O(n²). Since a team of experts must assign a score for every comparison, as the number of criteria grows the task becomes cognitively demanding, particularly when the hierarchy encompasses numerous criteria or many alternatives, increasing the risk of errors and bias.
For those reasons, we chose to use the modified version AHP-Express [34], which requires far fewer pairwise comparisons than the original Saaty method, namely n − 1, so the number of comparisons is of the order of O(n). Moreover, AHP-Express does not require eigenvector and geometric mean evaluations, reducing the overall computation. On the other hand, it should be noted that AHP, like most other MCDM methods, requires a problem structuring (hierarchy- or network-based) that naturally leads to a limited number of criteria, thus keeping the overall computational workload low in practice, notwithstanding the intrinsic complexity. More on the advantages of AHP-Express can be found in Appendix A.
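A trivial script makes the difference in elicitation effort concrete; the counts follow directly from the formulas above:

```python
def classic_ahp_comparisons(n):
    """Pairwise judgments needed by classic AHP: every pair of criteria."""
    return n * (n - 1) // 2   # O(n^2)

def ahp_express_comparisons(n):
    """Judgments needed by AHP-Express: each criterion vs. one reference."""
    return n - 1              # O(n)

for n in (6, 10, 20):
    print(n, classic_ahp_comparisons(n), ahp_express_comparisons(n))
```

For instance, with 10 criteria classic AHP needs 45 judgments per expert, while AHP-Express needs only 9.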

5. Methodological Approach

The logical workflow of the AHP-Express decision support process can be adapted to the context of DLM selection [37]. The structure of the process is articulated as in Figure 1, in the following steps:
  • General objective definition: the principal scope of the analysis is here explicitly expressed and placed at the top of the hierarchical structure;
  • Hierarchy construction: it is defined on several levels, depending on the case;
  • Definition of the values in the decision matrix;
  • Choice of the reference element and pairwise comparisons, to determine which element is perceived as more relevant;
  • Weight calculation, to obtain, with the logic of levels and AHP reference factors, first the local weights of the alternatives for a specific criterion, then the priorities by aggregating the weights along the levels, in order to obtain the global weights of each alternative with respect to the general objective;
  • Analysis and decision, in terms of a relative ranking of all the alternatives.
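The weight-aggregation and ranking logic of the last two steps can be sketched as follows; all names, categories, and numbers are hypothetical placeholders, not the values used in the case study:

```python
def global_scores(category_weights, criterion_weights, alt_scores):
    """Aggregate local priorities into global alternative scores.

    category_weights: {category: weight}           (weights sum to 1)
    criterion_weights: {category: {criterion: w}}  (each group sums to 1)
    alt_scores: {alternative: {criterion: local priority}}
    """
    scores = {}
    for alt, per_crit in alt_scores.items():
        total = 0.0
        for cat, cat_w in category_weights.items():
            for crit, crit_w in criterion_weights[cat].items():
                # weight of the level above times the local weight below
                total += cat_w * crit_w * per_crit[crit]
        scores[alt] = total
    return scores

# Hypothetical two-category hierarchy with two alternatives
cats = {"New Data": 0.6, "Old Data": 0.4}
crits = {"New Data": {"Starting": 0.7, "Assessment": 0.3},
         "Old Data": {"Security": 1.0}}
alts = {"DLM-1": {"Starting": 0.5, "Assessment": 0.2, "Security": 0.8},
        "DLM-2": {"Starting": 0.5, "Assessment": 0.8, "Security": 0.2}}
ranking = sorted(global_scores(cats, crits, alts).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```

Because the local priorities for each criterion sum to 1 across the alternatives, the global scores also sum to 1, and the sorted list directly gives the relative ranking used in the analysis-and-decision step.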
AHP should be mapped onto the decision issues that characterize the choice of a DLM. On the basis of the meta-phases defined in [2], all DLMs may be mapped onto a set of phases, which are the phases we mentioned in the examples reported in Section 2 and that can be summarized in the set reported in Table 2. For the aims of the application of AHP-Express to our problem, we grouped the meta-phases into criteria that semantically represent the role of the meta-phases in the logic of the decision process, as in Table 2, in which each line of the table represents a grouping. Each criterion stands for one of the relevant roles that a phase can play in a DLM. Furthermore, the table relates the criteria to the categories to which they belong.
Figure 1. Operational phases of AHP-Express.
To apply the steps to our case, the levels mentioned in step 2 will be organized as follows:
  • First level: general criteria definition, in terms of two categories, category A, in which the particularly critical criteria have been grouped in case the data to be managed have to be collected from scratch (namely, “New Data”), and category B (namely, “Old Data”), which groups all the remaining criteria, which are to be considered of the same importance regardless of whether the data to be managed are collected from scratch or are already existing data belonging to other projects;
  • Second level: criteria definition, which corresponds to the six meta-phases identified, as shown in Table 2.
The values mentioned in step 3 represent, in the logic of the decision process, the evaluations of each phase of each selected DLM, expressed by a DLM expert as a score on Saaty’s scale. This requires the definition of a set of DLMs that are coherent with the purposes of the decision process and that provide the alternatives between which the decision process should discern. Here, we define the set by considering a selection of the most significant papers in the literature about data management in relation to DLMs. In this step, consequently, a panel of DLM experts selects the set of DLMs and assigns a score to each criterion for each DLM.
The idea of this work is to take advantage of the combination of main phases and transversal phases of the main DLMs to design and implement a tool for the assisted selection of the best-suited DLM for the needs of a specific data management problem by exploiting a pre-evaluation of DLM phases operated by the mentioned panel. The grouping of the phases into six criteria operated in Table 2 is meant to simplify the comparison of DLMs and to generalize the applicability of the tool.
There are a number of different DLMs developed by academics and professionals over the past years; in [2], 78 different DLMs are mentioned. To compile a list of significant DLMs for further analysis, the idea we use for choosing the models is to consider all models that have a high rating for at least one criterion. In this way, we have selected 10 different DLMs, as follows:
  • USGS [3] and DDI [38] because of the relevance in the definition of Starting criterion;
  • HINDAWI [39] for the Assessment criterion;
  • DataONE [4] and CIGREF [40] for the Computation criterion;
  • DCC [41] for the Administration criterion;
  • IBM [42], PII [43], and CRUD [44] for Security;
  • Enterprise Datalifecycle (or EDLM) [45] for the relevance of criterion End-of-Life.
The chosen DLMs have been selected because they are well consolidated and used both in academia and in industry. In particular, USGS has been developed by a government agency (the U.S. Geological Survey); DataONE, IBM, and CIGREF have been implemented by industries; and all other models are the result of academic work. Our selection process has been based on a ranking of the various phases in which they are articulated, considering them in the framework presented in [2], reconsidered in terms of the use, reuse and feedback, share, publish, and governance phases. This process has been documented in another paper, currently submitted for publication; consequently, it is out of the scope of this work.
To give a glimpse of the process anyway: in the case of DCC, we rank it as 10 in Administration in Table 3 because this category includes and summarizes the use, reuse and feedback, share, publish, and governance phases, which all contribute to a high ranking, as DCC is a DLM for digital artifacts supported by conformance to OAIS and ISO 15489. In the case of CRUD, we rank it as 10 in Security because it can accommodate different processes but mandatorily includes the Create, Store, and Destruct phases, which are of paramount importance to control the security aspects of data acquisition, maintenance, and disposal, ensuring that data are only stored while their permanence in the system is justified and that their disposal is guaranteed and controlled after that moment. Only DLMs from the literature that exhibit one or more high ranks in the relevant categories have been selected, unless they exhibit characteristics that are not totally covered by others, as in the case of CIGREF [46] or DDI. For the moment, readers can refer to the authors for details about the overall analysis, while the publication process is ongoing.
The selected panel, composed of the authors and a group of data management experts who, because of their positions, asked to remain anonymous, evaluated the six criteria for the ten DLMs, as reported in Table 3. This matrix has been used to accomplish step 3.
Steps 4, 5, and 6 are specific to each single application of the decision process that a user of our method may want to perform to choose the best-fitting DLM. In step 4, AHP should be applied by eliciting preferences about the criteria prioritization in comparison with a chosen reference criterion; this step may be assisted by software. The reference is meant to be the criterion that is thought to be most important in category A and in category B. To build the top-most level of the AHP hierarchy, categories A and B group the criteria that are more relevant when the decision concerns a DLM suitable for systems in which the prevalent attention is on creating a new data repository from scratch, or when new data feeds are more relevant than existing ones (A), and those that are more relevant when the attention is on augmenting an existing repository, or when existing data are more relevant than new data feeds (B).
Step 5 simply applies the AHP-Express computation, while step 6 allows for evaluating the final ranking of the proposed DLM alternatives, which may also be assisted by software. As the decision should be as informed as possible within the limitations of the user’s experience, in this step extended support to the user, in terms of both graphical comparisons and numerical information, may greatly improve the value of the presented method.
Consequently, our method is accompanied by a support tool.

6. The Proposed Tool

This software tool is available as an interactive web application on Streamlit at https://ahp3-python.streamlit.app/ (accessed on 30 July 2025). The source code is hosted on GitHub at https://github.com/christianriccio/ahp3-python.git (accessed on 30 July 2025).
The purpose of the tool is to support both professionals and academics in applying, in an easy and fast way, our method, but it is in general suitable to support any AHP-Express-based decision process.

6.1. General Architecture and Workflow

The tool is structured as follows:
  • User Interface
The decision maker may interact with the application via the Streamlit interface, which introduces the user to the usage of the tool and guides the user smoothly through the entire process, from data loading to results evaluation.
  • Data Loading
The first section allows for uploading a file, either in CSV or Excel format, containing the information about the alternatives under analysis. The required structure of this file consists of DLMs as rows and sub-criteria as columns.
  • Weight Configuration
The user can assign a weight to each of the presented categories (identified in this paper as “New Data” and “Old Data”), with the weights summing to 1. Moreover, if required, the tool also allows one to conduct multiple interviews (of different experts) and to assign a weight to each of them as well. The tool then aggregates the results of the interviews using a weighted geometric mean.
  • AHP-Express and Reference Element Selection
For each macro-category, the user can define a “reference” element that is considered initially the most relevant; because of this, the needed pairwise comparisons are simplified (as already pointed out) to only n − 1 judgments.
  • Priority Calculation
The key function of the tool, calculate_ahp_express_prior(), receives as input the comparison ratios a_{reference,j} and returns the normalized priority weights according to the following formula:
p_j = (1 / a_{reference,j}) / Σ_k (1 / a_{reference,k}).
  • Scores Aggregation and DLMs Final Ranking
Once the priorities of each sub-criterion within each macro-category (i.e., “New Data” and “Old Data”) are computed, the tool combines these weights into a global priority vector. The aggregation is obtained by the following formulas:
ω_i = w_i · sc_i,
Score(DLM_k) = Σ_{j=1}^{n} ω_j · Value(DLM_k, j),
where
  • ω_i is the global priority of sub-criterion i;
  • w_i is the weight of the macro-category to which sub-criterion i belongs;
  • sc_i is the local priority of sub-criterion i within its own category (i.e., Cat. A or Cat. B in our case);
  • Value(DLM_k, j) is the value of the k-th DLM for sub-criterion j.
In other words, the aggregation first multiplies each local priority by the weight assigned by the user to the corresponding macro-category. This yields an overall priority for each sub-criterion, consistent with the hierarchical setting of AHP. Then, the final score of each DLM is obtained as a weighted sum of its values over all sub-criteria, using the global priorities as weights.
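The two computations above can be sketched in a few lines of Python. This is an illustrative sketch, not the tool's actual source: the comparison ratios and category weights are invented examples, and the two DLM rows are values from Table 3 reordered to match the assumed criterion ordering.

```python
def ahp_express_priorities(ratios):
    """Normalized AHP-Express priorities: p_j = (1/a_ref_j) / sum_k (1/a_ref_k),
    where a_ref_j expresses how strongly the reference element is preferred
    to element j (the reference itself has ratio 1)."""
    inverses = [1.0 / r for r in ratios]
    total = sum(inverses)
    return [inv / total for inv in inverses]

# Local priorities within each macro-category (ratios are illustrative).
pri_a = ahp_express_priorities([1, 2, 3])  # cat. A: Starting, Administration, End-of-Life
pri_b = ahp_express_priorities([2, 2, 1])  # cat. B: Assessment, Computation, Security

# Global priorities: macro-category weight times local priority (p_A + p_B = 1).
p_a, p_b = 0.6, 0.4
global_w = [p_a * p for p in pri_a] + [p_b * p for p in pri_b]

def dlm_scores(matrix, weights):
    """Final score of each DLM as the weighted sum of its sub-criterion values."""
    return {name: sum(w * v for w, v in zip(weights, vals))
            for name, vals in matrix.items()}

# Two rows from Table 3, reordered to match the criterion order of global_w.
scores = dlm_scores({"Hindawi": [5, 6, 5, 10, 8, 7],
                     "DCC": [7, 10, 4, 7, 8, 6]},
                    global_w)
```

Note that both the local priorities and the global priority vector sum to 1 by construction, so the final scores stay on the same 0–10 scale as the decision matrix values.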

6.2. Sensitivity Analysis

Another distinctive feature of the tool is the possibility to conduct a sensitivity analysis on the variation of the macro-categories' weights. The function sensitivity_anal() explores the variation of p_A ∈ [0, 1] to observe how each DLM score changes. The procedure is as follows:
  • Iterate through all possible values of p_A with a step of 0.05;
  • At each step, re-compute the combined sub-criteria priorities;
  • Re-compute the final DLM scores;
  • At the end of the process, draw a chart showing how each DLM score varies with p_A.
In this way, both the more “robust” models (those whose score does not vary drastically) and the more “sensitive” ones are identified with respect to a change in the importance of the categories.
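The sweep described above can be sketched as follows; this is an illustrative reconstruction of the loop's logic (the tool's sensitivity_anal() may differ), and the local priorities and DLM rows are made-up stand-ins.

```python
# Stand-in local priorities and decision-matrix rows, ordered as
# [category-A sub-criteria..., category-B sub-criteria...].
local_a = [0.55, 0.27, 0.18]   # local priorities within category A (sum to 1)
local_b = [0.25, 0.25, 0.50]   # local priorities within category B (sum to 1)
matrix = {
    "Hindawi": [5, 6, 5, 10, 8, 7],
    "DCC": [7, 10, 4, 7, 8, 6],
}

def scores_for(p_a):
    """Re-combine global priorities for a given category-A weight and re-score DLMs."""
    weights = [p_a * p for p in local_a] + [(1 - p_a) * p for p in local_b]
    return {name: sum(w * v for w, v in zip(weights, vals))
            for name, vals in matrix.items()}

# Sweep p_A from 0 to 1 with a step of 0.05, as the tool does.
sweep = {round(i * 0.05, 2): scores_for(i * 0.05) for i in range(21)}

# A "robust" DLM shows a small score range across the whole sweep.
ranges = {name: max(s[name] for s in sweep.values())
                - min(s[name] for s in sweep.values())
          for name in matrix}
```

Since each score is linear in p_A, the extreme values always occur at the endpoints of the sweep; a small range marks a ranking that is stable under changes in category importance.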

6.3. Output and Advantages for the Decision Maker

The tool integrates visualization features that produce (i) a bar chart comparing final DLM scores, (ii) a radar chart displaying sub-criteria strengths and weaknesses, and (iii) sensitivity plots showing score variation. At the end of the analysis, the decision maker has a final ranking of the DLMs under evaluation, together with information about how much each criterion's weight influences the overall ranking. This allows for more robust and informed decisions.
Moreover, the strong point of the overall analysis is AHP-Express itself, which drastically reduces the time and human resources dedicated to the task by requiring, as seen, only n − 1 pairwise comparisons. This is particularly useful when there are many alternatives and sub-criteria to evaluate, or when evaluations from a panel of experts are required.
In summary, the presented tool offers an interactive and user-friendly environment for the evaluation of different DLMs via an AHP-Express modeling approach. Because of its modular architecture, the tool offers the following:
  • The possibility to easily manage input data;
  • Automatic priority calculation via AHP-Express, drastically reducing time and cognitive load caused by pairwise comparisons;
  • The evaluation of the stability of the decisions via sensitivity analysis;
  • The production of useful charts that aim at helping even the less experienced decision makers in the field of data lifecycle modeling.

7. A Running Example: “Museo del Carbone”

The Museo del Carbone belongs to the field of cultural heritage conservation. This Italian museum, situated in Carbonia, is part of the European Route of Industrial Heritage and is dedicated to documenting the historical activities of local coal mining, featuring mineral specimens and historical relics. In this context, temperature, humidity, and light levels are critical for artifact preservation, as variations in these conditions may result in material degradation.
In this instance, data management is focused on monitoring and regulating parameters to mitigate threats to artifacts, while maintaining the usability of the site and ensuring safety. This case, developed during a doctoral research project, is based on a technical report on the design of sensing garments for museums, and serves as a suitable candidate case study for selecting the most appropriate DLM system to manage the data lifecycle generated by sensors, aiming for optimal sensor allocation and precise artifact monitoring.
The objective is to use the proposed tool to identify the most appropriate DLM to support the data management of sensor-based artifact monitoring within the museum, balancing usability, security, administration, and computational load across all lifecycle phases. The first step is choosing between the pre-configured decision matrix and inputting a customized one in CSV format, as shown in Figure 2.
The next step is to perform the interviews with the experts' group (Figure 3). Since our primary objective at this stage is to try the tool as a Decision Support System and to highlight not only the best-ranking DLM but also possible alternatives worth discussing, the group is composed of only three experts, to avoid a strongly characterized output. The results of the interviews are the priority vectors for both the A and B categories, as shown in Figure 4.
At this point, all needed information has been loaded, and the computation is done. The final ranking is shown in Figure 5.
All obtained results are also shown in Figure 6, a bar chart of the final DLM ranking, in which scores are calculated based on the weighted criteria. The three top-performing DLMs are the following:
  • Hindawi (7.9);
  • DCC (7.4);
  • CIGREF (7.2).
Figure 6. Bar plot with the overall final ranking.
Better insights can be gained from the radar plot in Figure 7, showing the performance of the ten DLMs across the six lifecycle criteria. The power of this plot resides in its ability to present the strong and weak points of each model in a single, synthetic view, supporting the decision. Models such as DataONE and DCC exhibit strong performance in Computation and Administration, while Hindawi scores consistently high across almost all sub-factors, suggesting an overall balanced and robust framework. In contrast, IBM and EDLM reveal limitations in several dimensions, particularly in the End-of-Life and Starting phases, respectively, suggesting gaps in lifecycle completeness or implementation practicality.
A sensitivity study was carried out by progressively changing the weight of category A, in order to evaluate the robustness of the ranking. Figure 8 displays the findings.
The figure shows that Hindawi, DCC, and DataONE are highly resilient to shifts in the relative importance of the categories, maintaining their rankings with little variation. On the other hand, as the weight of category A increases, IBM and EDLM steadily deteriorate, proving their unsuitability in situations where category-A priorities are crucial.

8. Interactions with the Technological Variable: The Blockchain in Data Lifecycle Management

The presented approach provides a general method to assess solutions in the earliest phases of a design process. However, DLM characteristics are not the only factor that can guide preliminary decisions once the main evaluations have been carried out. Technological factors may also be considered, defining a second-approximation scenario that may confirm or disprove a preliminary decision. Blockchain technology provides a significant example, allowing us to show how the proposed approach may be used for the next step of the design process.
Blockchain technologies present unique features, such as immutability, decentralization, transparency, and cryptographic security, that may influence the phases of the data lifecycle, introducing several constraints and particular advantages/disadvantages in some parts of DLMs. The impact of blockchain on DLM phases may be described as follows [47,48,49]:
  • Planning/collection phase: data are created as transactions or blocks in a distributed ledger, and are cryptographically hashed for integrity, and smart contracts can automate data generation based on predefined rules;
  • Share/Governance phase: share phase may benefit from the distributed nature of blockchain;
  • Archival/Disposal phase: blockchain systems are typically append-only systems; the deletion of an item is not simple to implement [50];
  • Data assessment: the distributed nature of blockchain makes this phase more challenging;
  • Analysis/Storage phase: data are replicated across nodes, ensuring redundancy and availability;
  • Computation: this phase could benefit from the distributed nature of blockchain;
  • Security phase: blockchain systems have a high built-in security level due to the high tolerance to attacks [50].
It is therefore relevant to highlight that the good security level of blockchain systems can ensure security compliance even if the DLM to be used has a weak (or no) security phase. At the same time, it is clear that any data deletion phase is hindered by the intrinsic immutability of blockchain. That said, keeping in mind the same scenario described in the previous section, in case of a blockchain implementation the expert group could hypothetically choose to modify the scores for some criteria in the decision matrix, as follows:
  • Giving a minimum score of 5 for the Security criterion of all DLMs (blockchain is intrinsically secure);
  • Giving a maximum score of 5 for the End-of-Life criterion (intrinsic immutability).
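The two adjustments can be expressed as a simple transformation of the decision matrix: a floor of 5 on the Security column and a ceiling of 5 on the End-of-Life column. The sketch below (function name and column indices are ours, not the tool's) applies the rules to three rows taken from Table 3 and reproduces the corresponding rows of Table 4.

```python
# Column order follows Table 3:
# Starting, Assessment, Computation, Administration, Security, End-of-Life.
SECURITY, END_OF_LIFE = 4, 5   # column indices

def apply_blockchain_rules(matrix):
    """Hypothesized blockchain adjustment: Security floor 5, End-of-Life cap 5."""
    adjusted = {}
    for dlm, row in matrix.items():
        row = list(row)
        row[SECURITY] = max(row[SECURITY], 5)        # intrinsic security floor
        row[END_OF_LIFE] = min(row[END_OF_LIFE], 5)  # immutability ceiling
        adjusted[dlm] = row
    return adjusted

original = {  # three rows from Table 3
    "USGS": [8, 10, 5, 3, 0, 5],
    "CRUD": [5, 7, 4, 7, 10, 8],
    "DDI": [8, 7, 5, 3, 0, 6],
}
modified = apply_blockchain_rules(original)
# USGS Security 0 -> 5; CRUD End-of-Life 8 -> 5; DDI becomes [8, 7, 5, 3, 5, 5],
# matching the corresponding rows of Table 4.
```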
The modified decision matrix is shown in Table 4.
Using those values in the proposed tool, with the same weights determined by the experts, the global scores of the DLMs change: in this case, the three DLMs evaluated as the best alternatives are the following.
  • Hindawi (7.5);
  • DCC (6.8);
  • USGS (6.7).
The overall ranking of all DLMs in this case is shown in Figure 9.
It can therefore be observed that, in case of a blockchain implementation, the results change significantly. In the new list, the best rank is obtained by Hindawi, followed by DCC, as in the previous case, while the third rank is obtained by the USGS DLM, which does not have any security phase: this means that, in a blockchain-based implementation, even a simple but solid DLM may be worth attention due to the intrinsically high security of a blockchain environment.

9. Conclusions and Final Remarks

In this paper, the application of the AHP-Express method to DLM selection has been experimented, and a tool for decision support in this field has been developed. In the evaluated example, the proposed tool has proven to be effective in assessing DLM suitability to the specific problem, while maintaining the visibility of a wide range of indexes useful to a well-reasoned selection of the best DLM.
It should also be noted that the proposed tool, in the current version, does not interact with existing data management software, which limits its applicability in dynamic contexts where DLM data may evolve over time.
Future work will span different directions: the impact of blockchain on data management and DLMs will be explored in more depth, and a series of case studies will be proposed. Another direction is the integration of this tool into a wider project that will also include risk–benefit analysis, defining a proper risk–benefit model for this purpose. With regard to the proposed tool, the radar plot representing the ranking of the alternative DLMs will be restricted to a limited number of alternatives, allowing for better readability and easier understanding of the results.

Author Contributions

Conceptualization, M.I., and M.M.; methodology, M.M.; software, C.R.; validation, M.M., C.R., and B.V.; formal analysis, M.I.; investigation, M.I., M.M., C.R., and B.V.; resources, M.I.; data curation, C.R., and B.V.; writing—original draft preparation, M.I., M.M., C.R., and B.V.; writing—review and editing, M.I., M.M., C.R., and B.V.; visualization, C.R.; supervision, M.I., and M.M.; project administration, M.I.; funding acquisition, M.I. All authors have read and agreed to the published version of the manuscript. All authors contributed in equal measure to this work.

Funding

This research was partially funded by MUR PRIN PNRR 2022 grant number P20227W8ZC.

Data Availability Statement

All codes and data discussed in this paper are available in a public-access GitHub repository at https://github.com/christianriccio/ahp3-python (accessed on 30 July 2025).

Acknowledgments

The authors wish to thank Giusi Castaldo, Roberta De Fazio, Luigi Piero Di Bonito, Benedetta Paolino, and Fabrizio Rainone for providing the Museo del Carbone case.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AHP – Analytical Hierarchy Process
CI – Consistency Index
CR – Consistency Ratio
CRUD – Create, Read, Update, and Delete
CSV – Comma-Separated Values
DCC – Digital Curation Centre Curation Lifecycle Model
DDI – Data Documentation Initiative
DLM – Data Lifecycle Model
DMP – Data Management Plan
EDLM – Enterprise Data Lifecycle Model
GDPR – General Data Protection Regulation
MCDM – Multi-Criteria Decision Making
PII – Personally Identifiable Information
OAIS – Open Archival Information System
RI – Random Index
USGS – United States Geological Survey Science Data Lifecycle Model

Appendix A. Advantages of AHP-Express

AHP-Express is well suited for the analysis and comparison of DLMs, where the decision maker (be it an IT manager, a manager, or even a team) has the responsibility of selecting and/or optimizing the most suited models within an organization and evaluating its suitability with respect to some specific requirements. As previously shown, DLMs describe the set of phases that data encounter from the generation until eventual destruction, showing how the literature is full of proposals. Often, the evaluation and comparison of such models may result in being complex because of factors such as the following:
  • Scalability: ease of extending the model to increasing volumes of data;
  • Security and compliance: compliance with legal regulations (i.e., GDPR) and privacy and data protection requirements;
  • Efficiency of the processes: capability of effectively managing data during various phases;
  • Flexibility and integration: adaptability to different application scenarios and integration with other systems.
In such a context, the use of a simplified multi-criteria method, i.e., AHP-Express, helps to manage such complexity in a structured way, evaluating the importance of each criterion/sub-criterion with respect to the principal objective (e.g., “selection of the best DLM for the organization X”) via the construction of a simplified comparison matrix.
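The reduction in elicitation effort can be quantified: classical AHP requires a full pairwise comparison of all n elements at a level, while AHP-Express compares each element only against the reference. A trivial sketch of the counts (function names are ours):

```python
def full_ahp_comparisons(n):
    """Classical AHP: every unordered pair compared once, n*(n-1)/2 judgments."""
    return n * (n - 1) // 2

def ahp_express_comparisons(n):
    """AHP-Express: each non-reference element compared only to the reference."""
    return n - 1

# For the six criteria used in this paper: full AHP requires 15 judgments
# per level, while AHP-Express requires only 5.
```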

References

  1. Sinaeepourfard, A.; Garcia, J.; Masip-Bruin, X.; Marín-Tordera, E. Towards a Comprehensive Data LifeCycle Model for Big Data Environments. In Proceedings of the 2016 IEEE/ACM 3rd International Conference on Big Data Computing Applications and Technologies (BDCAT), Shanghai, China, 6–9 December 2016; pp. 100–106. [Google Scholar]
  2. Shah, S.I.H.; Peristeras, V.; Magnisalis, I. DaLiF: A data lifecycle framework for data-driven governments. J. Big Data 2021, 8, 1–44. [Google Scholar] [CrossRef]
  3. Faundeen, J.; Burley, T.E.; Carlino, J.A.; Govoni, D.L.; Henkel, H.S.; Holl, S.L.; Hutchison, V.B.; Martín, E.; Montgomery, E.T.; Ladino, C.; et al. The United States Geological Survey Science Data Lifecycle Model; Technical Report; US Geological Survey: Reston, VA, USA, 2014.
  4. Allard, S. DataONE: Facilitating eScience through collaboration. J. Escience Librariansh. 2012, 1, 3. [Google Scholar] [CrossRef]
  5. Hasan, R.; Burns, R. The Life and Death of Unwanted Bits: Towards Proactive Waste Data Management in Digital Ecosystems. arXiv 2011, arXiv:1106.6062. [Google Scholar]
  6. Campanile, L.; Iacono, M.; Mastroianni, M.; Viscardi, B. Estimating performance costs of enabling privacy-awareness in data lifecycles. In Proceedings of the 1st International Conference on Modeling, Simulation and Computer Technology, Skikda, Algeria, 5–6 November 2024; Springer: Berlin/Heidelberg, Germany, 2024. [Google Scholar]
  7. Thakkar, J.J. Analytic hierarchy process (AHP). In Multi-Criteria Decision Making; Springer: Berlin/Heidelberg, Germany, 2021; pp. 33–62. [Google Scholar]
  8. Goztepe, K. Information Security Risk Assessment Evaluation Applying AHP. JoCI 2020, 21, 33. [Google Scholar]
  9. Kinjo, E.M.; Librantz, A.F.H.; de Souza, E.M.; Galdino, M. Criticality assessment of the components of IoT system in health using the AHP method. Res. Soc. Dev. 2021, 10, e57010212917. [Google Scholar] [CrossRef]
  10. Lin, I.C.; Lin, Y.W.; Wu, Y.S. Corresponding Security Level with the Risk Factors of Personally Identifiable Information through the Analytic Hierarchy Process. J. Comput. 2016, 11, 124–131. [Google Scholar] [CrossRef]
  11. Ho, W. Integrated analytic hierarchy process and its applications–A literature review. Eur. J. Oper. Res. 2008, 186, 211–228. [Google Scholar] [CrossRef]
  12. Canco, I.; Kruja, D.; Iancu, T. AHP, a reliable method for quality decision making: A case study in business. Sustainability 2021, 13, 13932. [Google Scholar] [CrossRef]
  13. Stofkova, J.; Krejnus, M.; Stofkova, K.R.; Malega, P.; Binasova, V. Use of the Analytic Hierarchy Process and Selected Methods in the Managerial Decision-Making Process in the Context of Sustainable Development. Sustainability 2022, 14, 11546. [Google Scholar] [CrossRef]
  14. Alshaibi, A.; Kahraman, G.D.; Qasim, A. Analytic Hierarchy Process (AHP) as criteria in business decision making and their implementation in practice. Int. J. Manag. Bus. Stud. 2016, 6, 209–220. [Google Scholar]
  15. Schmidt, K.; Aumann, I.; Hollander, I.; Damm, K.; von der Schulenburg, J.M.G. Applying the Analytic Hierarchy Process in healthcare research: A systematic literature review and evaluation of reporting. BMC Med. Inform. Decis. Mak. 2015, 15, 112. [Google Scholar] [CrossRef] [PubMed]
  16. Fiore, P.; Sicignano, E.; Donnarumma, G. An AHP-based methodology for the evaluation and choice of integrated interventions on historic buildings. Sustainability 2020, 12, 5795. [Google Scholar] [CrossRef]
  17. Badri, M.A. A combined AHP–GP model for quality control systems. Int. J. Prod. Econ. 2001, 72, 27–40. [Google Scholar] [CrossRef]
  18. Digiesi, S.; Mossa, G.; Ranieri, L.; Rubino, S. An integrated approach based on balanced scorecard and analytic hierarchy process for strategic evaluation of local healthcare agencies. In Proceedings of the International Symposium on the Analytic Hierarchy Process, Sorrento, Italy, 15–18 June 2011; pp. 15–18. [Google Scholar]
  19. Salehzadeh, R.; Ziaeian, M. Decision making in human resource management: A systematic review of the applications of analytic hierarchy process. Front. Psychol. 2024, 15, 1400772. [Google Scholar] [CrossRef] [PubMed]
  20. Vaidya, O.S.; Kumar, S. Analytic hierarchy process: An overview of applications. Eur. J. Oper. Res. 2006, 169, 1–29. [Google Scholar] [CrossRef]
  21. Subramanian, N.; Ramanathan, R. A review of applications of Analytic Hierarchy Process in operations management. Int. J. Prod. Econ. 2012, 138, 215–241. [Google Scholar] [CrossRef]
  22. Ramík, J.; Ramík, J. Applications in decision-making: Analytic hierarchy process—AHP revisited. In Pairwise Comparisons Method: Theory and Applications in Decision Making; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 189–211. [Google Scholar]
  23. Samanaseh, V.; Noor, Z.; Mazlan, M. The application of analytic hierarchy process for innovative solution: A review. In Proceedings of the IOP Conference Series: Earth and Environmental Science; IOP Publishing: Bristol, UK, 2023; Volume 1143, p. 012022. [Google Scholar]
  24. Ogrodnik, K. Multi-criteria analysis of design solutions in architecture and engineering: Review of applications and a case study. Buildings 2019, 9, 244. [Google Scholar] [CrossRef]
  25. Amrozi, Y.; Usman, I.; Ramdhani, M.A. History of Decision-Making: Development and its Applications. In Proceedings of the Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2020; Volume 1573, p. 012010. [Google Scholar]
  26. Štemberger, M.I.; Bosilj-Vukšić, V.; Jaklić, M.I. Business process management software selection–two case studies. Econ. Res.-Ekonomska istraživanja 2009, 22, 84–99. [Google Scholar] [CrossRef]
  27. Gribaudo, M.; Iacono, M. Places, Transitions and Queues: New Proposals for Interconnection Semantics. Lect. Notes Comput. Sci. 2023, 13659, 216–230. [Google Scholar] [CrossRef]
  28. Ruijters, E.; Stoelinga, M. Fault tree analysis: A survey of the state-of-the-art in modeling, analysis and tools. Comput. Sci. Rev. 2015, 15, 29–62. [Google Scholar] [CrossRef]
  29. Franceschinis, G.; Gribaudo, M.; Iacono, M.; Mazzocca, N.; Vittorini, V. DrawNET++: Model objects to support performance analysis and simulation of systems. Lect. Notes Comput. Sci. 2002, 2324, 233–238. [Google Scholar] [CrossRef]
  30. Saaty, R.W. The analytic hierarchy process—What it is and how it is used. Math. Model. 1987, 9, 161–176. [Google Scholar] [CrossRef]
  31. Saaty, T.L. Decision making with the analytic hierarchy process. Int. J. Serv. Sci. 2008, 1, 83–98. [Google Scholar] [CrossRef]
  32. Tavana, M.; Soltanifar, M.; Santos-Arteaga, F.J. Analytical hierarchy process: Revolution and evolution. Ann. Oper. Res. 2023, 326, 879–907. [Google Scholar] [CrossRef]
  33. Pant, S.; Kumar, A.; Ram, M.; Klochkov, Y.; Sharma, H.K. Consistency indices in analytic hierarchy process: A review. Mathematics 2022, 10, 1206. [Google Scholar] [CrossRef]
  34. Leal, J.E. AHP-express: A simplified version of the analytical hierarchy process method. MethodsX 2020, 7, 100748. [Google Scholar] [CrossRef] [PubMed]
  35. Hontoria, E.; Munier, N. Uses and Limitations of the AHP Method a Non-Mathematical and Rational Analysis; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  36. Nasution, S.M.; Husni, E.; Kuspriyanto, K.; Yusuf, R. Personalized Route Recommendation Using F-AHP-Express. Sustainability 2022, 14, 10831. [Google Scholar] [CrossRef]
  37. Iacono, M.; Mastroianni, M.; Riccio, C.; Viscardi, B. A Blockchain/Cloud Privacy Performance Comparison Using AHP Methodology: A Smart Road Case Study. In Proceedings of the Communications of the ECMS. European Council for Modelling and Simulation, Catania, Italy, 24–27 June 2025; Volume 39, pp. 584–592. [Google Scholar]
  38. Ma, X.; Fox, P.; Rozell, E.; West, P.; Zednik, S. Ontology Dynamics in a Data Life Cycle: Challenges and Recommendations from a Geoscience Perspective. J. Earth Sci. 2014, 25, 407–412. [Google Scholar] [CrossRef]
  39. Khan, N.; Yaqoob, I.; Hashem, I.A.T.; Inayat, Z.; Mahmoud Ali, W.K.; Alam, M.; Shiraz, M.; Gani, A. Big Data: Survey, Technologies, Opportunities, and Challenges. Sci. World J. 2014, 2014, 712826. [Google Scholar] [CrossRef]
  40. CIGREF. Data & Analytics Governance and Architecture, Developing and Implementing a data Strategy. 2023. Available online: https://www.cigref.fr/wp/wp-content/uploads/2023/03/Data-Analytics-Governance-and-Architecture_January_2023_EN.pdf (accessed on 8 July 2025).
  41. Higgins, S. The DCC curation lifecycle model. Int. J. Digit. Curation 2008, 3, 134–140. [Google Scholar] [CrossRef]
  42. IBM Software. Wrangling Big Data: Fundamentals of Data Lifecycle Management; Technical Report; IBM: Osaka, Japan, 2013. [Google Scholar]
  43. Michota, A.; Katsikas, S. Designing a seamless privacy policy for social networks. In Proceedings of the 19th Panhellenic Conference on Informatics, New York, NY, USA, 1–3 October 2015; PCI ’15. pp. 139–143. [Google Scholar] [CrossRef]
  44. Yu, X.; Wen, Q. A View about Cloud Data Security from Data Life Cycle. In Proceedings of the 2010 International Conference on Computational Intelligence and Software Engineering, Sanya, China, 23–24 October 2010; pp. 1–4. [Google Scholar] [CrossRef]
  45. Chaki, S. The Lifecycle of Enterprise Information Management. In Enterprise Information Management in Practice: Managing Data and Leveraging Profits in Today’s Complex Business Environment; Apress: Berkeley, CA, USA, 2015; pp. 7–14. [Google Scholar] [CrossRef]
  46. El Arass, M.; Tikito, I.; Souissi, N. Data lifecycles analysis: Towards intelligent cycle. In Proceedings of the 2017 Intelligent Systems and Computer Vision (ISCV); IEEE: Piscataway, NJ, USA, 2017; pp. 1–8. [Google Scholar]
  47. Wen, L.; Zhang, L.; Li, J. Application of blockchain technology in data management: Advantages and solutions. In Proceedings of the International Conference on Big Scientific Data Management, Beijing, China, 30 November–1 December 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 239–254. [Google Scholar]
  48. Li, R.; Asaeda, H. A Blockchain-Based Data Life Cycle Protection Framework for Information-Centric Networks. IEEE Commun. Mag. 2019, 57, 20–25. [Google Scholar] [CrossRef]
  49. Freund, G.P.; Fagundes, P.B.; de Macedo, D.D.J. An analysis of blockchain and GDPR under the data lifecycle perspective. Mob. Netw. Appl. 2021, 26, 266–276. [Google Scholar] [CrossRef]
  50. Campanile, L.; Iacono, M.; Marulli, F.; Mastroianni, M. Designing a GDPR compliant blockchain-based IoV distributed information tracking system. Inf. Process. Manag. 2021, 58, 102511. [Google Scholar] [CrossRef]
Figure 2. Screenshot of the first part of the tool, related to the data insertion.
Figure 3. Screenshot of the second part of the tool, concerning the setting up of the interview.
Figure 4. Numerical results of the final prioritization.
Figure 5. Numerical results of the final ranking.
Figure 7. Radar plot showing how the DLMs score according to the criteria.
Figure 8. Sensitivity analysis curves.
Figure 9. The new final score of DLMs in case of blockchain implementation.
Table 1. Saaty’s scale for AHP method.
| Intensity of Importance | Definition | Explanation |
| 1 | Equal Importance | Two criteria contribute equally to the objective. |
| 3 | Moderate Importance | One criterion is slightly more important than the other. |
| 5 | Strong Importance | One criterion is strongly favored over the other. |
| 7 | Very Strong Importance | One criterion is very strongly favored over the other. |
| 9 | Extreme Importance | The dominance of one criterion over the other is absolute. |
| 2, 4, 6, 8 | Intermediate Values | Used when compromise between two adjacent judgments is needed. |
| Reciprocals (1/2, 1/3, ...) | Inverse Comparison | If criterion A is x times more important than B, then B is 1/x times A. |
Table 2. The structured set of criteria; the meta-phases are detailed in [2].
| Category | Criteria | Meta-Phases |
| A | Starting | Planning, Collection |
| A | Administration | Use/Reuse/Feedback, Share, Governance |
| A | End-of-Life | Archival, Disposal |
| B | Data Assessment | Preparation, Quality |
| B | Computation | Analysis, Visualization, Storage |
| B | Security | Access, Protection |
Table 3. Decision matrix values of the selected DLMs.
| DLM | Starting | Assessment | Computation | Administration | Security | End-of-Life |
| USGS | 8 | 10 | 5 | 3 | 0 | 5 |
| DataONE | 4 | 7 | 10 | 2 | 0 | 0 |
| IBM | 5 | 0 | 2 | 5 | 9 | 4 |
| Hindawi | 5 | 10 | 8 | 6 | 7 | 5 |
| DCC | 7 | 7 | 8 | 10 | 6 | 4 |
| CRUD | 5 | 7 | 4 | 7 | 10 | 8 |
| CIGREF | 8 | 7 | 9 | 6 | 5 | 0 |
| DDI | 8 | 7 | 5 | 3 | 0 | 6 |
| PII | 5 | 4 | 4 | 6 | 10 | 4 |
| EDLM | 4 | 0 | 6 | 7 | 2 | 9 |
Table 4. The modified decision matrix values in case of blockchain implementation.
| DLM | Starting | Assessment | Computation | Administration | Security | End-of-Life |
| USGS | 8 | 10 | 5 | 3 | 5 | 5 |
| DataONE | 4 | 7 | 10 | 2 | 5 | 0 |
| IBM | 5 | 0 | 2 | 5 | 9 | 4 |
| Hindawi | 5 | 10 | 8 | 6 | 7 | 5 |
| DCC | 7 | 7 | 8 | 10 | 6 | 4 |
| CRUD | 5 | 7 | 4 | 7 | 10 | 5 |
| CIGREF | 8 | 7 | 9 | 6 | 5 | 0 |
| DDI | 8 | 7 | 5 | 3 | 5 | 5 |
| PII | 5 | 4 | 4 | 6 | 10 | 4 |
| EDLM | 4 | 0 | 6 | 7 | 5 | 5 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
