Data sharing and data reuse are two complementary aspects of modern research. Researchers share their data to foster a sense of community, to demonstrate the integrity of the acquired data, and to enhance the quality and reproducibility of research [1]. In addition, data sharing is supported by the emerging citation system for datasets, by scientific journal requirements, and by funding agencies that want to maximize their return on investments in science [2]. At the same time, researchers are eager to reuse available data to integrate information that answers interdisciplinary research questions and to optimize the use of funding [4]. Although attitudes towards data sharing and reuse are increasingly favorable [1], data discovery and reuse remain difficult in practice [5]. Studies show that 40% of qualitative datasets were never downloaded and that about 25% of data is used fewer than 10 times [6]. In addition, Vines et al. demonstrated that the availability of existing datasets associated with published articles decreases by 17% per year, either because the hardware needed to access old storage media is no longer available or because the data were lost [7]. To be effective, data sharing and reuse need appropriate infrastructure, standards, and policies [5].
In 2016, the FORCE11 group proposed guidelines to increase data reuse in the life sciences. These guidelines aimed to make data findable, accessible, interoperable, and reusable, and were summarized with the acronym FAIR [8]. In a short time, the FAIR guidelines have gained remarkable popularity, and they are currently supported by funding agencies and political entities such as the European Commission, the National Institutes of Health in the United States, and institutions in Africa and Australia [10]. In addition, academic and institutional initiatives have been launched to promote and implement data FAIRness, such as GO FAIR [11] and FAIRsharing [12].
Although largely adopted, the FAIR principles do not specify any technical requirement, as they are deliberately intended to be aspirational [10]. The lack of practical specifications has generated a wide spectrum of interpretations and concerns and has raised the need to define measurements of data FAIRness [9]. Some of the authors of the seminal paper proposed a set of FAIR metrics [13], subsequently reformulated as FAIR maturity indicators [14]. At the same time, they invited consortia and communities to suggest and create alternative evaluators. The majority of the proposed tools are online questionnaires that researchers and repository curators can fill in manually to assess the FAIRness of their data (Table 1). However, the FAIR metrics guidelines emphasize the importance of creating “objective, quantitative, [and] machine-interpretable” evaluators [13]. Following these criteria, two platforms have recently been developed to compute FAIR maturity indicators automatically: FAIR Evaluation Services and FAIRshake. The first platform offers an evaluation of maturity indicators and compliance tests [14], whereas the second provides metrics, rubrics, and evaluators for registered digital resources [15]. Both platforms provide use cases for FAIRness assessment; however, they do not provide a systematic analysis of the evaluated datasets and repositories. Indeed, a key feature of the FAIR principles is their requirement for domain-specific identification of the metadata needed to drive community acceptance and to facilitate data reuse within and beyond a specific research community. An example of such a community-driven approach is presented by Papadiamantis et al. [16] on metadata standards for the nanosafety community; it extends the technical FAIR principles, which are directed largely at database managers and curators, with a further set of scientific FAIR principles directed at data generators (experimental and computational), defined to support the operationalization of the FAIR principles for nanosafety researchers.
The literature reports two studies evaluating FAIRness for large datasets. Dunning et al. [17] used a qualitative approach to investigate 37 repositories and databases, assessing FAIRness with a traffic-light rating system ranging from no to full compliance. Weber et al. [18] implemented a computational workflow to analyze the retrieval of more than a million images from five repositories, proposing image-specific metrics, including time and place of acquisition, to assess image provenance. The first study provides valuable, concrete guidelines for assessing data FAIRness; however, the implementation was manual, diverging from what the guidelines suggest. The second study is a relevant example of a computational implementation, although it is limited to the retrieval of images, evaluates only 10 of the 15 FAIR principles, and lacks a unique correspondence between the FAIR principles and the maturity indicators.
These existing analyses show that features shared by many resources are easier to test, which leads them to focus on the more commonly implemented FAIR principles, particularly the findability and accessibility aspects, as these apply to any database or dataset. Moreover, such analyses are performed at the repository/database level, which provides no information about the FAIRness of the data a repository contains, even though it is the latter that determines whether the data can really be reused. Data quality aspects are part of the interoperability and reusability criteria and, as such, are underappreciated aspects of FAIR.
In this paper, we propose a computational approach to calculate FAIR maturity indicators in the life sciences, exemplified with nanotoxicology data. We followed the recommendations provided by the Maturity Indicator Authoring Group (MIAG) [14], and we created a visualization tool to summarize and compare FAIR maturity indicators across datasets and repositories containing toxicology- and nanotoxicology-related data. We tested the feasibility of our approach on three real use cases in which researchers retrieved data from six scientific repositories to answer their research questions. Finally, we made our work open and reproducible by implementing our computations in a Jupyter Notebook using Python.
We proposed a semiautomatic computational approach to evaluate FAIR maturity indicators for scientific data repositories in the life sciences. We tested the feasibility of our method on three real use cases in which researchers looked for datasets to answer their scientific questions. Despite covering different data types and different purposes, the three use cases, spanning six databases, scored similarly. Finally, we created a FAIR balloon plot to summarize and compare our results, and we made our approach open and reproducible. Real use cases in the life sciences were the starting point of our computational implementation.
In their guidelines, the MIAG suggests calculating maturity indicators starting from a globally unique identifier (GUID) (e.g., InChI, DOI, Handle, URL) [28]. However, a priori knowledge of a GUID often means that a researcher has already found and accessed the dataset they are going to reuse. In addition, it assumes that the repository of interest provides unique identifiers, which, based on the information we retrieved from re3data.org, is not the case for all the databases assessed in this work.
Similar to Weber et al. [18], we decided to start our computations from dataset retrieval. We explored how researchers looked for the datasets of interest and which keywords they used, e.g., as part of the eNanoMapper requirements analysis [30]. Then, we computationally reproduced their manual search by programmatically retrieving data and metadata using the same keywords. We recognize that this approach limits the generalization of the FAIRness calculation; the definition of FAIR is, in fact, different from one use case to another. While creating a use case for every dataset is extremely demanding, the same dataset could be used to answer different research questions.
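As an illustration of this step, the following is a minimal sketch of reproducing a manual keyword search programmatically; the endpoint, parameter name, and keywords are hypothetical placeholders, since each repository exposes its own search API:

```python
# Minimal sketch: reproduce a researcher's manual keyword search via a
# repository API. The endpoint and parameter name are illustrative only;
# every repository defines its own interface.
import requests

def search_datasets(search_url: str, keywords: list) -> dict:
    """Query a repository search endpoint with the keywords a researcher used manually."""
    response = requests.get(search_url, params={"keywords": " ".join(keywords)}, timeout=30)
    response.raise_for_status()  # fail loudly if the API is unreachable
    return response.json()       # metadata records of the matching datasets

# Hypothetical usage for one of the nanotoxicology use cases:
# records = search_datasets("https://example-repo.org/api/search",
#                           ["TiO2", "nanoparticle", "toxicity"])
```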
To assess data FAIRness, we implemented criteria that follow the principles and guidelines recommended by the MIAG [28], reused concepts from similar studies in the literature [17], and added new considerations (see Table 2).
Findable: The criteria to assess principles F1 (unique identifier), F3 (metadata includes identifier), and F4 ((meta)data are indexed) are similar across all previous studies. In our case, to assess F1 we investigated whether a repository provides a DOI in the registry re3data.org. We chose this registry because it is one of the largest registries of scientific repositories and it provides an open API. Of course, different communities use different approaches, and FAIRsharing is an important complementary service [12]. For F3, we accepted any dataset identifier provided by the repository, as the principle does not explicitly mention restrictions on the characteristics of the identifier. Finally, for F4 we looked for dataset titles in Google Dataset Search. We chose this searchable resource because it could become one of the main search engines specific to data in the future, similar to Google Scholar for publications. However, for Google Dataset Search or the newer DataCite Commons (https://commons.datacite.org/) to recognize datasets, the datasets also need semantic annotation, with, for example, schema.org. This is not tested in the current notebook. Another limitation is that, in contrast to the previous maturity indicators, the implementation of F2 (data are described with rich metadata) varies widely across literature publications. The MIAG recommends evaluating whether metadata contain “structured” elements, Dunning et al. looked for attributes that favor findability, whereas Weber et al. used metrics of time and place of image acquisition. We followed the criteria suggested by Dunning et al. and looked for the keywords that researchers had used in their manual search to find the datasets.
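The F1 check can be sketched as follows; the re3data API endpoint path and the element name scanned for are assumptions based on the re3data schema and should be verified against the live service:

```python
# Minimal sketch of the F1 check: does the re3data.org record of a repository
# list DOI as its persistent identifier system? The endpoint path and the
# 'pidSystem' element name are assumptions to be verified against the
# current re3data API documentation.
import requests
import xml.etree.ElementTree as ET

def f1_doi_in_re3data(re3data_id: str) -> bool:
    url = f"https://www.re3data.org/api/v1/repository/{re3data_id}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    root = ET.fromstring(response.content)
    # True if any PID-system element mentions DOI (tags may carry a namespace).
    return any(el.text and "doi" in el.text.lower()
               for el in root.iter() if el.tag.endswith("pidSystem"))
```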
Accessible: Similar to the other published approaches, we retrieved our data using the HTTP protocol, which is free, open, and allows for authentication, and thus satisfies all the requirements of the A1 group. Additionally, there is concordance among the approaches for principle A2, which requires that a repository explicitly provide a policy for data availability. In our implementation, we looked for the policy in re3data.org. However, for integration into research workflows, the mere use of HTTP is a very narrow definition, and choices of protocols on top of HTTP may be needed, e.g., for authentication.
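A minimal sketch of the A1 check, assuming that verifying an HTTP(S) response for the metadata URL is sufficient:

```python
# Minimal sketch of the A1 check: (meta)data are retrievable over a free and
# open protocol. Here we only verify that the metadata URL answers over HTTP(S).
import requests

def a1_http_retrievable(metadata_url: str) -> bool:
    try:
        response = requests.head(metadata_url, timeout=30, allow_redirects=True)
        return response.status_code == 200
    except requests.RequestException:
        return False  # network error or unsupported protocol
```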
Interoperable: Similar to the MIAG, we assigned a positive score to metadata in a structured file format, such as XML (I1). In contrast, Dunning et al. and Weber et al. suggested that metadata should follow a standardized schema, such as Dublin Core or DataCite, which would increase data interoperability and simplify retrieval. None of the studies assessed I2 (vocabularies are FAIR), because it would require a separate implementation that accounts for the recursive nature of the FAIR principles. Finally, for I3 all previous studies looked for references to other datasets in the metadata. As with accessibility, these metrics are only a first step and are not enough to link the various information sources needed to apply workflows for risk governance.
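The I1 and I3 checks can be sketched as follows; the tag names scanned for cross-references in I3 are illustrative assumptions, not a fixed standard:

```python
# Minimal sketch of the I1/I3 checks: is the metadata well-formed XML (I1),
# and does it reference other datasets (I3)? The tag names used to detect
# cross-references are illustrative only.
import xml.etree.ElementTree as ET

def i1_structured(metadata_xml: str) -> bool:
    """I1: metadata uses a structured file format (here: parses as XML)."""
    try:
        ET.fromstring(metadata_xml)
        return True
    except ET.ParseError:
        return False

def i3_references_other_data(metadata_xml: str,
                             link_tags=("relation", "reference", "seeAlso")) -> bool:
    """I3: metadata contains references to other (meta)data."""
    root = ET.fromstring(metadata_xml)
    # Strip any namespace before comparing local tag names.
    return any(el.tag.split("}")[-1] in link_tags for el in root.iter())
```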
Reusable: Although the MIAG does not provide any guidelines, the various studies implemented different ways to assess R1 (plurality of relevant attributes). While Weber et al. used the same metrics as for F2, Dunning et al. focused on metadata that provide information on how to reuse a dataset. In our implementation, we assessed the presence of metadata attributes other than the search keywords. The principles R1.1 (availability of a data usage license) and R1.2 (data provenance) had a straightforward implementation in all approaches. In our approach, we looked for a data license in re3data.org and for the authors, author e-mails, and title of the corresponding publication in the metadata from the dataset repository. Note that data would ideally be shared before publication and arguably should be shared as an independent research output, in which case our implementation of R1.2 would not suffice. Finally, none of the authors evaluated whether metadata follow community standards (R1.3), as community agreements are not formally established yet. It will be clear that here, too, these minimal expectations are not sufficient to ensure that research output is practically useful for risk assessment.
We assessed FAIR maturity indicators using a mixed manual and automatic approach. In the literature, Dunning et al. used a fully manual approach to assess the maturity indicators, whereas Weber et al. used a completely automatic approach, calculating 10 of the 15 maturity indicators. Our mixed approach enabled us to assess maturity indicators automatically wherever possible and to complement them manually when we could not retrieve the information via an API. By definition, this means that none of the databases could reach a full FAIRness score, since not all information was retrieved automatically.
As repositories do not use a standardized metadata schema, our mixed implementation required prior manual investigation of the metadata attributes of each repository. For example, ArrayExpress uses the attributes “authors”, “email”, and “title”, which we could use for principle R1.2, whereas ChEMBL provides only “authors” and “title”, so that principle R1.2 scored 0.5 points for incomplete provenance details (a scoring sketch is given after this paragraph). Finally, Gene Expression Omnibus, eNanoMapper, caNanoLab, and NanoCommons have no attributes for provenance. Clearly, community standards for data and metadata must be specified, in addition to minimal reporting standards and data quality criteria. This is a significant challenge for a cross-disciplinary field such as nanosafety. A first approach to building consensus on the requirements for a nanosafety metadata schema [16] has been developed via the Nanomaterial Data Curation Initiative (NDCI), a project of the National Cancer Informatics Program Nanotechnology Working Group (NCIP NanoWG) [31]. Part of the challenge lies in the large variety of guidelines and data requirements that play a role in the risk assessment of different nanomaterial applications, e.g., in the food, pharmaceutical, and other industrial sectors, and in the very broad range of data reuse scenarios, which makes a complete metadata description an extensive task.
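The R1.2 scoring just described can be sketched as follows, mirroring the examples in the text; the exact attribute names per repository are assumptions to be checked against each API:

```python
# Minimal sketch of the R1.2 (provenance) scoring described above: a full
# score when authors, e-mail, and title are all present in the metadata,
# half a point for partial provenance, zero otherwise. Attribute names per
# repository are assumptions to be verified against each API.
PROVENANCE_FIELDS = {"authors", "email", "title"}

def r1_2_score(metadata: dict) -> float:
    present = PROVENANCE_FIELDS & set(metadata)
    if present == PROVENANCE_FIELDS:
        return 1.0                    # e.g., ArrayExpress: all three attributes
    return 0.5 if present else 0.0    # e.g., ChEMBL: authors and title only

# r1_2_score({"authors": "...", "title": "..."})  ->  0.5
```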
Established minimal reporting standards and data quality measures for nanosafety data, as developed by the European and US nanoinformatics communities, are, on the other hand, currently not defined as FAIR metrics [32], nor are experiment-specific standards such as MIAME for microarray experiments [33], CONSORT-AI for clinical trials [34], and MINBE for nanomaterial biocorona experiments [35]. It should also be noted that data quality and data completeness, unlike reporting standards, depend on the intended use of the dataset [36]. For instance, two studies derived expectations related to data completeness as part of their assessment of nanosafety data quality [37]. Both include material properties, but Comandella and co-workers specifically included the completeness of the metadata for the different methods applied to measure the different nanomaterial physico-chemical properties (chemical composition, size, surface charge, etc.), because this is important for read-across applications [37]. The study by Fernandez-Cruz et al. limited its assessment to whether these physico-chemical properties were reported, focusing on the data rather than the metadata as relevant for risk assessment [38]. These studies also show that it should be possible to derive metrics for data quality related to different study goals.
It is important to note, however, that compliance with these established standards and quality measures, in addition to compliance with the FAIR principles, still does not ensure that data can actually be (re)used in risk assessment, for example by the forthcoming Risk Governance Council for nanomaterials [39]. It should not be forgotten that regulatory agencies also apply criteria for assessing the reliability, relevance, and consistency of data to be used in risk assessment [40]. These types of guidelines should all be considered relevant when establishing community standards for assessing data quality and when extending the current set of FAIRness metrics, as part of the domain-relevant community standards defined in R1.3.
Before the automated analysis can be updated with these FAIRness scores, the relevant quality metrics must first be selected and formally defined, as discussed in the previous paragraphs. We note here that the selection of these metrics is likely to differ from one application to another; for example, nanoQSAR approaches may have different requirements than a read-across application. Note that even when data are of exceptional quality, this does not warrant their direct application in risk assessment if they are not transformed into the parameters required by risk assessment models and tools, such as the benchmark dose, the EC50, or the half-life in the environment, although mechanistic data can be used as part of a weight-of-evidence approach in this case.
Despite the limitations of the current FAIRness metrics noted above, they serve well as a screening tool to evaluate the level of FAIRness of existing life science and nanosafety databases. Only with such a screening in hand can we start defining where to begin in making our resources more FAIR. To summarize and compare dataset FAIRness, we created a FAIR balloon plot. As the MIAG guidelines recommend, we did not compute a single final score, to avoid concerns for data and resource providers [14]. In our visualization, a dataset that reached full FAIRness (at the level measured by the metrics used) would have all maturity indicators depicted as circles (or diamonds, in the case of manually determined metrics) of maximum size, indicating a full score and automatic retrieval. In addition, by vertically stacking the representations for different datasets, we can visually compare FAIRness levels for each maturity indicator. In the literature, another example of visualization is the insignia created for the FAIRshake platform [15]. It consists of multiple squares colored from blue (satisfactory) to red (unsatisfactory) for different levels of FAIRness, and the squares can dynamically expand to visualize multiple scores calculated using different rubrics (i.e., criteria). Although this representation embeds the possibility of using different criteria, it does not allow a direct comparison across datasets. Finally, we applied our FAIR balloon plot to the results collected by Dunning et al. to demonstrate that this kind of visualization can be reused for FAIR assessment with other criteria (Figure 2).
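As an illustration, a minimal sketch of such a balloon plot with purely illustrative data, using circles for automatically retrieved scores and diamonds for manually determined ones:

```python
# Minimal sketch of a FAIR balloon plot: one row per repository, one column
# per maturity indicator; the marker area encodes the score, circles mark
# automatic checks and diamonds manual ones. All data here are illustrative.
import matplotlib.pyplot as plt

indicators = ["F1", "F2", "F3", "F4", "A1", "A2", "I1", "I3", "R1", "R1.1", "R1.2"]
repos = ["Repository A", "Repository B"]
scores = [
    [1.0, 0.5, 1.0, 0.0, 1.0, 1.0, 1.0, 0.5, 1.0, 1.0, 1.0],
    [0.5, 0.5, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.5, 1.0, 0.5],
]
manual = {"A2", "R1.1"}  # indicators assessed manually in this toy example

fig, ax = plt.subplots(figsize=(8, 2.5))
for y, repo in enumerate(repos):
    for x, indicator in enumerate(indicators):
        ax.scatter(x, y, s=1 + 600 * scores[y][x],
                   marker="D" if indicator in manual else "o", color="tab:blue")
ax.set_xticks(range(len(indicators)), indicators)
ax.set_yticks(range(len(repos)), repos)
ax.set_ylim(-0.5, len(repos) - 0.5)
plt.tight_layout()
plt.show()
```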
Furthermore, to make our analysis open and reproducible, we implemented our approach in a Jupyter Notebook. This exposes the exact details of how FAIRness is assessed, which we anticipate could help the developers of the databases to improve their FAIRness. However, changes to APIs or metadata attributes could affect the reproducibility of the results; the possibility of querying a specific version of a repository would be one solution. In addition, we implemented our approach in Python, a language increasingly used in various scientific communities, which can favor the extension and reuse of our work. For new datasets, the FAIR maturity indicators could be evaluated by changing the search procedure and the manually assigned values. However, our observation is that the diversity in choices of protocols, standards, and other approaches to FAIR makes the possibility of a unifying approach remote. Another limitation of this approach is that it tests each database in a single way, whereas databases can have multiple, complementary access routes, each with its own use case and its own level of FAIRness.
The six analyzed datasets met the majority of the criteria used to assess FAIRness, with ChEMBL being relatively the most FAIR dataset and GEO the least FAIR. Higher FAIRness compliance could be reached by using a metadata standard (e.g., Dublin Core, DataCite, or schema.org), which could include all attributes required by the FAIR principles, and by providing explicit information about data policies, licenses, etc., to registries of repositories.