Entry

Responsible Research Assessment and Research Information Management Systems

by Joachim Schöpfel 1,* and Otmane Azeroual 2
1 GERiiCO Laboratory, University of Lille, 59653 Villeneuve-d’Ascq, France
2 German Centre for Higher Education Research and Science Studies (DZHW), 10117 Berlin, Germany
* Author to whom correspondence should be addressed.
Encyclopedia 2024, 4(2), 915-922; https://doi.org/10.3390/encyclopedia4020059
Submission received: 25 April 2024 / Revised: 22 May 2024 / Accepted: 25 May 2024 / Published: 30 May 2024
(This article belongs to the Section Social Sciences)

Definition:
In the context of open science, universities, research-performing and funding organizations, and authorities worldwide are moving towards more responsible research assessment (RRA). In 2022, the European Coalition for Advancing Research Assessment (CoARA) published an agreement with ten commitments, including the recognition of the “diversity of contributions to, and careers in, research”, the “focus on qualitative evaluation for which peer review is central, supported by responsible use of quantitative indicators”, and the “abandon (of) inappropriate uses in research assessment of journal- and publication-based metrics”. Research assessment (RA) is essential for research of the highest quality. The transformation of assessment indicators and procedures directly affects the underlying research information management infrastructures (also called current research information systems), which collect and store metadata on research activities and outputs. This entry investigates the impact of RRA on these systems: on their development and implementation, their data model, and their governance, including digital ethics.

1. What Is Research Assessment?

Research assessment (RA) is a multifaceted process aimed at evaluating the quality, impact, and effectiveness of research endeavors. As highlighted by Robert K. Merton [1], “The activities of scientists are subject to rigorous policing”, indicating the inherent scrutiny applied to scientific endeavors. This scrutiny is manifest in the monitoring and evaluation of research performance, which permeates the individual, institutional, and governmental levels of the scientific community. RA is conducted by a variety of entities, each with its own motivations and objectives, such as academic institutions (promotion and tenure committees, departments), funding agencies (grant review panels), research councils (advisory committees, program managers), and government bodies (national assessment programs). Their main motivations are resource allocation, accountability, quality assurance, strategic planning at the institutional level, and the recognition and reward of individual researchers.
At its core, RA serves to establish and uphold standards of research quality. It plays a pivotal role in shaping the allocation of resources by contributing to the efficient and accountable funding of projects, programs, and research teams. It affects the scholarly careers of researchers in various ways, insofar as it forms the basis for decisions related to promotion and tenure in academic institutions and influences the likelihood of securing grants and funding (often based on the number of publications, citation counts, the h-index, and journal impact factors). A strong record in RA can enhance a researcher’s reputation in their field and lead to invitations to speak at conferences, collaborate on projects, and join editorial boards, further improving a researcher’s career prospects and competitiveness for academic positions. Moreover, RA functions as a governance tool, fostering improvements in the quality of scholarship by promoting rigorous inquiry and adherence to established methodologies [2,3].
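As an illustration of the quantitative indicators mentioned above, the h-index is straightforward to compute from a researcher’s citation counts: it is the largest number h such that h publications have at least h citations each. A minimal Python sketch, with invented sample data:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # this publication still supports a larger h
        else:
            break      # counts are sorted, so no later one can
    return h

# Six publications with these (invented) citation counts yield an h-index of 3:
# three papers have at least 3 citations each, but not four with at least 4.
print(h_index([10, 8, 5, 3, 1, 0]))  # -> 3
```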
The landscape of RA is dynamic, continually evolving in response to the shifting demands of the scientific enterprise [4]. Emerging technologies, such as artificial intelligence (AI) and machine learning, offer opportunities to enhance assessment methodologies by facilitating the analysis of vast datasets with greater precision and efficiency [5]. Additionally, there is a continual refinement of metrics and indicators to encompass the diverse dimensions of research output and impact. Multidimensional assessment approaches have gained traction, recognizing the multifaceted nature of research performance and accounting for various aspects beyond traditional bibliometric measures [6].
Collaboration among stakeholders, including researchers, institutions, and funding agencies, is instrumental in advancing RA practices. By fostering synergies and sharing best practices, collaboration promotes methodological advancements and ensures that assessment frameworks align with the evolving needs and aspirations of the scientific community [7].
Recent discourse in the field underscores the importance of responsible research assessment (RRA), which advocates for transparency, diversity, quality, and open metrics [8]. This paradigm shift acknowledges the diverse contributions to research and underscores the imperative of qualitative evaluation while advocating for the judicious use of quantitative indicators. By embracing RRA principles, the scientific community can mitigate the unintended consequences of metric-driven assessment practices and foster a culture of responsible and equitable evaluation [9].

2. What Are Research Information Management Systems?

To address the challenge of RA, universities, research-performing organizations (RPOs) and research funding organizations (RFOs) make use of a large diversity of research information management (RIM) infrastructures [10]. Such systems collect and store research information, i.e., “information (…) relating to the conduct and communication of research”, including bibliographic metadata (titles, abstracts, references, author data, affiliation data, and data on publication venues), metadata on research software, research data, samples, and instruments, information on funding and grants, and information on organizations and research contributors [11].
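To make the kinds of metadata listed above concrete, the following sketch outlines a deliberately simplified record for a single research output. The field names are hypothetical; production data models such as CERIF (see [37]) are far richer and more normalized:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchOutput:
    """Simplified, hypothetical record of one research output in a RIM system."""
    title: str
    abstract: str
    authors: list[str]                  # names or persistent identifiers (e.g., ORCID iDs)
    affiliations: list[str]             # contributing organizations
    venue: str                          # journal, conference, or repository
    output_type: str = "publication"    # also: dataset, software, sample, instrument
    doi: str | None = None
    references: list[str] = field(default_factory=list)  # cited works
    grants: list[str] = field(default_factory=list)      # funding information

entry = ResearchOutput(
    title="Responsible Research Assessment and Research Information Management Systems",
    abstract="...",
    authors=["Joachim Schöpfel", "Otmane Azeroual"],
    affiliations=["University of Lille", "DZHW"],
    venue="Encyclopedia",
    doi="10.3390/encyclopedia4020059",
)
```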
While the necessity for RIM systems is widespread across the academic landscape, the specific solutions implemented vary considerably [12]. These solutions encompass a broad spectrum of tools and technologies, including but not limited to current research information systems (CRIS), contract management software, bespoke in-house software solutions, expert finder tools, and other specialized systems. Each of these solutions exhibits distinct characteristics in terms of format, data model, and terminology, reflecting the diverse needs and contexts within which they are deployed [13].
RIM systems play a crucial role not only in documenting and tracking research activities but also in enabling comprehensive and systematic evaluation processes. By providing a centralized repository for research-related metadata, these systems support decision-making processes related to resource allocation, performance assessment, and strategic planning within academic institutions and research organizations [14]. Furthermore, they contribute to enhancing transparency, collaboration, and efficiency across the research ecosystem.
The continuous evolution and refinement of RIM systems reflect ongoing efforts to adapt to the evolving needs and challenges of RA in the context of open science and responsible research evaluation practices [15]. As such, further research and development efforts are warranted to optimize the functionality, interoperability, and usability of these systems, ensuring that they remain robust and responsive tools for supporting the advancement of scholarly inquiry and knowledge dissemination.

3. What Are the New Challenges?

Traditional RA methods, primarily reliant on bibliometrics and peer reviews, face increasing scrutiny for their narrow focus and potential to distort research priorities. In response, there is a growing movement towards RRA, emphasizing inclusivity, broader evaluation criteria, and adherence to open science practices.
For decades, research performance has been assessed mainly through bibliometrics (e.g., numbers of publications and citations) and peer review. Yet, RA has come under increasing scrutiny and criticism, focusing, among other things, on the choice of data sources, the methods of calculating indicators, the assessment procedures, and the underlying infrastructures [16]. For instance, RA systems have been criticized as “too narrow in what they measure (…) Existing approaches favour individuals or teams that secure large grants, publish in journals with high impact factors—such as Nature—or register patents, at the expense of high-quality research that does not meet these criteria (…)” [17]. Even if the “practice of university appointment procedures shows that easily measurable quantitative indicators continue to be prioritized (…) the strong weighting of indicators of pure quantity of research output is problematic in the assessment and prediction of excellent scientific performance due to their questionable validity” [18]. Compared to quantitative indicators (such as the number of publications, the journal impact factor, or the h-index), the assessment of the quality, rigor, reliability, robustness, transparency, and innovativeness of scientific work appears to be a more valid predictor of excellence and good research practice.
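For reference, the journal impact factor criticized here is itself a simple ratio: a journal’s impact factor for year Y is the number of citations received in Y to items the journal published in Y-1 and Y-2, divided by the number of citable items it published in those two years. A minimal sketch, with invented numbers:

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Journal impact factor: citations in year Y to items from Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_this_year / citable_items_prev_two_years

# Invented example: 1200 citations in 2023 to 400 items published in 2021-2022.
print(impact_factor(1200, 400))  # -> 3.0
```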

4. What Is Responsible Research Assessment?

Within the framework of open science, the San Francisco Declaration on Research Assessment [19], UNESCO [20], the European Commission [21], and, more recently, the Paris Call on Research Assessment [22], have put RA reform at the top of the agenda. The International Network of Research Management Societies developed a framework to ensure that research evaluation is meaningful, responsible, and effective [23], and authorities have begun to bring their RA systems into line with these emerging principles and goals [24].
In France, for instance, the Academy of Sciences requests RRA based on clear, objective, transparent, and predetermined criteria [25], and the national Plan for Open Science calls for the promotion of open science and the diversity of scientific productions in RA, in order to foster the quality, effectiveness, and societal impact of research [26]. Consequently, universities, RPOs, and RFOs are reevaluating their RA strategies to align with evolving priorities and societal demands. This entails a shift towards open-access publishing, transparency, fairness, inclusivity, and ethical conduct in research as integral components of their open science initiatives. Other countries, such as the UK, the Netherlands, Finland, Spain, Italy, Germany, and Austria, are moving in the same direction.
Along with criticism of traditional metrics such as journal impact factors and citation counts, other factors contribute to this shift: the emergence of alternative metrics; the drive for fairness and inclusivity; the emphasis on dedicating more time to research rather than evaluation; the importance of sustaining long-term research endeavors; and the demand for broader evaluation criteria, such as societal impact, public engagement, reproducibility, adherence to open science practices, and contributions to policy or industry. In Europe, the Coalition for Advancing Research Assessment (CoARA), supported by the European University Association, the European Science Foundation, and Science Europe, has started efforts to shape and implement this transition towards RRA [8]. As of 18 April 2024, 723 organizations had signed the CoARA agreement, mainly universities and research centers, representing nearly 50 countries. The European Commission, the most important RFO in Europe, signed the CoARA Agreement on Reforming Research Assessment in 2022 and endorsed the San Francisco Declaration on Research Assessment [19].
Yet, RRA is not a clearly defined concept. It refers to the ethical and equitable evaluation of research outputs, such as publications, datasets, and other scholarly contributions. It involves assessing the quality, impact, and significance of research in a fair and transparent manner, while also considering broader societal implications and ethical considerations. An editorial in the journal Nature characterizes RRA as more inclusive, less focused on journal- and publication-based metrics, avoiding rankings of universities and research organizations, and rewarding more qualitative factors and open science, including data sharing and collaboration [17].
Among the key principles of RRA are fairness and equity (ensuring that all researchers, regardless of background or affiliation, have equal opportunities for evaluation and recognition), transparency (making the assessment process clear and understandable to all stakeholders, including researchers, funding agencies, and the public), diversity and inclusion (recognizing and valuing contributions from researchers representing diverse perspectives, disciplines, and methodologies), and openness (supporting open access to research outputs and data, facilitating reproducibility, and promoting knowledge sharing). Other, more generic principles are quality (evaluating research based on its scientific merit, methodological soundness, and adherence to ethical standards), and accountability (holding individuals and institutions accountable for their assessment practices and ensuring that they align with ethical standards and best practices).

5. What Does This Mean for Systems?

The move towards RRA presents both challenges and opportunities for RIM infrastructures, necessitating their adaptation to meet the evolving needs and expectations of the research community and to ensure that assessment systems minimize harm. Advances in technology have facilitated the creation of novel tools and methodologies for research assessment, such as machine learning algorithms for text and data mining, semantic analysis, and network analysis. Such infrastructures enable more advanced and automated approaches to assessing research outputs and their impact, as demonstrated by the Global Research Assessment Platform for Open Science (GraspOS) project, whose purpose is to evaluate infrastructures used for RIM [27,28].
The influence of RRA on RIM systems extends beyond technological considerations, impacting the integration of diverse data sources, the improvement of (meta)data standards (including the establishment of reliable and sustainable persistent identifiers), interoperability, data exchange with other institutional and external systems, and the promotion of open science practices.
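Persistent identifiers are one of the more tractable pieces of this agenda, because their formats are well specified. For example, the final character of an ORCID iD is a check digit computed with the ISO 7064 MOD 11-2 algorithm, so a RIM system can reject mistyped identifiers at ingest time. A minimal Python sketch:

```python
def orcid_checksum_ok(orcid: str) -> bool:
    """Verify the ISO 7064 MOD 11-2 check digit of an ORCID iD,
    e.g. '0000-0002-1825-0097'."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    check = "X" if result == 10 else str(result)
    return digits[-1].upper() == check

print(orcid_checksum_ok("0000-0002-1825-0097"))  # True (ORCID's documented sample iD)
print(orcid_checksum_ok("0000-0002-1825-0098"))  # False (one digit changed)
```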
As mentioned above, RRA initiatives tend to reject journal-based metrics, especially those produced by commercial providers, preferring altmetrics based on social media, novel metrics based on nonconventional research outputs (e.g., grey literature, software, or research data), and qualitative procedures (reviews, storytelling…). Moreover, such initiatives generally insist on community control over RA, no vendor lock-in, open infrastructures as common goods, open-source systems, and open data.
For RA systems and infrastructures, this means more diverse data, different processing, and a different kind of governance. Based on the ongoing work of CoARA, which launched working groups on responsible metrics and indicators and on open infrastructures [8], and of the GraspOS project, which published a landscape study on existing tools and services that can facilitate the implementation of open-science-aware RRA practices [29], we can add some more details.
More diversity of data: Compliance with RRA requires systems that can ingest and handle more, and more diverse, kinds of data on research, such as research data, software, working papers, public and social media, and audiovisual material, but also narratives, data related to ethical performance, and so on. This also means that systems must be able to deal with unusual data sources, a larger variety of metadata, and other identifiers.
More diversity of metrics: Compliance with RRA requires systems that can produce more and different kinds of metrics, including, above all, qualitative indicators of research impact (peer review), a responsible use of quantitative indicators, and community-led curation and annotation. Moreover, metrics should reflect not only research output but also research practice, such as professional conduct and research integrity, openness, the reproducibility of results, and the originality of ideas; research should be assessed on its own merits, especially results beyond the state of the art.
Enhanced processing: Ensuring compliance with RRA mandates systems that facilitate transparent processing of research data, particularly in terms of data collection and analysis. This transparency is essential to enable scholars, researchers, and other scientific personnel to scrutinize and validate the outcomes. It entails fostering open and explicit communication regarding assessment criteria, implementing controls and restrictions on system usage and metrics, promoting interactivity, and advocating for open data practices.
Revised governance approach: Adherence to RRA necessitates systems characterized by increased community oversight. This entails stewardship by research communities, RPOs, and RFOs, along with the adoption of open-source systems and community-driven development methodologies. Avoiding vendor lock-in is imperative to ensure a certain level of sustainability.
In the same spirit, the Barcelona Declaration on Open Research Information [11] asserts that “openness of information about the conduct and communication of research must be the new norm” and calls for openness as the default for research information, as well as for systems, services, and infrastructures that support and enable such openness.
The Barcelona Declaration further emphasizes the importance of customization and flexibility within RIM infrastructures. As RRA practices vary significantly across nations, disciplines, institutions, and funding agencies, RIM systems must be adaptable and flexible to accommodate these differences. This adaptability entails the integration of configurable data models, reporting templates, and user interfaces. By offering customization options, RIM systems can better align with diverse assessment criteria and workflows, meeting the unique needs and preferences of users.
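What such configurability might look like in practice is sketched below: a hypothetical, institution-level assessment profile expressed as plain data, so that criteria and report templates can be changed without changing the system’s code. All names are invented for illustration and do not come from any real product:

```python
# Hypothetical configuration for a RIM system; the keys only illustrate
# the idea of assessment criteria and templates held as data, not code.
ASSESSMENT_PROFILE = {
    "institution": "example-university",
    "recognized_outputs": [
        "publication", "dataset", "software", "public_engagement",
    ],
    "indicators": {
        # quantitative indicators used responsibly, never in isolation
        "quantitative": ["citation_count", "data_reuse_count"],
        # qualitative evaluation with peer review at the center
        "qualitative": ["peer_review", "narrative_cv"],
    },
    "report_template": "rra-annual-review",  # selects report layout and wording
}

def recognized(output_type: str, profile: dict = ASSESSMENT_PROFILE) -> bool:
    """True if this output type counts under the institution's profile."""
    return output_type in profile["recognized_outputs"]

print(recognized("software"))  # -> True under this profile
```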
Governments, universities, RPOs, and RFOs have implemented a wide array of systems and infrastructures, each with distinct architectures, standards, data models, business models, and governance structures. Some infrastructures are cloud-based; some are commercial, proprietary software, while others are open source. Consequently, their transition towards RRA compliance will be non-uniform, as there is no one-size-fits-all solution.
Another challenge is user engagement and training: As RRA practices evolve, RIM systems must actively engage users, i.e., researchers, administrators, funders, and policymakers, to ensure that their requirements and expectations are addressed. This entails providing training and support resources, gathering feedback through user surveys and focus groups, and fostering a community of practice centered on RA and information management.

6. Ethical Issues

In practice, the human element, particularly the ethical dimension, is often overlooked in RIM projects. Yet the digitization of society prompts us to contemplate both the individual and societal implications within a digital context, while also recognizing the unique considerations of digital ethics [30]. This entails two approaches: firstly, applying general ethical principles to establish behavioral standards in the digital realm; and secondly, examining the digital domain itself and deriving moral insights from its distinct characteristics. In doing so, digital ethics acknowledges the evolving cultural and value dynamics shaped by emerging technologies [31].
Digital ethics is the heir to computer ethics and merges with information ethics [32]. While this field of research encompasses a broad spectrum of research areas (such as medical ethics, journalistic ethics, AI ethics, data ethics, etc.), it can be seen as reflecting on a new ethics, proposing a macro-ethics [33], taking an interest in digital infrastructures, and questioning online conflicts. Digital ethics is concerned not only with the design of systems (ethics by design) but also with the appropriation of these infrastructures, and it raises the question of the distributed moral responsibility of ethical infrastructures (infra-ethics) [34].
Ethics challenges RIM systems in two ways [35]. On the one hand, they should be able to represent research ethics as part of scientific practices and outcomes, on the individual as well as on the institutional level, and their data model should avoid biased terminology. On the other hand, they should be compliant with the principles and values of good scientific practice and with legal frameworks, especially regarding the accessibility and usage of sensitive data. As mentioned above, the guiding principles of open science add other ethical imperatives, such as transparency and openness.

7. Practical Issues

Despite an expanding body of literature on digital ethics, few studies have examined the ethical aspects of RIM systems. For this reason, RIM operators should collaborate with ethics committees or research integrity officers [36].
Implementing RIM systems involves complex technical requirements like data integration, interoperability, and scalability. Addressing these challenges requires robust technical solutions and expertise in system architecture and integration [37].
Managing research data within RIM systems requires effective data management frameworks to ensure data quality, security, and compliance with regulations such as GDPR [38]. Establishing clear data ownership rights, strict access controls, and processes for managing the data lifecycle are crucial to minimize risks and ensure data integrity.
Ensuring high data quality is essential for the effectiveness and reliability of RIM systems [39]. This includes a careful choice of indicators, with special attention to labeling, categories, and granularity, and the selection of reliable sources of information. Moreover, this includes measures for validating, cleansing, and updating research data to minimize errors and inconsistencies [40]. Additionally, continuous monitoring and improvement of data quality are necessary to ensure that the provided information is trustworthy and meaningful [41].
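A minimal sketch of what such validation and cleansing steps can look like, assuming hypothetical field names and a deliberately simplified DOI pattern (real-world checks are considerably more involved):

```python
import re

# Simplified pattern; real DOI syntax is looser than this.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def validate_record(record: dict) -> list[str]:
    """Return data-quality issues found in one research-output record."""
    issues = []
    for required in ("title", "authors", "venue"):
        if not record.get(required):
            issues.append(f"missing required field: {required}")
    doi = record.get("doi")
    if doi and not DOI_PATTERN.match(doi):
        issues.append(f"malformed DOI: {doi}")
    return issues

def cleanse(record: dict) -> dict:
    """Normalize trivial inconsistencies (whitespace, DOI casing)."""
    cleaned = dict(record)
    if isinstance(cleaned.get("title"), str):
        cleaned["title"] = " ".join(cleaned["title"].split())
    if isinstance(cleaned.get("doi"), str):
        cleaned["doi"] = cleaned["doi"].strip().lower()
    return cleaned

record = {"title": "  Responsible  Research Assessment ",
          "authors": [],
          "doi": "10.3390/ENCYCLOPEDIA4020059"}
print(validate_record(cleanse(record)))
# -> ['missing required field: authors', 'missing required field: venue']
```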
User acceptance is a key factor in the success of RIM systems. By involving users in the development process, considering their feedback, and continuously adapting to their needs and requirements, user acceptance and usability can be enhanced [42]. This includes, for instance, the design of user-friendly interfaces and workflows and the provision of comprehensive user training and support.

8. Future Directions

As part of their open science policies, RPOs and RFOs highlight the importance of openness, transparency, and integrity of research. In this environment and as part of science studies, future research on RA should focus on at least three issues:
  • The awareness of these organizations regarding RRA and ethics, as well as their readiness for potential changes, especially on the level of related infrastructures, procedures, and tools.
  • The assessment of responsible RIM, i.e., the analysis of the development, implementation, governance, and usage of RIM systems from an ethical point of view. This ethical investigation should include the stakeholders’ “distributed responsibility” and the security of RIM data, in the context of the vertical integration of research infrastructures [43].
  • The convergence of RIM systems and institutional repositories, with new challenges regarding the system governance and openness and the quality and usage of research information and data [44].
Additionally, future studies could explore the implications of open science on interdisciplinary research collaboration, knowledge dissemination, and academic career advancement. Understanding how open science practices influence these aspects can provide valuable insights into shaping policies and practices that promote collaboration, transparency, and equity in the research ecosystem.
In the context of big data and AI, information systems should be beneficial, respectful of people and the environment, and robust and secure; they should value human autonomy, promote fairness, and be explainable, accountable, and understandable [45]. AI is already starting to affect RA procedures and tools. Future research is needed to assess the impact of AI on RRA, especially regarding the sources and quality of research information and the transparency of algorithms, but also the governance, accountability, and openness of the systems.

Author Contributions

Writing—original draft preparation, J.S. and O.A.; writing—review and editing, J.S. and O.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The paper is based on research conducted since 2019 by the University of Lille (GERiiCO laboratory) and the German Centre for Higher Education Research and Science Studies (DZHW), together with euroCRIS and other colleagues. We would like to express our gratitude to all of them for their support and inspiration throughout this research endeavor.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Merton, R.K. The Sociology of Science: Theoretical and Empirical Investigations; Chicago University Press: Chicago, IL, USA, 1973. [Google Scholar]
  2. Gläser, J.; Laudel, G. Evaluation Without Evaluators. In The Changing Governance of the Sciences; Whitley, R., Gläser, J., Eds.; Springer: Dordrecht, The Netherlands, 2007; pp. 127–151. [Google Scholar]
  3. Dennett, D.C. Quining qualia. In Consciousness in Contemporary Science; Marcel, A., Bisiach, C., Eds.; Oxford University Press: Oxford, UK, 1992; pp. 42–77. [Google Scholar]
  4. Benbya, H.; Nan, N.; Tanriverdi, H.; Yoo, Y. Complexity and information systems research in the emerging digital world. MIS Q. 2020, 44, 1–17. [Google Scholar]
  5. Xu, Y.; Liu, X.; Cao, X.; Huang, C.; Liu, E.; Qian, S.; Liu, X.; Wu, Y.; Dong, F.; Qiu, C.W.; et al. Artificial intelligence: A powerful paradigm for scientific research. Innovation 2021, 2, 100179. [Google Scholar] [CrossRef] [PubMed]
  6. Handoyo, S. Mapping the landscape of internal auditing effectiveness study: A bibliometric approach. Cogent Bus. Manag. 2024, 11, 2289200. [Google Scholar] [CrossRef]
  7. Kale, S.; Hirani, S.; Vardhan, S.; Mishra, A.; Ghode, D.B.; Prasad, R.; Wanjari, M. Addressing cancer disparities through community engagement: Lessons and best practices. Cureus 2023, 15, e43445. [Google Scholar] [CrossRef] [PubMed]
  8. The Agreement on Reforming Research Assessment. Coalition for Advancing Research Assessment (CoARA). Available online: https://coara.eu/ (accessed on 25 April 2024).
  9. Sanders, J.; Moore, J.; Mountford-Zimdars, A. Operationalising Teaching Excellence in Higher Education: From ‘Sheep-dipping’ to ‘Virtuous Practice’. In Challenging the Teaching Excellence Framework; French, A., Thomas, K.C., Eds.; Emerald: Bingley, UK, 2020; pp. 47–94. [Google Scholar]
  10. Bryant, R.; Clements, A.; Feltes, C.; Groenewegen, D.; Hoggard, S.; Mercer, H.; Missingham, R.; Oxnam, M.; Rauh, A.; Wright, J. Research Information Management: Defining RIM and the Library’s Role; OCLC: Dublin, OH, USA, 2017. [Google Scholar]
  11. Barcelona Declaration on Open Research Information. Available online: https://barcelona-declaration.org/ (accessed on 25 April 2024).
  12. Bryant, R.; Clements, A.; De Castro, P.; Cantrell, J.; Dortmund, A.; Fransen, J.; Gallagher, P.; Mennielli, M. Practices and Patterns in Research Information Management: Findings from a Global Survey; OCLC: Dublin, OH, USA, 2018. [Google Scholar]
  13. Rademacher, F.; Sachweh, S.; Zündorf, A. Aspect-oriented modeling of technology heterogeneity in microservice architecture. In Proceedings of the 2019 IEEE International Conference on Software Architecture (ICSA 2019), Hamburg, Germany, 25–29 March 2019. [Google Scholar]
  14. Azeroual, O.; Schöpfel, J. Supporting research information management: Overcoming the inherent culture gap between traditional library ethics and the management of CRIS systems. In Benchmarking Library, Information and Education Services; Baker, D., Ellis, L., Eds.; Chandos: Oxford, UK, 2023; pp. 281–294. [Google Scholar]
  15. Subaveerapandiyan, A. Research Data Management Practices and Challenges in Academic Libraries: A Comprehensive Review. Libr. Philos. Pract. (E-J.) 2023, 1–107. [Google Scholar]
  16. Hicks, D.; Wouters, P.; Waltman, L.; de Rijcke, S.; Rafols, I. Bibliometrics: The Leiden Manifesto for research metrics. Nature 2015, 520, 429–431. [Google Scholar] [CrossRef] [PubMed]
  17. Nature Editorials. Support Europe’s bold vision for responsible research assessment. Nature 2022, 607, 636. [Google Scholar] [CrossRef] [PubMed]
  18. Gärtner, A.; Leising, D.; Schönbrodt, F.D. Towards responsible research assessment: How to reward research quality. PLOS Biol. 2024, 22, e3002553. [Google Scholar] [CrossRef] [PubMed]
  19. San Francisco Declaration on Research Assessment (DORA). Available online: https://sfdora.org/ (accessed on 25 April 2024).
  20. UNESCO Recommendation on Open Science. Available online: https://www.unesco.org/en/open-science/about (accessed on 25 April 2024).
  21. European Commission. Towards a Reform of the Research Assessment System; European Commission: Brussels, Belgium, 2021. [Google Scholar]
  22. Paris Call on Research Assessment. Open Science European Conference (OSEC). Available online: https://osec2022.eu/paris-call/ (accessed on 25 April 2024).
  23. Himanen, L.; Conte, E.; Gauffriau, M.; Strøm, T.; Wolf, B.; Gadd, E. The SCOPE framework—Implementing the ideals of responsible research assessment. F1000Research 2023, 12, 1241. [Google Scholar] [CrossRef]
  24. Himanen, L.; Nykyri, S. Towards a sustainable and responsible model for monitoring open science and research—Analysis of the Finnish model for monitoring open science and research. Res. Eval. 2024, rvae008, unpublished. [Google Scholar] [CrossRef]
  25. Académie des Sciences. Critères pour une Evaluation Transparente et Rigoureuse des Chercheurs et de Leurs Equipes; Academie des Sciences, Institut des Sciences: Paris, France, 2021. [Google Scholar]
  26. MESR. Deuxième Plan National pour la Science Ouverte; Ministère de l’Enseignement Supérieur et de la Recherche: Paris, France, 2021. [Google Scholar]
  27. Tatum, C. GraspOS: Next Generation Research Assessment to Promote Open Science. In Proceedings of the EuroCRIS Strategic Membership Meeting, Nijmegen, The Netherlands, 30 November–2 December 2022. [Google Scholar]
  28. Ivanović, D.; Pölönen, J.; Hyrkkänen, A.-K.; Kari, M. The Analysis of the Available Software Infrastructures for Supporting Research: The GraspOS project. In Proceedings of the EuroCRIS Strategic Membership Meeting, Pamplona, Spain, 21–23 November 2023. [Google Scholar]
  29. Vergoulis, T.; Chatzopoulos, S. GraspOS Deliverable 3.1 “Tools and Services Landscape Report”. Research Report. 2023. Available online: https://zenodo.org/records/8302170 (accessed on 29 May 2024).
  30. CNPEN Comité National Pilote d’Éthique du Numérique. Pour une Ethique du Numérique; Presses Universitaire de France: Paris, France, 2022. [Google Scholar]
  31. Vitali-Rosati, M. Une éthique appliquée? Considérations pour une éthique du numérique. Éthique Publique 2012, 14, 13–32. [Google Scholar]
  32. Müller, V.C. The History of Digital Ethics. In Oxford Handbook of Digital Ethics; Veliz, C., Ed.; Oxford University Press: Oxford, UK, 2022; pp. 3–19. [Google Scholar]
  33. Floridi, L. Distributed Morality in an Information Society. Sci. Eng. Ethics 2013, 19, 727–743. [Google Scholar] [CrossRef] [PubMed]
  34. Floridi, L. Infraethics—On the Conditions of Possibility of Morality. Philos. Technol. 2017, 30, 391–394. [Google Scholar] [CrossRef]
  35. Schöpfel, J.; Azeroual, O. Ethical Issues of the Organization and Management of Research Information. Commun. Technol. Dév. 2023, 14. [Google Scholar] [CrossRef]
  36. Davidson, J.; Molloy, L.; Jones, S.; Kejser, U. Emerging good practice in managing research data and research information within UK Universities. In Proceedings of the 12th International Conference on Current Research Information Systems (CRIS2014), Rome, Italy, 13–15 May 2014. [Google Scholar]
  37. Azeroual, O.; Herbig, N. Mapping and semantic interoperability of the German RCD data model with the Europe-wide accepted CERIF. Inf. Serv. Use 2020, 40, 87–113. [Google Scholar] [CrossRef]
  38. Schöpfel, J.; Azeroual, O. Ethical Dimensions of Research Information Management: A New Challenge for Information Professionals. In Information Services for a Sustainable Society: Current Developments in an Era of Information Disorder; Fombad, M., Chisita, C., Onyancha, O.B., Majanja, M., Eds.; De Gruyter: Berlin, Germany, 2023; pp. 150–165. [Google Scholar]
  39. Azeroual, O.; Koltay, T. Research information in the light of artificial intelligence: Quality and data ecologies. arXiv 2024, arXiv:2405.12997. [Google Scholar]
  40. Azeroual, O.; Saake, G.; Abuosba, M. Data quality measures and data cleansing for research information systems. J. Digit. Inf. Manag. 2018, 16, 12–21. [Google Scholar]
  41. Kwon, O.; Lee, N.; Shin, B. Data quality management, data usage experience and acquisition intention of big data analytics. Int. J. Inf. Manag. 2014, 34, 387–394. [Google Scholar] [CrossRef]
  42. Schöpfel, J.; Azeroual, O.; Jungbauer-Gans, M. Research Ethics, Open Science and CRIS. Publications 2020, 8, 51. [Google Scholar] [CrossRef]
  43. Chen, G.; Posada, A.; Chan, L. Vertical Integration in Academic Publishing. In Connecting the Knowledge Commons—From Projects to Sustainable Infrastructure; Chan, L., Mounier, P., Eds.; OpenEdition Press: Marseille, France, 2019; pp. 15–40. [Google Scholar]
  44. Schöpfel, J.; Azeroual, O. Current research information systems and institutional repositories: From data ingestion to convergence and merger. In Future Directions in Digital Information; Baker, D., Ellis, L., Eds.; Chandos: Oxford, UK, 2021; pp. 19–37. [Google Scholar]
  45. Morley, J.; Floridi, L.; Kinsey, L.; Elhalal, A. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Sci. Eng. Ethics 2020, 26, 2141–2168. [Google Scholar] [CrossRef] [PubMed]
