Review

Generative AI and the Information Society: Ethical Reflections from Libraries

School of Commerce, University of KwaZulu-Natal, Durban 4000, South Africa
* Author to whom correspondence should be addressed.
Information 2025, 16(9), 771; https://doi.org/10.3390/info16090771
Submission received: 15 June 2025 / Revised: 25 August 2025 / Accepted: 3 September 2025 / Published: 5 September 2025

Abstract

The integration of generative artificial intelligence (generative AI) into library systems is transforming the global information society, offering new possibilities for improving information access, management, and dissemination. However, these advancements also raise significant ethical concerns, including algorithmic bias, epistemic injustice, intellectual property conflicts, data privacy breaches, job displacement, the spread of misinformation, and increasing digital inequality. This review critically examines these challenges through the lens of the World Summit on the Information Society (WSIS) Action Line C10, which emphasizes the ethical dimensions of the information society. It argues that while such concerns are global, they are particularly acute in the Global South, where structural barriers such as skills shortages, weak policy frameworks, and limited infrastructure undermine equitable access to AI benefits. The review calls for a more inclusive, transparent, and ethically responsible approach to AI adoption in libraries. It underscores the essential role of libraries as stewards of ethical information practices and advocates for collaborative strategies to ensure that generative AI serves as a tool for empowerment, rather than a driver of deepening inequality in the information society.

1. Introduction

The dawn of the 21st century has marked a profound shift in the production, storage, and dissemination of information, culminating in the emergence of the “information society”, a sociotechnical paradigm wherein information functions as a central axis for cultural expression, economic activity, political engagement, and social development [1]. Within this evolving landscape, libraries have transitioned from traditional repositories of knowledge to dynamic agents of equitable access, academic inquiry, and lifelong learning [2]. This transformation has been accelerated by a suite of emerging technologies, including cloud computing, blockchain, machine learning (ML), the Internet of Things (IoT), augmented and virtual reality (AR/VR), and big data analytics, with artificial intelligence (AI), and particularly generative AI, occupying an increasingly influential role. Generative AI’s capacity to autonomously produce and manipulate content introduces new possibilities for enhancing library functions, while simultaneously surfacing complex ethical questions related to privacy, intellectual property, algorithmic bias, misinformation, and equity [3,4]. These tensions intersect with broader debates on digital ethics, positioning libraries as critical actors in the responsible governance of technological adoption.
Despite the operational advantages that generative AI offers, its integration into library systems presents unresolved ethical challenges. AI models trained on skewed or non-representative datasets may inadvertently reproduce biases, marginalize non-Western epistemologies, and reinforce forms of epistemic exclusion [5]. Concerns surrounding user privacy are also salient, as data-driven personalization risks compromising confidentiality, while ambiguities in intellectual property rights threaten the sustainability of open scholarship [6,7]. These challenges are especially acute in the Global South, where structural constraints, such as underdeveloped digital infrastructure, fragile regulatory environments, and limited AI literacy, compound the ethical risks of adoption [8]. Moreover, the dominance of generative AI technologies originating from the Global North introduces cultural and linguistic biases that may entrench technological dependency, undermining the ability of libraries in resource-constrained contexts to serve as inclusive, contextually grounded platforms for knowledge access [9].
Although the existing literature has begun to interrogate the technological and policy-related dimensions of AI in libraries, the ethical implications, particularly in marginalized or resource-limited environments, remain insufficiently theorized and inconsistently addressed in professional practice [10,11]. In particular, three interrelated gaps remain unresolved: first, a lack of systematic synthesis of how ethical risks such as bias, privacy violations, and intellectual property disputes are playing out in library contexts; second, an underdeveloped understanding of how these ethical concerns intersect with structural inequalities in the Global South; and third, limited application of normative frameworks such as the World Summit on the Information Society Action Line C10 (WSIS C10) to guide ethical AI adoption in libraries.
This review addresses these gaps by examining how libraries can ethically integrate generative AI amid the intersecting challenges of algorithmic bias, data privacy, intellectual property, and epistemic injustice. Anchored in the ethical imperatives articulated by WSIS C10, the paper offers a globally informed yet contextually sensitive perspective, with particular emphasis on the distinct challenges faced by libraries in marginalized contexts, especially in the Global South.
This paper is guided by the following objectives: (1) to examine the application of generative AI technologies in libraries for information processing, storage, and dissemination; (2) to critically examine the ethical dilemmas arising from the use of generative AI in libraries; (3) to investigate the role of libraries and librarians in mitigating ethical challenges associated with AI adoption; (4) to analyze case studies that provide insights into global perspectives on the adoption and implementation of AI-powered technologies in libraries; and (5) to critically explore the ethical considerations associated with AI adoption in libraries in the Global South.
The remainder of the review is structured as follows: Section 2 outlines the conceptual and theoretical framework underpinning the study; Section 3 presents the discussion, organized around the study objectives; and Section 4 concludes the review by summarizing the main findings and proposing directions for future research.

2. WSIS Action Line C10: Ethical Dimensions of the Information Society

This review article employs the World Summit on the Information Society Action Line C10 (WSIS C10) as a conceptual lens to interrogate the ethical implications of generative artificial intelligence (generative AI) in academic libraries. WSIS C10 emphasizes the ethical dimensions of the information society, affirming that the development and use of information and communication technologies (ICTs), including AI, must be rooted in respect for human rights and core values such as freedom, dignity, equality, and solidarity [12]. These principles provide a normative foundation for guiding libraries in safeguarding privacy and personal data, while promoting justice, transparency, and inclusivity within digital knowledge systems.
Originally articulated in the Geneva Plan of Action [12] and reaffirmed in the Tunis Agenda [13], WSIS C10 outlines the ethical responsibilities of multiple stakeholders, including governments, academia, civil society, and the private sector, to ensure that digital technologies serve the common good. Of particular relevance is Article 25 of the Geneva Plan, which underscores the need for technologies that protect privacy, prevent misuse, uphold intellectual freedom, and promote equitable access to information [12]. These stipulations offer broad ethical guidance for institutions navigating the risks and opportunities of emerging technologies like generative AI.
However, while WSIS C10 provides a globally endorsed framework, its applicability to AI in library contexts requires critical scrutiny. WSIS, while inclusive in rhetoric, operates largely from a top-down global governance perspective, which may overlook the situated ethical challenges faced by libraries in the Global South, where infrastructural disparities and data colonialism persist [9]. From a Habermasian perspective, this dynamic resonates with the “colonization of the lifeworld,” whereby global technological and economic systems impose standardized logics that reshape local contexts, often at the expense of cultural autonomy and community-driven values [14]. In this sense, AI adoption guided uncritically by global frameworks risks marginalizing local epistemologies and reinforcing dependency on external systems of knowledge production [15]. Diyaolu et al. [5] underscore that the inadvertent embedding of prejudice and discrimination within generative AI systems carries profound and multifaceted societal consequences, further complicating their ethical integration. In line with this, Al-kfairy [16] argues that the application of AI must be considered in relation to specific organizational and societal settings, since uniform approaches can obscure context-dependent needs and exacerbate existing inequalities. This insight reinforces the call for a more dialogical and context-sensitive application of WSIS C10 to ensure that AI integration in libraries supports, rather than undermines, the “lifeworlds” of communities in the Global South.
To make these ethical imperatives more concrete for library practice, Table 1 summarizes the key WSIS C10 principles and maps them to AI-related concerns in libraries, highlighting how each principle can inform responsible, context-sensitive AI adoption. Despite these limitations, WSIS C10 remains a valuable starting point. It enables libraries to ground their AI integration strategies within a broader vision of ethical responsibility, while simultaneously prompting critical reflection on the sufficiency and contextual adaptability of global frameworks. By situating AI-related ethical concerns in academic libraries within the WSIS ethos, this review draws attention to the tension between global ethical norms and localized practices. It argues for a dynamic interpretation of WSIS principles, one that embraces their aspirational value while recognizing the need for continuous adaptation in response to rapidly evolving AI technologies and uneven socio-technical realities.
Table 1 illustrates the alignment of AI adoption trends in libraries with the ethical principles outlined in WSIS Action Line C10. Across regions, AI implementation reflects significant disparities in technological capacity, governance, and inclusivity, with Global North institutions generally better positioned to integrate AI responsibly. The mapped principles, including respect for core values and ethical awareness, as well as protection, inclusiveness, and cultural sensitivity, highlight recurring ethical priorities that libraries must navigate. This synthesis emphasizes the need for the context-sensitive application of WSIS C10, ensuring that AI adoption not only enhances operational efficiency but also upholds human rights, equity, and local knowledge systems, especially in contexts marked by infrastructural and socio-technical inequalities.

3. Discussion

The discussion is structured around the study’s objectives. A summary of the discussion on generative artificial intelligence (generative AI) and related technologies in libraries is provided in Table 2.

3.1. Generative AI Technologies in Libraries for Information Processing, Storage, and Dissemination

The literature consistently identifies generative AI as a central innovation in the transformation of library systems, particularly in information processing, storage, retrieval, and dissemination. Defined as systems capable of producing synthetic outputs that resemble human-created material, generative AI leverages large-scale models trained on extensive datasets to generate contextually relevant responses [17].
Applications in libraries are reported across multiple platforms, including Copilot, Gemini, GPT-4, and DALL·E 2, which are shown to facilitate content generation, improve search precision, and enhance user interaction [3]. These studies suggest a shift in digital service delivery as libraries increasingly experiment with AI-driven forms of engagement within existing infrastructures [4].
Empirical evidence supports these developments. Li [18], in a survey of Chinese public and private university libraries, documented the adoption of text-to-speech (48.1%), speech-to-text (41.6%), and voice-activated search systems (35.7%). The findings point to growing institutional investment in AI technologies aimed at improving accessibility and expanding user services. Similarly, Mupaikwa [19] reports integration across cataloging, classification, retrieval, and user services, with cognitive tools, robotics, and natural user interfaces deployed to improve metadata accuracy, workflow efficiency, and personalization.
Across these studies, generative AI emerges as a catalyst for innovation in library environments. Its capacity to automate processes, expand personalization, and support adaptive service models is framed as contributing to the modernization of library systems and the reconfiguration of traditional modes of knowledge access and management.

3.1.1. Large Language Models (LLMs)

Large language models (LLMs) are increasingly recognized in the literature as a major advancement in the digital transformation of library services. Defined as deep learning systems capable of generating and interpreting human-like language, LLMs extend automation beyond routine processes into tasks requiring semantic interpretation and adaptive interaction [20]. Studies consistently highlight their application in core library functions, including metadata creation, cataloging, translation, and document summarization, where they are reported to enhance efficiency and scalability [21].
Evidence further indicates that LLMs contribute to interactive and user-facing services, with different models and tools displaying strengths across cataloging support, research queries, and conversational engagement [22]. Beyond operational functions, LLMs are also applied in scholarly workflows, particularly in synthesizing the literature and identifying thematic patterns within large volumes of academic content [23].
Across these domains, the literature frames LLMs as technologies that broaden the scope and responsiveness of digital library services while reshaping professional practices. Rather than replacing human expertise, they appear to complement librarians’ roles in maintaining quality standards and supporting knowledge organization. At the same time, their integration is prompting sustained scholarly reflection on the epistemic and institutional implications of relying on AI systems in digital knowledge environments.

3.1.2. Natural Language Processing (NLP)

Recent scholarship positions natural language processing (NLP) as a critical component of library systems, particularly when integrated with generative AI to advance computational text analysis. Across studies, NLP is consistently associated with improved methods for text classification, sentiment analysis, topic modeling, and semantic structuring, which collectively enhance the organization, retrieval, and accessibility of digital information.
Evidence indicates an evolution in text classification approaches within library contexts, moving from manual heuristics to machine learning models, including K-Nearest Neighbors, Support Vector Machines, Naïve Bayes, and Decision Trees [24]. These models have facilitated the processing of large-scale textual data and contributed to scalable, efficient library systems. Furthermore, the adoption of explainable AI and hybrid modeling approaches is highlighted as a strategy to strengthen reliability, transparency, and interpretability in NLP applications.
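To make this evolution concrete, the following is a minimal, purely illustrative sketch of catalogue-record classification using the classical models named above (Naïve Bayes, K-Nearest Neighbors, and a linear SVM). The tiny training corpus and subject labels are invented for the example and are not drawn from any study cited here.

```python
# Minimal sketch: classifying short catalogue descriptions into subject areas
# with the classical models named in the review. The training data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = [
    "introduction to organic chemistry reactions",
    "molecular biology of the gene",
    "medieval european history and society",
    "the renaissance and early modern europe",
    "python programming for data analysis",
    "algorithms and data structures in practice",
]
labels = ["science", "science", "history", "history", "computing", "computing"]

for model in (MultinomialNB(), KNeighborsClassifier(n_neighbors=3), LinearSVC()):
    clf = make_pipeline(TfidfVectorizer(), model)  # vectorize, then classify
    clf.fit(docs, labels)
    print(type(model).__name__, clf.predict(["a short history of modern europe"])[0])
```

In practice, such pipelines are trained on far larger labelled collections, and (as the literature notes) increasingly paired with explainability tooling so that cataloguers can audit why a record received a given subject heading.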
Topic modeling represents another prominent focus, with comparisons between probabilistic models such as Latent Dirichlet Allocation (LDA) and neural network-based methods [25]. Findings suggest that neural approaches offer enhanced modeling depth and content analysis capabilities, while semantically enriched hybrid models, alongside refined evaluation metrics and contextual information, improve topic discovery and content curation outcomes in library environments.
The role of NLP in managing tacit knowledge has also been explored, with techniques such as sentiment analysis, semantic classification, and text mining transforming unstructured data into actionable insights [26]. These studies emphasize that NLP supports personalized library services and informed decision-making, particularly when coupled with mechanisms that ensure transparency, oversight, and equitable information practices.
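To illustrate the simplest form such sentiment analysis can take, the sketch below scores free-text user feedback against a tiny word list. Both the lexicon and the comments are invented; production systems would use trained models or established lexicons rather than this toy approach.

```python
# Minimal sketch: lexicon-based sentiment scoring of library user feedback.
# The lexicon and comments are invented purely for illustration.
POSITIVE = {"helpful", "excellent", "fast", "friendly", "useful"}
NEGATIVE = {"slow", "confusing", "outdated", "unhelpful", "broken"}

def sentiment(comment: str) -> str:
    tokens = comment.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

feedback = [
    "the librarians were friendly and helpful",
    "the catalogue search is slow and confusing",
    "opening hours were fine",
]
for f in feedback:
    print(f, "->", sentiment(f))
```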
This literature demonstrates that NLP contributes to both operational efficiency and enhanced semantic understanding in library systems. Progress in this domain is contingent upon the development of advanced algorithms, the integration of ethical considerations, and the institutionalization of practices that promote inclusivity and accountability in information management.

3.1.3. Digital Storage and Archiving Systems

AI integration in digital storage and archiving systems is transforming library practices in preservation, access, and collection management. Osagie and Oladokun [27] report that AI enhances cataloging accuracy, optimizes metadata management, and supports preservation through automated monitoring and repair, while advanced semantic searching improves retrieval. Tawalbeh [28] corroborates these effects in Jordanian university libraries, highlighting improvements in archival efficiency and user access and emphasizing the role of infrastructural readiness and inter-institutional collaboration in realizing AI’s potential.
Memon et al. [29] demonstrate that AI-enhanced Optical Character Recognition (OCR) technologies, employing deep learning models such as CNNs, LSTMs, and RNNs, significantly improve the digitization of complex handwritten content. Platforms like Google Cloud Vision exemplify scalable applications for historical document preservation and retrieval. Complementing these findings, Terras [30] shows that Handwritten Text Recognition (HTR) systems, exemplified by Transkribus, achieve up to 98% transcription accuracy while benefiting from integration into broader digitization workflows and ongoing human feedback to refine outputs.
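Transcription-accuracy figures of the kind reported for HTR systems are conventionally derived from the character error rate (CER) against a ground-truth transcription. The sketch below computes CER from a standard edit-distance implementation; the example strings are invented.

```python
# Minimal sketch: character error rate (CER), the usual basis for OCR/HTR
# transcription-accuracy figures. Example strings are invented.
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance (insertions, deletions, substitutions).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

ref = "the quick brown fox"
hyp = "the qick brown fx"   # two characters dropped by the recognizer
print(f"CER: {cer(ref, hyp):.2%}")   # accuracy is roughly 1 - CER
```

A "98% accuracy" claim thus corresponds to a CER of about 2% on the evaluation set, which is why human-in-the-loop correction remains part of the workflows Terras describes.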
Collectively, these studies indicate that AI contributes substantively to digital archiving by enhancing accuracy, efficiency, and accessibility. Its effectiveness is closely linked to alignment with institutional practices, professional engagement, and the socio-technical context in which libraries operate.

3.1.4. Recommendation Systems

AI-powered recommendation systems have been identified as significant tools in enhancing personalization, user engagement, and resource discoverability in academic libraries. Empirical studies suggest that these systems contribute to improved library operations and resource management through tailored user services [31]. Evidence indicates that adoption is facilitated by user-centered design, interdisciplinary collaboration, capacity development, and transparent communication practices.
Research further highlights the ethical and normative dimensions of recommendation systems. Transparency in algorithmic processes and the mitigation of bias are central to maintaining user trust and institutional credibility [32]. Ethical design practices, including the clear disclosure of recommendation generation methods, are associated with equitable access and inclusive information provision, particularly for historically marginalized user populations.
From a technical perspective, advanced architectures, such as the coherence-of-content (CoC) ensemble integrating Support Vector Machine and Neuro-Fuzzy classifiers, have demonstrated high classification accuracy and improved semantic relevance of resources [33]. Longitudinal assessments suggest that these systems support sustained user engagement and satisfaction, while scalability across diverse academic contexts enhances their applicability.
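To ground the ensemble idea, the following is a hedged sketch of a voting ensemble for subject categorization in the spirit of the CoC architecture. Note the substitution: the neuro-fuzzy member of the cited ensemble is replaced here by logistic regression purely for illustration, and the training data are invented.

```python
# Minimal sketch of an ensemble classifier for subject categorization.
# The neuro-fuzzy component of the cited CoC ensemble is stood in for by
# logistic regression; data are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline

docs = [
    "quantum mechanics lecture notes", "classical mechanics and dynamics",
    "introduction to macroeconomics", "microeconomic theory and markets",
]
labels = ["physics", "physics", "economics", "economics"]

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("svm", SVC()), ("lr", LogisticRegression())],
        voting="hard",  # majority vote over member predictions
    ),
)
ensemble.fit(docs, labels)
print(ensemble.predict(["markets and economic theory"]))
```

The design point the literature emphasizes is that disagreement between ensemble members can itself be surfaced to users, supporting the transparency practices discussed above.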
Overall, the literature indicates that AI recommendation systems contribute to personalized and contextually relevant library services, with effectiveness linked to transparent, inclusive, and scalable implementation strategies that align operational functionality with ethical and normative considerations.

3.1.5. Research Tools and Scholarly Workflows

Recent studies indicate that AI-powered research tools are increasingly integrated into scholarly workflows, supporting literature retrieval, review processes, and knowledge synthesis. Evidence suggests that these tools enhance efficiency and enable more structured engagement with academic content [34,35]. Platforms including Elicit, Semantic Scholar, Litmaps, SciSpace, Consensus, ResearchRabbit, Any Summary, Connected Papers, Explain Paper, and Scite have been identified as supporting multiple stages of the research lifecycle, from initial exploration to conceptual synthesis [36,37].
Jhajj et al. [38] provide an overview of AI-driven platforms such as Sourcely, SciLynk, Perplexity, System Pro, Trinka.ai, Humata, Zotero, Papersapp, Paperpile, EndNote, AI Writer, Jenni.ai, Paperpal, Quillbot, Penelope, Grammarly, Bit.ai, Writefull, and Labarchives. These tools are reported to support searches of the literature, reference management, and writing processes, contributing to research productivity and workflow efficiency. Comparative evaluations highlight that platforms differ in functionality; for instance, Semantic Scholar offers extensive database coverage, while SciSpace provides interactive features including PDF uploads, text highlighting, and AI-generated explanatory responses [39].
Patterson [40] notes that AI-based tools can facilitate rapid analysis, citation mapping, and automated review generation. Evidence indicates that combining AI-supported processes with traditional research methods may strengthen methodological rigor and reliability. Collectively, the literature suggests that AI research tools can contribute to improved accessibility, analytical capacity, and workflow efficiency, particularly when integrated with guidance on ethical and critical use within library-mediated environments.
Table 2. Summary of discussion on generative AI and related technologies in libraries.
Section | Focus Area | Key Contributions | Sources
Section 3.1. | Role of generative AI in processing, storage, retrieval, and dissemination | Tools like Copilot, Gemini, GPT-4, and DALL·E 2 enhance automation, search precision, user interaction, and personalization | [3,17,19,41]
Section 3.1.1. | Application of LLMs in libraries | Enhance translation, summarization, metadata creation, cataloging, research queries, conversational engagement | [20,21,22]
Section 3.1.2. | NLP for text analysis, classification, and retrieval | Techniques include KNN, SVM, Naïve Bayes, Decision Trees; hybrid and explainable AI; topic modeling | [24,25,26]
Section 3.1.3. | AI in preservation, cataloging, metadata, OCR, HTR | Improves cataloging precision, metadata quality, preservation, and OCR/HTR accuracy | [27,28,29,30]
Section 3.1.4. | AI-driven personalization and discoverability | Enhances personalization, subject categorization, and user engagement; advanced CoC ensemble improves accuracy | [31,33,35]
Section 3.1.5. | AI tools in review of the literature, reference management, synthesis | Tools (Elicit, Semantic Scholar, Litmaps, ResearchRabbit, etc.) accelerate discovery, synthesis, and workflow management | [38,39,40]
Table 2 synthesizes the diverse ways in which generative AI and related technologies are reshaping library systems. Collectively, the evidence highlights a technological ecosystem in which generative AI, large language models, and natural language processing expand automation, semantic understanding, and user interactivity, while digital storage and archiving systems enhance preservation and access. At the same time, AI-driven recommendation systems and research tools strengthen personalization and scholarly workflows, supporting discovery, synthesis, and engagement across academic contexts. Taken together, these developments illustrate the multidimensional role of AI in modern libraries, underscoring both the opportunities for innovation and the need for ethical, inclusive, and context-sensitive integration.

3.2. Ethical Dilemmas in the Use of Generative AI in Libraries

The integration of generative AI in libraries has been associated with ethical considerations related to algorithmic bias, privacy, intellectual property, transparency, accountability, and information integrity [32,42]. These considerations intersect with broader questions of justice, equity, and power in the global information ecosystem.

3.2.1. Algorithmic Bias and Epistemic Injustice

Evidence indicates that algorithmic bias in AI-driven library systems arises from non-representative training datasets, which may reflect historical inequalities and societal stereotypes, thereby affecting decision-making processes such as search ranking and metadata classification [5,43]. Biases in these systems can lead to disproportionate representation of certain knowledge sources, affecting equitable access to information.
The concept of epistemic injustice, defined as the systematic marginalization or devaluation of specific knowledge systems, has been applied to library contexts, particularly regarding the visibility of indigenous knowledge, non-English materials, and outputs from the Global South [44]. Studies show that AI systems tend to prioritize dominant languages and Western knowledge production, which may limit the inclusion of underrepresented linguistic and cultural perspectives [45,46]. Participatory approaches, co-designing with community stakeholders, and locality-aware language technologies have been suggested to enhance inclusivity and support hermeneutic justice.
Research further highlights regional dimensions of AI integration. In Africa, AI applications in library systems have been observed to underrepresent local knowledge, emphasizing the need for culturally relevant ethical frameworks and multi-stakeholder collaboration [47]. In Southeast Asia, studies underscore the potential misrepresentation and erosion of indigenous knowledge, advocating for the integration of international and national legal protections, as well as culturally informed data governance [48].
Within library settings, Ibrahim [42] emphasizes that ethical oversight, professional training, and transparent AI systems contribute to responsible and equitable deployment. Synthesized findings indicate that addressing algorithmic bias and supporting epistemic diversity requires culturally responsive methodologies, structured ethical frameworks, and ongoing professional development. Integrated strategies that combine AI capabilities with critical oversight may enable libraries to enhance efficiency while maintaining inclusivity, cultural sensitivity, and equitable access to knowledge.

3.2.2. Privacy and Security

Research indicates that AI integration in libraries requires careful attention to data privacy and security, with implications for policy, technical measures, and ethical governance. Gupta [49] emphasizes the importance of transparent, adaptive policies grounded in privacy-by-design principles and inclusive stakeholder engagement. The study identifies strategies such as equipping librarians with skills to manage sensitive data, generate synthetic datasets, and understand data-sharing practices with commercial AI providers. Tools including checklists and knowledge-sharing platforms are noted as complementary to ongoing staff capacity-building, supporting user trust and equitable access.
Cox [50] observes that extensive data collection for personalized library services may affect confidentiality and user trust when transparency is limited. Similarly, Ocks and Salubi [51] report that Fourth Industrial Revolution technologies in academic libraries, while enhancing service delivery, increase exposure to risks associated with extensive data retention and practices aligned with commercial data acquisition. These risks are particularly evident when libraries engage with third-party vendors, extending privacy considerations beyond traditional library records. Compliance with data protection legislation, such as South Africa’s Protection of Personal Information Act (POPI Act), is highlighted as essential for maintaining accountability.
From a technical and operational perspective, Persadha et al. [52] identify best practices for safeguarding digital library resources. Their review underscores adherence to international standards such as ISO 27001 [53] and ISO 27701 [54], the deployment of encryption, multi-factor authentication, and fine-grained access controls, and compliance with regulations including the European Union General Data Protection Regulation (GDPR). AI and ML are recognized for supporting proactive threat detection, and continuous education for staff and users is recommended to address human factors that may influence security outcomes. The authors advocate for a multi-layered strategy combining technological safeguards, policy alignment, AI-driven monitoring, and ongoing capacity-building.
Ikwuanusi et al. [55] frame privacy within an ethical dimension, noting that AI systems often lack transparency and safeguards, which can lead to unauthorized data collection. The study recommends ethical guidelines emphasizing informed consent, data minimization, and accountability, alongside library policies, ethics training, and oversight mechanisms to support responsible AI use. Al-kfairy [16] extends this perspective to generative AI, identifying privacy, data protection, copyright, misinformation, bias, and social inequality as areas requiring multidisciplinary policy attention. The study highlights the importance of aligning AI deployment with human rights, fairness, and transparency.
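The data-minimization principle noted above can be operationalized quite simply, for example by pseudonymizing patron identifiers before circulation records enter any analytics pipeline. The sketch below uses a keyed hash from the Python standard library; the key, identifier format, and record fields are invented placeholders.

```python
# Minimal sketch: pseudonymizing patron identifiers with a keyed hash before
# records enter an analytics pipeline, one way to apply data minimization.
# The key and record fields are invented placeholders.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # illustrative placeholder

def pseudonymize(patron_id: str) -> str:
    # Keyed HMAC: stable, so usage can still be linked within the pipeline,
    # but not reversible and not recomputable without the key.
    return hmac.new(SECRET_KEY, patron_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patron_id": "P-2024-00173", "item": "QC174.12 .G74", "action": "loan"}
safe_record = {**record, "patron_id": pseudonymize(record["patron_id"])}
print(safe_record)
```

Key rotation and vault storage (rather than hard-coding, as in this toy example) are what make such a scheme defensible under regimes like the GDPR or the POPI Act.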
Overall, the literature suggests that AI adoption in libraries benefits from integrated strategies incorporating transparent policies, technical measures, ethical frameworks, staff training, and ongoing assessment. WSIS C10 principles underscore the importance of protecting sensitive data, preventing misuse, and fostering ethical awareness in AI deployment. Evidence indicates that these strategies support user trust, equitable access, and responsible information practices in digital library environments while reinforcing the foundational values of libraries as inclusive and trustworthy institutions.

3.2.3. Automation and Job Displacement

Studies indicate that AI and automation are reshaping workforce structures within academic and digital libraries, altering task composition and professional responsibilities [32]. Research suggests that automation does not uniformly eliminate positions but rather reconfigures functions, creating opportunities for new skills and roles while streamlining routine tasks.
Balakumar et al. [56] report that AI-driven automation leads to both the redefinition of existing roles and the emergence of new positions, frequently requiring advanced technical competencies. The impact is uneven, with low-skilled workers and those in developing regions experiencing greater susceptibility due to limited access to reskilling opportunities. The authors recommend targeted training programs, equitable access to technology, and inclusive policy measures to ensure broad benefits.
George [57] emphasizes that AI primarily modifies the composition of tasks rather than eliminating entire jobs. Evidence indicates rising demand for digital literacy, problem-solving, and interdisciplinary collaboration. Effective adaptation depends on coordinated efforts from governments, industry, and academic institutions to integrate AI while supporting workforce development.
Within library contexts, Zhang [58] highlights that automation optimizes workflows, enhances service delivery, and allows for more user-centered operations. These developments coincide with increased demand for specialized digital skills and adaptive capacities among staff. Sustained investment in workforce upskilling, technological infrastructure, and supportive policy frameworks is identified as critical for maintaining service quality.
Huang [59] provides empirical evidence from Taiwanese academic libraries, showing that librarians generally view AI as a tool to support routine operations rather than replace positions. AI facilitates the allocation of professional effort toward higher-value tasks, emphasizing the importance of continuous professional development to maintain relevance and adaptability.
Collectively, the literature indicates that AI integration in libraries constitutes a reconfiguration of work rather than straightforward job displacement. Evidence supports the importance of reskilling initiatives, inclusive policies, and cross-sector collaboration to ensure that AI enhances professional expertise. The WSIS C10 principles of equity and inclusiveness are relevant in promoting access to development opportunities, while ethical awareness encourages critical evaluation of AI’s effects on workforce dynamics. Overall, AI appears to complement human expertise by improving operational efficiency and enabling higher-order professional contributions, provided that workforce stewardship and ethical oversight are maintained.

3.2.4. Openness and Intellectual Property Rights

The literature indicates that the adoption of generative AI in academic libraries interacts with issues of openness and intellectual property rights in complex ways. AI tools can enhance open access and digital scholarship by automating metadata generation, improving resource discoverability, and supporting large-scale data analysis, thereby potentially accelerating scholarly communication and promoting open science principles [60].
Research also highlights persistent tensions related to copyright law. Generative models are frequently trained on datasets with unclear licensing statuses, raising questions regarding fair use, content creator rights, and institutional responsibilities [61]. Historical studies show that restrictive copyright provisions and low copyright literacy have constrained academic library services in various contexts, suggesting that librarian training and advocacy are essential to support ethical and equitable knowledge dissemination [62].
Transparency in AI development has been proposed as a mechanism to improve accountability and protect rights holders. Evidence suggests that disclosing training datasets can facilitate ethical practice, yet practical barriers exist, including trade secrecy protections and the absence of standardized disclosure frameworks. Scholars argue that transparency should be complemented by targeted legal reforms, licensing systems, and international cooperation to balance innovation with the protection of creators’ rights [61].
The alignment of intellectual property frameworks with open science objectives is emphasized in the literature. Flexible licensing arrangements, clear data-sharing protocols, and institutional policies promoting open access are identified as strategies to reconcile traditional copyright regimes with open science aspirations [63,64]. Academic libraries are recognized as central actors in this process, managing repositories, supporting research data management, and facilitating collaborative networks that advance openness [65].
These studies suggest that the ethical integration of AI in libraries requires balancing its capacity to expand access with adherence to legal and ethical obligations regarding intellectual property. Evidence supports coordinated measures, including the adaptation of copyright frameworks, the implementation of permissive licensing, the incorporation of transparency practices in AI development, and enhanced librarian capacity for advocacy and compliance. These measures may enable libraries to advance openness while maintaining accountability for intellectual property, thereby supporting equitable and responsible dissemination of scholarly knowledge.

3.2.5. Digital Divide and Accessibility

The literature indicates that AI integration in library systems can influence existing digital divides, particularly where infrastructure and digital literacy are unevenly distributed. Yu [66] conceptualizes the “algorithmic divide” as the gap between individuals who derive benefits from AI technologies and those who do not. Drawing on digital divide theory, the study frames this divide across five dimensions: awareness, access, affordability, availability, and adaptability. Consequences identified include algorithmic deprivation, or the loss of potential benefits; algorithmic discrimination, involving biased or unfair outcomes; and algorithmic distortion, referring to harmful effects that extend beyond disadvantaged groups to affect all users. Yu [66] argues that a coordinated strategy incorporating legal reform, communication policy, ethical guidelines, institutional oversight, and responsible business practices is necessary to support equitable AI adoption.
Similarly, Vesna et al. [67] report that AI holds transformative potential in educational contexts such as academic libraries but may amplify disparities due to persistent infrastructure deficits, socioeconomic inequalities, and limited digital literacy. Their findings support investments in technology, affordable AI tools, inclusive policies, cross-sector partnerships, and digital skills training. The authors emphasize a multi-stakeholder approach to ensure that AI-powered education remains accessible and inclusive for all users.
Focusing on library services, Gajbhiye [68] examines access in communities with limited technological resources and digital skills in India. Evidence indicates that disparities in access to information and library services disproportionately affect marginalized groups. The study underscores the relevance of proactive and ethically informed strategies within the library and information science field to promote equitable access and prevent AI deployment from reinforcing digital inequalities.
Overall, the reviewed literature identifies the digital divide as a critical factor in shaping equitable AI adoption in academic libraries. Studies suggest that addressing this issue requires holistic, ethically guided, and collaborative interventions targeting infrastructure, access, and digital literacy gaps. The WSIS C10 principles of equity and inclusiveness and of ethical awareness are particularly applicable. Equity and inclusiveness highlight the design of AI systems to expand access for marginalized groups, while ethical awareness stresses the critical evaluation of technological interventions to avoid unintended exclusion. Coordinated initiatives, including policy reform, capacity building, and inclusive technology design, are presented as mechanisms to support AI integration that promotes fairness in access to information and educational resources. Evidence further indicates the need to balance technological advancement with social equity: AI can enhance accessibility, automate services, and improve efficiency, but unmediated deployment may reinforce existing disparities. Collectively, these findings suggest that strategies integrating innovation with social justice principles may position AI as a tool for inclusion within the academic knowledge ecosystem.

3.2.6. Transparency and Accountability

The literature indicates that the integration of AI into academic libraries necessitates careful attention to transparency and accountability. AI systems can enhance operational efficiency and information processing; however, their decision-making processes often remain opaque and lack the normative reasoning typical of human judgment [69]. Peters [69] emphasizes that while complex machine learning models can produce accurate outputs by processing large datasets, their internal mechanisms are frequently difficult to interpret. Ethical and regulatory frameworks are therefore recommended to situate AI systems within human-centered accountability structures rather than assuming that machine outputs equate to human reasoning.
Empirical research identifies transparency practices as critical for promoting ethical AI use. Larsson and Heintz [70] report that documenting algorithms, disclosing training data, and providing explainable mechanisms supports fairness, enables critical evaluation, and fosters accountability. Despite these recommendations, transparency is constrained by proprietary systems, complex model architectures, and varying regulatory standards. Standardized transparency frameworks, explainable AI techniques, open data documentation, and multi-stakeholder oversight are highlighted as key measures to support responsible AI deployment.
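To make the documentation practices described by Larsson and Heintz more concrete, the sketch below outlines a minimal, hypothetical transparency record for an AI-driven library service. The schema, field names, and the example system name are illustrative assumptions (loosely in the spirit of "model card" documentation), not a published standard:

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyRecord:
    """Hypothetical disclosure schema for an AI-driven library service."""
    system_name: str
    intended_use: str
    training_data_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    review_contact: str = ""  # who is accountable for oversight

    def missing_disclosures(self) -> list[str]:
        # Flag the disclosure gaps that the literature associates
        # with reduced accountability.
        gaps = []
        if not self.training_data_sources:
            gaps.append("training_data_sources")
        if not self.known_limitations:
            gaps.append("known_limitations")
        if not self.review_contact:
            gaps.append("review_contact")
        return gaps

# Hypothetical record for an undocumented search-ranking service:
record = TransparencyRecord(
    system_name="DiscoveryRank",
    intended_use="Ranking catalog search results",
)
# record.missing_disclosures() lists every undisclosed item.
```

Even a lightweight checklist of this kind gives library staff a concrete artifact to audit, which is one way the standardized transparency frameworks recommended above could be operationalized locally.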
Evidence from cross-national studies indicates that transparency in AI-driven library systems remains limited. Liu et al. [18] examined academic search platforms in Canada, the United Kingdom, and the United States and found that only 40% provided detailed information on algorithms, data sources, and decision-making processes. Limited disclosure was identified as potentially affecting research integrity and the reliability of scholarly outputs. The authors recommend comprehensive transparency guidelines, including the clear documentation of AI mechanisms, the disclosure of underlying databases, and the identification of indexed content to enhance accountability, user trust, and ethical research practices.
Broader analyses of AI integration in libraries and archives reinforce these findings. Mannheimer et al. [71] note that while AI facilitates services such as information retrieval and automated metadata generation, ethical, legal, and professional considerations remain relevant, particularly regarding transparency. Suggested measures include developing ethical guidelines, enhancing staff AI literacy, fostering stakeholder collaboration, and implementing ongoing monitoring to ensure accountable and equitable use.
Overall, the reviewed literature indicates that transparency and accountability are central to the ethical deployment of AI in academic libraries. Effective practices combine technical explainability with human-centered governance, standardized transparency measures, staff capacity building, and continuous oversight. Embedding these practices across design, implementation, and evaluation phases is associated with the enhanced credibility, inclusivity, and ethical stewardship of library services.

3.2.7. Misinformation and Information Integrity

The recent literature indicates that AI integration in academic libraries influences the generation, dissemination, and perception of information, with implications for accuracy and reliability. Jaidka et al. [72] report that generative AI can produce highly convincing false content at scale, which may affect public trust and the integrity of information ecosystems. The capacity for the personalization and rapid distribution of AI-generated content further complicates detection and verification processes. Recommendations from this study include the development of regulatory frameworks, public AI literacy programs, advanced detection mechanisms, and ethical guidelines to support the reliability of information.
Academic libraries are identified as key institutions for upholding information integrity. Khyat et al. [73] indicate that libraries can address AI-generated inaccuracies through curated resources, verification protocols, and user training. Evidence emphasizes the importance of AI literacy among library staff, monitoring policies, and strategic collaboration with stakeholders to strengthen libraries as reliable information providers. Saeidnia et al. [74] suggest that the effective management of misinformation requires the ongoing refinement of AI models, the adoption of ethical frameworks, cross-sector partnerships, and initiatives that enhance public media literacy.
Research also highlights the risks of AI-generated bibliographic outputs. Walters and Wilder [75] demonstrate that tools such as ChatGPT may produce citations containing errors or fabricated details, reflecting probabilistic text generation rather than retrieval from authoritative sources. Verification against established bibliographic databases and integration with reliable repositories are recommended strategies to maintain accuracy in academic outputs.
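As an illustration of the verification workflow recommended above, the stdlib-only sketch below flags citations whose DOIs are malformed, unresolvable, or attached to a title that diverges from the registered record. The in-memory lookup table is a stand-in assumption for a query to a real bibliographic service such as Crossref, and the similarity threshold is an arbitrary illustrative choice:

```python
import re
from difflib import SequenceMatcher

# Loose structural check for a DOI (prefix "10.", registrant code, suffix).
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def title_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def verify_citation(citation: dict, lookup: dict, threshold: float = 0.85):
    """Return (ok, reason). `lookup` maps DOI -> authoritative title,
    standing in for a query to a bibliographic database."""
    doi = citation.get("doi", "")
    if not DOI_PATTERN.match(doi):
        return False, "malformed or missing DOI"
    record_title = lookup.get(doi)
    if record_title is None:
        return False, "DOI not found in the database"
    if title_similarity(citation.get("title", ""), record_title) < threshold:
        return False, "title does not match the registered record"
    return True, "verified"

# Hypothetical authoritative records (in practice, retrieved via an API).
records = {
    "10.3390/info16090771": "Generative AI and the Information Society: "
                            "Ethical Reflections from Libraries",
}
```

A fabricated citation typically fails at the first or second step, since generated DOIs often do not resolve to any registered record; the title check catches the subtler case where a real DOI is paired with an invented title.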
The synthesis of current evidence suggests that preserving information integrity in AI-enabled academic libraries involves coordinated approaches that combine technological solutions, governance frameworks, professional development, verification procedures, and user engagement strategies. Ethical awareness and critical evaluation of algorithms and data sources are central to sustaining trustworthy scholarly communication. The literature further indicates that while AI can enhance information retrieval and processing, effective oversight and professional stewardship are essential to align efficiency with reliability and support the credibility of academic libraries.
Collectively, the ethical dilemmas explored in Section 3.2 reveal an interconnected web of challenges inherent in the adoption of generative AI in libraries. While each dilemma presents distinct operational, legal, and social concerns, recurring ethical principles emerge across contexts, including fairness, inclusivity, transparency, accountability, and the safeguarding of knowledge integrity. Addressing these dilemmas requires holistic approaches that integrate technical safeguards, policy frameworks, professional capacity-building, and participatory engagement with diverse stakeholders. By foregrounding these principles, libraries can navigate the trade-offs of AI deployment, ensuring that technological innovation enhances access, equity, and trust rather than reinforcing existing inequalities or compromising the ethical stewardship of information.

3.3. The Role of Libraries and Librarians in Mitigating Ethical Challenges in AI Adoption

The integration of AI, particularly generative AI, in libraries introduces ethical considerations that extend beyond traditional information management, positioning libraries and librarians as critical actors in responsible technology use. Current research highlights the evolving role of these institutions in promoting ethical engagement with AI, consistent with the principles outlined in the World Summit on the Information Society Action Line C10 (WSIS C10) on Ethical Dimensions of the Information Society, which emphasizes equitable access to information, privacy protection, transparency, and the mitigation of digital divides [13,76].
Studies indicate that AI adoption presents risks to academic integrity, including potential misinformation and challenges to content authenticity. Diyaolu et al. [5] propose that librarians serve as facilitators of AI literacy and ethical awareness, enabling users to navigate AI-generated content responsibly. Their Library–AI Handshake model operationalizes these principles by bridging knowledge gaps and supporting informed participation in the information society.
Similarly, Panda et al. [77] underscore the necessity of maintaining core library values such as intellectual freedom, privacy, and inclusivity in AI contexts. They recommend establishing ethical oversight mechanisms, comprehensive staff training in AI ethics, and community engagement to ensure that AI deployment complements human expertise. These measures reflect C10’s call for institutional frameworks that embed ethical norms and participatory governance in ICT adoption.
The literature further identifies professional development and user education as essential for ethical AI integration. Thiruppathi [78] emphasizes continuous skill development and inclusive service strategies, aligning with C10’s emphasis on capacity building and human development. Emmanuel and Oladokun [79] highlight libraries’ roles in addressing algorithmic bias, intellectual property concerns, and transparency through ethical guidelines, multi-stakeholder collaboration, and professional training, supporting participatory governance and accountability in ICT use.
Finally, Monyela and Tella [60] position libraries as facilitators of sustainable knowledge organization through AI, advocating operational efficiency, user-centered services, and adherence to ethical standards in data privacy and algorithmic transparency. Their recommendations for staff training, collaboration with AI developers, and monitoring mechanisms operationalize C10’s vision of ICT as a tool for sustainable development and equitable information access.
Collectively, these findings indicate that libraries and librarians function as central agents in mitigating ethical risks associated with AI adoption. Their responsibilities encompass ethical oversight, the promotion of AI literacy, and the facilitation of inclusive, transparent, and sustainable AI integration. By embedding WSIS C10 principles into policy, professional development, and community engagement, libraries can enhance access, equity, and accountability, contributing to a responsible and ethically informed information society. A concise overview of these roles, key actions, and their alignment with WSIS C10 principles is provided in Table 3, which illustrates how libraries can operationalize ethical AI integration in practice.
Table 3 illustrates the multifaceted roles that libraries and librarians play in mitigating ethical challenges associated with AI adoption. Across focus areas, these roles encompass ethical oversight, AI literacy and awareness, professional development, risk management, sustainable knowledge organization, and community engagement. Collectively, the evidence emphasizes that effective AI integration requires both technical and ethical capacities, supported by continuous staff training, participatory governance, and multi-stakeholder collaboration. By linking each action to the WSIS C10 principles of ethics, equity, access, and human development, the table underscores how libraries can operationalize global ethical guidelines in context-sensitive ways, ensuring that AI adoption enhances service efficiency while promoting inclusivity, accountability, and sustainable knowledge practices.

3.4. Global Perspectives on the Adoption and Implementation of AI-Powered Technologies in Libraries

The adoption and implementation of AI-powered technologies in libraries reveal notable regional disparities and associated ethical challenges. Roche et al. [80] report that 88% of existing AI governance frameworks primarily reflect Euro-American values, potentially marginalizing perspectives from the Global South and contributing to inequitable policy outcomes. The authors advocate for more inclusive governance approaches that account for global power asymmetries.
In Europe, AI integration has been associated with process automation and personalized service delivery in specialized libraries [81]. Ethical concerns regarding privacy, transparency, and accountability are emphasized, highlighting the need for governance mechanisms to guide responsible implementation. In Southeast Asia, AI initiatives have enhanced accessibility in Indonesian libraries; however, limitations in digital promotion and infrastructural gaps restrict inclusivity [81].
Comparative studies indicate divergent regional approaches to AI adoption. Huang et al. [82] identify that UK libraries prioritize ethical use and data privacy, whereas Mainland Chinese libraries focus on large-scale deployment facilitated by government support. Both contexts experience challenges including skill shortages and infrastructural limitations, underscoring the need for targeted strategies and cross-regional collaboration to support responsible AI adoption.
In Sub-Saharan Africa, AI applications, including humanoid robots, have been deployed to augment library services and operational efficiency. Echedom and Okuonghae [83] document implementations at the University of Lagos and the University of Pretoria, where robots support routine tasks, user queries, cataloging, and data management. These cases demonstrate multifunctional AI applications but also indicate the necessity for comprehensive policy frameworks to guide ethical, accountable, and accessible use.
Despite these advancements, libraries in the Global South continue to face structural and ethical constraints. Studies highlight that limited infrastructure, resource shortages, and policy gaps undermine transparency, accountability, and equitable access [8,84]. Buitrago-Ciro et al. [85] further note disparities in digital literacy and resource availability between Global North and South libraries, which perpetuate inequalities in AI adoption. Barsha and Munshi [10] identify high costs, insufficiently trained personnel, weak infrastructure, and inadequate data protection as critical barriers, recommending regulatory reform, capacity building, strategic partnerships, and infrastructure investment to mitigate these challenges.
Overall, these findings indicate that while AI adoption in libraries is expanding globally, structural and ethical disparities in the Global South necessitate context-sensitive, ethically grounded approaches. Addressing these disparities is essential to ensure that AI contributes to accessible, inclusive, and accountable library services across diverse regional contexts. Table 4 provides a synthesis of case studies, highlighting global variations in technologies, applications, and associated ethical considerations.
The synthesis in Table 4 illustrates that AI adoption in libraries exhibits significant regional variation, influenced by infrastructural capacity, governance frameworks, and resource availability. Libraries in the Global North generally demonstrate advanced AI integration, supported by robust infrastructure, digital literacy, and established ethical oversight, whereas libraries in the Global South encounter structural and operational constraints that limit adoption and exacerbate the digital divide. Across contexts, common ethical considerations include privacy, transparency, accountability, and equitable access. These findings underscore the need for context-sensitive, ethically grounded strategies to guide AI implementation, ensuring that technological innovations enhance inclusivity and service delivery without reinforcing existing inequalities.

3.5. Contextual Considerations for AI Adoption in the Global South

Building on the disparities outlined in Section 3.4, most studies examining AI adoption in libraries in the Global South emphasize that structural and socio-technical constraints significantly shape outcomes. Reported challenges include limited infrastructure, gaps in digital literacy, shortages of skilled personnel, and weak regulatory environments, which collectively restrict the ability of libraries to realize the potential benefits of generative AI for automation, accessibility, and personalization [43]. Evidence further suggests that these constraints contribute to widening the digital divide and exacerbate vulnerabilities related to privacy, accountability, and equitable access.
Comparative analyses consistently show that AI systems are predominantly designed within Western corporate and cultural contexts, raising concerns about their applicability in different epistemic settings. Several studies argue that the reliance on such systems risks reinforcing global asymmetries by marginalizing local priorities and knowledge traditions. This concern is often framed through Habermas’s [14] concept of the “colonization of the lifeworld”, with empirical accounts noting the potential displacement of indigenous knowledge and the narrowing of epistemic diversity in library services.
Evidence also highlights the importance of normative frameworks in guiding AI adoption. The WSIS Action Line C10 is frequently referenced as a framework that emphasizes transparency, accountability, privacy protection, intellectual freedom, and equitable participation. Studies suggest that applying these principles requires context-sensitive approaches, including locally grounded safeguards, capacity building, professional development, improved data governance, and investment in infrastructure. International partnerships that promote equitable collaboration are also identified as critical mechanisms for addressing capacity gaps.
Overall, the literature indicates that without such measures, the adoption of generative AI in the Global South risks reproducing existing inequalities and perpetuating epistemic injustice. Conversely, when guided by ethical and context-sensitive strategies, libraries may leverage AI to strengthen inclusive access to information and support more equitable participation in the digital information society.

4. Conclusions, Limitations, and Future Research Directions

4.1. Conclusions

This systematic review has examined the integration of generative AI-powered technologies in academic libraries, emphasizing applications, ethical considerations, and the roles of libraries and librarians in mitigating associated risks. The evidence indicates that AI tools, including large language models, natural language processing, recommendation systems, digital storage, and AI-enabled research tools, can enhance operational efficiency, personalization, accessibility, and knowledge management. These technologies have demonstrated potential to transform library services, particularly in information processing, retrieval, dissemination, and scholarly workflows.
Concurrently, the review identifies persistent ethical dilemmas arising from AI adoption, including algorithmic bias, privacy and data security concerns, intellectual property issues, transparency and accountability challenges, digital divides, and risks to information integrity. Across these dimensions, recurring principles of fairness, inclusivity, transparency, and the safeguarding of knowledge integrity emerge as essential for ethically responsible AI deployment. Libraries and librarians are positioned as central agents in addressing these dilemmas, with responsibilities spanning ethical oversight, professional development, the promotion of AI literacy, and the facilitation of inclusive and accountable AI integration. Evidence from global case studies highlights notable regional disparities, particularly in the Global South, where limited infrastructure, digital literacy gaps, and policy constraints exacerbate vulnerabilities and may reproduce epistemic inequities if AI adoption is not guided by context-sensitive ethical frameworks, such as the WSIS C10 principles.

4.2. Study Limitations

Several limitations should be considered. First, the review relies predominantly on the published literature, which introduces a bias toward documented cases from regions with more robust research outputs, primarily the Global North. Second, the literature retrieval may be influenced by algorithmic biases that are inherent in search platforms, reflecting previous search patterns, indexing practices, and author preferences, potentially shaping the scope and composition of sources included. Third, the study focuses predominantly on academic libraries, limiting generalizability to other library types, such as public or special libraries. Fourth, variations in terminology, conceptual frameworks, and methodological rigor across studies may constrain the consistency of evidence synthesis. Fifth, the review does not incorporate primary empirical data, such as user perspectives or institutional case studies, which could offer deeper insights into lived experiences of AI deployment. Finally, the rapid evolution of AI technologies and emerging ethical challenges may not be fully represented, limiting the temporal scope of the findings.

4.3. Directions for Future Research

Building on the current evidence, several avenues for future research are evident. First, empirical studies incorporating primary data from diverse library contexts, including public, special, and hybrid libraries, would enhance the understanding of how AI impacts both staff and users. Second, research exploring culturally responsive AI integration in the Global South, with attention to local knowledge systems, language diversity, and socio-technical constraints, is critical for developing equitable and context-sensitive frameworks. Third, longitudinal studies examining the long-term effects of AI adoption on workforce dynamics, user engagement, and service quality would provide insights into sustainable implementation strategies. Fourth, case-based investigations into AI algorithm transparency and the mitigation of search- or platform-driven biases in systematic reviews would strengthen the reliability and inclusivity of evidence syntheses. Fifth, interdisciplinary research addressing the intersection of AI, ethics, policy, and law, particularly regarding intellectual property, privacy, and open access, can inform institutional practices and governance structures. Finally, studies evaluating the effectiveness of professional development, ethical oversight mechanisms, and participatory AI governance models in library environments will support evidence-based recommendations for responsible AI adoption globally.
Collectively, these research directions underscore the need for a multidimensional, ethically informed, and contextually grounded approach to AI integration in libraries, ensuring that technological innovation contributes to equitable, inclusive, and trustworthy information ecosystems rather than reinforcing existing disparities or epistemic inequities.

Author Contributions

Conceptualization, S.M. and M.M.; methodology, M.M.; validation, M.M. and S.M.; formal analysis, M.M.; investigation, M.M.; resources, S.M.; writing—original draft preparation, M.M.; writing—review and editing, S.M.; supervision, S.M.; project administration, S.M.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADM	Algorithmic decision-making
AI	Artificial Intelligence
AR/VR	Augmented and virtual reality
CNNs	Convolutional Neural Networks
CoC	Coherence-of-content
GDPR	General Data Protection Regulation
Generative AI	Generative artificial intelligence
HTR	Handwritten Text Recognition
ICTs	Information and communication technologies
IoT	Internet of Things
LDA	Latent Dirichlet Allocation
LIS	Library and Information Science
LLMs	Large Language Models
LSTM	Long Short-Term Memory
ML	Machine Learning
NLP	Natural Language Processing
OCR	Optical Character Recognition
PoPI Act	Protection of Personal Information Act
RFID	Radio Frequency Identification
RNNs	Recurrent Neural Networks
RQ	Research question
SVM	Support Vector Machines
UNESCO	United Nations Educational, Scientific and Cultural Organization
WSIS	World Summit on the Information Society

References

  1. Nath, H.K. The information society. Space Cult. India 2017, 4, 19–28. [Google Scholar] [CrossRef]
  2. Hazan, H.; Ayub, Z. Information technology integration strategy in public library. J. Educ. Hum. Soc. Sci. 2024, 7, 424–431. [Google Scholar] [CrossRef]
  3. Feuerriegel, S.; Hartmann, J.; Janiesch, C.; Zschech, P. Generative AI. Bus. Inf. Syst. Eng. 2024, 66, 111–126. [Google Scholar] [CrossRef]
  4. Narayanan, N. The era of generative AI: Transforming academic libraries, education, and research. In Empowering Minds: Collaborative Learning Platform for Teachers, Librarians and Researchers; Venissa, A.C., Vishala, B.K., Gopakumar, V., Eds.; St. Augusten College: Mangaluru, India, 2024; pp. 282–293. [Google Scholar]
  5. Diyaolu, B.O.; Bakare-Fatungase, O.D.; Ajayi, K.D. Generative AI Ethical Conundrum: Librarians as Artificial Intelligence Literacy Apostle in the Educational Space. In Navigating AI in Academic Libraries: Implications for Academic Research; Sacco, K., Norton, A., Arms, K., Eds.; IGI Global: Hershey, PA, USA, 2025; pp. 131–162. [Google Scholar]
  6. Mohamed, N.; Ramjani, S.; Hussain, A. Artificial intelligence in libraries: Benefits, challenges, and ethical considerations. Int. J. Adv. Appl. Res. 2024, 11, 72–76. [Google Scholar] [CrossRef]
  7. Nuechterlein, A.; Rotenberg, A.; LeDue, J.; Pavlidis, P.; Illes, J. Open science in play and in tension with patent protections. J. Law Biosci. 2023, 10, lsad016. [Google Scholar] [CrossRef] [PubMed]
  8. Zondi, N.P.; Epizitone, A.; Nkomo, N.; Mthalane, P.P.; Moyane, S.; Luthuli, M.; Khumalo, M.; Phokoye, S. A review of artificial intelligence implementation in academic library services. S. Afr. J. Libr. Inf. Sci. 2024, 90, 1–8. [Google Scholar] [CrossRef]
  9. Png, M.-T. At the tensions of south and north: Critical roles of global south stakeholders in AI governance. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 1434–1445. [Google Scholar]
  10. Barsha, S.; Munshi, S.A. Implementing artificial intelligence in library services: A review of current prospects and challenges of developing countries. Libr. Hi Tech News 2023, 41, 7–10. [Google Scholar] [CrossRef]
  11. Cox, A.M.; Pinfield, S. Research data management and libraries: Current activities and future priorities. J. Librariansh. Inf. Sci. 2014, 46, 299–316. [Google Scholar] [CrossRef]
  12. WSIS. Geneva Plan of Action; International Telecommunication Union: Geneva, Switzerland, 2003. [Google Scholar]
  13. WSIS. Tunis Agenda for the Information Society; International Telecommunication Union: Geneva, Switzerland, 2005. [Google Scholar]
  14. Habermas, J. The Theory of Communicative Action, Volume 2: Lifeworld and System: A Critique of Functionalist Reason; Beacon Press: Boston, MA, USA, 1987. [Google Scholar]
  15. Folorunso, A.; Olanipekun, K.; Adewumi, T.; Samuel, B. A policy framework on AI usage in developing countries and its impact. Glob. J. Eng. Technol. Adv. 2024, 21, 154–166. [Google Scholar] [CrossRef]
  16. Al-kfairy, M. Strategic integration of generative AI in organizational settings: Applications, challenges and adoption requirements. IEEE Eng. Manag. Rev. 2025, 1–14. [Google Scholar] [CrossRef]
  17. Gupta, P.; Ding, B.; Guan, C.; Ding, D. Generative AI: A systematic review using topic modelling techniques. Data Inf. Manag. 2024, 8, 100066. [Google Scholar] [CrossRef]
  18. Liu, Y.; Sullivan, P.; Sinnamon, L. AI transparency in academic search systems: An initial exploration. Proc. Assoc. Inf. Sci. Technol. 2024, 61, 1002–1004. [Google Scholar] [CrossRef]
  19. Mupaikwa, E. The application of artificial intelligence and machine learning in academic libraries. In Encyclopedia of Information Science and Technology, 6th ed.; Khosrow-Pour, M., Ed.; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 1–18. [Google Scholar]
  20. Reusens, M.; Adams, A.; Baesens, B. Large Language Models to make museum archive collections more accessible. AI Soc. 2025, 40, 4485–4497. [Google Scholar] [CrossRef]
  21. Zahid, I.A.; Joudar, S.S.; Albahri, A.; Albahri, O.; Alamoodi, A.; Santamaría, J.; Alzubaidi, L. Unmasking large language models by means of OpenAI GPT-4 and Google AI: A deep instruction-based analysis. Intell. Syst. Appl. 2024, 23, 200431. [Google Scholar] [CrossRef]
  22. Khan, R.; Gupta, N.; Sinhababu, A.; Chakravarty, R. Impact of conversational and generative AI systems on libraries: A use case large language model (LLM). Sci. Technol. Libr. 2024, 43, 319–333. [Google Scholar] [CrossRef]
  23. Sparkman, M.; Witt, A. Claude AI and literature reviews: An experiment in utility and ethical use. Libr. Trends 2025, 73, 355–380. [Google Scholar] [CrossRef]
  24. Allam, H.; Makubvure, L.; Gyamfi, B.; Graham, K.N.; Akinwolere, K. Text classification: How machine learning is revolutionizing text categorization. Information 2024, 16, 130. [Google Scholar] [CrossRef]
  25. Hankar, M.; Kasri, M.; Beni-Hssane, A. A comprehensive overview of topic modeling: Techniques, applications and challenges. Neurocomputing 2025, 628, 129638. [Google Scholar] [CrossRef]
  26. Tarmizi, W.; Rashid, A.; Sapri, N.; Yangkatisal, M. Natural Language Processing (NLP) Application For Classifying and Managing Tacit Knowledge in Revolutionizing AI-Driven Library. Inf. Manag. Bus. Rev. 2024, 16, 1094–1110. [Google Scholar] [CrossRef]
  27. Osagie, O.; Oladokun, B. Usefulness of artificial intelligence to safeguard records in libraries: A new trend. S. Afr. J. Secur. 2024, 2, 16803. [Google Scholar] [CrossRef]
  28. Tawalbeh, A.K. The role of AI in improving digital archiving in university libraries. J. Syst. Manag. Sci. 2024, 14, 455–469. [Google Scholar] [CrossRef]
  29. Memon, J.; Sami, M.; Khan, R.A.; Uddin, M. Handwritten optical character recognition (OCR): A comprehensive systematic literature review (SLR). IEEE Access 2020, 8, 142642–142668. [Google Scholar] [CrossRef]
  30. Terras, M. The role of the library when computers can read: Critically adopting Handwritten Text Recognition (HTR) technologies to support research. In The Rise of AI: Implications and Applications of Artificial Intelligence in Academic Libraries; Wheatley, A., Hervieux, S., Eds.; ACRL-Association of College & Research Libraries: Chicago, IL, USA, 2022; pp. 137–148. [Google Scholar]
  31. Mallikarjuna, C. An analysis of integrating artificial intelligence in academic libraries. DESIDOC J. Libr. Inf. Technol. 2024, 44, 124–129. [Google Scholar] [CrossRef]
  32. Jyoti, S.; Kumar, P. Reshaping the library landscape: Exploring the integration of artificial intelligence in libraries. IP Indian J. Libr. Sci. Inf. Technol. 2024, 9, 29–36. [Google Scholar] [CrossRef]
  33. Alomran, A.I.; Basha, I. An AI-based classification and recommendation system for digital libraries. Scalable Comput. Pract. Exp. 2024, 25, 3181–3199. [Google Scholar] [CrossRef]
  34. Rathinasabapathy, G.; Swetha, R.; Veeranjaneyulu, K. Emerging artificial intelligence tools useful for researchers, scientists and librarians. Indian J. Inf. Libr. Soc. 2023, 36, 163–172. [Google Scholar]
  35. Sharma, J.; Deepmala. Research support services in libraries: An introduction. J. Namib. Stud. 2023, 33, 1139–1144. [Google Scholar]
  36. Dardas, L.A.; Sallam, M.; Woodward, A.; Sweis, N.; Sweis, N.; Sawair, F.A. Evaluating research impact based on Semantic Scholar highly influential citations, total citations, and altmetric attention scores: The quest for refined measures remains illusive. Publications 2023, 11, 5. [Google Scholar] [CrossRef]
  37. Foley, K.; McLean, C.; De Zylva, R.; Asa, G.; Maio, J.; Batchelor, S.; Dzando, G.; Dimassi, A. Developing a critical imagination for how researchers can use artificially intelligent tools reflexively and responsibly during qualitative literature reviews. Int. J. Qual. Methods 2025, 24, 1–17. [Google Scholar] [CrossRef]
  38. Jhajj, K.S.; Jindal, P.; Kaur, K. Use of artificial intelligence tools for research by medical students: A narrative review. Cureus 2024, 16, e55367. [Google Scholar] [CrossRef]
  39. Devi, A.; Barooah, P.; Ahmed, Z. Enhancing literature review through AI-based research tools: A comparative study of SciSpace and Semantic Scholar. TIJER–Int. Res. J. 2024, 11, a779–a785. [Google Scholar]
  40. Patterson, B. Can AI help with that? The limitations of AI tools for information discovery, search and reviews. J. Electron. Resour. Med. Libr. 2025, 22, 56–59. [Google Scholar] [CrossRef]
  41. Li, D. Adoption of Artificial Intelligence in public and private libraries of China: Determinants, challenges, and perceived benefits. Prof. Inf. 2024, 33, e330416. [Google Scholar] [CrossRef]
  42. Ibrahim, S.E.A. Lost in the algorithm: Navigating the ethical maze of AI in libraries. S. Afr. J. Libr. Inf. Sci. 2025, 91, 1–11. [Google Scholar]
  43. Ofosu-Asare, Y. Cognitive imperialism in artificial intelligence: Counteracting bias with indigenous epistemologies. AI Soc. 2024, 40, 3045–3061. [Google Scholar] [CrossRef]
  44. Yeon, J.; Smith, M.; Youngman, T.; Patin, B. Epistemicide beyond borders: Addressing epistemic injustice in global library and information settings through critical international librarianship. Int. J. Inf. Divers. Incl. 2023, 7, 1–28. [Google Scholar] [CrossRef]
  45. Helm, P.; Bella, G.; Koch, G.; Giunchiglia, F. Diversity and language technology: How language modeling bias causes epistemic injustice. Ethics Inf. Technol. 2024, 26, 8. [Google Scholar] [CrossRef]
  46. Younas, A. Epistemic inclusion is necessary for diverse, global, and meaningful research. J. Clin. Epidemiol. 2024, 171, 111385. [Google Scholar] [CrossRef]
  47. Ruttkamp-Bloem, E. Epistemic just and dynamic AI ethics in Africa. In Responsible AI in Africa: Challenges and Opportunities; Eke, D.O., Wakunuma, K., Akintoye, S., Eds.; Palgrave Macmillan: Cham, Switzerland, 2023; pp. 13–34. [Google Scholar]
  48. Pham, D.T.; Joubert, T. Mitigating Generative AI’s negative impact on indigenous knowledge from international and Vietnamese laws perspectives. Technol. Regul. 2025, 2025, 194–213. [Google Scholar] [CrossRef]
  49. Gupta, V. AI experimentation policy for libraries: Balancing innovation and data privacy. Public Libr. Q. 2025, 1–21. [Google Scholar] [CrossRef]
  50. Cox, A. The ethics of AI for information professionals: Eight scenarios. J. Aust. Libr. Inf. Assoc. 2022, 71, 201–214. [Google Scholar] [CrossRef]
  51. Ocks, Y.; Salubi, O.G. Privacy paradox in industry 4.0: A review of library information services and data protection. S. Afr. J. Inf. Manag. 2024, 26, a1845. [Google Scholar] [CrossRef]
  52. Persadha, P.D.; Judijanto, L.; Susanti, M.; Reza, H.K. Data privacy and security protection strategies in library electronic resources management. Holistik Anal. Nexus 2024, 1, 15–122. [Google Scholar] [CrossRef]
  53. ISO/IEC 27001:2005; Information Technology-Security Techniques-Information Security Management Systems-Requirements. International Organization for Standardization: Geneva, Switzerland, 2005.
  54. ISO/IEC 27701:2019; Security Techniques—Extension to ISO/IEC 27001 and ISO/IEC 27002 for Privacy Information Management Requirements and Guidelines. International Organization for Standardization: Geneva, Switzerland, 2019.
  55. Ikwuanusi, U.F.; Adepoju, P.A.; Odionu, C.S. Advancing ethical AI practices to solve data privacy issues in library systems. Int. J. Multidiscip. Res. Updates 2023, 6, 33–44. [Google Scholar] [CrossRef]
  56. Balakumar, A.; Sawant, P.D.; Nimma, D.; Khan, S.A.; Siddiqua, A. Impact of AI-driven automation on job displacement and skill development: A societal perspective. In Proceedings of the 2024 IEEE Silchar Subsection Conference (SILCON 2024), Agartala, India, 15–17 November 2024; pp. 1–5. [Google Scholar]
  57. George, A.S. Artificial intelligence and the future of work: Job shifting not job loss. Partn. Univ. Innov. Res. Publ. 2024, 2, 17–37. [Google Scholar] [CrossRef]
  58. Zhang, J. The multidimensional impact of artificial intelligence on the job market and infrastructure: Focusing on employment shifts and smart libraries. J. Comput. Signal Syst. Res. 2025, 2, 107–123. [Google Scholar] [CrossRef]
  59. Huang, Y.-H. Exploring the implementation of artificial intelligence applications among academic libraries in Taiwan. Libr. Hi Tech 2024, 42, 885–905. [Google Scholar] [CrossRef]
  60. Monyela, M.; Tella, A. Leveraging artificial intelligence for sustainable knowledge organisation in academic libraries. S. Afr. J. Libr. Inf. Sci. 2024, 90, 1–11. [Google Scholar] [CrossRef]
  61. Buick, A. Copyright and AI training data—Transparency to the rescue? J. Intellect. Prop. Law Pract. 2025, 20, 182–192. [Google Scholar] [CrossRef]
  62. Aswath, L.; Reddy, A.N. Copyright law and the academic libraries: A perspective. Trends Inf. Manag. 2012, 8, 111–122. [Google Scholar]
  63. Esteve, A. Copyright and open access to scientific publishing. IIC-Int. Rev. Intellect. Prop. Compet. Law 2024, 55, 901–926. [Google Scholar] [CrossRef]
  64. Kumar, N. Rethinking intellectual property rights in the era of open science. Interdiscip. Stud. Soc. Law Politics 2023, 2, 1–3. [Google Scholar] [CrossRef]
  65. Ogungbeni, J.I.; Obiamalu, A.R.; Ssemambo, S.; Bazibu, C.M. The roles of academic libraries in propagating open science: A qualitative literature review. Inf. Dev. 2018, 34, 113–121. [Google Scholar] [CrossRef]
  66. Yu, P.K. The algorithmic divide and equality in the age of artificial intelligence. Fla. Law Rev. 2020, 72, 331–390. [Google Scholar]
  67. Vesna, L.; Sawale, P.; Kaul, P.; Pal, S.; Murthy, B. Digital divide in AI-powered education: Challenges and solutions for equitable learning. J. Inf. Syst. Eng. Manag. 2025, 10, 300–3008. [Google Scholar] [CrossRef]
  68. Gajbhiye, C.K. Impact of artificial intelligence (AI) in library services. Int. J. Multidiscip. Res. 2024, 6, 1–13. [Google Scholar]
  69. Peters, U. Explainable AI lacks regulative reasons: Why AI and human decision-making are not equally opaque. AI Ethics 2023, 3, 963–974. [Google Scholar] [CrossRef]
  70. Larsson, S.; Heintz, F. Transparency in artificial intelligence. Internet Policy Rev. 2020, 9, 1–16. [Google Scholar] [CrossRef]
  71. Mannheimer, S.; Bond, N.; Young, S.W.; Kettler, H.S.; Marcus, A.; Slipher, S.K.; Clark, J.A.; Shorish, Y.; Rossmann, D.; Sheehey, B. Responsible AI practice in libraries and archives: A review of the literature. Inf. Technol. Libr. 2024, 43, 1–29. [Google Scholar] [CrossRef]
  72. Jaidka, K.; Chen, T.; Chesterman, S.; Hsu, W.; Kan, M.-Y.; Kankanhalli, M.; Lee, M.L.; Seres, G.; Sim, T.; Taeihagh, A. Misinformation, disinformation, and generative AI: Implications for perception and policy. Digit. Gov. Res. Pract. 2025, 6, 11. [Google Scholar] [CrossRef]
  73. Khyat, J.; Halburgi, S.; Mukarambi, P.; Kundaragi, S. Addressing AI-generated misinformation: Using libraries as guardians in digital age. Juni Khyat J. 2025, 15, 28–34. [Google Scholar] [CrossRef]
  74. Saeidnia, H.R.; Hosseini, E.; Lund, B.; Tehrani, M.A.; Zaker, S.; Molaei, S. Artificial intelligence in the battle against disinformation and misinformation: A systematic review of challenges and approaches. Knowl. Inf. Syst. 2025, 67, 3139–3158. [Google Scholar] [CrossRef]
  75. Walters, W.H.; Wilder, E.I. Fabrication and errors in the bibliographic citations generated by ChatGPT. Sci. Rep. 2023, 13, 14045. [Google Scholar] [CrossRef] [PubMed]
  76. Kim, J. Academic library with generative AI: From passive information providers to proactive knowledge facilitators. Publications 2025, 13, 37. [Google Scholar] [CrossRef]
  77. Panda, S.; Sharma, V.; Sati, P.P.; Kaur, N. Ensuring ethical intelligence: Guiding the integration of AI in modern libraries. Integr. Educ. 2024, 87–106. [Google Scholar] [CrossRef]
  78. Thiruppathi, K. Librarian’s role in the digital age: Reimagining the profession in the era of information abundance. Int. J. Libr. Inf. Sci. 2024, 13, 1–9. [Google Scholar]
  79. Oloniruha, E.A.; Emmanuel, V.O.; Oladokun, B.D. Role of Generative AI in publishing and librarianship: Addressing challenges and ethical dimensions. In Digital Technologies and Library Management in Higher Institutions of Learning in Nigeria; Chadick Printing Press: Port Harcourt, Nigeria, 2024. [Google Scholar]
  80. Roche, C.; Wall, P.; Lewis, D. Ethics and diversity in artificial intelligence policies, strategies and initiatives. AI Ethics 2023, 3, 1095–1115. [Google Scholar] [CrossRef]
  81. Rafiq, R.A.M. Evolving special libraries in the European Union: From traditional models to AI-driven technologies, facts, findings, and implementations. The Library 2024, 2023, 11. [Google Scholar]
  82. Huang, Y.; Cox, A.M.; Cox, J. Artificial Intelligence in academic library strategy in the United Kingdom and the Mainland of China. J. Acad. Librariansh. 2023, 49, 102772. [Google Scholar] [CrossRef]
  83. Echedom, A.U.; Okuonghae, O. Transforming academic library operations in Africa with artificial intelligence: Opportunities and challenges: A review paper. New Rev. Acad. Librariansh. 2021, 27, 243–255. [Google Scholar] [CrossRef]
  84. Ghosh, A. Recovering knowledge commons for the global south. J. Digit. Humanit. Assoc. S. Afr. 2024, 5, 1–8. [Google Scholar] [CrossRef]
  85. Buitrago-Ciro, J.; Samokishyn, M.; Moylan, R.; Pérez, J.H.; Bakare-Fatungase, O.; Firdawsi, C. Bridging the AI gap: Comparative analysis of AI integration, education, and outreach in academic libraries. IFLA J. 2025, 1–12. [Google Scholar] [CrossRef]
Table 1. Alignment of WSIS Action Line C10 principles with AI-related concerns in libraries.
WSIS C10 Principles | Description | Relevance to AI in Libraries
Respect for peace and core values | Promote freedom, equality, solidarity, tolerance, and environmental respect | Ensures that AI tools do not propagate bias, discrimination, or exclusion; supports ethical decision-making in library services
Ethical awareness | Stakeholders should be aware of ethical dimensions of ICT use | Encourages librarians to critically assess AI algorithms, data sources, and deployment impacts
Protection and prevention | Safeguard privacy, prevent misuse, and combat harmful ICT practices | Guides responsible data handling, privacy protection, and mitigation of misinformation or malicious AI use
Academic engagement | Promote research on ethical ICT use | Supports the development of context-sensitive guidelines and evidence-based AI policies for libraries
Equity and inclusiveness | Reduce digital divides and ensure broad participation | Encourages AI adoption that improves access for marginalized groups and addresses local disparities
Respect for cultural and linguistic diversity | Recognize and integrate diverse perspectives | Ensures that AI-generated content respects local languages, cultures, and knowledge systems
Promotion of peace and sustainable development | Leverage ICTs for social good and sustainable growth | Guides AI integration to advance equitable access, sustainability, and community-oriented knowledge practices
Table 3. The role of libraries and librarians in mitigating ethical challenges in AI adoption.
Focus Area | Role of Libraries/Librarians | Key Actions/Recommendations | Link to WSIS C10 Principles | Key References
Ethical Oversight | Serve as ethical gatekeepers in AI use | Establish ethical oversight mechanisms; monitor AI deployment; develop institutional frameworks embedding ethical norms | Ethics, Transparency, Participatory Governance | [77]
AI Literacy and Ethical Awareness | Facilitate responsible use of AI by users | Promote AI literacy; bridge knowledge gaps | Human Development, Equitable Access | [5]
Professional Development | Build staff capacity to manage AI ethically | Continuous training in AI ethics; skill development; inclusive service strategies | Capacity Building, Human Development | [78]
Addressing AI Risks | Mitigate misinformation, bias, and intellectual property issues | Develop ethical guidelines; collaborate with multiple stakeholders; ensure algorithmic transparency | Ethics, Accountability, Equitable Access | [79]
Sustainable Knowledge Organization | Ensure AI enhances service efficiency and sustainability | Operational efficiency; user-centered services; collaborate with AI developers; monitor ethical compliance | Sustainable Development, Equitable Access | [60]
Community Engagement | Promote participatory and inclusive AI adoption | Engage users in decision-making; ensure transparency and inclusivity in AI services | Inclusivity, Participatory Governance | [13,77]
Table 4. Global perspectives on AI adoption and ethical considerations in libraries.
Region/Country | AI Adoption and Technologies | Use Cases/Applications | Ethical and Operational Considerations | Key References
Europe | AI systems for service automation and personalization | Special libraries: automated cataloging, personalized recommendations | Privacy, transparency, accountability, need for governance frameworks | [81]
Southeast Asia (Indonesia) | AI tools for accessibility enhancement | Digital content promotion, library services | Infrastructural limitations, limited inclusivity, accessibility gaps | [81]
United Kingdom | AI prioritizing ethical use and data protection | Service optimization, user data management | Data privacy, ethical AI frameworks, skill gaps | [82]
Mainland China | Large-scale AI deployment supported by government initiatives | Library management, service automation | Skill shortages, infrastructural deficits, rapid deployment risks | [82]
Sub-Saharan Africa (Nigeria, South Africa) | Humanoid robots, AI-driven service management | Routine task automation, user queries, cataloging, survey data collection | Need for comprehensive policies, equitable access, accountability, resource constraints | [83]
Global South (general) | Emerging AI implementations constrained by resources | Limited AI adoption due to infrastructural gaps | High costs, insufficient training, weak infrastructure, inadequate data protection, digital divide | [8,10,84,85]
Global North (general) | Advanced AI adoption supported by infrastructure and digital literacy | Enhanced service delivery, broader adoption | Generally better governance and regulatory frameworks, fewer structural barriers | [85]

Share and Cite

MDPI and ACS Style

Matsieli, M.; Mutula, S. Generative AI and the Information Society: Ethical Reflections from Libraries. Information 2025, 16, 771. https://doi.org/10.3390/info16090771

