Article

The Principle of Shared Utilization of Benefits Applied to the Development of Artificial Intelligence

by
Camilo Vargas-Machado
and
Andrés Roncancio Bedoya
*,†
Faculty of Law, Cooperative University of Colombia, 30th Street Caribbean Trunk Road, Santa Marta 470003, Colombia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Philosophies 2025, 10(4), 87; https://doi.org/10.3390/philosophies10040087
Submission received: 5 May 2025 / Revised: 7 July 2025 / Accepted: 8 July 2025 / Published: 5 August 2025

Abstract

This conceptual position paper starts from the diagnosis that artificial intelligence (AI) accentuates existing economic and geopolitical divides for communities in the Global South, which provide data without receiving rewards. Drawing on bioethical precedents concerning the fair distribution of genetic resources, the study proposes transferring the principle of benefit-sharing to the emerging algorithmic governance of AI. The discussion reveals an algorithmic concentration in the Global North, a dynamic that generates political, cultural, and labor asymmetries. Methodologically, the research was qualitative, with an interpretive paradigm and an inductive method, applying documentary review and content analysis techniques; two theoretical and two analytical categories were used. As a result, six emerging categories were identified that serve as pillars of the studied principle and are capable of reversing the gaps: equity, accessibility, transparency, sustainability, participation, and cooperation. The research confirms that AI, without a solid ethical framework, concentrates benefits in dominant economies; if this trend does not change, the Global South will become dependent, and its data will yield no equitable returns. Benefit-sharing is therefore proposed as a normative basis for fair, transparent, and participatory international governance.

1. Definition and Origin of the BSP Principle in Biomedical and Biogenetic Contexts

The present document examines the bioethical principle of benefit-sharing 1 applied to the development and advancement of artificial intelligence.
Vargas defines a bioethical principle as the original and essential foundation, based on ethical and moral ideals and rules, that guides and regulates human activities and serves as an axis for action and decision. According to the author, such a principle derives from the rational, emotional, and affective frameworks characteristic of social beings [1] (pp. 1–2).
From this perspective, this conceptual exploration, grounded in diverse currents of thought, demonstrates how the bioethical principle of benefit sharing (hereinafter, BSP) can prevent technology from exacerbating inequalities. Without equitable distribution mechanisms, innovation tends to concentrate wealth and restrict access to key resources, thereby widening the social gap. Consequently, the arguments outline normative pathways toward equitable AI and stimulate academic debate.

1.1. Structural Inequalities and Profit Concentration in AI Development

Sulasa and Kumar [2] show how artificial intelligence (hereafter, AI) redefines productivity and economic growth, albeit with a significant bias: those with advanced infrastructure gain a greater advantage, while the rest lag behind. This trend is confirmed by noting that the absence of regulations places the development of AI in the hands of private interests, thereby diminishing its positive effect on society [3]. Consequently, ensuring an equitable distribution of the benefits of AI requires promoting inclusive development and, in turn, strengthening global technological governance. Far from being a privilege reserved for a few, this technology can then function as a tool for collective transformation.

1.2. Normative and Ethical Bases of Fair Distribution in Science and Technology

The BSP demands an open and transparent dialogue among the parties involved. Agreements on intellectual property and benefit-sharing must address the needs and particularities of each context [4]. This orientation is grounded in equity and reciprocity in scientific research and guarantees that the benefits derived from knowledge, genetic resources, and biomedical innovations are distributed fairly among all parties involved [5,6].
Rheeder [7] argues that the proposal responds to the historical exploitation of communities and developing countries, whose scientific contributions have been used without adequate compensation [8]. Consequently, those who provide traditional knowledge, biological materials, or essential data for research should receive proportional returns, whether through preferential access to treatments, the strengthening of healthcare infrastructures, or meaningful participation in decision-making.

1.3. Innovation, Unequal Access and Private Appropriation of Knowledge

UNESCO [9] underscores that distributive justice 2 is a guiding axis of international cooperation, as it prevents abuses in the transfer of biotechnological resources [10]. UNESCO [11] likewise established an ethical, cross-cutting, and universal regulatory framework for the development and use of artificial intelligence, with the aim of guaranteeing human rights, social justice, and sustainable well-being. Although it begins with abstract principles, it precisely defines responsible implementation mechanisms. It advocates a human-centered AI committed to mitigating risks such as algorithmic discrimination and the global imbalance in technological access.

1.4. International Regulatory Frameworks and Inclusive Governance Mechanisms

The OECD [12] published five value-based principles addressed to all actors in the AI ecosystem, along with five practical recommendations aimed at governments and policymakers. Together, these build a global regulatory architecture that, without resorting to technical rigidity, guides the development of trustworthy, sustainable, and human-centered systems.
Khazieva, Pauliková, and Chovanov [13] add that the ISO/IEC 42001:2023 standard [14] for artificial intelligence management promotes the responsible development of AI across various sectors and institutions. The document is linked to Sustainable Development Goals 9 and 10 [15], which focus on equitable access to technology and the reduction of inequalities.
Nevertheless, the application of these guidelines faces obstacles, among them the resistance of pharmaceutical industries to modify their profit models and the absence of binding universal regulations [16]. Hence, the BSP reinforces equity and, in parallel, aims at the sustainability of biomedical research; in this way, it consolidates a system in which scientific progress does not exclude those who make it possible [17].

Theoretical and Critical Approaches to Distributive Justice and Emerging Technologies

Drawing on Rawls’s theory of justice [18] and Von Schomberg’s ethics of innovation [19], the BSP seeks to balance access to scientific advances with the protection of intellectual property, with the aim of fostering a more equitable knowledge economy [20] (pp. 153–154). Potential outcomes include funding new lines of research, access to innovative treatments, the strengthening of healthcare infrastructure, and the development of scientific capacities.
In a convergent vein, Ten [21] argues that the absence of effective enforcement mechanisms sustains an asymmetry between the countries of the Global North 3 and South [22].
The concept of BSP is dynamic and depends on the socioeconomic and cultural context; therefore, scientific cooperation must be based on models of equitable access to innovation, ensuring that technological advances are not concentrated exclusively in countries with greater economic power.
From this perspective, international regulation must strike a balance between the rights of innovators and the need to share knowledge, in order to promote global well-being and sustainable development.

1.5. Ethical Risks of Unregulated AI Development

Hinkelammert, through his critique of modern rationality [23] and his emphasis on human dignity as the core of the economy, influences the conceptualization of the BSP by giving it an ethical and redistributive dimension. The author argues that neoliberal economics subordinates life to the logic of the market, a situation that reinforces structural inequality. From this perspective, the BSP is conceived both as an economic distribution mechanism [24] and as a principle of social justice that prevents the exclusive appropriation of resources, especially biogenetic ones [25].
As Escobar [26] points out, Shiva’s work on biopiracy denounces the appropriation of biogenetic resources [27] while questioning the epistemological foundations of modern development and its colonial bias toward Indigenous knowledge.
In this vein, Shiva [27] warns that the patenting of Indigenous knowledge consolidates new forms of colonialism; therefore, she calls for the design of regulatory frameworks that ensure an equitable sharing of benefits. Accordingly, in the debate over the intellectual property of Indigenous knowledge, Shiva highlights the need for more inclusive global governance.

1.5.1. Evolution of the Principle of Benefit-Sharing in International Frameworks

The projection of the BSP is oriented toward the consolidation of international agreements that prioritize sustainability and shared responsibility [28]. In reciprocity, the Bonn Declaration [6] establishes that equity in the sharing of biological benefits constitutes a condition for sustainable development. This instrument emphasizes that fair access to the fruits of knowledge and human progress must benefit humanity as a whole.
As can be seen, the BSP has ceased to be a compensatory notion and has become an essential principle of distributive justice in access to biogenetic resources and the associated knowledge. In this regard, Rourke [29] states that this mechanism compensates communities for the use of their biodiversity and redefines the relationships among global actors according to criteria of equity. Therefore, the BSP seeks to harmonize access to genetic resources, traditional knowledge, and technological advances [30].

1.5.2. Legal Instruments and Implementation Challenges

The Nagoya Protocol [31] represents a regulatory advance that sets guidelines to ensure the BSP in the use of genetic resources; however, its implementation faces practical obstacles, such as the lack of rigorous monitoring and the non-compliance of transnational companies [29].
At present, the BSP is oriented toward distributive justice and seeks to ensure that resource-providing communities participate effectively in the benefits generated [32]. Consequently, the contemporary debate centers on perfecting regulatory frameworks and transforming the extractivist paradigm into one based on bioethical co-responsibility 4 and long-term sustainability.
From this perspective, Kamau and Winter [33] show that the unilateral appropriation of biological and cultural goods by transnational corporations has generated power asymmetries, a situation that spurred regulations such as the aforementioned Nagoya Protocol. In parallel, Morgera [34] indicates that emerging regulatory frameworks foster transparency in benefit-sharing and promote ethical business practices.

1.5.3. Bioprospecting, Innovation, and the Risks of Neocolonial Exploitation

Various studies highlight the urgency of designing effective mechanisms against biopiracy [35,36] so that developing countries can capitalize on their resources without suffering undue exploitation.
In light of the foregoing, the BSP prompts intense debate owing to tensions among equity, access to genetic resources, and intellectual property rights. While some regard it as an instrument of distributive justice, others warn that existing structures favor States with greater technological capacity and relegate local and Indigenous communities. Kamau and Winter [33] maintain that the appropriation of traditional knowledge without effective distribution mechanisms reinforces unequal power relations, a fact that has driven the strengthening of international regulation.
For instance, Gitter examines biotechnology patent litigation [37] and, similarly to Shiva [27], warns that the privatization of natural resources perpetuates the dispossession of ancestral knowledge. From this perspective, the current discussion is oriented toward establishing legal frameworks capable of ensuring that innovation and bioprospecting do not culminate in new forms of exploitation.

1.6. The BSP Applied to Artificial Intelligence

Originally conceived in the context of equitable access to natural resources, the BSP is linked to the development of AI by demanding the fair and sustainable distribution of its fruits.
Following Ostrom’s approach [38], the management of common resources requires governance systems that balance individual and collective interests; in the realm of AI, this entails public-private cooperation models that prevent the concentration of knowledge and technological power in a few entities and ensure the effective participation of all stakeholders.
Therefore, the adoption of policies that facilitate the exchange of data and trained models under open licenses fosters transparency and scientific reproducibility. Initiatives such as AI for Good articulate collaborative schemes designed to ensure that advances in AI benefit both productive sectors and vulnerable communities [39], promoting the redistribution of the value generated.
Another standout initiative is India's Data Empowerment and Protection Architecture (DEPA), which translated the ideal of equitable data sharing into a concrete framework, requiring banks, insurers, and fintechs to release financial data through the Account Aggregator network, subject to user authorization. As of 31 March 2025, 179.7 million consents and 2.13 billion eligible accounts had been recorded [40]. This trove feeds machine-learning models applied to rural micro-enterprises and underserved urban neighborhoods, shrinking historical bias and widening the flow of credit. In parallel, the ecosystem has channelled ₹1.328 trillion (USD 15.9 billion) in loans to 9.7 million customers (10.5% of the personal-credit market), with fintechs reporting 65% reductions in analytics costs thanks to AI and shared data [41]. Thus, artificial intelligence drives inclusion.
Without adequate regulations, technological innovations deepen economic inequalities by privileging those who already possess capital and access to information.
Brynjolfsson and McAfee [42] argue that AI redefines productivity and wealth creation in favor of economies with advanced digital infrastructure, while others remain anchored in obsolete industrial models.
Stiglitz [43] warns that innovation, when lacking redistributive policies, widens economic gaps—a phenomenon observable in AI when access to its applications is restricted to those with abundant resources. Consistently, he points out that innovation without redistribution concentrates power and resources, a reality that manifests itself in the centralization of algorithmic capabilities in the Global North.

1.6.1. Inclusive Governance and Equitable Access

It was found that, in the absence of clear rules on the transfer and use of technology, innovations end up benefiting a small group instead of reducing economic disparities between States; hence the urgency of signing agreements that ensure equitable access to these tools.
On the ethical plane, Floridi and Cowls [44] remind us that the benefits of automation must be balanced against its social costs and warn that mass job displacement can be mitigated through reskilling programs. For this reason, a regulatory framework grounded in ethical principles is indispensable, one that compels artificial intelligence to be governed by criteria of transparency and social responsibility.
Floridi and Cowls [44] conceive AI as a public good [45] and maintain that its design must guarantee transparency, accountability, and justice so as to avoid biases that perpetuate inequalities. Floridi himself warns that, without robust ethical structures, artificial intelligence may normalize inequities and replace principles of justice with parameters of technical efficiency [46].

1.6.2. Inclusion of the Global South in Technological Governance

The exclusion of Global South 5 countries from artificial-intelligence governance processes leaves them voiceless in the regulation of algorithms that affect their populations. Accordingly, applying the BSP to AI requires regulatory frameworks that promote transparency and equity, ensuring that technological impact is inclusive and contributes to sustainable development rather than deepening structural inequalities.
Taherdoost and Madanchian [47] analyze the positive effects of AI; hence, applying the BSP to its advances demands an equitable distribution of those benefits to prevent their concentration in a handful of corporations or governments. In this regard, Conn [48] adds that breakthroughs in this field should benefit the greatest possible number of people.
Open AI models, developed under public-access licenses, facilitate collaboration and spur technological progress without exclusive market barriers. Likewise, shared-data policies drive inclusive innovation and foster improvements in health, education, and sustainability that yield collective benefits.
Embedding the BSP in AI development bolsters equity in the distribution of its outcomes and ensures its responsible integration into the social fabric, while simultaneously preventing power concentration and encouraging technological progress that is accessible to all.
Equity—considered the central axis—ensures that benefits reach diverse social sectors and prevents technology from reproducing structural inequalities.
Lin [49] underscores that robotics and AI pose unprecedented moral dilemmas that cannot be resolved solely through regulations or algorithms. This concern centers on the need to anticipate unintended consequences of robotic behavior, ranging from the military use of robots to risks associated with privacy and automated discrimination [49] (pp. 29–30).

1.7. Ethical and Participatory Governance of Artificial Intelligence

With the aim of anticipating the risks associated with AI, various initiatives are being promoted. One of the most significant is the set of Asilomar AI principles, developed by the Future of Life Institute during the Beneficial AI conference [50]. This ethical compendium consists of twenty-three guidelines that establish a non-binding anticipatory framework for regulating artificial intelligence on the basis of justice, prevention, and international cooperation. These principles are grouped into three axes: open research, human-rights-based ethics, and mitigation of future risks. Among their most notable provisions are the prohibition of the military use of AI and the global demand for the non-deployment of lethal autonomous weapons.
According to Morandín-Ahuerma [51], these twenty-three principles constitute an essential normative architecture for ensuring the ethical and safe development of AI, grounded in human autonomy, dignity, the intrinsic value of life, and a notion of both technical and social justice.
Within this framework, Morandín-Ahuerma advocates international-scale scientific cooperation aimed at halting the arms race and guaranteeing a globally distributed benefit.
Added to this is the incorporation of the principle of redress for algorithmic harms [52,53], a key component of post-implementation justice. Morandín-Ahuerma holds that the evolution of artificial intelligence must rest on human ideals such as freedom, equity, and compassion, together with an ethical concern for the well-being of all sentient beings.
Although the Asilomar principles fall within the realm of soft law, they can be projected into the legal sphere through a multilevel articulation. The open-research principle converges with UNESCO’s Recommendation on the Ethics of AI [11], which elevates transparency and access to the category of obligations for sustainable development. The guidelines on dignity and autonomy find a counterpart in the Council of Europe’s Convention on Artificial Intelligence [54], whose conventional nature imposes mandatory compliance safeguards. In cases of unacceptable risk, the European regulation envisages explicit prohibitions and proportionate sanctions.
The twenty-three guidelines influence the policies of the European Commission, UNESCO, various national legislations, and debates on algorithmic ethics, digital justice, and automated rights [55] (pp. 7–8). At the level of nation-states, a tiered legislation is proposed according to risk levels, with oversight agencies endowed with sanctioning powers and clauses oriented toward the BSP, particularly to guarantee the effective inclusion of the Global South.

1.7.1. Structural Risks of Automation and Political Manipulation

Another risk associated with AI stems from automation, which is redefining employment by replacing human tasks; its net effect, however, depends on countervailing forces such as the productivity effect, capital accumulation, the deepening of automation, and the creation of new tasks. In this regard, Acemoglu and Restrepo [56] argue that it is advisable to avoid the income concentration and productivity stagnation arising from excessive automation in a capital-centered model.
According to Yampolskiy [57], political manipulation through AI is one of the most immediate and serious risks to democratic stability. For this reason, the author proposes establishing ethical limits, international governance frameworks, and greater transparency in the use of AI in political and media contexts.

1.7.2. Technocolonialism and Digital Sovereignty from the Global South

Ávila [58] argues that cyberspace has taken shape under a regime of technocolonialism that reconfigures domination through invisible infrastructures, adhesion contracts, and a platform economy based on data extractivism and surveillance. This computational colonialism imposes an order that reduces cognitive diversity, silences minority languages, and deprives peoples of their digital sovereignty, replacing public governance with private control. In response, Ávila advocates a policy of digital re-existence articulated from the Global South, grounded in community governance, free and sovereign licenses, technological disobedience, and the reclamation of infrastructural control.
Nur and Muntasir [59] note that AI is undergoing a process of structural de-democratization driven by the concentration of computational power, rising training costs, and hegemony in specialized conferences.
Thus, corporate hegemony generates its own epistemic agenda in which technological elites decide what is researched, how it is evaluated, and with what data. For this reason, it is essential to prevent the lack of technical reproducibility from fostering opaque ethical environments and unsupervised algorithmic surveillance practices, so that AI neither loses its orientation toward democratization nor becomes an instrument of cognitive and economic control.
Confronting this danger, Latonero [60] observes that the expansion of AI without binding international regulation threatens essential principles such as human dignity, individual autonomy, and algorithmic justice. Because automated systems operate without transparency, they contribute to mass digital surveillance and display structural biases that disproportionately affect vulnerable groups.
All of this occurs in the absence of corporate accountability and without the possibility of effective legal redress. According to Latonero, governing AI is tantamount to governing power and must therefore be exercised with mandatory transparency, democratic controls, and the right to redress.
Zaheer and Jones [61] criticize information nationalism and propose using healthcare blockchain together with quantum cryptography to safeguard privacy during the ethical exchange of information. As an example of effective interoperability, they cite the NHSH model implemented in the U.S. and highlight the role of platforms such as GISAID in accelerating emergency responses.
Beyond this, interconnected cities should act as dynamic nodes for epidemiological analysis, avoiding becoming centers of propagation, since mere technological integration is insufficient without global coordination. Equity in data access and processing is essential for strengthening urban resilience [61].

1.7.3. An AI Focused on Human Well-Being and the Common Good

With the aim of achieving algorithmic justice, Villagra [62] promotes a truly human and sustainable AI, grounded in fundamental digital rights, appropriate human oversight, and structural citizen participation. Conceived as a public good, AI must be governed by European ethical principles in order to aspire to democratic technological sovereignty and to operate on critical digital infrastructures that are accessible and socially controlled. For Villagra, the European regulatory proposal constitutes a starting point, yet it is insufficient without a profound cultural and institutional transformation. In Villagra’s view, responsible innovation is not a technical option but a political imperative: either the AI of the future is democratically regulated to serve the common good, or it consolidates invisible structures of domination in opaque intelligent systems.
For Orose [63], AI should be assessed by its capacity to generate genuine human happiness—autonomy, life purpose, meaningful relationships, and the satisfaction of basic needs—and not merely by its technical sophistication. Drawing on the Easterlin paradox and hedonic adaptation, Orose argues that the material benefits of AI are insufficient if they are not directed toward human flourishing. Consequently, Orose proposes an education for life meaning that prepares citizens to find direction and significance in a digital environment.
Ünver’s report [64] shows how technologies such as generative AI, multilingual natural language processing, and sentiment analysis are employed to fabricate emotionally effective disinformation. Examples include Russian bots in Ukraine, polarizing campaigns in India, trolls in the Philippines, and the Chinese Communist Party’s strategic propaganda. This manipulation is orchestrated through platforms such as Facebook, Twitter, and WhatsApp, which train AI systems to tailor narratives according to the region and the user’s profile.
Ünver warns that the current challenge lies not only in identifying false content but in understanding its contextual, emotional, and strategic insertion: emotional design, the imitation of discursive tones, and idiomatic adaptations are employed to influence on a massive, persistent, and imperceptible scale.

1.7.4. Posthumanism, Artificial Sovereignty and Algorithmic Biopolitics

Andrade [65] explains that general AI will consolidate a technogenic order in which post-human agencies capable of intervening in vital, legal, and cognitive processes will emerge. This horizon heralds a post-human sovereignty that redistributes power toward artificial systems with decision-making autonomy.
This outlines an algorithmic biopolitics that redefines the management of the living, deepens technical dehumanization, and dismantles traditional normative frameworks.
In the face of this scenario, Filippi, Bannò, and Trento [66] critically review the linkage between automation and employment and highlight its contextual complexity. Their task-based typology of labor impacts delineates processes of substitution, human-machine complementarity, and occupational transformation. The authors propose an agenda that addresses skill, gender, and sectoral gaps (manufacturing, routine services, and logistics), as well as issues of job quality, stability, and meaning, paying special attention to informal sectors and emerging countries.
Kamande and Stanworth show that AI reinforces monopolies through data concentration and a “winner-takes-all” logic [67] (p. 4), which weakens traditional antitrust law.
Thus, platforms, erected as essential digital infrastructures, manage information with algorithmic opacity that undermines consumer privacy and perpetuates structural biases.
In response, the authors suggest implementing alternative competition metrics, algorithmic audits, data portability, and dynamic regulation guided by algorithmic justice. However, this requires creating a global regulatory architecture that, through international coordination, balances the information-power asymmetry and incorporates the concept of algorithmic sovereignty 6, understood as the state's capacity to regulate systems that affect its citizenry.

1.8. Obstacles and Tensions in the Implementation of the Principle

The debate presented affects intellectual property regimes. Tully [68] observes that generative AI operates without defined human authorship or creative purpose, weakening the notion of originality and facilitating the extraction of value by platforms that concentrate control over information.
Krasodomski et al. [69] point out that AI, under an apparent neutrality, reproduces and exacerbates inequalities. To counter this, they propose multi-level governance that ensures algorithmic sovereignty, redistributive justice, democratic evaluation, and binding citizen participation.
For their part, Fjeld et al. [70] introduce the concept of structural normative resistance, defined as the institutional capacity to reject AI uses contrary to fundamental principles even under economic or political pressure. They also coin the category of infrastructural legitimacy to assess the norms and the material conditions that underpin them.

1.8.1. Digital Colonialism and Structural Inequalities

The discussion converges on a structural warning: the advance of AI, under conditions of unequal governance, threatens to deepen global disparities, as benefits concentrate in regions with greater infrastructure and technological capital while the Global South remains sidelined without equitable access to knowledge.
This dynamic constitutes a new digital colonialism 7 based on data extraction, epistemic exclusion, and the imposition of external values.
Along these lines, Hinkelammert [24] denounces that systems governed exclusively by market logic subordinate human life, a pattern now reproduced by AI models that extract data from the Global South without equitable return. Consistently, Benkler [71] and Raymond [72] describe open infrastructures as a path to democratize knowledge and prevent its corporate appropriation. In turn, Morgera [34] and Sen [73] emphasize that only a regulatory architecture articulating redistributive justice, epistemic inclusion, and international cooperation can ensure that technology serves the collective interest.
For this reason, the BSP operates as a systemic response to persistent imbalances: without it, AI would function as a device for the expanded reproduction of asymmetries when, in fact, it should serve human flourishing.
The findings highlight that AI, as currently developed, operates within an unequal global architecture that favors the concentration of benefits in advanced economies and relegates countries in the Global South to structural dependency. This configuration alters the distribution of resources, redefines the knowledge map, and establishes new forms of technological exclusion. Simultaneously, data generated in peripheral contexts are appropriated without equitable return, reinforcing Hinkelammert’s [24] critique of the extractive logic of cognitive capital.
It is also evident that the absence of binding mechanisms ensuring the effective participation of marginalized actors in algorithmic decision-making displaces local knowledge, homogenizing technological solutions and undermining epistemic plurality. In response, the BSP must be embraced as an ethical and functional imperative for building a technological ecosystem grounded in equity, transparency, and cooperation. Its adoption is a minimum condition for AI development to cease perpetuating historical inequalities and begin addressing humanity’s collective challenges.
On this basis, AI research must be oriented toward an international normative architecture that combines distributive justice, technological transparency, and equitable participation. The course cannot remain under the logic of concentrated intellectual and computational capital; it must be redirected toward principles that prioritize accessibility, cooperation, and collective well-being. This demands the integration of inclusive governance frameworks that enable countries of the Global South, Indigenous communities, and marginalized sectors to participate actively in the production, implementation, and oversight of algorithmic developments, ensuring the openness of models and data while safeguarding collective rights and digital security.

1.8.2. Multidimensional Impacts of the AI Deficit in the Global South

The lack of AI development in the Global South creates political, cultural, economic, and technological repercussions that perpetuate historical inequality. Kallot [74] notes that the concentration of AI in the North reinforces power asymmetries, since developing nations lack influence in international governance and are subjected to regulations designed for advanced economies, with risks of authoritarian surveillance and political manipulation [75].
Economically, AI-based automation threatens to erode the Global South’s comparative advantages in labor-intensive sectors, while the high costs of technological infrastructure constrain its competitiveness [76].
From a cultural perspective, adopting models developed in the North can deepen cultural subordination by imposing exogenous values; Manvi et al. [77] show how geographic biases in these systems reinforce pre-existing hegemonies.
Socially, AI-induced labor displacement heightens tensions and internal inequalities, while hindering a just transition toward inclusive digital economies [78,79].
Technologically, the digital divide prevents Southern countries from harnessing AI opportunities, as they lack adequate infrastructure, specialized talent, and access to advanced models controlled by Northern corporations [80]. This dependence fosters new forms of digital colonialism [81], where data from the South feed Northern systems without equitable return.
In this unequal scenario, Krasodomski et al. [69] emphasize that the concentration of benefits in a few countries widens existing gaps and constrains AI’s transformative potential to tackle global challenges such as poverty and climate change.

1.8.3. Ethical and Political Foundations of Fair AI

In light of the foregoing considerations, the BSP in the field of AI rests on the creation of an equitable technological ecosystem in which advances are accessible and generate distributed value for all stakeholders, from large corporations to the least advantaged communities.
Rawls [18] proposed structuring systems that maximize benefits for the most disadvantaged sectors. This approach is linked to the responsible development of AI, as it prevents the concentration of power in private entities and promotes open collaboration.
Rawls’s reflection on distributive justice has been taken up by Montero [82], who notes the need to protect vulnerable groups from the ethical risks of technological research; Rodrigues [83] reinforces this perspective by insisting on the urgency of policies that mitigate digital exclusion and facilitate affordable infrastructures.
In the field of AI, such an objective requires public policies aimed at reducing the digital divide and eliminating economic barriers to access advanced tools. An equitable system demands that the opportunities arising from technological progress be universally accessible [84,85].

1.8.4. Proposals for Inclusive and Sustainable Artificial Intelligence

Building AI models that integrate shared resources, open data, and collaborative governance frameworks constitutes a strategic pathway to democratize access to this technology. In this regard, Benkler [71] argues that information should be configured as a common good capable of driving collective innovation, because collaborative models promote an equitable distribution of knowledge and reduce dependence on centralized structures, which underscores the urgency of creating open platforms that guarantee the shared use of AI tools and stimulate decentralized innovation.
Transparency is indispensable for generating trust and ensuring that AI development is governed by verifiable ethical principles [86]. Ethics in the design and application of AI is crucial to prevent structural inequalities and guarantee accountability in its use. Such ethics requires public–private collaboration frameworks capable of ensuring that AI models benefit investors, technology companies, and society equally.
Therefore, AI development must be grounded in BSP so that algorithms do not reproduce exclusionary dynamics or discriminatory biases. Open access to data and models strengthens trust and enables public audits that verify system performance. In this sense, Mounkaila [87] insists that AI models must be subject to explainability and accountability criteria applicable to all sectors.

1.9. Participatory Governance and Equity in Benefit Distribution

Sen [73] argues that expanding human capabilities is essential for inclusive development; in the realm of AI, this requires decentralized platforms that guarantee universal access to knowledge and tools, reduce negative externalities, and promote BSP.
The notion of an "open-source community", defended by Raymond [88] (p. 2), is crucial for the advancement of collaborative AI because it enables multiple actors to optimize and evolve algorithms [89]. Open-source systems create a robust innovation environment where constant feedback facilitates error correction and the rapid, secure improvement of models.
Nonetheless, incorporating BSP faces significant challenges, such as the resistance of companies that wish to retain competitive advantages based on proprietary developments, the security risks involved, and privacy issues in handling sensitive data. These challenges call for well-defined governance frameworks and independent audits.
The case of OpenAI, which began with an open-access model and migrated toward more restrictive positions [90], illustrates the tension between accessibility and the protection of strategic advances.

Common Infrastructures and Digital Redistributive Justice

On a global scale, the BSP in AI encompasses the equitable distribution of innovations and the fostering of environments where technology reinforces sustainable development and collective empowerment [91]. To achieve that vision, international regulatory frameworks that encourage inclusive access to AI tools and auditing systems that verify their social utility are indispensable [92].
Open innovation ecosystems are decisive for democratizing technological development, because by providing access to data, algorithms, and models to diverse actors—from academic institutions to governmental entities—they prevent the privatization of knowledge and promote applications aimed at collective needs [93]. Consequently, the governance of these technologies requires participatory and decentralized structures that align their evolution with social priorities [94].
Within this framework, it was determined that the BSP in AI must integrate six basic components—equity, accessibility, transparency, sustainability, participation, and cooperation—that guarantee its effective application and prevent the concentration of advances within minority groups.
Properly supported, these six components make the BSP an ideal ethical tool for guiding innovation toward the common good and reducing disparities [95].

2. Current Context of AI Development

The World Economic Forum estimated that investment in AI could reach USD 52 billion [96].
Companies such as OpenAI and Google pay up to USD 1 million per year to AI experts, intensifying competition and disadvantaging startups without access to key talent. Consequently, the shortage of specialists affects research, innovative projects, and technological development [97].
Governments in the U.S. and Europe are adjusting regulations to prevent monopolies and balance the market [54].
The United States dominates AI with 40% of the market and more than USD 1 billion in government funding [98].
China plans to invest USD 150 billion, while Israel holds 11% of the market thanks to its military-digital sector [99].
The United Kingdom and Canada, for their part, are making strides in research and private-capital investment [100].
In Latin America, warnings have already been issued about the lack of academia-industry integration [101], which will slow AI development in the region.
In Africa, AI poses a major challenge [102], which is why its leaders have proposed a continental strategy to create high-value jobs and new industries and to promote innovation [103].
In this regard, the exclusion of developing countries from AI progress widens the technology gap, perpetuates economic dependence, and deepens structural lag in strategic sectors. As noted, the concentration of innovation in developed nations intensifies inequalities, restricts access to disruptive technologies, and limits emerging countries’ ability to compete in an increasingly automated global market [104].
For this reason, exclusion from AI deprives developing nations of key tools for their growth [105], consolidating a new form of digital colonialism in which knowledge and automation are monopolized by countries of the Global North.

3. Research Design

Based on the developments presented in the preceding sections, it is pertinent to note that the research underpinning this article adopted a qualitative design, guided by an inductive approach and framed within an interpretive paradigm. Employing documentary and bibliographic review techniques and using purposive sampling, it applied systematic content analysis.
Two reviewers carried out a manual content analysis in three successive cycles (open, axial, and selective), guided by a shared rubric of 18 codes. Twenty percent of the excerpts were double-coded, and discrepancies were resolved through discussion.
The initial search (Scopus, Web of Science, SSRN, UN repositories) yielded 87 records (1964–2025). After relevance, quality, and currency filters were applied, 26 key sources remained: 14 peer-reviewed articles, 5 policy papers (e.g., UNESCO [9,11], OECD [12]), 4 treaties, and 3 monographs. Full citations appear in the References section; the Supplementary Materials list the 26 documents with coding identifiers and access links.
The details of this process are presented below in Table 1.
The inquiry was structured around two theoretical categories: the BSP and theories of distributive justice. Correspondingly, two analytical categories were defined: the algorithmic governance of artificial intelligence and the gaps and inequities in the Global South.
As shown in Figure 1, the interaction between these dimensions gave rise to six emergent categories, the product of a rigorous inductive process based on manual coding applied to twenty-six academic and regulatory sources.
First, open coding was conducted, from which scattered ethical concepts emerged that, after systematic comparison, enabled the construction of the initial codes.
Subsequently, axial coding was implemented, whereby those codes were reorganized around recurring conceptual cores.
In the final phase, selective coding integrated the relationships among categories, revealing consolidated interpretive structures.
The empirical strategy consisted of a systematic review of books, book chapters, and scientific articles selected through detailed reading. The corpus was formed using a sample of sources related to the BSP and theories of distributive justice, supplemented by materials addressing the algorithmic governance of AI and structural inequalities in the Global South.
The inclusion and exclusion criteria focused on thematic relevance, currency, and academic rigor, as detailed in Figure 2.
Based on these parameters, the exploration of indexed databases and university catalogues formed an initial corpus of 87 sources, which was refined to 26 documents that offered substantive contributions to the thematic focus. Using these, a structured matrix was developed to organize the key information from each work consulted, facilitating comparative analysis, the identification of gaps and patterns, and theoretical coding.
Each source was coded through content analysis, enabling the linkage of theoretical and analytical categories, from which the six categories structuring the proposal emerged. The coding stages—open, axial, and selective—formed part of a progressive qualitative analysis process, particularly suited to inductive approaches:
At first, the texts were examined without preset labels; each fragment was analyzed for ethically charged ideas, allowing conceptual patterns to emerge freely. Later, through axial coding, the categories were conceptually reorganized, establishing connections among notions such as equity, participation, and transparency. In the final phase, these findings were integrated into a coherent ethical structure. The entire procedure was conducted according to contextual density and its linkage to the defined theoretical frameworks.
Three cross-tabulation matrices were used to compare the six emergent categories, complemented by reflective notes that reinforced conceptual robustness. This strategy was strengthened by sustained critical reading that supported the analysis.
The aim was to describe how the BSP can guide AI development, critically addressing that principle with the purpose of outlining a normative trend that fosters the development of technologies centered on collective well-being.

4. Results

The coding process concluded with the identification of six emerging categories that reflect fundamental dimensions of the BSP as applied to artificial intelligence.
The category of equity was the most recurrent, with 16 mentions, grouping codes related to distributive justice, the recognition of gaps, and the differential approach.
To the same extent, cooperation received 16 mentions, bringing together codes linked to multisectoral alliances, inter-institutional coordination, and shared governance models.
In third place, the category of participation registered 14 mentions, encompassing content referring to citizen inclusion, informed consultation, and the diversity of voices.
In fourth and fifth place, the categories of transparency and sustainability appeared on 12 occasions each. The former comprised codes such as clear access to algorithms, decision traceability, and responsible disclosure; the latter incorporated references to long-term balance, harm reduction, and an ecosystemic approach.
With somewhat less presence, the category of accessibility was identified in 8 records, associated with the elimination of technological barriers, universal availability, and sociotechnical adaptability.
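The frequency counts reported above are the product of a simple aggregation over the coding matrix described in the research design. As a purely illustrative sketch (the matrix entries below are invented; the actual matrix spans 26 sources and 18 codes), the tallying step can be expressed in a few lines of Python:

```python
from collections import Counter

# Hypothetical fragment of the coding matrix: source ID -> emergent categories
# assigned during selective coding (invented entries, for illustration only).
coding_matrix = {
    "S01": ["equity", "cooperation", "transparency"],
    "S02": ["participation", "equity"],
    "S03": ["sustainability", "cooperation", "accessibility"],
}

# Aggregate category frequencies across all coded sources.
frequencies = Counter(tag for tags in coding_matrix.values() for tag in tags)

for category, count in frequencies.most_common():
    print(f"{category}: {count}")
```

In the study itself this aggregation was performed manually by the two reviewers; the sketch only makes explicit the arithmetic behind the reported mention counts.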
The distribution of these frequencies made it possible to identify nuclei of high conceptual density, shaping a semantic architecture of a relational and convergent nature, rather than a hierarchical one. Hence, interdependent categories emerge, built from a systematic analysis of theoretical sources and pertinent documents.
The triangulation process revealed a substantive pattern of convergence among the sources analyzed, reinforcing the validity of the six emerging categories.
Equity and cooperation—owing to their high frequency—establish themselves as the backbone of the ethical discourse on artificial intelligence, featuring prominently in both documentary analysis and theoretical interpretations.
In line with this finding, participation acquires a cross-cutting character, supported by multiple pieces of evidence that point to inclusive and deliberative processes.
Transparency adds to this picture: its repeated appearance in the corpora underscores the need to ensure traceability in technological decisions.
Sustainability, in complementary fashion, integrates ecological concerns with long-term projections, confirming its normative relevance across various contexts.
Accessibility, while less frequent, appears consistently in records that highlight the reduction of technological gaps.
Although these recurrences are not distributed uniformly, together they delineate a solid semantic field characterized by interdependent relationships structured as a network rather than a linear sequence.
This procedure made it possible to uncover a dense, cohesive network of latent meanings which, when broken down into interpretive units, enabled the identification of recurring conceptual patterns as well as thematic connections that configure a coherent semantic horizon.

5. Discussion

The debate emerging from the analyzed documents challenges the structures that allow benefits to concentrate in the hands of a few technological actors. It can be asserted that, without a redistributive ethic, AI deepens the gaps between the Global North and the world’s margins. In response, the BSP appears as a normative solution, integrating justice, equity, and sustainability into algorithmic governance.
In this context, the recommendations of UNESCO and the OECD provide international ethical frameworks, yet a tension persists between non-binding guidelines and structural needs for transformation. From this perspective, the six proposed pillars—equity, cooperation, participation, transparency, sustainability, and accessibility—function as minimum conditions for democratizing AI development. Implementing these BSP pillars requires overcoming corporate resistance, crafting effective regulations, and generating collaborative frameworks that restore agency to historically excluded actors, recognizing that governing AI is tantamount to governing power.

5.1. Interpretation of Prior Positions and Results

The results obtained are understood as empirical validation of earlier warnings issued by thinkers such as Stiglitz [43], Hinkelammert [23,24], and Floridi [46], who, from different disciplines, cautioned about the regressive effects of innovation without redistribution.
The concentration of benefits in the Global North identified in the research reinforces theses that denounce a structural digital neocolonialism. Added to this is Shiva’s critique [26] of the epistemic dispossession of Indigenous communities, now mirrored in data extraction without reciprocity.
Aligned with the BSP, the findings extend proposals originally biomedical in nature to AI, linking the normative legacy of treaties such as the Nagoya Protocol with contemporary principles from the OECD and UNESCO. Confronting an exclusionary paradigm, the results converge with critical lines that advocate global legal frameworks and plural ethical governance so that AI promotes technological justice rather than multiplying inequalities.

5.2. Reflection on the Results

The corpus reveals a conceptual turn toward digital distributive justice; the BSP proposal fuses Floridi’s relational ethics [46] with Rawlsian principles [18,82], providing a framework that legitimizes equitable transfer of algorithmic value.
Alongside this, the OECD guidelines [12] and UNESCO recommendation [11] consolidate principles of transparency, accountability, and multilevel co-regulation. Nevertheless, the findings expose regulatory gaps in Global South contexts. Thus, the proposed system integrates operational glossaries, coding matrices, and concrete pilot cases—such as the India Data Access Policy—to translate abstractions into verifiable protocols.
Ultimately, the findings project an adaptable regulatory architecture: they reinforce the theory of algorithmic justice and furnish scalable tools for public policy, technical audits, and academic programs that foster socio-technological co-responsibility.

5.3. Proposal for a Global Regulatory Architecture

A Framework Convention on Safe and Equitable AI, promoted by Global South states and adopted by the UN, could serve as the guiding axis. It should include annexes for health, defense, finance, and education, as implementation would rest with a Global Council on Algorithmic Governance, currently non-existent. In this council, governments, the private sector, civil society, and Indigenous peoples would occupy rotating seats, advised by a UNESCO-WHO Scientific-Ethical Panel.
Within this body, a Secretariat empowered to inspect code that exceeds predefined compute thresholds could ensure oversight.
With interoperable public registration, ex-ante algorithmic impact assessment, and ISO 42001 certification, accredited audits become feasible, alongside a reparations fund financed by a levy on the energy consumption of data centers.
Disputes should be referred to a Digital Chamber of the International Court of Justice, authorized to fine or suspend deployments.
This scheme could interface with WIPO, the WTO, and the ITU, while the OECD principles would act as a bridge across jurisdictions.

6. Central Findings and Problem Solution

The study’s findings reveal that AI development is shaped by an asymmetric global architecture that concentrates benefits in advanced economies while pushing the Global South into structural dependency. This concentration restricts access to key technologies and perpetuates an extractive logic based on data exploitation without equitable return.
As a structural remedy, the BSP should be implemented through an international regulatory framework that prioritizes six components: equity, accessibility, sustainability, cooperation, transparency, and participation. Each component thus functions as an ethical condition for distributing the fruits of innovation without reproducing exclusion.
Moreover, designing collaborative ecosystems, adopting open licenses, and fostering multisector participation allow algorithmic governance to be reconfigured toward inclusive models. The overarching perspective redefines AI as a common good aimed at collective well-being.

Orientation of Future Research

Future research must deepen the design of binding legal mechanisms that operationalize the BSP in artificial intelligence, prioritizing schemes balancing innovation and distributive justice. It is urgent to develop comparative models among Global South countries to assess the differential impact of algorithmic governance policies.
Beyond this, studies can explore the effectiveness of open platforms for democratizing technical knowledge, while investigating how collaborative infrastructures alter value flows among unequal actors.
Accordingly, regulatory simulations modeling scenarios of algorithmic redistribution under cooperative frameworks would be advisable.
Parallel inquiries should examine the interplay between Indigenous knowledge systems and disruptive technologies, assessing ethical pathways for digital co-creation.

7. Conclusions

The rapid expansion of AI without a robust ethical framework threatens to deepen existing inequalities. Under current conditions, technological benefits tend to concentrate in advanced economies, relegating the Global South to structural dependency.
The study’s findings show that the present architecture of algorithmic power reinforces an extractive logic: data originating in peripheral contexts are exploited without equitable return, consolidating new forms of digital exclusion.
In this scenario, the BSP and its six pillars offer a normative foundation to redirect AI development toward the common good. This perspective demands an international legal architecture that promotes distributive justice, technological transparency, and plural participation in the creation and control of intelligent systems.
AI must therefore be conceived as a global common good rather than an exclusive corporate asset. The BSP can serve as an indispensable normative response, ensuring that algorithmic innovation becomes an equitable, democratic, and sustainable transformative force capable of preventing historical exclusions and fostering genuinely human progress. It provides a framework that rigorously addresses ethical and distributive tensions.
This reflection drives the adoption of criteria that transform each advance into an opportunity to democratize access and favor fairness in the distribution of technological outcomes. Consequently, it calls for shifting from centralized models to collaborative and open structures, enabling regulatory mechanisms that curb power concentration and prevent extractive practices.
Critiquing existing asymmetries encourages strategies that address the needs of both established actors and historically marginalized communities. The urgent challenge is to integrate ethical governance practices and operational transparency, guiding AI development toward an inclusive system that maximizes collective benefits and aligns innovation with social justice and sustainability objectives, ensuring technology meets the aspirations of all humanity within a framework of shared progress.

Supplementary Materials

The following supplementary information can be downloaded at: https://www.mdpi.com/article/10.3390/philosophies10040087/s1.

Author Contributions

Conceptualization, resources, funding, methodology, data curation, writing, review and editing, C.V.-M.; validation, writing, formal analysis, preparation of the original draft, A.R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Universidad Cooperativa de Colombia through project No. 3339, in the amount of USD 4,526.85.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Universidad Cooperativa de Colombia through a project commencement certificate dated 30 January 2023, with project code INV3434. No human subjects participated in the research.

Informed Consent Statement

No subjects participated in the development of the research that gave rise to this article.

Data Availability Statement

The data generated are available in Supplementary Materials.

Acknowledgments

We would like to thank the “Universidad Cooperativa de Colombia” for its support in carrying out this research.

Conflicts of Interest

The authors declare no conflicts of interest. Furthermore, the funding institution had no role in the design of the study; in the collection, analysis, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence

Notes

1.
The bioethical principle of benefit-sharing is expressed in Article 15 of the Universal Declaration on Bioethics and Human Rights, issued by UNESCO in 2005. It promotes the fair and equitable distribution of the benefits derived from the use of knowledge, data, or resources.
2.
Distributive justice is an ethical doctrine that seeks to allocate resources and opportunities equitably, prioritizing the well-being of historically disadvantaged groups. Rawls, J., in his 1971 work "A Theory of Justice", argues that a fair society is one that structurally improves the situation of its least favored members.
3.
The Global North is a geopolitical concept that refers to countries with high technological, economic, and political capacity, which lead the global dynamics of production, innovation, and governance. Although its delimitation is not strictly geographical, it commonly includes Western Europe, the United States, Canada, Japan, and certain advanced economies of the Asia-Pacific. In this regard, Escobar, A., in his 1995 book "Encountering Development: The Making and Unmaking of the Third World", analyzes in depth the power relationship between the "First World" (North) and the "Third World" (South).
4.
Co-responsibility is an ethical approach that requires the equitable participation of all actors involved in the generation, control, and distribution of technological benefits. In this field, the German philosopher Karl-Otto Apel is widely recognized as the principal theorist who systematized this concept within the framework of his universalist discourse ethics.
5.
The Global South is an analytical category that identifies the countries and communities historically relegated within the international system, characterized by low levels of industrialization, technological dependence, and limited influence in global decision-making processes. Although it is not strictly geographical, the concept points to structural inequalities in access to resources, knowledge, and power.
6.
Algorithmic sovereignty is described as the institutional and legal capacity of states to regulate the design, implementation, and supervision of algorithmic systems that affect their populations. Citations in the body of this article indicate where this concept can be explored in greater depth.
7.
Digital colonialism is a contemporary form of domination that operates through data extraction, surveillance, and information control exercised by Global North corporations over peripheral countries. Kwet, M., in his work "Digital Colonialism: US Empire and the New Imperialism in the Global South", offers a compelling proposal that expands on this concept.

References

  1. Vargas, C. Tendencias y principios en las corrientes bioéticas. Rev. Colomb. Bioét. 2021, 16, 1–22. Available online: http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S1900-68962021000200119 (accessed on 16 May 2025).
  2. Sulasa, T.; Kumar, R. Redefining the impetus of opulence and artificial intelligence on human productivity and work ethos. Russ. Law J. 2023, 11, 2628–2635. Available online: https://www.russianlawjournal.org/index.php/journal/article/view/3119/1914 (accessed on 9 May 2025).
  3. O’Sullivan, S.; Nevejans, N.; Allen, C.; Blyth, A.; Leonard, S.; Pagallo, U.; Holzinger, K.; Holzinger, A.; Sajid, M.I.; Ashrafian, H. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2019, 15, 1–19. [Google Scholar] [CrossRef] [PubMed]
  4. Bernal, D. Bioderecho internacional. Derecho y Realidad. 2015, 13, 33–54. [Google Scholar] [CrossRef]
  5. UN. Convention on Biological Diversity; Secretary-General of the United Nations: New York, NY, USA; Available online: https://treaties.un.org/doc/treaties/1992/06/19920605%2008-44%20pm/ch_xxvii_08p.pdf (accessed on 29 June 2024).
  6. UNEP. Bonn Guidelines on Access to Genetic Resources and Fair and Equitable Sharing of the Benefits Arising out of their Utilization; Secretariat of the Convention on Biological Diversity, United Nations Environment Programme: Montreal, QC, Canada; Available online: https://www.cbd.int/doc/publications/cbd-bonn-gdls-en.pdf (accessed on 19 September 2024).
  7. Rheeder, A. Benefit-sharing as a global bioethical principle: A participating dialogue grounded on a Protestant perspective on fellowship. Sabinet Afr. J. 2019, 51, 1–11. Available online: https://hdl.handle.net/10520/EJC-1b39c2c633 (accessed on 21 May 2025). [CrossRef]
  8. CNB. El Concepto de Justicia en Bioética. Volnei Garrafa. In Secretaría de Salud de los Estados Unidos Mexicanos, Comisión Nacional de Bioética, México D.F. Available online: https://www.gob.mx/salud%7Cconbioetica/videos/el-concepto-de-justicia-en-bioetica-dr-volnei-garrafa (accessed on 31 July 2024).
  9. UNESCO. Universal Declaration on Bioethics and Human Rights: Background, Principles and Application; UNESCO: Paris, France; Available online: https://unesdoc.unesco.org/ark:/48223/pf0000179844 (accessed on 24 October 2024).
  10. CEPAL. Protocolo de Nagoya sobre Acceso a los Recursos Genéticos y Participación Justa y Equitativa en los Beneficios que se Deriven de su Utilización al Convenio sobre la Diversidad Biológica. Montreal, Quebec, Canadá: Comisión Económica para América Latina y el Caribe. Available online: https://observatoriop10.cepal.org/es/media/148 (accessed on 10 July 2024).
  11. UNESCO. Recommendation on the Ethics of Artificial Intelligence; General Conference of the United Nations Educational, Scientific and Cultural Organization: Paris, France; Available online: https://unesdoc.unesco.org/ark:/48223/pf0000380455 (accessed on 24 November 2021).
  12. OECD. OECD AI Principles; Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449); Organisation for Economic Co-operation and Development: Paris, France; Available online: https://www.oecd.org/en/topics/ai-principles.html (accessed on 12 May 2019).
  13. Khazieva, N.; Pauliková, A.; Chovanová, H. Maximising synergy: The benefits of a joint implementation of knowledge management and artificial intelligence system standards. Mach. Learn. Knowl. Extr. 2024, 6, 2282–2302. [Google Scholar] [CrossRef]
  14. ISO/IEC 42001:2023; Information Technology—Artificial Intelligence—Management System. International Organization for Standardization: Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/81296.html (accessed on 12 July 2025).
  15. United Nations. Sustainable Development Goals: Transforming Our World, 2030; United Nations: New York, NY, USA, 2014; Available online: https://www.undp.org/sites/g/files/zskgke326/files/migration/gh/SDGs-Booklet_Final.pdf (accessed on 12 January 2025).
  16. Vallejo, F.; Álvares, D. Distribución justa y equitativa de los beneficios derivados del uso de los recursos genéticos y los conocimientos tradicionales. Revisión de algunas propuestas normativas y doctrinarias para su implementación. Rev. Ius Praxis 2023, 29, 184–203. [Google Scholar] [CrossRef]
  17. Garrafa, V.; Porto, D. Bioética de intervención. In Diccionario Latinoamericano de Bioética; Tealdi, J., Ed.; UNESCO: Bogotá, Colombia, 2008; p. 161. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000161848 (accessed on 16 May 2025).
  18. Rawls, J. A Theory of Justice; Harvard University Press: Cambridge, MA, USA, 1999; pp. 1–560. [Google Scholar] [CrossRef]
  19. Von Schomberg, R. A vision of responsible research and innovation. In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society; Owen, R., Bessant, J., Heintz, M., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2013; pp. 51–74. [Google Scholar] [CrossRef]
  20. Ten Have, H.; Do Céu Patrão Neves, M. Dictionary of Global Bioethics; Springer Nature: Cham, Switzerland, 2021; pp. 1–688. [Google Scholar] [CrossRef]
  21. Ten Have, H. Justice and global health: Planetary considerations. In Environmental/Ecological Considerations and Planetary Health; Benatar, S., Brock, G., Eds.; Cambridge University Press: Cambridge, UK, 2021; pp. 316–325. [Google Scholar] [CrossRef]
  22. UNCTAD. United Nations Conference on Trade and Development; United Nations: New York, NY, USA, 1964; Available online: https://unctadstat.unctad.org/EN/FromHbsToDataInsights.html (accessed on 1 July 2024).
  23. Hinkelammert, F. Crítica a la Razón Utópica; Desclée de Brouwer: Bilbao, Spain, 2002; pp. 1–406. Available online: http://repositorio.uca.edu.sv/jspui/bitstream/11674/1756/1/Cr%C3%ADtica%20de%20la%20raz%C3%B3n%20ut%C3%B3pica.pdf (accessed on 5 July 2025).
  24. Hinkelammert, F. El Huracán de la Globalización: La Exclusión y la Destrucción del Medio Ambiente Vistos Desde la Teoría de la Dependencia, 1st ed.; Universidad Centroamericana José Simeón Cañas: San Salvador, El Salvador, 1999; pp. 17–33. Available online: http://hdl.handle.net/11674/1006 (accessed on 4 July 2025).
  25. Hinkelammert, F. Aprovechamiento compartido de los beneficios del progreso. In Proceedings of the VI Congreso Internacional de la Red Bioética-UNESCO, 1st ed.; UNESCO: Panama City, Panama, 2016; pp. 1–7. Available online: http://repositorio.uca.edu.sv/jspui/bitstream/11674/824/1/Aprovechamiento%20compartido%20de%20los%20beneficios%20del%20progreso.pdf (accessed on 5 July 2025).
  26. Escobar, A. Una Minga Para el Postdesarrollo: Lugar, Medio Ambiente y Movimientos Sociales en las Transformaciones Globales; Programa Democracia y Transformación Global, Fundación Ford: Lima, Peru, 2010; pp. 1–222. Available online: http://bdjc.iia.unam.mx/items/show/46 (accessed on 1 July 2025).
  27. Shiva, V. Biopiracy: The Plunder of Nature and Knowledge; South End Press: Boston, MA, USA, 1997; pp. 127–133. ISBN 9780896085553. [Google Scholar]
  28. De Campos Rudinsky, T. Multilateralism and the global co-responsibility of care in times of a pandemic: The legal duty to cooperate. Ethics Int. Aff. 2023, 37, 206–231. [Google Scholar] [CrossRef]
  29. Rourke, M. Access and Benefit Sharing in Practice: Fostering Equity Between Users and Providers of Genetic Resources. J. Sci. Policy Gov. 2016, 13, 1–20. Available online: https://www.sciencepolicyjournal.org/uploads/5/4/3/4/5434385/rourke.pdf (accessed on 5 July 2025).
  30. Lavelle, J.; Wynberg, R. Benefit Sharing Under the BBNJ Agreement in Practice; Decoding Marine Genetic Resource Governance Under the BBNJ Agreement; Springer Nature: Cham, Switzerland, 2025; pp. 271–281. [Google Scholar] [CrossRef]
  31. United Nations. Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity; United Nations: New York, NY, USA, 2010; Available online: https://treaties.un.org/pages/ViewDetails.aspx?src=TREATY&mtdsg_no=XXVII-8-b&chapter=27 (accessed on 4 July 2025).
  32. Craik, N. Equitable marine carbon dioxide removal: The legal basis for interstate benefit sharing. In Climate Policy; Taylor & Francis: Abingdon, UK, 2025; pp. 1–5. [Google Scholar] [CrossRef]
  33. Kamau, E.; Winter, G. Equity and Innovation in International Biodiversity Law. Common Pools of Genetic Resources; Routledge: London, UK, 2013; pp. 1–456. [Google Scholar] [CrossRef]
  34. Morgera, E. Corporate Environmental Accountability in International Law; Oxford University Press: Oxford, UK, 2020; pp. 1–352. ISBN 0198738048. [Google Scholar] [CrossRef]
  35. Aubertin, C. Biopiraterie. In L’Encyclopédie du Développement Durable; Piéchaud, J., Ed.; Association 4D: Paris, France, 2006; pp. 117–120. Available online: https://horizon.documentation.ird.fr/exl-doc/pleins_textes/divers17-03/010069240.pdf (accessed on 5 July 2025).
  36. Ribeiro, S. Biopiraterie und geistiges Eigentum—Zur Privatisierung von gemeinschaftlichen Bereichen. In Mythen Globalen Umweltmanagements; Görg, C., Brand, U., Eds.; Westfälisches Dampfboot: Münster, Germany, 2002; pp. 118–136. Available online: https://www.ufz.de/export/data/2/81980_596-goerg-brand.pdf (accessed on 2 July 2025).
  37. Gitter, D. International conflicts over patenting human DNA sequences in the United States and the European Union: An argument for compulsory licensing and a fair use exemption. N. Y. Univ. Law Rev. 2001, 76, 1623–1691. Available online: https://heinonline.org/HOL/LandingPage?handle=hein.journals/nylr76&div=54&id=&page= (accessed on 5 July 2025).
  38. Ostrom, E. Governing the Commons; Cambridge University Press: Cambridge, UK, 1990; pp. 1–181. [Google Scholar] [CrossRef]
  39. International Telecommunication Union (ITU). AI for Good. Available online: https://aiforgood.itu.int/ (accessed on 4 February 2025).
  40. ADBI Institute. The APAC State of Open Banking and Open Finance Report; Cambridge Judge Business School, Cambridge Centre for Alternative Finance & Asian Development Bank Institute: Cambridge, UK, 2025; p. 105. Available online: https://www.adb.org/sites/default/files/publication/1065246/apac-state-open-banking-and-open-finance-report.pdf (accessed on 3 June 2025).
  41. Forbes India. More Data Access, and a New Way of Lending. Forbes India [Online] 16 May 2025. Available online: https://www.forbesindia.com/article/take-one-big-story-of-the-day/more-data-access-and-a-new-way-of-lending/95985/1 (accessed on 3 June 2025).
  42. Brynjolfsson, E.; McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies; W. W. Norton & Company: New York, NY, USA, 2014; pp. 1–86. Available online: http://digamo.free.fr/brynmacafee2.pdf (accessed on 20 June 2025).
  43. Stiglitz, J. Globalization and Its Discontents; W. W. Norton & Company: New York, NY, USA, 2002; pp. 1–142. [Google Scholar] [CrossRef]
  44. Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. Harv. Data Sci. Rev. 2019, 1, 1–14. [Google Scholar] [CrossRef]
  45. Hintze, A.; Adami, C. Artificial intelligence for social good. arXiv 2019, arXiv:2412.05450. [Google Scholar] [PubMed]
  46. Floridi, L. Ethics of Artificial Intelligence; Oxford University Press: Oxford, UK, 2019; pp. 1–414. [Google Scholar] [CrossRef]
  47. Taherdoost, H.; Madanchian, M. Artificial intelligence and knowledge management: Impacts, benefits, and implementation. Computers 2023, 12, 72. [Google Scholar] [CrossRef]
  48. Conn, A. AI Should Provide a Shared Benefit for as Many People as Possible; Future of Life Institute, 2018; pp. 1–5. Available online: https://futureoflife.org/recent-news/shared-benefit-principle/ (accessed on 20 June 2025).
  49. Lin, P. Why ethics matters for robots. In Robot Ethics: The Ethical and Social Implications of Robotics; Lin, P., Abney, K., Bekey, G., Eds.; MIT Press: Cambridge, MA, USA, 2014; ISBN 978-0-262-01666-7. [Google Scholar]
  50. Future of Life Institute (FLI). Asilomar AI principles. Available online: https://futureoflife.org/ (accessed on 21 December 2024).
  51. Morandín-Ahuerma, F. Twenty-three Asilomar principles for artificial intelligence and the future of life. In Principios Normativos Para una Ética de la Inteligencia Artificial; Center for Open Science: Charlottesville, VA, USA, 2023; pp. 1–25. [Google Scholar] [CrossRef]
  52. Cotino, L. Riesgos e impactos del big data, la inteligencia artificial y la robótica y enfoques, modelos y principios de la respuesta del Derecho. Rev. Gen. Derecho Adm. 2019, 50, 1–37. Available online: https://dialnet.unirioja.es/servlet/articulo?codigo=6823516 (accessed on 18 June 2025).
  53. Suárez, F. Inteligencia artificial: Una mirada crítica al concepto de personalidad electrónica de los robots. SADIO Electron. J. Inform. Oper. Res. 2024, 32, 214–229. [Google Scholar] [CrossRef]
  54. European Commission (EC). Making artificial intelligence available to all—How to avoid big tech’s monopoly on AI? Eur. Comm. 2024, 1–2. Available online: https://europa.eu/newsroom/ecpc-failover/pdf/speech-24-931_en.pdf (accessed on 20 October 2024).
  55. Bartlett, B. Towards accountable, legitimate and trustworthy AI in healthcare: Enhancing AI ethics with effective data stewardship. New Bioeth. 2025, 1–25. [Google Scholar] [CrossRef] [PubMed]
  56. Acemoglu, D.; Restrepo, P. Artificial intelligence, automation, and work. In The Economics of Artificial Intelligence; Agrawal, A., Gans, J., Goldfarb, A., Eds.; University of Chicago Press: Chicago, IL, USA, 2018; pp. 197–236. Available online: http://www.nber.org/books/agra-1 (accessed on 20 June 2025).
  57. Yampolskiy, R. Artificial Intelligence Safety and Security; Taylor & Francis: New York, NY, USA, 2018; ISBN 9780815369820. [Google Scholar]
  58. Ávila, R. ¿Soberanía digital o colonialismo digital? Sur Rev. Int. Derechos Hum. 2018, 15, 15–28. Available online: https://sur.conectas.org/wp-content/uploads/2018/07/sur-27-espanhol-renata-avila-pinto.pdf (accessed on 12 June 2025).
  59. Nur, A.; Muntasir, W. The de-democratization of AI: Deep learning and the compute divide in artificial intelligence research. arXiv 2020, arXiv:2010.15581. [Google Scholar]
  60. Latonero, M. Governing Artificial Intelligence: Upholding Human Rights & Dignity; Data & Society: New York, NY, USA, 2018; pp. 1–37. Available online: https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf (accessed on 21 June 2025).
  61. Allam, Z.; Jones, D. On the coronavirus (COVID-19) outbreak and the smart city network: Universal data sharing standards coupled with artificial intelligence to benefit urban health monitoring and management. Healthcare 2020, 8, 46. [Google Scholar] [CrossRef] [PubMed]
  62. Villagra, J. Robótica e inteligencia artificial más humanas y sostenibles. Pap. Econ. Esp. 2021, 169, 165–177. Available online: https://www.funcas.es/wp-content/uploads/2021/09/PEE-169_11.pdf (accessed on 20 June 2025).
  63. Orose, L. The potential impacts of artificial intelligence on the happiness of human beings. Asian J. Econ. Financ. Manag. 2021, 3, 151–159. Available online: https://www.journaleconomics.org/index.php/AJEFM/article/view/98 (accessed on 20 June 2025).
  64. Ünver, A. The Role of Technology: New Methods of Information Manipulation and Disinformation; Centre for Economics and Foreign Policy Studies: Istanbul, Turkey, 2023; pp. 1–211. [Google Scholar] [CrossRef]
  65. Andrade, R. Problemas filosóficos de la inteligencia artificial general: Ontología, conflictos éticos-políticos y astrobiología. Argum. Razón Téc. 2023, 26, 275–302. [Google Scholar] [CrossRef]
  66. Filippi, E.; Bannò, M.; Trento, S. Automation technologies and their impact on employment: A review, synthesis and future research agenda. Technol. Forecast. Soc. Change 2023, 191, 1–21. [Google Scholar] [CrossRef]
  67. Kamande, J.; Stanworth, M. AI and Data Monopolies: Navigating the legal and ethical implications of antitrust regulations; Researchgate: Boston, MA, USA, 2024; pp. 1–8. Available online: https://www.researchgate.net/publication/384628540 (accessed on 25 June 2025).
  68. Tully, R. Who owns artificial intelligence? In Proceedings of the 26th International Conference on Engineering and Product Design Education, Birmingham, UK, 5–6 September 2024; Technological University Dublin: Dublin, Ireland; Aston University: Birmingham, UK, 2024; Volume 131, pp. 1–6. Available online: https://www.designsociety.org/1353/E%26PDE+2024+-+26th+International+Conference+on+Engineering+%26+Product+Design+Education (accessed on 20 June 2025).
  69. Krasodomski, A.; Gwagwa, A.; Jackson, B.; Jones, E.; King, S.; Lane, M.; Mantegna, M.; Schneider, T.; Siminyu, K.; Tarkowski, A. Artificial Intelligence and the Challenge for Global Governance; Chatham House: London, UK, 2024; pp. 1–70. Available online: https://www.chathamhouse.org/sites/default/files/2024-06/2024-06-07-ai-challenge-global-governance-krasodomski-et-al.pdf (accessed on 29 June 2025).
  70. Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI; Berkman Klein Center: Cambridge, MA, USA, 2024; Available online: https://cyber.harvard.edu/publication/2020/principled-ai (accessed on 20 June 2025).
  71. Benkler, Y. The Wealth of Networks: How Social Production Transforms Markets and Freedom; Yale University Press: New Haven, CT, USA, 2006; Available online: https://www.benkler.org/Benkler_Wealth_Of_Networks.pdf (accessed on 10 June 2025).
  72. Raymond, E. The cathedral and the bazaar. Know. Techn. Pol. 1999, 12, 23–49. [Google Scholar] [CrossRef]
  73. Sen, A. Development as Freedom; Oxford University Press: Oxford, UK, 1999. [Google Scholar] [CrossRef]
  74. Kallot, K. The Global South Needs to Own Its AI Revolution. Available online: https://www.project-syndicate.org/commentary/global-south-must-use-ai-to-build-better-world-by-kate-kallot-2025-02 (accessed on 12 February 2025).
  75. Nadisha, A.; Kester, L.; Yampolskiy, R. Transdisciplinary AI observatory—Retrospective analyses and future-oriented contradistinctions. Philosophies 2021, 6, 6. [Google Scholar] [CrossRef]
  76. Korinek, A.; Stiglitz, J. Artificial intelligence and its implications for income distribution and unemployment. In The Economics of Artificial Intelligence; Agrawal, A., Gans, J., Goldfarb, A., Eds.; University of Chicago Press: Chicago, IL, USA, 2018; pp. 349–390. [Google Scholar] [CrossRef]
  77. Manvi, R.; Khanna, S.; Burke, M.; Lobell, D.; Ermon, S. Large language models are geographically biased. arXiv 2024. [Google Scholar] [CrossRef]
  78. Silva, V. The global governance of artificial intelligence and the future of work. In Handbook on the Global Governance of AI; Edward Elgar: Cheltenham, UK, 2024; pp. 1–18. [Google Scholar] [CrossRef]
  79. Warr, M.; Rather, S. Sociological analysis of artificial intelligence, benefits, concerns and its future implications. Int. J. Indian Psychol. 2024, 12, 1825–1830. [Google Scholar] [CrossRef]
  80. Muldoon, J.; Cant, C.; Graham, M.; Spilda, F.U. The poverty of ethical AI: Impact sourcing and AI supply chains. AI Soc. 2023, 40, 529–543. [Google Scholar] [CrossRef]
  81. Couldry, N.; Mejias, U. Data colonialism: Rethinking big data’s relation to the contemporary subject. Televis. New Media 2019, 20, 336–349. [Google Scholar] [CrossRef]
  82. Montero, A. Contexto histórico del origen de la ética de la investigación científica y su fundamentación filosófica. Ethika+ 2020, 1, 11–29. [Google Scholar] [CrossRef]
  83. Rodrigues, R. Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. J. Responsib. Technol. 2020, 4, 1–12. [Google Scholar] [CrossRef]
  84. Vega, L. Conocimiento, diferencia y equidad. In Políticas Educativas, Diferencia y Equidad; Fuentes, L., Jiménez, B., Eds.; Universidad Central: Bogotá, Colombia, 2015; pp. 32–42. Available online: https://repositorio.unal.edu.co/bitstream/handle/unal/56990/9789582602277.pdf?sequence=1#page=32 (accessed on 20 June 2025).
  85. Ciruzzi, M. La “Cenicienta Bioética”: Justicia distributiva en un mundo injusto e inequitativo. In Anuario de Bioética y Derechos Humanos; Tinan, E., Ed.; Instituto Internacional de Derechos Humanos: Buenos Aires, Argentina, 2020; pp. 99–116. Available online: https://www.iidhamerica.org/pdf/16039938717206anuario-de-bioetica-2020-final609c3be0d60ae.pdf (accessed on 26 June 2025).
  86. Larsson, S.; Heintz, F. Transparency in artificial intelligence. Internet Policy Rev. 2020, 9, 1–16. [Google Scholar] [CrossRef]
  87. Mounkaila, O. Artificial Intelligence and Ethics; Institut d’électronique et d’informatique Gaspard Monge, Université Gustave Eiffel: Paris, France, 2024; pp. 1–8. Available online: https://www.researchgate.net/publication/381574628 (accessed on 13 June 2025).
  88. Raymond, E. Cultivando la Noosfera. Available online: http://sindominio.net/biblioweb/telematica/noosfera.html (accessed on 12 June 2000).
  89. Forti, M. A legal identity for all through artificial intelligence: Benefits and drawbacks in using AI algorithms to accomplish SDG 16.9. In The Ethics of Artificial Intelligence for the Sustainable Development Goals; Mazzi, F., Floridi, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2023; pp. 253–267. [Google Scholar] [CrossRef]
  90. Bhattacharya, P.; Kumar, V.; Verma, A.; Gupta, D.; Sapsomboon, A.; Viriyasitavat, W.; Dhiman, G. Demystifying ChatGPT: An in-depth survey of OpenAI’s robust large language models. Arch. Comput. Methods Eng. 2024, 31, 4557–4600. [Google Scholar] [CrossRef]
  91. Ramakrishnan, R.; Pillai, S. Open innovation and crowd sourcing: Harnessing collective intelligence. In Evolving Landscapes of Research and Development: Trends, Challenges, and Opportunities; Datta, D., Jain, V., Halder, B., Raychaudhuri, U., Kumar, S., Eds.; IGI Global: Hershey, PA, USA, 2025; pp. 235–260. [Google Scholar] [CrossRef]
  92. Broekhuizen, T.; Dekker, H.; Faria, P.; Firk, S.; Khoi, D.; Sofka, W. AI for managing open innovation: Opportunities, challenges, and a research agenda. J. Bus. Res. 2023, 167, 1–14. [Google Scholar] [CrossRef]
  93. Fasnacht, D. Open innovation ecosystems. In Open Innovation Ecosystems: Creating New Value Constellations in the Financial Services; Springer International Publishing: Cham, Switzerland, 2018; pp. 131–172. Available online: https://link.springer.com/chapter/10.1007/978-3-319-76394-1_5 (accessed on 20 June 2025).
  94. Calzada, I. Artificial intelligence for social innovation: Beyond the noise of algorithms and datafication. Sustainability 2024, 16, 8638. [Google Scholar] [CrossRef]
  95. Gao, J.; Wang, D. Quantifying the use and potential benefits of artificial intelligence in scientific research. Nat. Hum. Behav. 2024, 8, 2281–2292. [Google Scholar] [CrossRef] [PubMed]
  96. World Economic Forum (WEF). Emerging Technologies. To Fully Appreciate AI Expectations, Look to the Trillions Being Invested. Available online: https://www.weforum.org/stories/2024/04/appreciate-ai-expectations-trillions-invested/ (accessed on 3 April 2024).
  97. Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent. The New York Times, 22 October 2017. Available online: https://www.nytimes.com/2017/10/22/technology/artificial-intelligence-experts-salaries.html (accessed on 2 September 2024).
  98. Lynch, S. AI Index: State of AI in 13 Charts. Stanford Institute for Human-Centered Artificial Intelligence (HAI); Stanford University: Stanford, CA, USA, 2024; pp. 1–14. Available online: https://hai.stanford.edu/news/ai-index-state-ai-13-charts (accessed on 3 February 2025).
  99. CNBC. China wants to be a $150 billion world leader in AI in less than 15 years. CNBC, 21 July 2017. Available online: https://www.cnbc.com/2017/07/21/china-ai-world-leader-by-2030.html (accessed on 10 February 2025).
  100. Cusson, S.; Shea, P. AI and Private Capital in Canada—Context and Legal Outlook; McCarthy Law Firm: Toronto, Canada, 2025; pp. 1–3. Available online: https://www.mccarthy.ca/en/insights/blogs/spotlight-can-asia/ai-and-private-capital-canada-context-and-legal-outlook (accessed on 8 January 2025).
  101. Rodríguez, L.; Trujillo, G.; Egusquiza, J. Revolución industrial 4.0: La brecha digital en Latinoamérica. Rev. Interdiscip. Koin. 2021, 6, 147–162. [Google Scholar] [CrossRef]
  102. Ade-Ibijola, A.; Okonkwo, C. Artificial intelligence in Africa: Emerging challenges. In Responsible AI in Africa: Challenges and Opportunities; Springer International Publishing: Cham, Switzerland, 2023; pp. 101–117. [Google Scholar] [CrossRef]
  103. African Union (AU). Continental Artificial Intelligence Strategy; African Union: Addis Ababa, Ethiopia, 2024; Available online: https://au.int/en/documents/20240809/continental-artificial-intelligence-strategy (accessed on 5 February 2025).
  104. Padmashree, G. Governing artificial intelligence in an age of inequality. Glob. Policy 2021, 12, 21–31. [Google Scholar] [CrossRef]
  105. Coleman, B. Human–machine communication, artificial intelligence, and issues of data colonialism. In The SAGE Handbook of Human-Machine Communication; Guzmán, A., McEwen, R., Jones, S., Eds.; Sage: New York, NY, USA, 2023; pp. 350–356. [Google Scholar] [CrossRef]
Figure 1. Detail of the list of emerging categories. Source: Prepared by the authors based on the data obtained. [The numbers printed on each card (16, 16, 14, 12, 12, 8) indicated how many text fragments were assigned to the corresponding emerging category during the qualitative analysis. The coding process was carried out in three successive stages: (i) open coding, which identified the basic units of meaning related to the principle of benefit-sharing; (ii) axial coding, which regrouped synonyms and eliminated redundancies; and (iii) selective coding, during which six emerging categories—Equity, Accessibility, Transparency, Sustainability, Participation, and Cooperation—consolidated the findings. Therefore, each frequency reflected the conceptual density observed in twenty-six primary sources and corroborated the relative relevance of each dimension in the study].
Figure 2. Inclusion and exclusion criteria of the research. Source: Prepared by the authors based on the data obtained. [Figure 2 shows, in parallel, the six inclusion criteria (coverage period 1964–2025; thematic relevance to justice, bioethics, and AI in the Global South; qualitative-theoretical methodology; demonstrable scholarly impact; explicit ethical framework; and accessibility in English or Spanish) and the five complementary exclusion criteria (outdated bibliography, peripheral focus, lack of methodological rigor, lack of a North–South perspective, and non-peer-reviewed sources) used to refine the initial corpus from 87 to 26 documents].
Table 1. Codebook Snapshot.
Code ID | Label | Definition | Sample Quote
C03 | Human-centred AI | Systems designed for human flourishing | “AI must be human-centred…” UNESCO [9,11]
C11 | Transparency duty | Obligation to disclose algorithmic logic | “Stakeholders should ensure transparency…” OECD [12]
Source: Prepared by the authors based on the methodological design.