Article

Towards Navigating Ethical Challenges in AI-Driven Healthcare Ad Moderation †

by Abraham Abby Sen 1,‡, Jeen Mariam Joy 2,*,‡ and Murray E. Jennex 1,*,‡
1 Computer Information Systems, Paul & Virginia Engler School of Business, West Texas A&M University, 2501 4th Ave, Canyon, TX 79016, USA
2 Foundations Department, School of Education, Virginia Commonwealth University, Richmond, VA 23284, USA
* Authors to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the 58th Hawaii International Conference on System Sciences, Big Island, HI, USA, 7–10 January 2025.
‡ These authors contributed equally to this work.
Computers 2025, 14(9), 380; https://doi.org/10.3390/computers14090380
Submission received: 17 August 2025 / Revised: 5 September 2025 / Accepted: 5 September 2025 / Published: 11 September 2025
(This article belongs to the Section AI-Driven Innovations)

Abstract

The growing use of AI-driven content moderation on social media platforms has intensified ethical concerns, particularly in the context of healthcare advertising and misinformation. While artificial intelligence offers scale and efficiency, it lacks the moral judgment, contextual understanding, and interpretive flexibility required to navigate complex health-related discourse. This paper addresses these challenges by integrating normative ethical theory with organizational practice to evaluate the limitations of AI in moderating healthcare content. Drawing on deontological, utilitarian, and virtue ethics frameworks, the analysis explores the tensions between ethical ideals and real-world implementation. Building on this foundation, the paper proposes a set of normative guidelines that emphasize hybrid human–AI moderation, transparency, the redesign of success metrics, and the cultivation of ethical organizational cultures. To institutionalize these principles, we introduce a governance framework that includes internal accountability structures, external oversight mechanisms, and adaptive processes for handling ambiguity, disagreement, and evolving standards. By connecting ethical theory with actionable design strategies, this study provides a roadmap for responsible and context-sensitive AI moderation in the digital healthcare ecosystem.

1. Introduction

This conceptual paper examines the ethical considerations and challenges that social media platforms face when setting policies for online healthcare marketing; discussions of how to tackle these challenges are still in their infancy [1,2]. The motivation for this article, and the observed phenomenon that sparked the discussion, was the appearance on social media of several healthcare ad campaigns from seemingly legitimate health service providers [3,4]. The ads in question are well produced, featuring an authentic-looking logo bearing the name of a health service provider and individuals dressed as physicians presenting information. However, these ads promote false medical information, and the fraudsters who create them pose severe public health risks [5,6]. For example, one such ad promotes a natural lung-cleansing spray that allegedly allows people to smoke without health risks. While domain experts might recognize the ads as fake, many people are convinced by them; some even consider taking up smoking because of the spray's supposed protective benefits [7]. The AI that moderates these ads fails to filter them effectively or to call for human review [8].
Ethics is a construct of human consciousness and of our way of perceiving the environment around us from a moral standpoint [9]. Framing our discussion is challenging, since ethics depends on the observer's frame of reference. The immediate question is on whom we must place the ethical burden of moral action. Should we develop AI with a moral compass, or must we leave moral considerations in the hands of humans? Studies posit that ethical issues must be left with humans, since AI, which cannot question itself, is fundamentally incapable of morality [10,11]. We extend this idea by asking: which humans? Is it the responsibility of consumers to be fully aware of their environment? Some would argue that it is; however, false marketing (now aided by AI) is improving rapidly and blurring the boundary between fact and fiction. Fraudsters use AI-generated images and speech that are highly compelling. Is it reasonable to expect a non-expert to have the domain knowledge to spot irregularities that even an expert might overlook? Alternatively, perhaps the social media platforms that use AI to filter ads, and that serve as the stage on which misinformation is presented, must bear the burden of ensuring their platforms are not used for immoral gains [12]. Perhaps the AI developer or researcher must consider the ethical underpinnings before advancing the technology. The objective of this paper is not to describe this complex issue of moral responsibility involving AI and misleading healthcare ads on social media from every human perspective, nor to prescribe a one-stop solution to the problem [13,14]. We hope to expand the existing literature [10,11] and explore some ideas from the perspective of the social media platform, using an interpretivist paradigm so that the problem is better understood and open to study. Our focus is on social media platforms rather than fraudsters, users, or developers. Platforms like Facebook, Twitter, and Instagram are primary information sources for billions, facilitating both communication and the rapid spread of misinformation, especially in healthcare [15]. This misinformation can cause public health crises, erode trust in medical institutions, and harm individuals. Therefore, social media companies have a legal, moral, and ethical responsibility to regulate healthcare misinformation [16,17,18].
The US Department of Health and Human Services defines health misinformation as “information that is false, inaccurate, or misleading according to the best available evidence at the time” [19]. Dr. Vivek H. Murthy (Surgeon General of the United States) states, “Health misinformation is a serious threat to public health. It can cause confusion, sow mistrust, harm people’s health, and undermine public health efforts.” The US Food and Drug Administration [20] claims, “Health fraud scams refer to products that claim to prevent, treat, or cure diseases or other health conditions, but are not proven safe and effective for those uses. Health fraud scams waste money and can lead to delays in getting proper diagnosis and treatment. They can also cause serious or even fatal injuries.” We point out that some literature differentiates between health misinformation and disinformation. Both concern the spread of false information; the latter is intended to deceive the user, while the former is unintentional [21,22]. However, regardless of intention, social media companies’ ethical and moral responsibility to their users to adopt countermeasures remains essentially the same. Therefore, for simplicity, we refer to the spreading of wrongful healthcare information as misinformation for the remainder of this paper. Our discussion is timely and impactful, as we usher in the new age of AI in healthcare.
Social media platforms like Facebook, YouTube, and Instagram use AI to regulate the ads and posts on their platforms. However, can AI take on ethical responsibility on behalf of these platforms? Since (at present) AI cannot doubt itself, it cannot develop a moral compass [10,11,23]. AI filtering programs depend on their training data to identify potentially offensive, politically or racially charged language, yet they do not filter out misinformation even though its consequences can be severe. To make matters worse, once an ad passes the filtering system, the recommendation algorithm promotes it if it generates user engagement. Controversial health ads often contain no offensive language and remain undetected; because they generate high engagement, they are promoted to reach even more users. Fraudsters, in effect, have the very algorithm that is supposed to filter them working in their favor.
Human moral paradigms depend largely on our ability to question ourselves, justify our viewpoints, and relate to others. AI cannot explain its decision-making and has no genuine empathy or doubt; the human notion of ethics and morality therefore remains out of its reach [11]. So, if we leave moral considerations in human hands, when does a human step in? AI is commonly defined as machines that can perform tasks a human accomplishes with thought and planning [24]; in essence, it is a technology that seeks to imitate human intelligence. However, we have yet to develop an AI that accomplishes tasks in a way that resembles human thinking and reasoning [25]. How, then, does an AI know when to ask for human help? The processing behind deep learning (DL) models is hidden behind layers of parameters, which makes it difficult to understand why and how the AI reaches its decisions. Given this black-box nature, how are humans supposed to actively moderate or intervene in an AI decision-making process? Currently, certain rule/heuristic-based systems flag specific situations for manual review by a human moderator. However, such a system does not circumvent AI's limitations; it only accounts for the contexts described by the rule/heuristics.
To advance this argument, the paper begins by outlining foundational ethical frameworks and evaluating their applicability to healthcare content moderation on social media platforms. We examine the ethical complexities of health misinformation, emphasizing its potential harms and the tensions it creates between public health, free expression, and platform responsibility. Building on this, we assess the ethical limitations of AI systems and the organizational challenges of translating ethical intent into practice. We then develop a set of normative guidelines grounded in ethical theory and informed by real-world constraints, followed by a governance framework that operationalizes these guidelines through institutional mechanisms. The paper concludes by reflecting on its limitations and proposing directions for future research.

2. Ethical Theories and Their Uses and Challenges

Ethics is a branch of philosophy that aims to study morality scientifically, where morality refers to questions of right and wrong, good and evil, and obligations and responsibilities. The normative schools of thought take a prescriptive approach, aiming to specify which actions should and should not be taken [26,27], whereas behavioral schools seek to describe what people actually do. Prior work summarizes several ethical viewpoints on the topic of ethics in AI [11]. For our discussion we shortlist three based on their applicability to large numbers of users; since social media is used by billions worldwide, ethical paradigms addressed to the masses fit our discussion.

2.1. The Utilitarian Perspective

Utilitarianism is a consequentialist ethical theory that evaluates the morality of actions based on their outcomes, aiming to maximize overall happiness and minimize suffering for the majority of individuals [28,29]. When applied to healthcare misinformation on social media, the negative consequences, such as increased public health risks, strained healthcare resources, economic costs, psychological distress, and eroded trust in health authorities, far outweigh any potential positive effects like increased engagement on the website [19,30,31]. From a utilitarian standpoint, combating healthcare misinformation is ethically justified and a moral necessity. Social media platforms can implement robust moderation mechanisms to identify and remove misinformation, thus reducing public health risks and economic burdens. Providing accurate, accessible health information and promoting credible health sources can improve health literacy and public trust [32]. However, utilitarianism faces challenges, including objectively measuring and calculating utility, as happiness and suffering are subjective. It may also lead to outcomes that violate justice or individual rights, as it could justify sacrificing minority interests for the greater good. Nevertheless, allowing unchecked healthcare misinformation contradicts utilitarian principles because the harm it causes outweighs any benefits of unrestricted free speech.

2.2. Deontological Perspective

The deontological perspective is an ethical theory that emphasizes the importance of duty, rules/heuristics, and principles in determining the morality of actions rather than their consequences [33]. Deontological ethics, often associated with philosophers such as Kant, focuses on actions’ inherent rightness or wrongness, irrespective of their outcomes [11]. It posits that individuals have specific moral duties and obligations that they are bound to fulfill, regardless of the consequences. These duties derive from rational principles or rules/heuristics that govern moral behavior and are considered inherently right or wrong. Kant introduced the concept of the categorical imperative, a fundamental tenet of deontological ethics. According to Kant, the categorical imperative commands certain morally obligatory actions without reference to specific goals or outcomes. The deontological school of ethics would seem to support the suggestion [11,34] that morality is best left with humans, given the inability of AI to doubt its decision-making. In line with such reasoning, social media companies are morally obligated to regulate harmful ads and health information on their platforms. Challenges include perceived inflexibility and absolutism, potentially leading to conflicts between duties or morally undesirable outcomes if consequences are ignored. This approach often conflicts with utilitarianism, which focuses on outcomes. In social media, deontological ethics demands that platforms uphold truth and prevent harm by ensuring a safe and reliable information-sharing environment.

2.3. Virtue Ethics

Virtue ethics emphasizes the importance of moral character and virtues like honesty, integrity, and responsibility [35,36]. As influential entities, social media platforms should embody these virtues by actively combating healthcare misinformation, enhancing credibility, and contributing to a more informed and ethical online community. Virtues are habitual dispositions to do good, leading to a fulfilling life [26]. Such a flourishing life is what Aristotle called eudaimonia, the highest human good [37]. In virtue ethics, moral decision-making involves practical wisdom and considering what a virtuous person would do in different situations. This approach values truthfulness, making misinformation morally problematic because it undermines virtues like honesty, integrity, responsibility, and compassion. Social media platforms are morally obligated to ensure the accuracy of shared information, promote trustworthy content, and foster a culture of virtuous behavior. Challenges include the lack of clear guidance for specific actions and the potential for cultural relativism, where different cultures have varying conceptions of virtue.

3. Theoretical Ethics vs. Applied Ethics

Theories provide abstract, consistent guidelines for right and wrong [38]. Ethical practice involves applying these principles in complex, ambiguous, real-world contexts with conflicting interests [39]. Practitioners must navigate practical constraints, cultural norms, and unforeseen consequences, often requiring compromises and judgment calls. While theories strive for objectivity and predictability, practice is influenced by personal biases and situational factors. This disparity highlights the challenge of adhering to ideal ethical standards amid real-world practicalities, where ethical decision-making involves subjective judgment and emotional engagement.
Additionally, while ethical theories provide prescriptive norms on how people ought to behave, ethical behavior in practice often involves descriptive accounts of how people behave [36]. This gap between prescription and practice highlights the challenges of implementing ethical standards in real-world settings. Table 1 details some of the common differences in ethical theory and practice [40,41] when considering utilitarian, deontological, and ethical perspectives.
The distinction between theoretical ethics and applied ethics is not a philosophical detour but a central challenge in the governance of AI-driven healthcare ad moderation. Platforms must translate broad principles, such as maximizing welfare, fulfilling duty, and cultivating integrity, into workflows that handle billions of posts and ads in real time. Without this translation, ethical theory risks remaining an academic ideal, while practice devolves into expediency, leaving harmful content unchallenged. This gap is especially consequential in healthcare, where misleading claims, such as advertisements for a so-called lung-cleansing spray or other fraudulent remedies, circulate widely and exploit vulnerable audiences. AI systems that should protect users often amplify such content, since engagement-driven algorithms reward sensationalism even when it undermines public health.
Bridging this gap requires acknowledging that ethical theories offer principled direction but not ready-made solutions. Utilitarian reasoning may justify intervention, yet opaque algorithms complicate implementation. Deontological duties demand honesty and responsibility, yet profit motives often conflict with these obligations. Virtue ethics calls for institutional integrity, yet such virtues are difficult to embed at scale. These tensions frame the central challenge of translating ideals into practice, and the following sections outline strategies and governance approaches for embedding ethical commitments into healthcare content moderation.
Foundational ethical theories such as utilitarianism, deontology, and virtue ethics have remained remarkably constant over centuries, and we expect them to continue serving as guiding frameworks even as the nature of AI evolves. What changes is not the core principles, such as minimizing harm, fulfilling duties, or cultivating integrity, but the contexts in which they must be applied. For instance, the rise of anthropomorphic AI systems, such as conversational health chatbots that adopt a physician-like tone, creates new challenges for perception and trust. Users may interpret algorithmic outputs as authoritative advice, even when they lack scientific grounding. While this complicates the operationalization of ethics in practice, the underlying obligations to truthfulness, harm reduction, and accountability remain unchanged. In this respect, the continuity of ethical theory ensures that the insights offered here will remain relevant despite technological shifts. AI raises new kinds of implementation challenges, but these are best understood as novel test cases for enduring ethical concepts rather than grounds for entirely new theories [66,70]. Our study therefore contributes durable insights, bridging stable theoretical commitments with evolving socio-technical realities.

4. Strategies to Apply Ethics in Organizational Practice

Theories provide detailed insights into ethical frameworks, but they are often loosely interpreted in practice [71,72]. Scientific research examines issues through a specific ethical lens that defines problems and consistent, logical solutions [73]. A study on organizational ethics might use a utilitarian framework to explore issues/solutions [74], but organizations seldom strictly adhere to a single ethical framework for defining guidelines. For example, the International Information System Security Certification Consortium (ISC2), a leading IT security organization, has well-defined ethical guidelines emphasizing protecting society, acting honorably, and advancing the profession, with strict consequences for non-compliance [75]. In contrast, non-certified organizations often follow loosely defined ethical codes, focusing on general standards like integrity, responsible behavior, and fairness, with guidelines that encourage legal adherence and avoid conflicts of interest [76].
Ethical frameworks in organizations are put into practice through various strategies and mechanisms designed to ensure that ethical principles guide decision-making, behavior, and culture [51]. For example, to define and implement a Code of Ethics and Conduct, a written document outlining values, principles, and expected employee behaviors is distributed to all employees, who are required to acknowledge and agree to it. Table 2 details different organizational ethical strategies from a utilitarian, deontological, and virtue ethics standpoint, along with their definitions and implementation strategies [77,78].

5. AI and Ethics

The growing reliance on AI systems in digital content moderation, particularly within healthcare, has intensified the urgency of ethical scrutiny [89,90]. While normative ethical theories have been adapted to the digital age, the ethical evaluation of AI presents a unique class of challenges [91,92]. AI systems are ethically inert: they can execute commands with precision, but they cannot weigh moral trade-offs or interrogate the implications of their own decisions, and they lack the capacity for moral reasoning, contextual judgment, or reflective self-correction [11,68,93]. Contemporary AI development often leans on performance-oriented models, such as deep neural networks (DNNs), which prioritize prediction accuracy and computational efficiency [94]. Yet these models operate as black boxes, incapable of clarifying why certain outputs were generated. Their reliance on vast datasets, which may contain unintentional gaps or distortions common in real-world data collection, introduces significant ethical risks, particularly when applied to high-stakes domains like healthcare [94,95,96]. Developers may continue to deploy such systems despite their known limitations because they align with customer expectations or business incentives [97]. This reflects not only a technical bias but also an ethical design failure, in which the absence of human-centered guardrails allows harm to persist under the guise of automation.
Moreover, AI systems cannot recognize when their own interpretations are incomplete or harmful [98,99]. For example, a model may flag a health ad for removal based on statistical patterns, without understanding the cultural significance of traditional remedies or the context in which claims are made [100]. This echoes the concern raised in Section 6.1: How does an AI know when to ask for human help? Without a framework for escalation or reflection, AI moderation becomes brittle: efficient in volume but ethically blind to nuance. Ultimately, the ethical limitations of AI are not secondary design flaws; they are foundational constraints. AI ethics must therefore shift from merely optimizing implementation to restructuring the design premise. This requires systems that anticipate uncertainty, invite human judgment, and embed moral reasoning into their decision-making workflows, not as optional interventions, but as institutional norms. As Table 1 emphasizes, ethical tensions are not bugs in the system; they are inherent in the system, and ethical AI must be designed to confront them head-on.

Transparency and the Limits of Explainability

Among the most pressing ethical challenges in AI systems is the question of transparency. As Section 6.2 argues, transparency is not just a technical property; it is a moral prerequisite for accountability, trust, and human intervention [10,101]. Without clear insight into how decisions are made, human reviewers cannot meaningfully correct, escalate, or contextualize automated judgments [102]. Transparency is the enabling condition that makes ethically sound hybrid moderation possible [103]. Yet, achieving meaningful transparency, especially in deep learning models, is notoriously difficult [104]. Complex AI architectures often resist interpretation, and even when outputs can be traced to specific features, the reason behind a decision remains opaque. This opacity creates a dangerous asymmetry: platforms can delegate ethical labor to algorithms, but users and regulators are denied the ability to challenge or scrutinize those decisions.
Explainable AI (XAI) techniques attempt to mitigate this problem by offering post hoc interpretability. These may include model-agnostic explanations, feature importance scores, or surrogate models that approximate the logic of complex systems [105,106]. While promising, such approaches are limited: they provide fragments of reasoning, not a full ethical context. Moreover, there is often a trade-off between explainability and performance, where increasing transparency risks reducing algorithmic efficiency by placing ethical clarity in conflict with operational scale [107]. In practical terms, transparency must extend beyond model architecture. It involves documentation practices, training data disclosures, and structured communication protocols between AI systems and human reviewers [104]. Similarly, governance should institutionalize transparency through audit trails, decision rationale logs, and stakeholder feedback mechanisms [108]. Transparency is not just a technical attribute but a processual one, enabled by traceability tools, review workflows, and stakeholder-facing documentation (see Table 2). These tools allow platforms to surface not only what AI systems decide, but how and why and under what conditions those decisions should be reexamined. Responsibility for transparency cannot be relegated to developers alone. Regulators, platform operators, civil society organizations, and end users all have roles to play. Governments and international bodies can set minimum standards for transparency and interpretability. Platforms can publish public-facing transparency reports and support appeal mechanisms. Users and independent experts can evaluate whether the stated logic aligns with broader ethical and public health goals.
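To make the idea of surrogate explanations concrete, the following minimal sketch (in Python, using scikit-learn) shows how a shallow decision tree might be fit to approximate a black-box ad-moderation classifier and expose human-readable rules. The black_box_model object and the feature names are hypothetical placeholders, not any platform's actual system.

```python
# Minimal sketch (assumption-laden, not any platform's actual pipeline):
# approximating a black-box ad-moderation model with an interpretable
# surrogate decision tree. `black_box_model` and the feature names are
# hypothetical placeholders.
from sklearn.tree import DecisionTreeClassifier, export_text

def build_surrogate(black_box_model, X, feature_names, max_depth=3):
    """Fit a shallow tree to mimic the black-box model's labels, yielding
    human-readable rules that reviewers and auditors can inspect."""
    labels = black_box_model.predict(X)            # decisions of the opaque model
    surrogate = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    surrogate.fit(X, labels)                       # an approximation, not an exact copy
    fidelity = surrogate.score(X, labels)          # how faithfully the surrogate mimics it
    rules = export_text(surrogate, feature_names=list(feature_names))
    return surrogate, fidelity, rules

# Example feature names (illustrative only): "unverified_claim_score",
# "physician_imagery", "engagement_velocity".
```

A low fidelity score would itself be informative, signaling that the black-box model's behavior cannot be summarized in simple rules and that its decisions warrant closer human scrutiny.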
In summary, transparency is not a static achievement, but a continuous obligation. It requires not only technical explainability but also ethical intentionality, ensuring that human beings can meaningfully engage with, question, and override AI systems when necessary. As the foundation for both guideline implementation (Section 6) and institutional governance (Section 7), transparency remains a critical, but often fragile, pillar of ethical AI moderation in healthcare.

6. Ethical Guidelines for Social Media Platforms

The widespread adoption of AI-driven content moderation systems by social media platforms has created both operational efficiencies and unprecedented ethical risks, particularly in the healthcare domain. As explored in prior sections, ethical theories (Section 2), the gap between ethics in theory and practice (Section 3), and organizational strategies (Section 4) all illustrate that mere technical implementation is insufficient. Furthermore, Section 5 shows that AI, while powerful, lacks the moral capacity for self-reflection, value alignment, and contextual understanding. Therefore, this section outlines normative ethical guidelines that social media platforms should adopt to ensure the ethical moderation of healthcare content. These guidelines derive from both philosophical principles and pragmatic challenges, offering a foundation upon which platforms can base their design and governance choices.

6.1. Embrace Hybrid Human–AI Moderation Frameworks

As established in Section 5, AI lacks the capacity for self-doubt, contextual empathy, or moral reasoning [11]. This limitation renders AI inherently incapable of recognizing when a situation exceeds its interpretive scope, a concern raised in the introduction: How does an AI know when to ask for human help? To address this, platforms must implement hybrid moderation systems that combine AI’s scalability with rule/heuristic-based escalation protocols and trained human oversight. While AI can efficiently triage large volumes of content, it must be governed by predefined triggers that direct certain categories, such as unverifiable medical claims, fringe treatments, or conflicting expert opinions, to human review [99,100,109]. This mirrors the structured ethical decision-making processes recommended in Table 2, ensuring that moderation reflects both moral scrutiny and procedural accountability.
This hybrid model aligns with the Idealism vs. Pragmatism tension in Table 1. While ideal ethical systems advocate for consistent, principled behavior, real-world moderation, particularly in healthcare, entails ambiguity and trade-offs. Hybrid systems accept the ambiguity of practice, allowing for human interpretation in edge cases that AI cannot reliably assess. Moreover, it reflects the Principle vs. Compromise dynamic: ethical purity may not be feasible under conditions of scale, but compromise structures can still preserve foundational values. Rooted in a deontological ethical stance, this model respects the intrinsic moral duty to prevent harm, rather than relying solely on consequentialist logic. Human reviewers act as ethical agents, applying normative judgment to assess the trustworthiness, intent, and public health implications of flagged content.
A relevant example illustrates the need for this hybrid escalation framework: consider an AI system reviewing a so-called ‘lung cleansing spray’ advertisement that claims to detoxify the respiratory system post-COVID exposure. Similar cases include fraudulent promotions of miracle weight-loss supplements, herbal cancer cures lacking regulatory approval, and greenwashed cigarette ads that downplay harm [7]. Recent reports show that healthcare misinformation on social media is surging, with TikTok and other platforms hosting misleading posts on diets, diabetes, and mental health, underscoring how fraudulent medical claims routinely bypass automated moderation [110,111]. While such content may not explicitly violate platform policies, their implied medical efficacy, lack of scientific consensus, and potential to mislead vulnerable users require ethical evaluation beyond automated detection. In this scenario, the system’s escalation protocol should have directed the content to a human moderator trained to evaluate health misinformation risks through a deontological lens, considering user safety as an overriding moral obligation. From an organizational standpoint, implementation should include
  • A taxonomy of escalation rule/heuristics, based on real-world case patterns
  • Ethics training for moderators, grounded in public health, and misinformation dynamics (see more in Table 2)
  • Audit trails to log AI decisions and human interventions for accountability
In recognizing the Predictability vs. Uncertainty gap from Table 1, hybrid frameworks accept that while AI promises efficiency, it cannot resolve ethically uncertain cases alone. Instead, these systems institutionalize human discretion where algorithmic clarity fails, thus creating a pragmatic, ethically grounded foundation for responsible healthcare ad moderation.
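As an illustration of this escalation logic, the following Python sketch shows one way predefined triggers might route a flagged healthcare ad to human review while writing an audit-trail entry. The category names, thresholds, and data structure are illustrative assumptions rather than a prescribed implementation.

```python
# Illustrative sketch only: a rule/heuristic-based escalation layer that sits
# on top of an AI classifier and routes ambiguous healthcare ads to humans.
# Category names, thresholds, and the AdReview structure are assumptions.
from dataclasses import dataclass, field

ESCALATION_CATEGORIES = {
    "unverifiable_medical_claim",
    "fringe_treatment",
    "conflicting_expert_opinion",
}

@dataclass
class AdReview:
    ad_id: str
    ai_risk_score: float                            # 0.0 (benign) to 1.0 (high risk), from the AI model
    categories: set = field(default_factory=set)
    decision: str = "pending"
    audit_log: list = field(default_factory=list)   # audit trail for accountability

def moderate(ad: AdReview, auto_remove_threshold: float = 0.9) -> AdReview:
    """Triage: auto-remove clear violations, escalate ambiguous cases, else allow."""
    if ad.ai_risk_score >= auto_remove_threshold:
        ad.decision = "removed_automatically"
    elif ad.categories & ESCALATION_CATEGORIES or ad.ai_risk_score >= 0.5:
        ad.decision = "escalated_to_human_review"   # a trained moderator applies ethical judgment
    else:
        ad.decision = "allowed"
    ad.audit_log.append({"score": ad.ai_risk_score, "decision": ad.decision})
    return ad

# Example: a "lung cleansing spray" ad with an unverifiable claim is escalated.
ad = AdReview(ad_id="ad-001", ai_risk_score=0.62,
              categories={"unverifiable_medical_claim"})
print(moderate(ad).decision)   # -> escalated_to_human_review
```

The point of the sketch is structural: the escalation taxonomy lives outside the learned model, so it can be audited, debated, and revised by the ethics and governance bodies described in Section 7.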

6.2. Prioritize Transparency Through Explainable AI

Transparency is a cornerstone of ethical decision-making and a recurring deficiency in AI-driven systems. As discussed in Section Transparency and the Limits of Explainability, Explainable AI (XAI) techniques can help demystify algorithmic decision-making, allowing developers, users, and regulators to understand how content is classified, promoted, or removed. However, transparency is not merely a technical goal; it is a moral imperative grounded in ethical theory and organizational responsibility [103,104]. A critical ethical concern, as raised in the introduction, is not only when AI should ask for human help, but also how human reviewers can meaningfully intervene without access to the AI’s internal logic. Without transparency, human oversight risks becoming performative, lacking the context necessary to evaluate whether a decision aligns with ethical or public health standards.
This underscores the need for systems that generate interpretable rationales at key decision points. For instance, when an AI flags a misleading healthcare ad, it should explain what content features or risk indicators contributed to the decision. This could allow human reviewers to verify whether automated judgment aligns with ethical reasoning and platform policies. It also helps calibrate when human intervention is necessary, particularly in cases involving medical uncertainty or evolving scientific consensus. This guideline corresponds to the “Clarity vs. Ambiguity” axis in Table 1, wherein transparency becomes the bridge between automated opacity and ethical accountability. In organizational terms, it draws from Table 2’s emphasis on documentation and decision-review mechanisms, helping to institutionalize reflective ethical practice across moderation workflows.
Moreover, this form of explainability is rooted in virtue ethics. It reflects intellectual humility, encourages continuous improvement, and allows platforms to admit uncertainty, a critical stance in contexts where public health and misinformation intersect. A transparent platform models ethical integrity, earns user trust, and invites legitimate scrutiny from civil society and regulators alike. Absent these capacities, platforms risk disempowering their human reviewers, alienating users, and perpetuating opaque systems of harm. Transparency is thus not simply a diagnostic tool; it is a precondition for meaningful, accountable, and ethically sound human–AI collaboration. Transparent systems enable humans to intervene meaningfully when needed, serve as a safeguard against opaque error, and uphold ethical standards amid rapid technological change.
As AI systems for healthcare content moderation continue to evolve, transparency remains an indispensable condition for ethical accountability. Different machine learning and deep learning approaches offer distinct strengths, for example, rule/heuristic-based systems provide predictability and auditability, while neural and transformer-based models excel at capturing subtle patterns in text and imagery. Hybrid approaches that combine rule/heuristic-based logic with machine learning can offer greater control, reducing the risks posed by purely black-box systems. Yet, no architecture can fully eliminate opacity, and the potential for hidden biases or unexplained errors persists. This reality underscores that the ethical use of AI is not guaranteed by technical design alone. It requires organizations to allocate sufficient resources for explainability, documentation, and, critically, human oversight. Transparent systems, paired with well-trained reviewers, ensure that automated judgments can be interrogated, corrected, and aligned with enduring ethical principles. In this way, transparency is not simply a desirable feature, but a moral imperative for accountable and trustworthy healthcare ad moderation.

6.3. Redefine Platform Success Metrics

Current algorithmic moderation systems are typically optimized around engagement metrics, such as likes, shares, and watch time, that reward visibility rather than veracity. In the context of healthcare advertising, this architecture can elevate emotionally charged or misleading content over sober, evidence-based guidance. From a utilitarian perspective, as outlined in Section 2.1, this prioritization of virality over accuracy becomes ethically untenable when it fosters public harm. Redefining platform success requires more than revising key performance indicators; it demands a shift in the ethical architecture of algorithmic design. Moderation outcomes should be evaluated based on their contribution to public health integrity, not merely audience interaction. Metrics such as misinformation suppression rates, content reversal consistency, and demographic fairness offer more ethically aligned alternatives. Yet these new benchmarks introduce tensions, highlighted in Table 1’s Principle vs. Compromise dynamic, between doing what is ethically right and what is operationally expedient. The path to responsible moderation may involve prioritizing truth over reach, accuracy over engagement, and equity over efficiency.
Table 2 emphasizes that embedding such metrics into institutional workflows is not simply a reporting task, but a cultural one. Ethical KPIs must inform the design choices of engineers, the priorities of content strategists, and the incentives of leadership teams. This includes surfacing moments where algorithmic changes that boost retention might also elevate borderline or unverified health claims. Addressing these trade-offs directly, rather than obscuring them in backend experimentation reinforces a culture of ethical clarity. Moreover, the act of defining success should not remain an internal process. In fields as contested and variable as healthcare, success metrics require dialogue with external stakeholders, public health bodies, patient advocacy groups, and independent ethicists who can speak to evolving risks and normative expectations. This brings a dimension of contextual pluralism, resisting the temptation to impose static or universal standards in environments that demand responsiveness.
Ultimately, no metric, however advanced, can fully capture the ethical complexity of moderating health-related discourse. It is precisely in these gray zones that human judgment must reassert itself, not only in interpreting violations but in shaping what platforms choose to measure in the first place.
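As a concrete illustration, the sketch below computes the ethically aligned metrics named above, a misinformation suppression rate, an appeal reversal rate, and per-group removal rates, from a hypothetical set of moderation-log records. The field names are assumptions introduced for the example, not an existing schema.

```python
# Sketch under stated assumptions: computing ethically oriented KPIs from a
# list of moderation-log records. The record fields are hypothetical.
from collections import defaultdict

def ethical_kpis(records):
    """records: dicts with keys 'was_misinformation', 'was_removed',
    'was_reversed_on_appeal', and 'user_group' (all illustrative)."""
    misinfo = [r for r in records if r["was_misinformation"]]
    suppression_rate = (
        sum(r["was_removed"] for r in misinfo) / len(misinfo) if misinfo else None
    )
    removed = [r for r in records if r["was_removed"]]
    reversal_rate = (
        sum(r["was_reversed_on_appeal"] for r in removed) / len(removed) if removed else None
    )
    # Demographic fairness proxy: removal rates per user group; large gaps flag possible bias.
    by_group = defaultdict(lambda: [0, 0])
    for r in records:
        by_group[r["user_group"]][0] += r["was_removed"]
        by_group[r["user_group"]][1] += 1
    group_removal_rates = {g: n_removed / n for g, (n_removed, n) in by_group.items()}
    return {
        "misinformation_suppression_rate": suppression_rate,
        "appeal_reversal_rate": reversal_rate,
        "removal_rate_by_group": group_removal_rates,
    }
```

Such numbers are only inputs to judgment; as argued above, deciding which of them to optimize, and how to weigh them against engagement, remains a human and institutional choice.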

6.4. Address Ethical Ambiguity and Disagreements

Healthcare discourse rarely presents itself in neatly defined categories. Platforms are increasingly confronted with content that falls into ethical gray zones: fringe medical interventions, culturally rooted health beliefs, or emerging treatments still under investigation [112,113]. These cases defy simple classification and resist conclusive judgments. They raise not only factual questions but also normative disagreements about what counts as care, risk, or harm. Rather than treating such ambiguity as a failure of policy, platforms should view it as an ethical reality that demands institutional preparedness. Ethical disagreement between experts, between cultures, or even within communities is not a reason to defer action, but a call to design systems capable of principled deliberation [113]. This begins with cultivating interpretive capacity within content moderation workflows. Reviewers must be equipped to recognize when a claim falls into a contested space, and when rigid application of existing rule/heuristics may be ethically insufficient or unjust. Such judgment cannot be trained solely through manuals or machine learning; it requires interdisciplinary knowledge, sensitivity to context, and a willingness to accommodate dissent.
The need for such capacity is underscored by the Universality vs. Contextualization dilemma presented in Table 1. Ethical theories often strive for universal norms, but in practice, moderation occurs across culturally plural landscapes where health knowledge, trust in institutions, and acceptable risk vary widely. Attempting to impose a singular standard across this diversity can result in epistemic exclusion or moral overreach. Table 2 highlights the importance of creating formal channels for ethical reflection and challenge. Ethics boards, internal dissent mechanisms, and cross-functional review committees should be empowered to examine not just whether content violates a policy but also whether the policy itself needs to evolve. This dynamic feedback loop is essential in fast-changing or ideologically contested health environments.
While such mechanisms may appear slow or burdensome, they serve a deeper ethical function: they keep the platform responsive to complexity rather than beholden to simplicity. The goal is not perfect agreement, but moral robustness, the ability to act with humility and integrity in conditions of uncertainty. By building infrastructure that welcomes, rather than suppresses, ethical disagreement, platforms model a culture of principled flexibility. It is in these difficult cases where AI confidence falters and expert consensus fragments that the true character of a platform’s ethical commitments is revealed.

6.5. Foster a Virtue-Oriented Platform Culture

The ethical challenges posed by AI-driven content moderation cannot be addressed through rule/heuristics and metrics alone [114,115]. They also depend on the character of the individuals and institutions interpreting those rule/heuristics and designing those metrics. Virtue ethics is rooted in the cultivation of moral character, which offers a valuable framework for building internal cultures that are resilient to ethical drift, especially under the pressures of scale and profit [116]. A virtue-oriented platform culture prioritizes qualities such as intellectual humility, practical wisdom, courage, and honesty. These are not merely aspirational traits, but operational necessities in an environment where decisions often lack clear right answers. Moderators must decide whether to flag emerging treatments. Engineers must weigh accuracy against speed. Leaders must decide when to intervene in policy or accept public critique. These are moments where rule/heuristics end, and character begins. Unlike rule/heuristic-based systems that rely on enforcement or oversight, virtue ethics emphasizes habitual practice. It encourages a culture in which doing the right thing is not an exception triggered by escalation protocols, but a norm embedded in everyday decision-making. This principle extends to organizational structures as well: hiring practices, reward systems, performance reviews, and leadership expectations should reflect and reinforce ethical virtues, not just productivity or compliance.
This orientation directly addresses what Table 1 frames as the Principle vs. Compromise tension. In situations where platforms must navigate between conflicting priorities, such as user growth versus public harm, a virtue-anchored culture can provide the internal compass needed to resist ethically corrosive shortcuts. It also complements the procedural tools in Table 2, serving as a moral substrate beneath formal governance mechanisms. This orientation likewise reflects the tension between Standardization and Discretion from Table 1. While platforms often prioritize scalable, rule/heuristic-based systems to ensure uniform enforcement, ethical behavior, particularly in healthcare moderation, requires situational awareness and moral flexibility. A virtue-oriented culture empowers individuals to exercise practical wisdom, allowing ethically sound discretion when rigid rule/heuristics or automated decisions fall short of addressing the complexities of real-world contexts.
Crucially, virtue cannot be outsourced. It must be practiced and demonstrated by leadership. Ethical culture begins with example: when platform leaders publicly uphold integrity over expediency, when they acknowledge uncertainty, or when they transparently revise flawed policies, they set the tone for the rest of the organization. Fostering a virtue-oriented platform is not about perfection, but about cultivating the moral stamina to make hard choices well, to reflect on failure, and to adapt in pursuit of a more trustworthy digital public sphere. In the context of healthcare moderation, where the stakes are often life-altering, this cultural foundation is not optional; it is essential.

7. Governance Framework for Ethical AI Moderation in Healthcare

If ethical guidelines articulate what platforms ought to do, then governance defines how those principles are operationalized, monitored, and enforced. It answers questions that guidelines alone cannot: Who is responsible? What processes ensure accountability? What recourse exists when harm occurs? And how do these systems evolve under pressure or uncertainty? In the context of AI-driven healthcare moderation, governance cannot rely on voluntary codes or after-the-fact corrections. The risks are too high, and the dynamics too complex. What is needed is a robust governance framework that aligns internal decision-making with external expectations, embeds ethical reflection into institutional processes, and ensures that ethical failures are neither silent nor consequence-free. This section outlines five core pillars of governance necessary for trustworthy, ethically sound healthcare moderation at scale.

7.1. Clarifying the Role of Governance

While Section 6 provides ethical direction, emphasizing transparency, human judgment, and value alignment, governance addresses the how. It moves from moral aspiration to institutional design. This distinction is crucial because well-meaning ethical codes often fail without corresponding mechanisms of oversight and accountability. In AI contexts, governance must overcome two unique challenges: first, the opacity and rapid evolution of algorithmic systems; and second, the tendency of organizations to prioritize efficiency, growth, or public relations over ethical reflection, particularly when incentives are misaligned. Governance frameworks thus serve a dual function: they constrain unethical drift and create channels for reflection, correction, and systemic learning [68,73]. Ethically mature organizations do not simply rely on the good intentions of designers or the vigilance of regulators [117]. Instead, they structure themselves to expect failure, build in redundancy, and respond constructively to ambiguity and critique. Governance, then, is not an add-on but a foundation for ethical resilience.

7.2. Internal Governance Structures

Internal governance ensures that ethical commitments are not siloed in public statements or compliance documents but are integrated into the day-to-day logic of platform operations [118,119,120]. First, platforms should establish cross-functional ethics boards empowered to review, revise, and veto high-impact moderation policies. These boards must include not only legal and product leads, but also ethicists, clinicians, engineers, and user representatives. Crucially, these bodies should operate with independence and institutional protection, ensuring that ethical concerns can override commercial or operational pressures when necessary. Second, there must be transparent audit trails for both AI and human moderation decisions. These trails should log how content was flagged, what rule/heuristics were applied, and whether escalation occurred. This enables after-the-fact review, facilitates organizational learning, and prepares the platform for external accountability.
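For illustration, the following sketch shows one possible shape for such an audit-trail entry, recording how content was flagged, which rule/heuristics were applied, and whether escalation occurred. The field names and labels are hypothetical assumptions, not a prescribed schema.

```python
# Illustrative only: one possible shape for an audit-trail entry recording how
# content was flagged, which rule/heuristics were applied, and any escalation.
# Field names and labels are assumptions, not a prescribed schema.
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import List, Optional
import json

@dataclass
class ModerationAuditEntry:
    content_id: str
    flagged_by: str                      # e.g., "ai_model_v3" or "user_report" (assumed labels)
    rules_applied: List[str]             # e.g., ["unverifiable_medical_claim"]
    ai_risk_score: float
    escalated: bool
    final_decision: str                  # e.g., "removed", "allowed", "pending_review"
    reviewer_id: Optional[str] = None    # filled in when a human intervenes
    timestamp: str = field(default="")

def log_entry(entry: ModerationAuditEntry, sink) -> None:
    """Append a timestamped JSON line so internal or external auditors can
    later reconstruct how and why each decision was made."""
    entry.timestamp = datetime.now(timezone.utc).isoformat()
    sink.write(json.dumps(asdict(entry)) + "\n")
```

Because each entry is self-describing, the same records can feed the ethical indicators discussed below and the external audits described in Section 7.3.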
Third, escalation protocols must go beyond technical triage. Human moderators should have the ability to elevate morally ambiguous cases for secondary review, supported by ethical guidance documents and feedback loops. This echoes Section 6.1’s emphasis on hybrid frameworks, but governance turns that vision into policy: who decides, how appeals are handled, and how precedents are formed. Internal governance also requires that ethical performance be measurable. Just as platforms track engagement or latency, they must track ethical indicators: rates of content reversal, alignment with external expert consensus, reviewer consistency, and trust ratings among affected users. These indicators (outlined partially in Section 6.3) should influence performance evaluations, promotions, and team incentives. Ethical quality must become part of what teams are rewarded for. Finally, governance includes protection. Whistleblower policies, anonymous reporting channels, and ethical dissent protocols should be in place for staff who observe unethical practices or feel moral unease about moderation outcomes. Governance, in this sense, enables institutional conscience.

7.3. External Oversight and Public Accountability

Internal governance, however rigorous, is insufficient without external scrutiny. Public accountability not only reinforces ethical norms but also builds legitimacy and trust in the platform’s commitments. Healthcare misinformation is not merely a technical or organizational issue; it is a public risk, and as such, platforms have a civic duty to submit to external governance [121,122]. One essential mechanism is the independent audit. Platforms should be required, whether by regulation or public expectation, to submit their moderation systems to periodic third-party audits. These audits should assess both technical efficacy (e.g., false positives/negatives) and ethical performance (e.g., bias detection, harm mitigation). Like financial audits, ethical audits ensure that the internal record matches external impact. Second, platforms should publish regular transparency reports detailing
  • The volume and categories of health content moderated
  • The number of escalations and reversals
  • Changes in policy or algorithmic design
  • Error correction statistics and ethical dilemmas encountered
These reports must be written in accessible language, accompanied by independent commentary, and open to public challenge. Transparency here is not a PR function, but a governance act. Third, platforms should collaborate with external advisory councils, including public health experts, patient advocates, and representatives from marginalized communities disproportionately affected by health misinformation. These councils should not be symbolic but have structured input into content policies, enforcement thresholds, and appeals mechanisms. Their role is to keep platforms attuned to real-world impact and ethical blind spots. Public accountability also includes mechanisms for user redress. Individuals who believe they were harmed by content moderation decisions, either by exposure to misinformation or unjust removal, should have access to appeals systems reviewed by independent ethics panels. Just as data protection laws grant individuals rights over their personal information, ethical governance requires platforms to provide individuals procedural justice in moderation decisions.

7.4. Adaptive Governance for Evolving Ethical Risks

Healthcare content moderation exists in a rapidly evolving informational environment. Medical consensus shifts, new crises emerge, and previously fringe treatments may gain legitimacy [17,30]. Static governance structures quickly become outdated, risking either overreach or under-protection. Platforms must therefore implement adaptive governance, a model that accepts uncertainty and builds in flexibility. One approach is the use of periodic ethical impact assessments, modeled after environmental impact assessments. These assessments would evaluate how changes in moderation rule/heuristics or AI architecture could affect different populations, risk categories, or epistemic communities. Additionally, platforms should maintain emergency ethics protocols that can be triggered during high-risk moments (e.g., pandemics, vaccine rollouts). These protocols would allow for temporary adjustments in enforcement thresholds, external expert consultations, or even suspensions of certain rule/heuristics while impact is assessed. The goal is to balance responsiveness with responsibility. Adaptive governance also includes policy updating workflows. When internal staff, external advisors, or users raise concerns about a policy’s ethical validity, there must be a structured process for review, stakeholder consultation, and potential revision. Governance is not just about rule/heuristics, it is about rule/heuristic maintenance.
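A minimal sketch of what an emergency ethics protocol might look like as a configuration object is given below; the trigger events, dates, thresholds, and rule names are illustrative assumptions only, not platform policy.

```python
# Hypothetical sketch: a time-bound emergency ethics protocol that temporarily
# tightens enforcement and forces expert consultation. Trigger events, dates,
# thresholds, and rule names are illustrative assumptions, not platform policy.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class EmergencyEthicsProtocol:
    trigger_event: str                    # e.g., "pandemic", "vaccine_rollout"
    starts: date
    expires: date                         # protocols must lapse and be re-reviewed
    escalation_threshold: float           # lower value sends more content to human review
    require_external_expert_review: bool
    suspended_rules: tuple                # rule/heuristics paused while impact is assessed

BASELINE_ESCALATION_THRESHOLD = 0.50      # assumed normal operating threshold

pandemic_protocol = EmergencyEthicsProtocol(
    trigger_event="pandemic",
    starts=date(2025, 1, 1),
    expires=date(2025, 4, 1),
    escalation_threshold=0.30,            # tightened relative to the baseline
    require_external_expert_review=True,
    suspended_rules=("auto_approve_low_risk_health_ads",),
)
```

The explicit expiry date reflects the adaptive principle above: emergency adjustments should never silently become permanent policy.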

7.5. Governing Across Borders: Ethical Pluralism and Regulatory Interoperability

Social media platforms operate across diverse cultural, legal, and epistemological landscapes [123]. A one-size-fits-all governance approach risks ethical imperialism, applying a single moral standard to contexts where it may not fit. Effective governance must therefore balance global ethical consistency with regional adaptability. This begins with establishing a global ethical baseline, namely principles such as harm prevention, transparency, and respect for medical consensus that guide all content decisions. These should reflect internationally recognized bioethical norms and human rights standards. Above this baseline, platforms should enable regional ethical customization. Local ethics committees, public health institutions, and civil society actors should be invited to advise on culturally sensitive applications of content rule/heuristics. In some regions, for instance, traditional medicine may be integrated into public health systems; in others, it may be marginalized or contested. Governance must reflect these nuances.
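The sketch below illustrates, under assumed region codes and rule names, how a global ethical baseline might be layered with regionally advised adjustments; it is a simplification for exposition, not a recommended policy encoding.

```python
# Sketch only: layering a global ethical baseline with regional adjustments
# advised by local ethics committees. Region codes, rule names, and override
# values are illustrative assumptions.
GLOBAL_BASELINE = {
    "harm_prevention": True,
    "require_medical_consensus_label": True,
    "escalation_threshold": 0.50,
}

REGIONAL_OVERRIDES = {
    # Where traditional medicine is integrated into public health systems,
    # a local committee might advise routing such ads to human review
    # rather than removing them outright.
    "region_a": {"traditional_medicine_handling": "human_review"},
    "region_b": {"traditional_medicine_handling": "remove", "escalation_threshold": 0.40},
}

def effective_policy(region: str) -> dict:
    """Start from the global baseline, then apply any region-specific adjustments."""
    policy = dict(GLOBAL_BASELINE)
    policy.update(REGIONAL_OVERRIDES.get(region, {}))
    return policy

print(effective_policy("region_a"))
```

In practice, a review step would be needed to ensure that regional overrides refine rather than relax the baseline's intent, which is itself a governance task rather than a purely technical one.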
This approach echoes the Universality vs. Contextualization axis where ethical governance cannot be universal in form but can be in intent (see Table 1). Building regulatory interoperability, i.e., the capacity to comply with different ethical and legal regimes without fracturing trust, is thus a central governance challenge. Collaborative structures like regional ethics alliances, regulator-platform roundtables, and open-source policy hubs can facilitate this complexity. They ensure that governance is not merely a matter of enforcement but of ethical listening and shared moral infrastructure. Governance is not the negation of ethics, but is its material expression. Without governance, ethical guidelines remain advisory at best and performative at worst. With it, they gain teeth, memory, and direction. This governance framework offers platforms a way to institutionalize moral responsibility, adapt to shifting risks, and embed ethical integrity into the core of their operations. It ensures that content moderation, especially in the sensitive domain of healthcare, does not drift into opacity, automation, or unchecked harm.

8. Limitations and Directions for Future Research

This paper has proposed a comprehensive ethical and governance framework for AI-driven moderation of healthcare content on social media platforms. However, several limitations must be acknowledged to place the framework in an appropriate context and to guide further scholarly and practical exploration. While the recommendations are grounded in ethical theory and organizational practice, they remain primarily normative. Their implementation across real-world systems, especially at the platform scale, will require empirical validation. Future studies should examine how specific mechanisms, such as hybrid moderation models, ethical KPIs, and escalation protocols, perform when embedded into live moderation environments, including their efficacy, user impact, and unintended consequences.
Moreover, the paper focuses on social media platforms, which, while highly influential, represent only one arena where health-related misinformation and ethical dilemmas arise. Other digital infrastructures, such as search engines, private messaging apps, e-commerce platforms selling health products, and telemedicine services, pose their own governance challenges. The generalizability of the proposed framework to these domains remains to be tested. A further area deserving attention is stakeholder inclusion. This paper outlines the importance of interdisciplinary review and advisory structures, yet a deeper participatory model, especially one that foregrounds users, patients, and marginalized voices, has not been fully developed. Future research should explore mechanisms for co-creating moderation standards and appeal processes with those most affected by health misinformation and content removal. In addition, future studies could undertake comparative analyses of different AI approaches, examining, for example, the trade-offs between rule/heuristic-based, machine learning, and hybrid models across various domains to evaluate their relative strengths and weaknesses in ensuring transparency, accountability, and public trust.
Finally, it is worth noting that the pursuit of ethical governance carries its own risks. Without careful design, well-intentioned moderation frameworks may veer into epistemic overreach or paternalism, suppressing emerging research, culturally embedded practices or legitimate dissent. Governance systems must remain open to critique and epistemic humility, ensuring that their commitment to public health does not foreclose responsible contestation or innovation. In sum, while this paper advances a structured and theoretically grounded vision for ethical AI moderation in healthcare, the work is necessarily provisional. Its claims invite further empirical testing, contextual refinement, and participatory expansion. Responsible governance, like ethical reasoning itself, must remain an ongoing, adaptive, and inclusive endeavor.

9. Conclusions

The rise of AI-driven content moderation in healthcare presents a paradox: while these systems offer scale and speed in managing vast volumes of information, they often lack the moral capacity, contextual awareness, and interpretive flexibility required to make ethically sound decisions [124,125]. As this paper has argued, addressing this paradox demands more than improved algorithms or expanded datasets; it requires the deliberate integration of ethical theory, organizational practice, and enforceable governance. Through an interdisciplinary synthesis of deontological, utilitarian, and virtue ethics, we outlined not only why platforms must engage in responsible moderation but also how they might do so. By mapping the tensions between ethical principles and real-world compromises and grounding them in structured frameworks (Table 1 and Table 2), we proposed a set of guidelines that balance aspirational ideals with institutional pragmatism. These included the adoption of hybrid human–AI systems, investment in transparency through explainable AI, and the recalibration of success metrics to reflect public health values.
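The proposed recalibration of success metrics can also be made operational. As a hedged illustration, the sketch below computes a harm-weighted error rate that penalizes missed high-risk health misinformation more heavily than raw removal counts would; the harm weights and decision records are invented for demonstration and would need to be grounded in public health evidence.

```python
# Illustrative sketch only: an "ethical KPI" that weights moderation errors by estimated
# public-health harm rather than counting raw removals. All values below are hypothetical.
DECISIONS = [
    # (predicted action, ground-truth status, harm weight if this decision is wrong)
    ("approve", "misinformation", 0.9),   # false negative on a high-harm ad
    ("remove",  "legitimate",     0.2),   # false positive suppressing lawful content
    ("remove",  "misinformation", 0.9),   # correct removal
]


def harm_weighted_error(decisions) -> float:
    """Harm-weighted share of decisions that were wrong, normalized by total weight."""
    total_weight = sum(h for _, _, h in decisions) or 1.0
    wrong = sum(h for pred, actual, h in decisions
                if (pred == "approve") != (actual == "legitimate"))
    return wrong / total_weight


print(f"harm-weighted error: {harm_weighted_error(DECISIONS):.2f}")
```

A metric of this shape rewards moderators for preventing the most harmful failures rather than for maximizing throughput, which is the direction of recalibration argued for above.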
Yet ethical guidelines alone are insufficient [126]. Without governance, ethical intent risks becoming symbolic [127]. Accordingly, this paper advances a governance framework encompassing internal controls, external oversight, adaptive mechanisms, and cross-cultural ethical pluralism. Together, these elements aim to institutionalize ethical reflection, safeguard against harm, and uphold public trust in an era where health information and misinformation flow faster than ever. The ethical moderation of healthcare content is not a solved problem, nor can it be left to technical systems alone. It is an ongoing moral task, one that requires shared responsibility, critical reflection, and resilient institutional design. This paper offers a foundation for that task, rooted in theory, informed by practice, and guided by a commitment to ethical integrity in digital public life.

Author Contributions

Conceptualization, A.A.S., J.M.J. and M.E.J.; methodology, A.A.S., J.M.J. and M.E.J.; formal analysis, A.A.S., J.M.J. and M.E.J.; investigation, A.A.S.; data curation, A.A.S.; writing—original draft preparation, A.A.S.; writing—review and editing, J.M.J.; supervision, M.E.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

This work was originally presented at the Hawaii International Conference on System Sciences (HICSS). We thank HICSS for their support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Breton, M.; Lamothe, L.; Denis, J.L. How healthcare organizations can act as institutional entrepreneurs in a context of change. J. Health Organ. Manag. 2014, 28, 77–95. [Google Scholar] [CrossRef] [PubMed]
  2. Resseguier, A.; Ufert, F. AI research ethics is in its infancy: The EU’s AI Act can make it a grown-up. Res. Ethics 2024, 20, 143–155. [Google Scholar] [CrossRef]
  3. Fanarioti, A.K.; Karpouzis, K. Artificial Intelligence and the Future of Mental Health in a Digitally Transformed World. Computers 2025, 14, 259. [Google Scholar] [CrossRef]
  4. Werner, A. TikTok Scam Promises Popular Weight Loss Drugs Without a Prescription. CBS News, 15 March 2024. [Google Scholar]
  5. Cai, S.; Cai, Y.; Liu, L.; Han, H.; Bao, F. A Service-Driven Routing Algorithm for Ad Hoc Networks in Urban Rail Transit. Computers 2023, 12, 252. [Google Scholar] [CrossRef]
  6. Kitchens, B.; Claggett, J.L.; Abbasi, A. Timely, Granular, and Actionable: Designing A Social Listening Platform for Public Health 3.0. MIS Q. 2024, 48, 899–930. [Google Scholar] [CrossRef]
  7. Moran, M.B.; Ibrahim, M.; Czaplicki, L.; Pearson, J.; Thrul, J.; Lindblom, E.; Robinson-Mosley, S.; Kennedy, R.D.; Balaban, A.; Johnson, M. Greenwashed cigarette ad text and imagery produce inaccurate harm, addictiveness, and nicotine content perceptions: Results from a randomized online experiment. Nicotine Tob. Res. 2025, 27, 271–281. [Google Scholar] [CrossRef]
  8. Abby Sen, A.; Joy, J.; Jennex, M. Illuminating Ethical Dilemmas in AI-Driven Regulation of Healthcare Marketing on Social Media. In Proceedings of the HICSS 58, Big Island, HI, USA, 7–10 January 2025. [Google Scholar]
  9. Stahl, B.C. Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies; Springer Nature: Berlin/Heidelberg, Germany, 2021; p. 124. [Google Scholar]
  10. Berente, N.; Gu, B.; Recker, J.; Santhanam, R. Managing Artificial Intelligence. MIS Q. 2021, 45, 1433–1450. [Google Scholar] [CrossRef]
  11. Dörfler, V.; Cuthbert, G. Dubito ergo sum: Exploring AI ethics. In Proceedings of the HICSS 57, Oahu, HI, USA, 3–6 January 2024. [Google Scholar]
  12. Campbell, C.; Plangger, K.; Sands, S.; Kietzmann, J.; Bates, K. How deepfakes and artificial intelligence could reshape the advertising industry: The coming reality of AI fakes and their potential impact on consumer behavior. J. Advert. Res. 2022, 62, 241–251. [Google Scholar] [CrossRef]
  13. Chiejina, E.; Xiao, H.; Christianson, B. A Dynamic Reputation Management System for Mobile Ad Hoc Networks. Computers 2015, 4, 87–112. [Google Scholar] [CrossRef]
  14. Marabelli, M. Communications of the Association for Information Systems. Commun. Assoc. Inf. Syst. 2025, 24, 303–314. [Google Scholar] [CrossRef]
  15. Appel, G.; Grewal, L.; Hadi, R.; Stephen, A.T. The future of social media in marketing. J. Acad. Mark. Sci. 2020, 48, 79–95. [Google Scholar] [CrossRef]
  16. Reisach, U. The responsibility of social media in times of societal and political manipulation. Eur. J. Oper. Res. 2021, 291, 906–917. [Google Scholar] [CrossRef]
  17. Kington, R.S.; Arnesen, S.; Chou, W.Y.S.; Curry, S.J.; Lazer, D.; Villarruel, A.M. Identifying credible sources of health information in social media: Principles and attributes. NAM Perspect. 2021, 2021, 10–31478. [Google Scholar] [CrossRef]
  18. Morley, J.; Cowls, J.; Taddeo, M.; Floridi, L. Public health in the information age: Recognizing the infosphere as a social determinant of health. J. Med. Internet Res. 2020, 22, e19311. [Google Scholar] [CrossRef] [PubMed]
  19. Murthy, V.H. Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms. The New York Times, 17 June 2024. [Google Scholar]
  20. FDA Health Fraud Scams. 2024. Available online: https://www.fda.gov/consumers/health-fraud-scams (accessed on 29 August 2025).
  21. Jennex, M.E.; Durcikova, A.; Ilvonen, I. Modifying knowledge risk strategy using threat lessons learned from COVID-19 in 2020–21 in the United States. Electron. J. Knowl. Manag. 2022, 20, 138–151. [Google Scholar] [CrossRef]
  22. Salem, M.A.; Zakaria, O.M.; Aldoughan, E.A.; Khalil, Z.A.; Zakaria, H.M. Bridging the AI Gap in Medical Education: A Study of Competency, Readiness, and Ethical Perspectives in Developing Nations. Computers 2025, 14, 238. [Google Scholar] [CrossRef]
  23. Coeckelbergh, M. Narrative responsibility and artificial intelligence: How AI challenges human responsibility and sense-making. AI Soc. 2023, 38, 2437–2450. [Google Scholar] [CrossRef]
  24. Korteling, J.; van de Boer-Visschedijk, G.C.; Blankendaal, R.A.; Boonekamp, R.C.; Eikelboom, A.R. Human-versus artificial intelligence. Front. Artif. Intell. 2021, 4, 622364. [Google Scholar] [CrossRef]
  25. Lebovitz, S.; Levina, N.; Lifshitz-Assaf, H. Is AI ground truth really true? The dangers of training and evaluating AI tools based on experts’ know-what. MIS Q. 2021, 45, 1501–1526. [Google Scholar] [CrossRef]
  26. Rallis, S.F.; Rossman, G.B. The Research Journey: Introduction to Inquiry; Guilford Press: New York, NY, USA, 2012. [Google Scholar]
  27. Gattiker, U.E.; Kelley, H. Morality and Computers: Attitudes and Differences in Moral Judgments. Inf. Syst. Res. 1999, 10, 233–254. [Google Scholar] [CrossRef]
  28. Söllner, M.; Mishra, A.N.; Becker, J.-M.; Leimeister, J.M. Use IT again? Dynamic roles of habit, intention and their interaction on continued system use by individuals in utilitarian, volitional contexts. Eur. J. Inf. Syst. 2024, 33, 80–96. [Google Scholar]
  29. Wu, J.; Lu, X. Effects of extrinsic and intrinsic motivators on using utilitarian, hedonic, and dual-purposed information systems: A meta-analysis. J. Assoc. Inf. Syst. 2013, 14, 1. [Google Scholar] [CrossRef]
  30. Nutter, S.; Saunders, J.F. Weight stigma and health misinformation: A systematic review of research examining correlates associated with viewing The Biggest Loser. Stigma Health 2023, 9, 337–348. [Google Scholar] [CrossRef]
  31. Peng, W.; Lim, S.; Meng, J. Persuasive strategies in online health misinformation: A systematic review. Inf. Commun. Soc. 2023, 26, 2131–2148. [Google Scholar] [CrossRef]
  32. Abbasi, A.; Parsons, J.; Pant, G.; Sheng, O.R.L.; Sarker, S. Pathways for Design Research on Artificial Intelligence. Inf. Syst. Res. 2024, 35, 441–459. [Google Scholar] [CrossRef]
  33. Alexander, L.; Moore, M. Deontological Ethics. In The Stanford Encyclopedia of Philosophy, Winter Edition; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2007. Available online: https://plato.stanford.edu/entries/ethics-deontological/#Bib (accessed on 31 May 2025).
  34. Gill, G.; Bhattacherjee, A. Whom are we informing? Issues and recommendations for MIS research from an informing sciences perspective. MIS Q. 2009, 33, 217–235. [Google Scholar] [CrossRef]
  35. Crisp, R.; Slote, M.; Slote, M.A. Virtue Ethics; Oxford University Press: Oxford, UK, 1997; Volume 10. [Google Scholar]
  36. Van Hooft, S. Understanding Virtue Ethics; Routledge: London, UK, 2014. [Google Scholar]
  37. Kraut, R. Aristotle on the Human Good; Princeton University Press: Princeton, NJ, USA, 2021; ISBN 9780691225128. [Google Scholar]
  38. Constantinides, P.; Chiasson, M.W.; Introna, L.D. The ends of information systems research: A pragmatic framework. MIS Q. 2012, 36, 1–20. [Google Scholar] [CrossRef]
  39. Cram, W.A.; D’Arcy, J.; Benlian, A. Time Will Tell: The Case For an Idiographic Approach to Behavioral Cybersecurity Research. MIS Q. 2024, 48, 95–136. [Google Scholar] [CrossRef]
  40. LaFollette, H. Ethics in Practice: An Anthology; John Wiley & Sons: Hoboken, NJ, USA, 2020. [Google Scholar]
  41. World Health Organization. Towards a Global Guidance Framework for the Responsible Use of Life Sciences: Summary Report of Consultations on the Principles, Gaps and Challenges of Biorisk Management; World Health Organization: Geneva, Switzerland, 2022. [Google Scholar]
  42. Haj-Bolouri, A.; Conboy, K.; Gregor, S. Research Perspectives: An Encompassing Framework for Conceptualizing Space in Information Systems: Philosophical Perspectives, Themes, and Concepts. J. Assoc. Inf. Syst. 2024, 25, 407–441. [Google Scholar] [CrossRef]
  43. Williams, B.; Lear, J. Ethics and the Limits of Philosophy; Routledge: London, UK, 2011. [Google Scholar]
  44. Palmer, A.; Schwan, D. More process, less principles: The ethics of deploying ai and robotics in medicine. Camb. Q. Healthc. Ethics 2024, 33, 121–134. [Google Scholar] [CrossRef] [PubMed]
  45. Davis, J.C. Utopia and the Ideal Society: A Study of English Utopian Writing 1516–1700; Cambridge University Press: Cambridge, UK, 1983. [Google Scholar]
  46. Tessman, L. Idealizing morality. Hypatia 2010, 25, 797–824. [Google Scholar] [CrossRef]
  47. Adams, C.A. Internal organizational factors influencing corporate social and ethical reporting: Beyond current theorizing. Account. Audit. Account. J. 2002, 15, 223–250. [Google Scholar] [CrossRef]
  48. Prem, E. From ethical AI frameworks to tools: A review of approaches. AI Ethics 2023, 3, 699–716. [Google Scholar] [CrossRef]
  49. Hurst, S.A.; Hull, S.C.; DuVal, G.; Danis, M. How physicians face ethical difficulties: A qualitative analysis. J. Med. Ethics 2005, 31, 7–14. [Google Scholar] [CrossRef] [PubMed]
  50. Harsanyi, J.C. Morality and the theory of rational behavior. Soc. Res. 1977, 44, 623–656. [Google Scholar]
  51. Mingers, J.; Walsham, G. Toward ethical information systems: The contribution of discourse ethics. MIS Q. 2010, 34, 833–854. [Google Scholar] [CrossRef]
  52. Berente, N.; Hansen, S.; Pike, J.C.; Bateman, P.J. Arguing the value of virtual worlds: Patterns of discursive sensemaking of an innovative technology. MIS Q. 2011, 35, 685–709. [Google Scholar] [CrossRef]
  53. Murphy, L.B. Moral Demands in Nonideal Theory; Oxford University Press: Oxford, UK, 2003. [Google Scholar]
  54. Huang, C.; Zhang, Z.; Mao, B.; Yao, X. An overview of artificial intelligence ethics. IEEE Trans. Artif. Intell. 2022, 4, 799–819. [Google Scholar] [CrossRef]
  55. Sunstein, C.R.; Kahneman, D.; Schkade, D.; Ritov, I. Predictably incoherent judgments. Stan. L. Rev. 2001, 54, 1153. [Google Scholar] [CrossRef]
  56. Drnevich, P.L.; Croson, D.C. Information technology and business-level strategy: Toward an integrated theoretical perspective. MIS Q. 2013, 37, 483–509. [Google Scholar] [CrossRef]
  57. Kendall, J.E.; Kendall, K.E. Metaphors and methodologies: Living beyond the systems machine. MIS Q. 1993, 17, 149–171. [Google Scholar] [CrossRef]
  58. Markus, M.L.; Rowe, F. Is IT changing the world? Conceptions of causality for information systems theorizing. MIS Q. 2018, 42, 1255–1280. [Google Scholar] [CrossRef]
  59. Chae, B.; Paradice, D.; Courtney, J.F.; Cagle, C.J. Incorporating an ethical perspective into problem formulation: Implications for decision support systems design. Decis. Support Syst. 2005, 40, 197–212. [Google Scholar] [CrossRef]
  60. Strong, D.M.; Volkoff, O. Understanding Organization—Enterprise system fit: A path to theorizing the information technology artifact. MIS Q. 2010, 34, 731–756. [Google Scholar] [CrossRef]
  61. Nash, R.J. “Real World” Ethics: Frameworks for Educators and Human Service Professionals; Teachers College Press, Columbia University: New York, NY, USA, 2002. [Google Scholar]
  62. Sarioguz, O.; Miser, E. Data-Driven Decision-Making: Revolutionizing Management in the Information Era. J. Artif. Intell. Gen. Sci. (JAIGS) 2024, 4, 179–194. [Google Scholar] [CrossRef]
  63. Gilligan, C. Moral orientation and moral development [1987]. In Justice and Care; Routledge: London, UK, 1995; pp. 31–46. [Google Scholar]
  64. Mertz, M.; Prince, I.; Pietschmann, I. Values, decision-making and empirical bioethics: A conceptual model for empirically identifying and analyzing value judgements. Theor. Med. Bioeth. 2023, 44, 567–587. [Google Scholar] [CrossRef] [PubMed]
  65. Ekman, I. Practising the ethics of person-centered care balancing ethical conviction and moral obligations. Nurs. Philos. 2022, 23, e12382. [Google Scholar] [CrossRef]
  66. Hare, R.M. Ethical theory and utilitarianism. In Contemporary British Philosophy; Routledge: London, UK, 2014; pp. 113–131. [Google Scholar]
  67. Tsou, J.Y.; Walsh, K.P. Ethical Theory and Technology. In Technology Ethics; Routledge: London, UK, 2023; pp. 62–72. [Google Scholar]
  68. DeTienne, K.B.; Ellertson, C.F.; Ingerson, M.C.; Dudley, W.R. Moral development in business ethics: An examination and critique. J. Bus. Ethics 2021, 170, 429–448. [Google Scholar] [CrossRef]
  69. Koocher, G.P.; Keith-Spiegel, P. Ethics in Psychology and the Mental Health Professions: Standards and Cases; Oxford University Press: Oxford, UK, 2008. [Google Scholar]
  70. Baggini, J.; Fosl, P.S. The Ethics Toolkit: A Compendium of Ethical Concepts and Methods; John Wiley & Sons: Hoboken, NJ, USA, 2024. [Google Scholar]
  71. Jamali, D. A stakeholder approach to corporate social responsibility: A fresh perspective into theory and practice. J. Bus. Ethics 2008, 82, 213–231. [Google Scholar] [CrossRef]
  72. Walsham, G. Ethical theory, codes of ethics and IS practice. Inf. Syst. J. 1996, 6, 69–81. [Google Scholar] [CrossRef]
  73. Böhm, S.; Carrington, M.; Cornelius, N.; de Bruin, B.; Greenwood, M.; Hassan, L.; Jain, T.; Karam, C.; Kourula, A.; Romani, L.; et al. Ethics at the center of global and local challenges: Thoughts on the future of business ethics. J. Bus. Ethics 2022, 180, 835–861. [Google Scholar] [CrossRef]
  74. Farayola, O.A.; Olorunfemi, O.L. Ethical decision-making in IT governance: A review of models and frameworks. Int. J. Sci. Res. Arch. 2024, 11, 130–138. [Google Scholar]
  75. Bouke, M.A.; Abdullah, A.; Udzir, N.I.; Samian, N. Overcoming the challenges of data lack, leakage, and dimensionality in intrusion detection systems: A comprehensive review. J. Commun. Inf. Syst. 2024, 39, 22–34. [Google Scholar] [CrossRef]
  76. Novelli, C.; Taddeo, M.; Floridi, L. Accountability in artificial intelligence: What it is and how it works. AI Soc. 2024, 39, 1871–1882. [Google Scholar] [CrossRef]
  77. Martin, F.; Sun, T.; Westine, C.D. A systematic review of research on online teaching and learning from 2009 to 2018. Comput. Educ. 2020, 159, 104009. [Google Scholar] [CrossRef]
  78. Matook, S.; Lee, G.; Fitzgerald, B. MISQ research curation on information systems development. MIS Q. 2021, 1–20. [Google Scholar]
  79. Alizadeh, A.; Dirani, K.M.; Qiu, S. Ethics, code of conduct and ethical climate: Implications for human resource development. Eur. J. Train. Dev. 2021, 45, 674–690. [Google Scholar] [CrossRef]
  80. Brendel, A.B.; Mirbabaie, M.; Lembcke, T.-B.; Hofeditz, L. Ethical management of artificial intelligence. Sustainability 2021, 13, 1974. [Google Scholar] [CrossRef]
  81. Hennequin, E. What motivates internal whistleblowing? A typology adapted to the French context. Eur. Manag. J. 2020, 38, 804–813. [Google Scholar] [CrossRef]
  82. Cox, D.J. A guide to establishing ethics committees in behavioral health settings. Behav. Anal. Pract. 2020, 13, 939–949. [Google Scholar] [CrossRef]
  83. Manzoor, F.; Wei, L.; Asif, M. Intrinsic rewards and employee’s performance with the mediating mechanism of employee’s motivation. Front. Psychol. 2021, 12, 563070. [Google Scholar] [CrossRef] [PubMed]
  84. Guttman, N.; Salmon, C.T. Guilt, fear, stigma and knowledge gaps: Ethical issues in public health communication interventions. Bioethics 2004, 18, 531–552. [Google Scholar] [CrossRef] [PubMed]
  85. Windsor, D. Corporate social responsibility: Three key approaches. J. Manag. Stud. 2006, 43, 93–114. [Google Scholar] [CrossRef]
  86. Schaubroeck, J.M.; Hannah, S.T.; Avolio, B.J.; Kozlowski, S.W.; Lord, R.G.; Treviño, L.K.; Dimotakis, N.; Peng, A.C. Embedding ethical leadership within and across organization levels. Acad. Manag. J. 2012, 55, 1053–1078. [Google Scholar] [CrossRef]
  87. Steinbauer, R.; Renn, R.W.; Taylor, R.R.; Njoroge, P.K. Ethical leadership and followers’ moral judgment: The role of followers’ perceived accountability and self-leadership. J. Bus. Ethics 2014, 120, 381–392. [Google Scholar] [CrossRef]
  88. Raddatz, N.; Kettinger, W.J.; Coyne, J. Giving to get well: Patients’ willingness to manage and share health information on AI-driven platforms. Commun. Assoc. Inf. Syst. 2023, 52, 1017–1049. [Google Scholar] [CrossRef]
  89. Gupta, S.; Modgil, S.; Lee, C.K.; Sivarajah, U. The future is yesterday: Use of AI-driven facial recognition to enhance value in the travel and tourism industry. Inf. Syst. Front. 2023, 25, 1179–1195. [Google Scholar] [CrossRef]
  90. Srivastava, A.; Marabelli, M.; Blanch-Hartigan, D.; Moriarty, J.; Carey, E.; Persky, S.; Torous, J. The Present and Future of AI: Ethical Issues and Research Opportunities. Commun. Assoc. Inf. Syst. 2025, 56, 9. [Google Scholar] [CrossRef]
  91. Chowdhury, T.; Oredo, J. AI ethical biases: Normative and information systems development conceptual framework. J. Decis. Syst. 2023, 32, 617–633. [Google Scholar] [CrossRef]
  92. Du, S.; Xie, C. Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities. J. Bus. Res. 2021, 129, 961–974. [Google Scholar] [CrossRef]
  93. Sullivan, Y.W.; Fosso Wamba, S. Moral judgments in the age of artificial intelligence. J. Bus. Ethics 2022, 178, 917–943. [Google Scholar] [CrossRef]
  94. Guleria, P.; Sood, M. Explainable AI and machine learning: Performance evaluation and explainability of classifiers on educational data mining inspired career counseling. Educ. Inf. Technol. 2023, 28, 1081–1116. [Google Scholar] [CrossRef]
  95. Quinn, T.P.; Jacobs, S.; Senadeera, M.; Le, V.; Coghlan, S. The three ghosts of medical AI: Can the black-box present deliver? Artif. Intell. Med. 2022, 124, 102158. [Google Scholar] [CrossRef]
  96. Sun, G.; Zhou, Y.H. AI in healthcare: Navigating opportunities and challenges in digital communication. Front. Digit. Health 2023, 5, 1291132. [Google Scholar] [CrossRef]
  97. Greene, T.; Shmueli, G.; Ray, S. Taking the person seriously: Ethically aware IS research in the era of reinforcement learning-based personalization. J. Assoc. Inf. Syst. 2023, 24, 1527–1561. [Google Scholar] [CrossRef]
  98. Grover, V.; Lindberg, A.; Benbasat, I.; Lyytinen, K. The perils and promises of big data research in information systems. J. Assoc. Inf. Syst. 2020, 21, 9. [Google Scholar] [CrossRef]
  99. Someh, I.; Wixom, B.H.; Beath, C.M.; Zutavern, A. Building an Artificial Intelligence Explanation Capability. MIS Q. Exec. 2022, 21, 5. [Google Scholar] [CrossRef]
  100. Ford, J.; Jain, V.; Wadhwani, K.; Gupta, D.G. AI advertising: An overview and guidelines. J. Bus. Res. 2023, 166, 114124. [Google Scholar] [CrossRef]
  101. Teodorescu, M.H.; Morse, L.; Awwad, Y.; Kane, G.C. Failures of Fairness in Automation Require a Deeper Understanding of Human-ML Augmentation. MIS Q. 2021, 45, 1483–1500. [Google Scholar] [CrossRef]
  102. Binns, R. Human Judgment in algorithmic loops: Individual justice and automated decision-making. Regul. Gov. 2022, 16, 197–211. [Google Scholar] [CrossRef]
  103. Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamò-Larrieux, A. Towards transparency by design for artificial intelligence. Sci. Eng. Ethics 2020, 26, 3333–3361. [Google Scholar] [CrossRef]
  104. Von Eschenbach, W.J. Transparency and the black box problem: Why we do not trust AI. Philos. Technol. 2021, 34, 1607–1622. [Google Scholar] [CrossRef]
  105. Reddy, S.; Lebrun, A.; Chee, A.; Kalogeropoulos, D. Discussing the role of explainable AI and evaluation frameworks for safe and effective integration of large language models in healthcare. Telehealth Med. Today 2024, 9, 1–3. [Google Scholar] [CrossRef]
  106. Rodgers, W.; Murray, J.M.; Stefanidis, A.; Degbey, W.Y.; Tarba, S.Y. An artificial intelligence algorithmic approach to ethical decision-making in human resource management processes. Hum. Resour. Manag. Rev. 2023, 33, 100925. [Google Scholar] [CrossRef]
  107. Alam, M.N.; Kaur, M.; Kabir, M.S. Explainable AI in Healthcare: Enhancing transparency and trust upon legal and ethical consideration. Int. Res. J. Eng. Technol. 2023, 10, 1–9. [Google Scholar]
  108. Deumes, R.; Schelleman, C.; Vander Bauwhede, H.; Vanstraelen, A. Audit firm governance: Do transparency reports reveal audit quality? Audit. A J. Pract. Theory 2012, 31, 193–214. [Google Scholar] [CrossRef]
  109. Vargas-Santiago, M.; León-Velasco, D.A.; Maldonado-Sifuentes, C.E.; Chanona-Hernandez, L. A State-of-the-Art Review of Artificial Intelligence (AI) Applications in Healthcare: Advances in Diabetes, Cancer, Epidemiology, and Mortality Prediction. Computers 2025, 14, 143. [Google Scholar] [CrossRef]
  110. Axios, A. The Online Health Misinformation Machine. Available online: https://www.axios.com/2025/08/31/health-advice-vaccine-misinformation-research (accessed on 31 August 2025).
  111. New York Post. Mental Health Misinformation on TikTok Is at an All-Time High, and Poses a Huge Risk to Struggling Users, Experts Warn. New York Post. 31 May 2025. Available online: https://nypost.com/2025/05/31/world-news/popular-mental-health-videos-on-tiktok-spread-misinformation-and-pose-a-great-risk-to-struggling-users-experts-warned (accessed on 31 May 2025).
  112. Bamdad, S.; Finaughty, D.A.; Johns, S.E. ‘Grey areas’: Ethical challenges posed by social media-enabled recruitment and online data collection in cross-border, social science research. Res. Ethics 2022, 18, 24–38. [Google Scholar] [CrossRef]
  113. Bhargava, V.R.; Velasquez, M. Ethics of the attention economy: The problem of social media addiction. Bus. Ethics Q. 2021, 31, 321–359. [Google Scholar] [CrossRef]
  114. Cecchini, D.; Pflanzer, M.; Dubljević, V. Aligning artificial intelligence with moral intuitions: An intuitionist approach to the alignment problem. AI Ethics 2024, 5, 1523–1533. [Google Scholar] [CrossRef]
  115. Laine, J.; Minkkinen, M.; Mäntymäki, M. Understanding the Ethics of Generative AI: Established and New Ethical Principles. Commun. Assoc. Inf. Syst. 2025, 56, 7. [Google Scholar] [CrossRef]
  116. David, P.; Shroff-Mehta, P.; Gupta, S. The Role of Virtue Ethics. The Routledge Handbook of Global and Digital Governance Crossroads: Stakeholder Engagement and Democratization; Routledge: London, UK, 2024. [Google Scholar]
  117. Hoffmann-Riem, W. Artificial intelligence as a challenge for law and regulation. In Regulating Artificial Intelligence; Springer: Cham, Switzerland, 2020; pp. 1–29. [Google Scholar]
  118. Birkstedt, T.; Minkkinen, M.; Tandon, A.; Mäntymäki, M. AI governance: Themes, knowledge gaps and future agendas. Internet Res. 2023, 33, 133–167. [Google Scholar] [CrossRef]
  119. Sayles, J. Aligning AI Governance with Other Internal Governance Models for Trustworthy AI: “The Convergence of Governance Frameworks”. In Principles of AI Governance and Model Risk Management: Master the Techniques for Ethical and Transparent AI Systems; Apress: New York, NY, USA, 2024; pp. 132–172. [Google Scholar]
  120. Miró-Llinares, F.; Aguerri, J.C. Misinformation about fake news: A systematic critical review of empirical studies on the phenomenon and its status as a ‘threat’. Eur. J. Criminol. 2023, 20, 356–374. [Google Scholar] [CrossRef]
  121. Moore, L.R. “But we’re not hypochondriacs”: The changing shape of gluten-free dieting and the contested illness experience. Soc. Sci. Med. 2014, 105, 76–83. [Google Scholar] [CrossRef]
  122. Starr, P. Remedy and Reaction: The Peculiar American Struggle over Health Care Reform; Yale University Press: New Haven, CT, USA, 2013. [Google Scholar]
  123. Baird, A.; Maruping, L.M. The Next Generation of Research on IS Use: A Theoretical Framework of Delegation to and from Agentic IS Artifacts. MIS Q. 2021, 45, 315–341. [Google Scholar] [CrossRef]
  124. Nussbaumer, A.; Pope, A.; Neville, K. A framework for applying ethics-by-design to decision support systems for emergency management. Inf. Syst. J. 2023, 33, 34–55. [Google Scholar] [CrossRef]
  125. Stenseke, J. On the computational complexity of ethics: Moral tractability for minds and machines. Artif. Intell. Rev. 2024, 57, 105. [Google Scholar] [CrossRef]
  126. Katirai, A. Ethical considerations in emotion recognition technologies: A review of the literature. AI Ethics 2024, 4, 927–948. [Google Scholar] [CrossRef]
  127. Lopes, L.; Kearney, A.; Washington, I.; Valdes, I.; Yilma, H.; Hamel, L. KFF Health Misinformation Tracking Poll Pilot; Kaiser Family Foundation: New York, NY, USA, 2023. [Google Scholar]
Table 1. Ethical concepts in theory and practice. Table adapted from [8].

| Concept | Theory | Practice |
|---|---|---|
| Abstraction vs. Application | Involves abstract ethical principles and philosophical reasoning about right and wrong [42,43]. | Applied in specific, often complex scenarios that demand navigating competing ethical demands [39,44]. |
| Consistency vs. Contextuality | Offers stable and uniform ethical guidance across situations [45,46]. | Considers the specific context and circumstances influencing ethical decisions [47,48]. |
| Idealism vs. Pragmatism | Centers on ideal ethical conduct and what should be done in a perfect setting [41,42]. | Confronts real-world limitations that complicate ethical choices [34,49]. |
| Clarity vs. Ambiguity | Provides clearly defined ethical principles to guide decision-making [50,51]. | Ethical situations are often unclear and require judgment based on incomplete or evolving information [10,52]. |
| Principle vs. Compromise | Emphasizes strict adherence to ethical principles without deviation [53]. | Involves compromise and negotiation, balancing diverse ethical views and practical limitations [51]. |
| Predictability vs. Uncertainty | Proposes predictable ethical rules that can be systematically followed [54,55]. | Ethical outcomes are often uncertain and vary due to dynamic and unpredictable conditions [56,57,58]. |
| Objective vs. Subjective | Treats organizational ethics as largely objective, often offering singular solutions to complex problems [59]. | Addresses ethical issues as inherently subjective, with multiple possible interpretations and solutions [60]. |
| Academic vs. Operational | Commonly situated in academic discourse, focusing on conceptual analysis and theoretical depth [61]. | Emerges in operational environments where decisions carry real-world implications [61,62]. |
| Detachment vs. Engagement | Considers hypothetical ethical scenarios without personal or emotional involvement [63,64]. | Involves personal engagement and emotional investment, which can shape ethical reasoning [65]. |
| Prescription vs. Description | Focuses on normative ethics, prescribing how individuals ought to behave according to moral theories [66,67]. | Examines how individuals actually behave, often revealing discrepancies between ethical ideals and real-world conduct [68,69]. |
Table 2. Organizational ethical strategies, definition, and implementations. Table adapted from [8].

| Strategy | Definition | Implementation |
|---|---|---|
| Ethics Training and Education | Programs designed to educate individuals about ethical standards and decision-making processes [79]. | Conduct regular ethics training sessions, incorporating case studies, role-playing, and discussions of real-world ethical scenarios. |
| Ethical Decision-Making Processes | Structured approaches that integrate ethics into decision-making [80]. | Incorporate ethical considerations into all decision-making processes. |
| Whistleblowing Mechanisms | Systems that allow employees to confidentially report unethical behavior without fear of retaliation [81]. | Establish hotlines, online reporting systems, or third-party services for receiving reports. Ensure that policies protect whistleblowers and facilitate thorough investigations. |
| Ethics Committees and Officers | Dedicated groups or individuals tasked with overseeing and enforcing ethical standards [82]. | Form ethics committees or appoint ethics officers to monitor compliance, provide guidance, and address ethical concerns within the organization. |
| Performance Management Systems | Systems that evaluate and reward employee behavior [83]. | Include ethical behavior as a key criterion in performance evaluation, promotion, and reward systems. |
| Communication and Awareness Campaigns | Initiatives aimed at raising awareness about ethical standards and guidelines [84]. | Use newsletters, emails, posters, and other forms of media to communicate ethical guidelines and emphasize their importance. |
| Policy Development and Review | The creation and ongoing review of policies to ensure they align with ethical standards [41,85]. | Develop policies addressing specific ethical issues and regularly review and update them to reflect current standards. |
| Cultural Integration | Integrating ethical values into the organizational culture [86]. | Foster a culture that values and expects ethical behavior, supported by consistent leadership messages, recognition of ethical actions, and embedding ethics into the organization's core values. |
| Ethical Leadership | Leaders who model ethical behavior and decision-making [87,88]. | Ethical leaders demonstrate ethical behavior, set the tone for the organization, and ensure that their actions align with the organization's values. |

