Abstract
Artificial Intelligence, now commonly called AI, is having an increasingly large impact on society. There are fears that there may be negatives or downsides, especially when Artificial Intelligence is used unethically. But how are humans guiding these machines to know whether a choice, a decision, is ethical? Since 2007, one way to check the ethicality of any choice has been to apply the JUSTICE model. This framework helps practitioners decide whether a specific action is or is not ethical by looking through one or more of the seven JUSTICE lenses: Justice, Utilitarian, Spiritual Values, TV rule or Transparency, Influence, Core Values, and Emergency. Now, in this era of increasing prevalence of Artificial Intelligence, with humans often making decisions together with machines, can the JUSTICE framework still be useful? Yes, it can. We look at each of the seven components; each may give guidance in some situations. Of the seven, T, the TV rule, seems most likely to give guidance in this new era.
1. Introduction
It is amazing how fast the world changes. While Artificial Intelligence (AI) has been discussed for decades, it was only on the last day of November 2022 that ChatGPT became available to everybody. For many people, at that moment Artificial Intelligence started moving from a topic for academic discussion to something average citizens could experiment with, play with, and use. Then, big global questions started coming up in coffee shops and at dinner tables. Is Artificial Intelligence good or bad? Or both? Is it ethical? Focusing on one possible way to answer these questions, in this paper we ask the following main question: Can the JUSTICE framework tell us if Artificial Intelligence is ethical? When? How?
The JUSTICE model is intended to help assess the ethicality of anything. Artificial Intelligence has in many ways changed the world, for better and for worse. Can the JUSTICE framework, published in 2007, long before the explosive growth of Artificial Intelligence, still be useful in a world being changed by it?
Hopes of minimizing negative outcomes from Artificial Intelligence have triggered high-sounding pronouncements, regulations, and laws. Many of the ensuing guardrails state what organizations and corporations, for example, should or should not do. This essay focuses on a different level and a different actor: the individual, not the organization or society. When an individual faces the question of whether or not some action involving Artificial Intelligence is ethical, can the JUSTICE framework still provide useful guidance?
While there is considerable speculation about the vast benefits that may flow from developments in Artificial Intelligence, such as new drug discoveries, cures for diseases, or the elimination of exceedingly repetitive and uninteresting organizational tasks, there are parallel concerns about the proliferation of disinformation and about workforce disruption as some jobs are replaced. There are even worries that Artificial Intelligence might become too smart and essentially take over the world.
Much of the writing in this area focuses on identifying things to watch out for and suggesting guardrails to keep Artificial Intelligence from doing harm. For example, in what the OECD calls “the first intergovernmental standard on AI”, it calls for “AI that is innovative and trustworthy, and that upholds human rights and democratic values” [1]. That set of standards, first issued in 2019 and revised in 2024, might indeed have been the first, but another multi-governmental agency, UNESCO, states that it “produced the first-ever global standard on AI ethics—the ‘Recommendation on the Ethics of Artificial Intelligence’ in November 2021” … “applicable to all 194 member states of UNESCO” [2] (p. 1). While not a governmental organization, the World Economic Forum wields considerable influence, and in 2016 published the “Top 9 ethical issues in artificial intelligence”, predating both the OECD and UNESCO pronouncements [3]. Numerous nations have issued regulations regarding AI and ethics, including the US, the EU, China, Russia, and Singapore [4].
The collective approaches to keeping Artificial Intelligence “ethical” are representative of the numerous governmental initiatives across the globe, each hoping to control and channel AI towards the betterment of humanity. As of 2023, according to one source, the area had seen “significant policy attention and action, evidenced by more than 1000 AI initiatives in over 70 countries and jurisdictions” [1]. However, we are reminded that ethics is not the same as law [5]. Laws are required and will help, but individuals will still have to make decisions. Compliance with law is helpful but covers only what is required and is “significantly insufficient” [6] (p. 8), citing [7]. When there is more to be done over and above what the law strictly requires, one should think of soft law, or ethics [7]. Such “soft law” could lead to ‘good corporate citizenship’ [6] (p. 8).
Most of the regulations or laws we have seen aim at an organizational or group level, not addressing the question we ask: Would this use of a specific Artificial Intelligence tool be ethical in this situation? We only seek to answer a small question: Can the JUSTICE framework help an individual face an Artificial Intelligence-related ethics question?
But we must also acknowledge that across society there are questions concerning not only the value of AI but also the huge global harms that may ensue. What ethical questions might AI pose for humanity? Towards the end of this relatively short essay, we will also look at, but not answer, that “end of the world” question.
Would the use of a specific Artificial Intelligence tool be ethical in a specific set of circumstances? Our paper is not about regulations or laws. Those following the explosive growth of Artificial Intelligence will not be surprised that there have already been massive increases in the number of such pronouncements. Hopefully academics will summarize and evaluate that growth, but that is not the topic of this paper. We focus on a more micro level: Is a specific application of Artificial Intelligence ethical? It is that individual decision point that is our topic. How might an individual decide the ethical thing to do? Specifically, we ask whether the 2007 JUSTICE model can still help an individual decide.
Certainly the topics of ethics and Artificial Intelligence are receiving attention. A bibliometric analysis by Koo of some 3000 scholarly articles relating to Artificial Intelligence found that a good number of papers looked at ethical considerations: keywords such as “ethics” and “integrity” were prominent, showing the “importance of addressing ethical challenges associated with AI technologies” [8] (p. 1) (see also [9,10]). It seems that in any field one can think of, people are asking about and writing about the ethics of Artificial Intelligence (a list of representative scholarly articles is available from the authors).
But we return to our one question: How should one determine whether or not some action involving Artificial Intelligence is ethical? This leads us back to the JUSTICE model, which has been used in various contexts. Is that framework, published long before the current wave of Artificial Intelligence, still useful in a world being changed by it?
2. The JUSTICE Framework for Making Decisions with Ethical Implications
Can the JUSTICE framework still be useful in the mid-2020s and into the future, in a world where Artificial Intelligence has the potential to impact just about everything? We teach this framework. In theory, any question of ethics can be approached through a consideration of the seven dimensions of the framework. The model asks an individual to consider seven different ways to look at an issue from an ethical perspective. The respondent may choose any one or more of the seven to make a choice on a given issue with ethical dimensions [11].
As this Artificial Intelligence phenomenon grows, we should ask how Artificial Intelligence is developed, deployed, and controlled. The maintenance and effectiveness of ethical foundations throughout societal, economic, and organizational infrastructures will be called into question and may require new justifications to ensure principles of good conduct. We ask: What does the rapid progress of Artificial Intelligence mean? For the world we know best, education? For humanity? We hear and see unanswered questions daily about these developments, which raise many questions about ethical implications [12,13,14,15]. For example, how can we assess ethical matters surrounding applications of AI? Questions of AI and ethics are not confined to education; as one example, the field of medicine is currently (mid-2020s) seeing many articles on the ethical implications of Artificial Intelligence [16,17].
3. Artificial Intelligence: Ethical Issues
Our first step is to investigate the following question: What are the major ethical implications of AI in today’s world? Numerous scholars have looked at ethical implications of AI in the field of education. A search for relevant articles in one specialist journal, the Journal of Artificial Intelligence in Education, identified 76 pertinent papers in print as of the mid-2020s. The fact that many ask questions does not mean that many suggest answers; in these 76 papers there seems to be no agreed-upon framework for answering the question “is the suggested AI practice ethical?” Searching more broadly, we found 19,300 items in this and other journals. We did not study all 19,300, but we did look at more than 50 journal articles that looked promising. Again, we found no one persuasive scheme to help scholars determine “is this particular use of Artificial Intelligence ethical?”. As Jobin and her colleagues note, beyond agreeing “that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed” [18] (p. 389). There are multiple suggested frameworks, but up to this point no single approach has gained traction in the effort to assess the ethics of AI. Broussard [19] warns that ‘technochauvinism’—the overconfidence in AI’s capabilities—leads users to adopt tools like ChatGPT uncritically, despite evidence they may undermine or prevent critical thinking, for example, in case-based learning [20]. This false sense of objectivity, paired with Hagendorff’s [21] finding that 84% of AI ethics guidelines lack enforcement mechanisms, exposes systemic gaps in AI oversight.
The pages that follow (1) review three of the more promising discussions of ethical frameworks relating to applications of AI; (2) consider the J U S T I C E model and evaluate the extent to which the various elements of the model assist in addressing the question posed in this paper; and (3) review our findings and the implications arising from the ongoing rapid development of digital intelligence, specifically Artificial Intelligence (AI), and what it means for our future.
4. Impact of AI: Frameworks to Assess Ethical Challenges and Dilemmas
Three studies which reviewed ethical frameworks for Artificial Intelligence deserve attention and are highlighted next.
Hagendorff [21] (p. 99) provides a substantive discussion of the field of AI ethics, assessing and comparing 22 guidelines to give “a detailed overview of the field of AI ethics.” The extensive list of references and the detailed analysis underpin the quality of the discussion. However, the presentation of 22 different guidelines may discourage efforts to identify one feasible pathway to answering the research question. Moreover, Hagendorff is not optimistic that such guidelines or written pronouncements are the answer, expressing concern that guidelines might “simply serve the purpose of calming critical voices from the public” [21] (p. 101) while not leading to any move towards ethical action.
If Hagendorff’s review of 22 guidelines does not provide enough pathways to guide us, Mittelstadt [22] notes that “at least 84 public-private initiatives have produced statements… to guide the ethical development, deployment, and governance of AI” [22] (p. 501). In studying and evaluating the 84 pronouncements identified in that 2019 paper, the author identifies four basic themes: “AI Ethics has seemingly converged on a set of principles that in some ways resemble the four classic principles of medical ethics… (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms.” While the Mittelstadt work clarifies the significance of these four principles in the general discussion of ethics and AI, it is not evident that they are pertinent to the underlying concern embedded in our area of interest, namely, in what ways AI presents ethical dilemmas in the real world.
A third paper, by Tasioulas [23], may be helpful; it offers an “overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics”: Functionality, Inherent significance, Rights and responsibilities, Side effects, and Threats. The first letter of each rubric taken together conveniently generates the acronym FIRST [23] (p. 49). The educational benefit of the “FIRST” acronym is that it covers five important concepts. While the study merits consideration, we do not propose to raise ethical issues of robotics; for our purposes, the Tasioulas paper casts too broad a net.
We choose to return to a narrower perspective, starting with classical and modern ideas about ethics. One way to proceed is to look at the seven sets of ethical lenses provided by the so-called JUSTICE model for possible answers. In that framework, each of the seven letters triggers a perspective for addressing any kind of ethical dilemma. For example, J for Justice would ask whether the action meets the requirement that whatever we do be fair and just to all concerned. U for Utilitarian would ask whether the contemplated action provides the greatest good for the greatest number or, more simply, whether the good outweighs the bad. We briefly review the JUSTICE model and discuss the extent to which the various elements of the model provide tools by which we may judge the ethicality of applications of AI in various circumstances. Specifically, as one example, how would each of these distinct sets of ethical principles help us to assess ways by which AI might be unethical?
5. Assessing Ethics of Decisions Involving Artificial Intelligence Through the JUSTICE Framework
This essay is based on our professional experience, not on any research study. We use the decision-making shortcut called the JUSTICE framework by Lau and colleagues [11]. Individuals can approach a question of ethics using the seven dimensions of this framework. That JUSTICE model, included here with minor modifications, suggests seven different ways to look at an issue from an ethical perspective. The Lau et al. [11] paper recommends that individuals select one or more of seven possible ethical decision-making criteria and apply these to a specific situation. If all say “ethical”, this is accepted as the answer. If all suggest “unethical”, then the action is not ethical. If results are mixed, additional consideration is needed. (A small illustrative sketch of this aggregation rule follows the list below.) The model can be used for any issue with ethical dimensions, such as polluting a river, helping someone cheat, or sexual harassment, or using deepfake audio and/or video to fool a potential investor, voter, or lender.
The following summary of the model from the Lau et al. 2007 [11] paper illustrates this. In the JUSTICE model each letter stands for one way to approach an issue with possible ethical ramifications:
- JUSTICE: Same rules apply to all.
- UTILITARIAN: Greatest good for greatest number, good outweighs bad.
- SPIRITUAL VALUES: Such as the Golden Rule—do to others as you would want others to do to you.
- TV RULE: Can you honestly explain your decision on TV with your family watching?
- INFLUENCE: Consider what influence (if any) this action might have.
- CORE VALUES: Deepest human values; the things really important in life.
- EMERGENCY: Urgency of the decision, any requirement for immediate action.
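To make that aggregation rule concrete, the following minimal sketch (our own illustration, not part of the Lau et al. [11] paper) encodes the seven lenses and the unanimity rule in Python. The lens wording is paraphrased from the list above; the example case and its verdicts are hypothetical assumptions for illustration only.

```python
from enum import Enum

class Verdict(Enum):
    ETHICAL = "ethical"
    UNETHICAL = "unethical"
    UNCLEAR = "unclear"   # this lens gives no guidance in this situation

# The seven JUSTICE lenses and the question each asks
# (paraphrased from the summary above).
LENSES = {
    "J": "Justice: do the same rules apply to all?",
    "U": "Utilitarian: does the good outweigh the bad?",
    "S": "Spiritual values: does it satisfy the Golden Rule?",
    "T": "TV rule: could you honestly explain it on TV, family watching?",
    "I": "Influence: what influence might this action have?",
    "C": "Core values: does it respect the deepest human values?",
    "E": "Emergency: does urgency demand immediate action?",
}

def aggregate(verdicts: dict) -> str:
    """Aggregation rule described above: unanimous 'ethical' is accepted,
    unanimous 'unethical' is rejected, mixed results need more thought."""
    votes = {v for v in verdicts.values() if v is not Verdict.UNCLEAR}
    if votes == {Verdict.ETHICAL}:
        return "ethical"
    if votes == {Verdict.UNETHICAL}:
        return "unethical"
    return "mixed results: additional consideration needed"

# Hypothetical example: a deepfake video used to mislead an investor.
# These verdicts are illustrative assumptions, not outputs of any model.
deepfake_case = {
    "J": Verdict.UNETHICAL,  # rules are not applied evenly to all parties
    "T": Verdict.UNETHICAL,  # could not be explained honestly on TV
    "U": Verdict.UNCLEAR,    # costs and benefits are hard to quantify
}
print(aggregate(deepfake_case))  # -> unethical
```

Of course, the value of the framework lies in the deliberation each lens triggers, not in mechanical vote counting; the sketch merely records the bookkeeping.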
Users have little trouble remembering the seven criteria and are able to define each; the acronym JUSTICE helps. One advantage of the JUSTICE framework is that it is a usable tool. However, it does not fit neatly into the various schools of thought about ethics. The model does not cover all approaches or theories, nor does it dictate whether it is best to teach ethical principles or ethical rules [24]. We observe that business seminar participants and students prefer tools that can be used in real life and are less interested in academic arguments about which ethics approach is “best.” The JUSTICE model does not take a position on whether one should look at ethics from a consequentialist (or teleological) perspective or from an absolutist–moralist (or deontological) point of view. The framework avoids uncommon words such as teleological or deontological, making it more usable by decision makers today.
- J, Justice
For example, the first letter J introduces JUSTICE. A major idea of Justice is that every person should be treated equally [25]. Inequality might be permissible if that results in benefits for all and especially for those least advantaged [26]. Noble’s [27] research on search algorithms demonstrates how AI disproportionately harms marginalized groups, violating Rawls’ [25] principle of justice. Benjamin [28] extends this critique with her ‘New Jim Code’ framework, showing how technical neutrality often masks systemic racism—an ethical failure Lau et al.’s [11] JUSTICE criterion should address.
- U, Utilitarian
The UTILITARIAN approach is common in any discussion of ethical decision making. However, users of the framework may think of it not as the “greatest good for the greatest number,” but instead as “cost-benefit analysis”. There is evidence to suggest that AI tools promising the ‘greatest good’ sometimes perpetuate inequality. O’Neil’s [29] ‘weapons of math destruction’ reveal how utilitarian AI tools—for example in credit scoring—hurt systematically disadvantaged populations. This challenges the assumption that AI benefits the majority [11], while Broussard’s [19] work on ‘artificial unintelligence’ explains why these failures persist.
- S, Spiritual Values
The letter S suggests that what Westerners call the “Golden Rule” is the essence of the SPIRITUAL approach to ethical decision making. People in the West grasp this idea easily [30], and we also find wide acceptance in Asia. The idea attributed to Jesus, “do unto others as you would have them do unto you,” is not an exclusively Christian belief. Asians enjoy hearing that some 500 years before Christ, Confucius advised followers not to impose on others what they would not desire others to impose on them [31]. Why is this ethical decision-making approach labeled S and not G for Golden Rule? The term “Golden Rule” is closely associated with a Western idea, yet the concept is contained in many religious teachings around the world, so a less “Western” term might be preferred. An additional reason is that Spiritual begins with the letter S, and S fits the JUSTICE acronym. Bruton [30] builds a strong case that ethical theory can address potential issues by looking first at the Golden Rule. A reader of Bruton’s work might conclude that if one wishes to start with one ethical decision-making tool, this might be it. The lens of Spiritual Values is both robust and deep.
- T, TV Rule
A different candidate for best measure of ethics is T or the TV rule, or Transparency. This will be discussed more below, as it shows great promise in determining the ethicality of issues around Artificial Intelligence.
- I, Influence
The letter I suggests INFLUENCE and draws on Aristotle’s idea, voiced more than two thousand years ago, that there is greater responsibility where there is greater influence [32]. More recently, the Hollywood movie Spider-Man (2002) popularized the words “with great power comes great responsibility.” It is true that individuals who take ethical shortcuts, for example, using ChatGPT-generated text without attribution in their writing [33], will potentially influence, by example, the entire community. However, we do not find “I” always useful. The job market shows the complexity of everyday decisions with ethical overtones. Should one work as an accountant for British American Tobacco? Refusing the job will not influence people to stop smoking; thus, by the Influence criterion, taking the job is not unethical. However, some point out that even working as an accountant for a tobacco company might influence a young family member, say a younger sister. Thus “I” could also say that it is not right to take a tobacco job.
- C, Core Values
One can also use the tobacco example to reflect on the CORE values idea. If “human life” is the most central core value, then the ethical choice would be to refuse any job in the tobacco industry. Also, Benjamin [28] argues that without intentional redesign, AI will replicate historical inequities through what she terms the ‘New Jim Code’. This violates the Core Value (C) of equity and echoes McCoy’s [34] parable (described below), where systems prioritize efficiency over compassion.
- E, Emergency
Emergency, the seventh ethical decision criterion in the model, basically ignores the morality of an act. Instead, E asks whether the situation requires speedy action. In such cases, normal deliberations may not be appropriate; sometimes a decision has to be made in minutes, for example, when a life is at stake. It is acceptable to ignore a red traffic light when taking someone to a hospital emergency room.
- Fine-tuning the JUSTICE Model
As the above section (taken from the Lau et al. 2007 paper [11]) shows, there is not a single thematic answer. However, by looking at an issue using seven different lenses, we note that several of the approaches show varying degrees of promise. Core and Emergency are not always useful for making decisions with ethical ramifications involving AI, though one can imagine cases where they might apply. For example, to save a life most agree it is ethical to tell a lie; Kant once said that even with a murderer at the door one cannot lie [35]. We do not yet see a situation where AI endangers or saves lives. However, Benjamin [28] argues that AI systems may encode racial biases, thus violating Core Values (C). The four other approaches, J for Justice, U for Utilitarian, S for Spiritual Values (Golden Rule, etc.), and T for TV rule (sunshine test, etc.), are now discussed briefly.
- J, Justice
For an action or inaction to satisfy the Justice test, one should ask whether a policy or action harms one sector and/or benefits another. To be “just”, the same rules should apply to all evenly and fairly. In the USA there has been considerable discussion of whether and to what extent law enforcement is applied evenly to both white and black ethnic groups. The death of George Floyd, a black man, at the hands of a white police officer was followed by protests across the United States and around the world [36], illustrating possible violations of the Justice idea. Noble [27] demonstrates how AI-driven search engines perpetuate racial stereotypes, underscoring the need for Justice (J) in AI design—a failure to ‘apply rules evenly’ [11] (p. 5). To better understand how this J, Justice, test might help us see ethical issues of AI, we can look at the work of academics. Collectively, four scholars—Broussard [19], Noble [27], O’Neil [29], and Benjamin [28]—reframe AI ethics as a question of power, not just principles. Their work suggests that the JUSTICE framework asks not just whether AI is fair, but who gets to define fairness in algorithmic systems. The principle of Justice requires equitable treatment for all stakeholders [25], yet as Noble [27] empirically demonstrates, AI systems routinely fail this test: her analysis of search algorithms perpetuating racial stereotypes provides concrete evidence of what Benjamin [28] later theorized as the ‘New Jim Code’—systemic racism rebranded as technical neutrality. These findings demand that users apply Lau et al.’s [11] JUSTICE criterion carefully when evaluating AI tools.
- U, Utilitarian
Many ideas emerge when viewing the Utilitarian perspective. As economists use the term, it often seems almost quantifiable: the greatest good for the greatest number. However, we prefer the commonly used idea “does the good outweigh the bad?” Academics and business practitioners often label this thought process “cost-benefit analysis” [37].
Ethics classes in business seminars and in universities often use the “trolley dilemma” to help individuals appreciate that the simple use of numbers does not always assist decision makers. In that hypothetical scenario, you, a bystander, see a streetcar on a path that will clearly lead to the deaths of five persons. You also observe that pushing one innocent obese person (who happens to be there) into the path of the trolley will without doubt lead to the death of that one person but will save the other five. In “Killing, Letting Die, and the Trolley Problem”, Judith Jarvis Thomson [38] explains why most people would not kill one person to save five. This case and variations of it were argued long before AI emerged. The trolley problem, as it came to be known, shows that U, Utilitarian thought, does not always help decision makers. Until Artificial Intelligence incorporates emotion and feeling, AI would fail to solve the trolley problem: a machine, responding as a mechanical or electrical device performing tasks in automated form, would calculate that five lives over one life is the best outcome. As this scenario shows, many situations cannot easily be reduced to quantifiable dimensions.
Another ethics-teaching case, Jim and the Jungle [39,40], discusses a scenario in which, if an individual kills one prisoner, the other prisoners, who have also been sentenced to death, will be freed. The discussion of Jim and the Jungle again reveals a flaw in utilitarian thinking: none of our seminar participants would kill one to save many, even though this would clearly be the greatest good for the greatest number. What would Artificial Intelligence dictate here? The mathematical answer, kill one and save many, does not work.
Utilitarianism also fails to solve the Parable of the Sadhu as told by Buzz McCoy [34]. In this true situation, a group of mountain climbers is forced to decide what to do when a lost religious pilgrim, a near-death Sadhu, is handed over to the unprepared climbers. Helping the Sadhu down the mountain to safety is the obvious ethical choice, but saving this one individual would mean ending the trip of a lifetime for Buzz McCoy. Students try to balance saving one life against ruining this dream for many others. Utilitarian ideas fail us, and they failed Buzz McCoy. The McCoy group rendered some aid but never did find out whether the Sadhu survived. Again, this was long before AI emerged, but it is an example of the kind of dilemma raised in ethics discussions. We did not explore what solution ChatGPT might suggest for Buzz McCoy, but we may imagine some have asked exactly that question.
Mini cases can be powerful. They can trigger thinking on ethical issues in general and clearly show that utilitarianism can fail as an ethics decision-making tool in many cases. If a person uses AI to frame an essay on a topic, the user might say the use of AI harms no one and makes the essay better. On balance, it seems applying utilitarian thought with or without AI is of doubtful value.
For our main question, whether the Utilitarian approach helps one judge the ethicality of uses of AI, the answer appears to be no. While such cases help individuals see issues through utilitarian approaches, they offer little help in judging uses of Artificial Intelligence.
The Sadhu case also helps make clear what J (for Justice) might entail. The pilgrim was to blame for his own life-threatening predicament. This fact might have led to McCoy deciding to render some minimal aid, but only if rendering aid did not ruin the expedition. In the video of this case, after the Sadhu had been left to an unknown fate, a fellow mountain climber asked McCoy, “what would you have done if that were a Western woman?” Clearly the actions taken, or not taken, would not pass the J for Justice test. Every person should be treated equally; inequality might be permissible only if it benefits all, and especially those least advantaged.
As noted above, the JUSTICE model provides no single thematic answer. However, looking at our issue through seven different lenses, most of the seven starting points show some promise. C (Core) and E (Emergency) do not seem directly relevant in most cases, although it is possible to imagine situations where these lenses might apply; most people would agree that, to save a life, it can be ethical to tell a lie. For our present analysis, we cannot imagine an emergency where using Artificial Intelligence would endanger or save a life. C (Core) and E (Emergency) thus do not appear particularly significant in identifying ethical issues of Artificial Intelligence.
Copying the work of others, when used with appropriate credit, is ethical and permissible in a research context. Copying material without giving credit is plagiarism. Certainly AI, with sources such as ChatGPT readily available, makes plagiarism easier and more efficient. In a study looking at plagiarism, written in the pre-Artificial Intelligence era, Nelms says “plagiarism does not bother me at all” [41]. Individuals may say, “if I can help my friend by pointing out good things to quote, what’s the harm? Who is hurt?” But this example suggests that in such cases, AI makes unethical behavior easier.
Our starting question was how each of these distinct sets of ethical principles would help assess ways Artificial Intelligence might be unethical. Much depends on what and where. What body of ethics theory might best help us answer our question? Which of the seven lenses in J U S T I C E seem most useful? Probably not C nor E, and probably not I. J, U, and S are each useful in deciding questions of ethics in general but do not seem especially useful for solving or preventing the ethical issues raised by Artificial Intelligence. What Lau and her colleagues [11] called the TV Rule, Transparency, seems most promising for addressing past, present, and future criticisms of Artificial Intelligence.
The TV Rule has been described in many ways using many terms. If you can honestly tell the world what you are doing, it passes this “sunshine test.” Barends and Rousseau [42] (p. 312) ask “does this pass the mother-in-law test?” Can you explain, and justify, your decisions to your mother-in-law? Trevino and Weaver [43] use the term “smell test.” Hamilton, Knouse, and Hill [44] show how the smell test can help identify cross-cultural ethical problems. Although these citations predate Artificial Intelligence, they illustrate that transparency, the TV Test, can be used in many areas. Such tests can be applied to issues of digitization in general and AI in particular. Consider this list of areas where unethical uses of Artificial Intelligence might occur (taken from Kelly [45] with minor modifications):
- Plagiarism: The use of the work of another person without giving appropriate credit for its use. This may involve using ideas or information created by others without acknowledgement, or with insufficient or improper citation; examples of plagiarism include copying sections of text without quotation marks, submitting text purchased from a ghostwriter, or reusing work already submitted for earlier or other assignments.
- Cheating: Acting dishonestly to create or gain an advantage; in the academic sphere it includes breaking rules during or in relation to examinations, such as giving or accepting assistance, copying from another student’s work, or unauthorized access to electronic devices.
- Fabrication: An effort to invent or produce something lacking sincerity, such as production of a fake document, or altering or forging a document.
- Sabotage: Deliberate act to hinder or prevent any act of another, such as the theft or suppression of written information, laboratory or field experiments, computer files and so forth.
- Collusion: An unauthorized collaboration or cooperation with others which confers an unfair advantage for some, which may include other forms of violation mentioned above.
- Disregard of research/professional ethics: Knowingly breaching professional or ethical rules and standards governing principles of best practice (end of section taken from Kelly).
Full transparency, the TV Rule, would help eliminate or alleviate each of the potential problems listed above. Full disclosure may be more difficult than it appears, but it stands out as the best of the seven lenses in the J U S T I C E framework for answering whether a particular application of Artificial Intelligence is or is not ethical.
6. Significance of Artificial Intelligence (AI) for Humanity
All this must be seen in the light of the larger discussion. What is the significance of digital intelligence and artificial intelligence to humanity? In widely publicized comments, South African-born American entrepreneur Elon Musk has often expressed fears that Artificial Intelligence might make “work” obsolete. Artificial Intelligence systems might replace humans, making our species irrelevant, echoing frightening comments made earlier by Stephen Hawking: “full artificial intelligence could spell the end of the human race…” [46]. One statement signed by various experts made the doom prediction clear: “mitigating the risk of extinction from AI should be a global priority” (quoted in [47]). Those warnings sound quite stark and also somewhat dark [48]. That this anti-AI ‘extinction of humanity’ idea could be promulgated by Musk is surprising, as he was a cofounder of OpenAI, the (originally) not-for-profit firm that created ChatGPT. Musk left that firm after a dispute. In 2023 Musk (along with more than 30,000 cosigners) asked for a six-month moratorium on further developments in AI [49]. Even before that proposed six-month moratorium had concluded, in July 2023 Musk established a new firm, xAI, presumably to compete with OpenAI and the many other competitors in the AI space [50]. New ways to put artificial intelligence to work seem to appear weekly. While AI promises transformative benefits, scholars like Broussard [19] caution against ‘technochauvinism’—the assumption that technological solutions are inherently superior to human judgment, particularly in ethically fraught domains like education. Some of the more prominent early Artificial Intelligence platforms, chatbots, and systems were described by Rudolph and colleagues in a paper with extensive references [51]. Within a few years, some of those on the Rudolph list had been replaced by new chatbots and some had disappeared. A recent quick query to one chatbot, Copilot, identified and described current AI tools, chatbots, as of late 2025:
Mainstream and Actively Used:
ChatGPT—Versatile, widely adopted;
Claude—Thoughtful, long-context;
Copilot—Integrated with Microsoft tools;
Google Gemini—Strong in real-time and mobile;
Perplexity—Research-focused, citation-rich;
Meta AI—Embedded in social apps (WhatsApp, Instagram);
Grok—Elon Musk’s chatbot, edgy and viral;
Duck.ai—Privacy-first, anonymous;
Mistral—Open-source, fast, developer-friendly;
OpenChat—Lightweight, open-source alternative.
Less well known, niche:
ChatSonic—Creative writing and voice features;
Jasperchat—Marketing and copywriting;
Geniechat—Smaller footprint, niche use;
DeepSeek—Fast and affordable, good for devs;
Pi.ai—Emotionally intelligent, life coaching.
This 2025 list, and the 2023 table in Rudolph et al. [52], help illustrate the wide variety of AI tools usable in the mid-2020s, but also remind us that things can change fast. A similar list a few years from now might show new names, and some of the present names will have disappeared. When looking for quick information, a person might once have said “Google it.” Now we are likely to consult one of the AI tools, and if we do Google it, that may lead us to a chatbot on the list above, Google Gemini [52]. Developments in this domain are moving at a great pace.
A person born before 1980 might remember Netscape, the web browser that ruled the world before Google came along [53]. In a few short years, Netscape went from zero to “the world’s most popular computer application” [54] (p. 8). Those born after 1990 might not even recognize the word Netscape; Netscape went back to zero [55]. Things change fast. There are new developments in Artificial Intelligence weekly if not daily. At present we can use the letters AI and expect everyone to know those letters stand for artificial intelligence. Two decades from now, will “AI” mean anything? Is this a passing fad? In 2023 Bloomberg TV headlined news from Intel this way: “Intel steps up bid to join AI gold rush: Intel unveils server, PC chips in bid to join AI craze” [56]. Many exciting developments are coming from China and the rest of the world, not only the USA. DeepSeek created a buzz of excitement when it was released, and Alibaba’s AI can also search and create images [57]. Will terms such as ChatGPT and AI fade into insignificance, rarely used? Is what is happening a “gold rush” or just a passing “craze”? Often things get attention, and that attention generates more attention, at least for a time. As Mintzberg notes, sometimes new ideas are “greeted with great enthusiasm… then a few years later… quietly ushered out the back door” [58] (p. 53). Those who know the name Charlie Munger may remember him as the quiet guy Warren Buffett always trusted for common-sense ideas. On this topic Munger said, “I am personally skeptical of some of the hype that has gone into artificial intelligence. I think old-fashioned intelligence works pretty well” (Munger quoted in [59]).
The six-month moratorium called for by Musk and the others (above) did not stop anything, and it appears that we are nowhere near an end. This has multiple important ramifications: (1) if AI is on a path to destroy humanity, a trend towards the “singularity” almost as in fictional Hollywood movies such as The Terminator, we had all better stay alert; (2) a bit easier to handle, AI is on a path that will impact business and every other realm of human activity.
7. Will AI Bring About the End of the World?
Various fears are voiced, as in the pithy quotes from celebrities such as Elon Musk, often repeated on social media. But academics voice concerns as well. Patulny, Lazarevic, and Smith [60] explore what will happen when emotion is further digitized and analyzed: “‘Once more, with feeling,’ said the robot: AI, the end of work…” Both academics and journalists see potential dangers [61,62,63]. Gloom and doom make good reading but do not necessarily make sense. One example is the predicted loss of jobs. Melissa Valentine says “predictions of job loss in the ’90s haven’t played out the way the more cataclysmic predictions foretold” [64]. Predictions about AI ending humanity appear to us far-fetched, both unclear and unlikely. We as citizens have a responsibility to ask the world not to fear the future; rather, our job should be to excite the next generation about the possibilities. Specifically, calls to ban ChatGPT in academia are neither necessary nor helpful.
However, new developments often encounter fear and even resistance which in retrospect seem unwarranted. Edison said that alternating current could bring unnecessary deaths, but his statements were attempts to win a commercial battle; Edison used direct current, while competitors such as Westinghouse used alternating current [65]. Throughout the 20th and 21st centuries, the world has used alternating current, not Edison’s preferred direct current. Edison also said that books would become obsolete in schools, replaced by motion pictures [66]. Visual media have certainly had an impact in education, but we still have books; indeed, with the advent of e-books, humans have access to the written word anywhere and everywhere. Predictions about new technologies have often been wrong in the past and are likely to be at least partly wrong in the future.
The advent of the horseless carriage brought numerous reactions. To impede the growth of this invention, laws were passed that in retrospect seem foolish or lacking in common sense. In some localities motor cars were required to “lumber along with a man walking in the front of them carrying a red flag to warn other traffic, so that it was impossible for the driver to exceed the flag man’s walking pace, namely 4 miles per hour” [67] (p. 8). Not only did governments fear this new invention, the automobile, but academics also voiced alarm. Given the tragic numbers of people killed in automobile accidents and the numerous communities divided or even destroyed by massive superhighways, critics may have had a point. Few would state these fears as bluntly as E. J. Mishan: “I once wrote that the invention of the automobile was one of the greatest disasters to have befallen mankind. I have had time since to reflect on this statement and to revise my judgment to the effect that the automobile is THE greatest disaster to have befallen mankind” [68] (p. 41).
Only those born before about 1985 would remember the outpouring of warnings that disaster would befall the planet when the year 2000, Y2K, arrived. This Y2K event would supposedly bring an endless list of life-threatening problems. Elevators in high-rise buildings might stop working, cash registers (which still existed in December 1999) would stop, and ATMs would not function as of one minute after midnight on that last day of the 20th century. Some feared that air traffic control systems, having used 2-digit year codes for more than half a century, might not function properly, as 00 would be read as 1900, not as 2000. Some even raised fears of missiles launching. The hysterical and nonsensical fears seem impossible to imagine now, only a couple of decades later [69]. Sometimes hysterical warnings of impending disaster catch the attention of the public, partly thanks to publishers who see impending doom as a journalistic moneymaker. We should listen to critics of Artificial Intelligence, even those who are likely wrong about the singularity, but it would be wrong to let fear of the future prevent us from harnessing and guiding Artificial Intelligence for the betterment of humanity.
Another huge technological advance that brought massive changes to humanity was aviation. The advent of commercial aviation certainly changed the world; we might guess that every person who reads this has taken a flight. But some of the warnings, as well as some of the promises, were unrealistic. Popular articles during the mid-20th century envisioned “everybody” flying to work in their own helicopters. The key point to remember here is that commercial aviation is, to most of the earth’s 8 billion people, irrelevant. Statistics are always subject to error, but some sources say fewer than half of all humans have ever boarded a plane [70]; others say 6 billion of the world’s 8 billion have never been in an airplane [71]. To much more than half of the world, airplanes are irrelevant. One should carefully read and listen to those who say Artificial Intelligence means catastrophe. But to much of the world, is this talk relevant? Instead of taking drastic anti-AI action, perhaps we should learn, adapt, and take advantage of this important new set of capabilities [72].
8. Artificial Intelligence and the Ethics of Human Extinction
People we talk to, even those who agree that “mitigating the risk of extinction from Artificial Intelligence should be a global priority,” have difficulty describing exactly how this set of tools we call Artificial Intelligence will bring even one death, let alone the extinction of humanity. But this lack of clarity need not prevent us from considering these issues. All too frequently we read of a mass shooting somewhere, and the JUSTICE model can be used to show that murder is not ethical. Killing a person, random or not, fails the “J: Justice” test. Killing one person might pass that test if such a death might prevent the deaths of a larger number of persons; the ‘trolley problem’ described above could be an example. But the extinction of the human race, all 8 billion of us, would fail this test. It would certainly also fail from the U Utilitarian perspective. Bringing the human race to extinction would certainly fail to satisfy the admonition to ‘do unto others as you would have them do unto you’, thus failing the S Spiritual test too. It is hard to imagine a person, or in this case a machine, going on TV and announcing “the end of the world is near”, although some religious leaders have made such pronouncements, so far inaccurately. I or Influence seems tricky to apply: until such time as Artificial Intelligence can be said to have feeling, sentience, it is hard to envision Artificial Intelligence itself having influence. But again, by envisioning Artificial Intelligence as a set of tools, one might imagine an Artificial Intelligence-generated image influencing an election; the use of such a tool to mislead would indeed fail the I Influence test. Bringing all human life to an end would violate C Core values, so Artificial Intelligence would truly fail this criterion. Applying E Emergency to the prospect of AI-driven extinction surely leads to the conclusion that this would be an Emergency. Applying the JUSTICE model to the warning that Artificial Intelligence brings a risk of extinction thus suggests that hastening human extinction would be unethical by any and all of the seven JUSTICE components. The catch is that this violation of ethics would apply only if Artificial Intelligence indeed made human extinction more likely. To date, we do not see evidence that Artificial Intelligence will hasten human deaths. The JUSTICE model, unfortunately, does not help us here.
9. Artificial Intelligence and Humanity
The world is changing, and Artificial Intelligence is one of the factors driving change. These changes remind us that only 50 years ago, we wrote on typewriters (using a liquid paint called white-out to correct errors) and located relevant material by going to a physical library. Today scholars use Google Scholar and Microsoft Word, and no one considers this unethical. Artificial Intelligence is bringing change. As the printing press and moveable type changed the world, so will AI. Similar changes will come to innumerable aspects of life, industry, commerce, and government. But how much change will occur, and when? If we are already in AI 2.0, why do we not see the dramatic change we were warned about? The popular news magazine The Economist put it this way: “Beyond America’s west coast, there is little sign AI is having much of an effect on anything” [73] (p. 57). That journalist’s opinion is clearly just an opinion, similar to the opinions in this essay. A study involving 2525 knowledgeable AI decision makers suggests that gaining benefits from AI in real organizations presents real challenges, and “we need to learn much more… [if we are to] deliver on these [AI] promises.” One of those challenges is “responding to the increasing demand for trustworthy and ethical AI” [74,75] (p. 9). A more nuanced perspective might balance information about areas of little AI impact with areas of more impact. A paper published in Sloan Management Review reported that “seven out of 10 companies surveyed [in one particular survey] report minimal or no impact from AI so far… 40% of organizations making significant investments in AI do not report business gains from AI.” If 40% do not report gains, that suggests that 60% did report gains [76].
In addition to the significant organizational and legal attention, there are important industry-wide efforts underway to help build a trustworthy World Wide Web. A loose coalition of more than 5000 organizations, small and huge, including Adobe, Alphabet (Google), and Microsoft, has signed on to steps to improve digital content transparency. This Coalition for Content Provenance and Authenticity, or C2PA, is probably a good thing, but it needs to be evaluated, with examples showing successes and failures. It is certainly worth studying: on the day of this writing, we saw a YouTube clip with the words WARNING CONTENT ALTERED. That was reassuring. It would have been more reassuring if the next YouTube clip we watched had carried a similar warning, but it did not. It was a heartwarming story about a wild animal entrusting her injured cub to humans. Heartwarming, but the clip was a 100% Artificial Intelligence creation, without a word of warning: zero authenticity, zero warning. YouTube is owned by Google, a C2PA signatory.
Will Artificial Intelligence bring change? Yes. Will Artificial Intelligence take over the world and end humanity? That cannot be known now, but it does not cause us to panic. Will the humans on our earth adapt and change in light of the possibilities and problems presented by Artificial Intelligence? Will the world change? Yes and yes. But as Hayes [77] explains, change is never easy. Often new elements, new ideas, and new technology are met with apprehension if not fear, and there may be tangible resistance to change. As Cadez points out, whenever confronted with requirements for change, we should expect “perseverance of old values and norms” [78] (p. 6891). The foretellers of doom hastened by digital intelligence and AI may be exhibiting resistance to change and the perseverance of old norms; those who see AI as hastening the end of humanity are unnecessarily pessimistic. The world adapted to the automobile, and academics will find ways to accommodate new possibilities in Artificial Intelligence. Change is happening, even at the micro level. In one university course taught by one of us, to keep up with the times and to maximize the potential of the internet, exams which had previously been in-class were given as take-home, open-book exams. With or without Artificial Intelligence, even pre-ChatGPT, there had been instances of plagiarism in exam answers. Then, as we navigated a world where Artificial Intelligence is ubiquitous, a qualitative shift became discernible: some answers were great, almost too great given what we knew about the persons turning in the work. A few students used AI-enhanced answers. The new technology caused unintended collateral damage, and we abandoned the take-home exams. The next semester, exams were written in the classroom, using the almost forgotten exam “blue books.” The adaptation to the new reality was enabled by massive quantities of very old, unused blue books which luckily had not been destroyed.
Returning to our starting question, we ask again: Can the JUSTICE framework tell us if Artificial Intelligence is ethical? When? How? The typical academic response applies: yes and no. Looking at many applications of Artificial Intelligence through each of the seven JUSTICE lenses, one at a time, can indeed be illuminating. Different circumstances and different situations suggest using different evaluation criteria.
Indeed, if somehow Artificial Intelligence did result in human extinction, that would be assessed as unethical by each and every one of the seven JUSTICE tests. But does Artificial Intelligence mean the end of humanity? No. An important new technology requiring changes in norms and practices? Yes. What changes? We do not yet know. As Ethan Mollick says, “the world has changed in fundamental ways, and … nobody can really tell you what the world will be like” [78] (p. xii).
Author Contributions
C.L. was first author on the original paper on this framework; A.K. and M.L. helped bring the 2007 paper into the 2020s; and J.H. provided the wording of the present draft. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No data sets were used and no data sets were created during the preparation of this manuscript.
Acknowledgments
A previous version of this paper was presented at the 11th International Symposium on Global Business, Nanjing University, 2024. We also thank Kate Hulpke for editorial help and two anonymous referees for extensive comments.
Conflicts of Interest
The authors declare no competing interests.
References
- OECD. Press Release: OECD Updates AI Principles. 2024. Available online: https://www.oecd.org/en/about/news/press-releases/2024/05/oecd-updates-ai-principles-to-stay-abreast-of-rapid-technological-developments.html (accessed on 9 September 2025).
- UNESCO. AI Openness: A Primer for Policymakers; OECD Artificial Intelligence Papers, No. 44; OECD Publishing: Paris, France, 2022. [Google Scholar] [CrossRef]
- Bossman, J. Top 9 Ethical Issues in Artificial Intelligence. In Emerging Technologies; World Economic Forum: Geneva, Switzerland, 2016; Available online: https://www.weforum.org/stories/2016/10/top-10-ethical-issues-in-artificial-intelligence/ (accessed on 8 September 2025).
- Yang, A.Z. A Comparative Analysis of AI Governance Frameworks. Wash. J. Law Technol. Arts. Available online: https://wjlta.com/2024/07/09/a-comparative-analysis-of-ai-governance-frameworks/ (accessed on 8 September 2025).
- Stone, C.D. Where the Law Ends: The Social Control of Corporate Behavior; Harper and Row: New York, NY, USA, 1975. [Google Scholar]
- Morley, J.; Hine, E.; Roberts, H.; Sirbu, R.; Ashrafian, H.; Blease, C.; Boyd, M.; Chen, J.L.; Filho, A.C.; Coiera, E.; et al. Global Health in the Age of AI: Charting a Course for Ethical Implementation and Societal Benefit. Minds Mach. 2025, 35, 31. [Google Scholar] [CrossRef]
- Floridi, L.; Taddeo, M. Moral vs Legal Norms: Soft and Hard Ethics. In A Companion to Digital Ethics; Wiley: Hoboken, NJ, USA, 2025; pp. 11–23. [Google Scholar]
8. Koo, M. ChatGPT Research: A Bibliometric Analysis Based on the Web of Science from 2023 to June 2024. Knowledge 2025, 5, 4.
9. Giarmoleo, F.V.; Ferrero, I.; Rocchi, M.; Pellegrini, M.M. What Ethics Can Say on Artificial Intelligence: Insights from a Systematic Literature Review. Bus. Soc. Rev. 2024, 129, 258–292.
10. Tani, M.; Muto, V.; Basile, G.; Nevi, G. A bibliometric analysis to study the evolution of artificial intelligence in business ethics. Bus. Ethics Environ. Responsib. 2025, 1–23.
11. Lau, C.; Hulpke, J.F.; To, M.; Kelly, A. Can ethical decision making be taught? The JUSTICE approach. Soc. Responsib. J. 2007, 3, 3–10.
12. Alavi, M. Ethics in AI: A Perspective from Business and Technology. J. Bus. Ethics 2019, 160, 949–952.
13. Amabile, T.M. Creativity, artificial intelligence, and a world of surprises. Acad. Manag. Discov. 2020, 6, 351–354.
14. Coglin, C. Book Reviews: The Ethics of Artificial Intelligence: Reviews of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI by Reid Blackman; Ethics of Artificial Intelligence: Case Studies and Options for Addressing Ethical Challenges by Bernd C. Stahl, Doris Schroeder, and Rowena Rodrigues; AI Ethics by Mark Coeckelbergh. Rev. J. Bus. Ethics 2023, 188, 623–627.
15. Herane, M. AI in Education, Balancing Innovation with Ethics. High. Educ. Digest 2024, 6, 75–80.
16. Law, R.; Ye, H.; Lei, S.S.I. Ethical artificial intelligence (AI): Principles and practices. Int. J. Contemp. Hosp. Manag. 2025, 37, 279–295.
17. Savulescu, J.; Giubilini, A.; Vandersluis, R.; Mishra, A. Ethics of artificial intelligence in medicine. Singap. Med. J. 2024, 65, 150–158.
18. Jobin, A.; Ienca, M.; Vayena, E. The global landscape of AI ethics guidelines. Nat. Mach. Intell. 2019, 1, 389–399.
19. Broussard, M. Artificial Unintelligence: How Computers Misunderstand the World; MIT Press: Cambridge, MA, USA, 2018.
20. McGrath, B.; Kelly, A. Teaching Cases: Artificial Intelligence Complicates Things. In Proceedings of the 11th International Symposium on Global Business, London, UK, 4–5 November 2024; Nanjing University: Nanjing, China, 2024.
21. Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds Mach. 2020, 30, 99–120.
22. Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nat. Mach. Intell. 2019, 1, 501–507.
23. Tasioulas, J. First steps towards an ethics of robots and artificial intelligence. J. Pract. Ethics 2019, 7, 49–84.
24. Herron, T.L.; Gilbertson, D.L. Ethical principles vs. ethical rules: The moderating effect of moral development on audit independence judgments. Bus. Ethics Q. 2004, 14, 499–523.
25. Rawls, J. A Theory of Justice, Revised ed.; Belknap Press of Harvard University Press: Cambridge, MA, USA, 1999.
26. Jones, T.M.; Felps, W.; Bigley, G. Ethical theory and stakeholder-related decisions: The role of stakeholder culture. Acad. Manag. Rev. 2007, 32, 137–155.
27. Noble, S.U. Algorithms of Oppression: How Search Engines Reinforce Racism; New York University Press: New York, NY, USA, 2018.
28. Benjamin, R. Race After Technology: Abolitionist Tools for the New Jim Code; Polity Press: Cambridge, UK, 2019.
29. O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Crown: New York, NY, USA, 2016.
30. Bruton, S.V. Teaching the golden rule. J. Bus. Ethics 2004, 49, 179–187.
31. Mou, B. A reexamination of the structure and content of Confucius’ version of the Golden Rule. Philos. East West 2004, 54, 218–248.
32. Okoye, A. Theorising corporate social responsibility as an essentially contested concept: Is a definition necessary? J. Bus. Ethics 2009, 89, 613–627.
33. Else, H. Abstracts written by ChatGPT fool scientists. Nature 2023, 613, 423.
34. McCoy, B.H. The parable of the Sadhu (1983). Harv. Bus. Rev. 1997, 75, 54–56.
35. Varden, H. Kant and Lying to the Murderer at the Door… One More Time: Kant’s Legal Philosophy and Lies to Murderers and Nazis. J. Soc. Philos. 2010, 41, 403–421.
36. Bastien, F. Policing and racial justice: Global perspectives after George Floyd. J. Soc. Equity 2020, 15, 45–67.
37. Hermann, E. Leveraging artificial intelligence in marketing for social good—An ethical perspective. J. Bus. Ethics 2022, 179, 43–61.
38. Thomson, J.J. Killing, letting die, and the trolley problem. Monist 1976, 59, 204–217.
39. Mason, E. Coercion and integrity. Oxf. Stud. Norm. Ethics 2012, 2, 180–205.
40. Williams, B. Consequentialism and Integrity. In Consequentialism and Its Critics; Scheffler, S., Ed.; Oxford University Press: Oxford, UK, 1988; pp. 20–50.
41. Nelms, G. Why Plagiarism Doesn’t Bother Me at All: A Research-Based Overview of Plagiarism as Educational Opportunity. Teaching & Learning in Higher Ed. 2015. Available online: https://teachingandlearninginhighered.org/2015/07/20/plagiarism-doesnt-bother-me-at-all-research (accessed on 8 September 2025).
42. Barends, E.; Rousseau, D. Evidence-Based Management: How to Make Better Organizational Decisions; Kogan Page Limited: New York, NY, USA, 2018.
43. Treviño, L.; Weaver, G.R. Ethical issues in competitive intelligence practice: Consensus, conflicts, and challenges. Compet. Intell. Rev. 1997, 8, 61–72.
44. Hamilton, J.B.; Knouse, S.B.; Hill, V. Google in China: A manager-friendly heuristic model for resolving cross-cultural ethical conflicts. J. Bus. Ethics 2009, 86, 143–157 (specifically mentions and defines the smell test).
45. Kelly, A. External Quality Assurance and Advisory Panel Report on Policy on Academic Integrity; Working Paper; University College Dublin: Dublin, Ireland, 2023.
46. BBC. Stephen Hawking Warns Artificial Intelligence Could End Mankind. BBC News, 2 December 2014. Available online: https://www.bbc.com/news/technology-30290540 (accessed on 8 September 2025).
47. Sovak, L. AI and Existential Risk: Scientists’ Warnings; MIT Press: Cambridge, MA, USA, 2022.
48. Lee, K.-F. Foreword for the Paperback Edition—The Age of AI 2.0 Has Begun. In AI 2041: Ten Visions for Our Future; Lee, K.-F., Chen, Q., Eds.; W. H. Allen, Division of Penguin Random House: London, UK, 2024.
49. Chalmers, D. The singularity: A reply. J. Conscious. Stud. 2012, 19, 141–167.
50. Seetharaman, D. Elon Musk, Other AI Experts Call for Pause in Technology’s Development. Wall Str. J. 2023. Available online: https://www.wsj.com/articles/elon-musk-other-ai-bigwigs-call-for-pause-in-technologys-development-56327f?mod=hp_lead_pos6 (accessed on 8 September 2025).
51. Knight, W. Six Months Ago Elon Musk Called for a Pause on AI. Instead Development Sped Up. Wired, September 2023. Available online: https://www.wired.com/story/fast-forward-elon-musk-letter-pause-ai-development/ (accessed on 8 September 2025).
52. Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 2023, 6, 342–363.
53. Pierce, D. Here’s Why AI Search Engines Really Can’t Kill Google. The Verge, 26 March 2024. Available online: https://www.theverge.com/24111326/ai-search-perplexity-copilot-you-google-review (accessed on 8 September 2025).
54. Rietveld, J.; Eggers, J.; Nandakumar, M. The rise and fall of Netscape: Lessons for platform dominance. Strateg. Manag. J. 2021, 42, 1100–1125.
55. Yoffie, D.B.; Cusumano, M.A. Building a Company on Internet Time: Lessons from Netscape. Calif. Manag. Rev. 1999, 41, 8–28.
56. Lashinsky, A. How Netscape Lost Its Way. Fortune, 14 July 2005.
57. Turner, A. Intel Steps Up Bid to Join AI Gold Rush. Bloomberg News, 5 December 2023.
58. Khappal, R. China’s Alibaba challenges Google with AI-powered image search. TechAsia, 15 June 2023. Available online: https://www.techasia.com/aiibaba-image-search (accessed on 8 September 2025).
59. Mintzberg, H. Planning on the left side and managing on the right. Harv. Bus. Rev. 1976, 54, 49–58.
60. Levin, M. Charlie Munger on AI: “Old-fashioned intelligence works pretty well”. Financial Times, 12 November 2023.
61. Patulny, R.L.; Lazarevic, N.; Smith, V. ‘Once more, with feeling,’ said the robot: AI, the end of work and the rise of emotional economies. Emot. Soc. 2020, 2, 79–97.
62. Cole, J.; Chandler, D. Edison vs. Westinghouse: The Current War; Academic Press: Cambridge, MA, USA, 2019.
63. Vial, G. Understanding digital transformation: A review and a research agenda. In Managing Digital Transformation; Routledge: Abingdon, UK, 2021; pp. 13–66.
64. Crompton, T. The horseless carriage and the red flag law. J. Transp. Hist. 1924, 5, 1–15.
65. Mishan, E.J. The Costs of Economic Growth; Staples Press: Framingham, MA, USA, 1971.
66. Manion, M.; Evan, W. The Y2K scare: A retrospective analysis. Technol. Soc. 2000, 22, 45–60.
67. Negroni, C. Why most of the world has never flown. The Atlantic, 28 September 2016.
68. Lazarevic, N.; Smith, T. “Once more, with feeling,” said the robot: AI and the end of work. AI Soc. 2020, 35, 1123–1135.
69. Valentine, M. Human-Centered AI: The Power of Putting People First. McKinsey & Company (Insights Podcast). 2023. Available online: https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/human-centered-ai-the-power-of-putting-people-first (accessed on 8 September 2025).
70. Suo, J.; Li, M.; Guo, J.; Sun, Y. Engineering Safety and Ethical Challenges in 2045 Artificial Intelligence Singularity. Sustainability 2024, 16, 10337.
71. Gurdus, L. Mad Money with Jim Cramer: Over 80% of the World Has Never Taken a Flight. CNBC, 2017. Available online: https://www.cnbc.com/2017/12/07/boeing-ceo-80-percent-of-people-never-flown-for-us-that-means-growth.html (accessed on 8 September 2025).
72. Berdejo-Espinola, V.; Amano, T. AI tools can improve equity in science (letter to the editor). Science 2023, 379, 991.
73. Anonymous. A sequence of zeroes—What happened to the artificial-intelligence revolution? Economist 2024, 67–68.
74. Ångström, R.C.; Björn, M.; Dahlander, L.; Mähring, M.; Wallin, M.W. Getting AI implementation right: Insights from a global survey. Calif. Manag. Rev. 2023, 66, 5–22.
75. Ransbotham, S.; Khodabandeh, S.; Fehling, R.; LaFountain, B.; Kiron, D. Winning with AI. MIT Sloan Management Review, 2019. Available online: https://sloanreview.mit.edu/projects/winning-with-%20ai/?utm_medium=pr&utm_source=release&utm_campaign=airpt2019 (accessed on 8 September 2025).
76. Hayes, J. The Theory and Practice of Change Management; Palgrave Macmillan: New York, NY, USA, 2002.
77. Cadez, S. Social change, institutional pressures and knowledge creation: A bibliometric analysis. Expert Syst. Appl. 2013, 40, 6885–6893.
78. Mollick, E. Co-Intelligence, Revised ed.; Portfolio/Penguin: New York, NY, USA, 2024.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).