1. Introduction
It is amazing how fast the world changes. While Artificial Intelligence (AI) has been discussed for decades, it was only on the last day of November 2022 that ChatGPT became available to everybody. At that moment, for many people, Artificial Intelligence started moving from a topic of academic discussion to something average citizens could experiment with, play with, and use. Big questions then started coming up in coffee shops and at dinner tables. Is Artificial Intelligence good or bad? Or both? Is it ethical? Focusing on one possible way to answer these questions, in this paper we ask the following main question: Can the JUSTICE framework tell us if Artificial Intelligence is ethical? When? How?
The JUSTICE model is intended to help assess the ethicality of any action. Artificial Intelligence has in many ways changed the world, for better and for worse. Can the JUSTICE framework, published in 2007, long before the explosive growth of Artificial Intelligence, still be useful in a world being changed by it?
Hopes of minimizing negative outcomes from Artificial Intelligence have triggered high-sounding pronouncements, regulations, and laws. Many of the ensuing guardrails state what organizations and corporations, for example, should or should not do. This essay focuses on a different level and a different actor: the individual, not the organization or society. When an individual faces the question of whether some action involving Artificial Intelligence is ethical, can that 2007 framework still help?
While there is considerable speculation on the vast benefits that may flow from Artificial Intelligence, such as new drug discoveries, cures for diseases, or the elimination of exceedingly repetitive and uninteresting organizational tasks, there are parallel concerns about the proliferation of disinformation and about workforce disruption as some jobs are replaced. There are even worries that Artificial Intelligence might become too smart and basically take over the world.
Much of the writing in this area focuses on identifying things to watch out for and suggesting guardrails to keep Artificial Intelligence from doing harm. For example, in what the OECD calls “the first intergovernmental standard on AI”, it calls for “AI that is innovative and trustworthy, and that upholds human rights and democratic values” [
1]. That set of standards, first issued in 2019 and revised in 2024, might indeed have been the first, but another multi-governmental agency, UNESCO, states that it “produced the first-ever global standard on AI ethics—the ‘Recommendation on the Ethics of Artificial Intelligence’ in November 2021” … “applicable to all 194 member states of UNESCO” [
2] (p. 1). While not a governmental organization, the World Economic Forum wields considerable influence, and in 2016, published the “Top 9 ethical issues in artificial intelligence”, predating both the OECD and UNESCO pronouncements [
3]. Numerous nations have issued regulations regarding AI and ethics, including the US, the EU, China, Russia, and Singapore [
4].
The collective approaches to keep Artificial Intelligence “ethical” are representative of the numerous governmental initiatives across the globe, each hoping to control and channel AI towards the betterment of humanity. As of 2023, according to one source, the area had seen “significant policy attention and action, evidenced by more than 1000 AI initiatives in over 70 countries and jurisdictions” [
1]. However, we are reminded that ethics is not the same as law [
5]. Laws are required and will help, but individuals will still have to make decisions. Compliance with law is helpful, but it covers only what is required and is “significantly insufficient” [
6] (p. 8), citing [
7]. When there is more to be done over and above what the law strictly requires, one should think of soft law, or ethics [
7]. Such “soft law” could lead to ‘good corporate citizenship’ [
6] (p. 8).
Most of the regulations or laws we have seen aim at an organizational or group level, not addressing the question we ask: Would this use of a specific Artificial Intelligence tool be ethical in this situation? We only seek to answer a small question: Can the JUSTICE framework help an individual face an Artificial Intelligence-related ethics question?
But we must also acknowledge that across society there are questions concerning not only the value of AI, but also huge global harmful effects which may ensue. What ethical questions might AI pose for humanity? Towards the end of this relatively short essay, we will also look at, but not answer, that “end of the world” question.
Would the use of a specific Artificial Intelligence tool be ethical in a specific set of circumstances? Our paper is not about regulations or laws. Those following the explosive growth of Artificial Intelligence will not be surprised that there have already been massive increases in the number of such pronouncements. Hopefully academics will summarize and evaluate that growth, but that is not the topic of this paper. We focus on a more micro level: Is a specific application of Artificial Intelligence ethical? It is that individual decision point that is our topic. How might an individual decide the ethical thing to do? Specifically, we ask whether the 2007 JUSTICE model can still help an individual decide.
Certainly the topics of ethics and Artificial Intelligence are receiving attention. A bibliometric analysis of some 3000 scholarly articles relating to Artificial Intelligence found that a good number of papers looked at ethical considerations. A study by Koo found that key words such as “ethics” and “integrity” were prominent in those papers, underscoring the “importance of addressing ethical challenges associated with AI technologies” [
8] (p. 1) (see also [
9,
10]). It seems that in any field one can think of, people are asking about and writing about the ethics of Artificial Intelligence (a list of representative scholarly articles is available from the authors).
But we return to our one question: How should one determine whether or not some action involving Artificial Intelligence is ethical? This leads us back to the JUSTICE model, which has been used in various contexts. Is that framework, published long before the current wave of Artificial Intelligence, still useful in a world being changed by it?
5. Assessing Ethics of Decisions Involving Artificial Intelligence Through the JUSTICE Framework
This essay is based on our professional experience, not on any research study. We used the decision-making shortcut called the JUSTICE framework by Lau and colleagues [
11]. Individuals can approach a question of ethics using the seven dimensions of this framework. That JUSTICE model, included here with minor modifications, suggests these seven different ways to look at an issue from an ethical perspective. The Lau et al. [
11] paper recommends that individuals select one or more of seven possible ethical decision-making criteria and apply them to a specific situation. If all say “ethical”, this is accepted as the answer. If all suggest “unethical”, then the action is not ethical. If results are mixed, additional consideration is needed. The model can be used for any issue with ethical dimensions, such as polluting a river, helping someone cheat, sexual harassment, or using deepfake audio or video to fool a potential investor, voter, or lender.
The following summary of the model from the Lau et al. 2007 [
11] paper illustrates this. In the JUSTICE model each letter stands for one way to approach an issue with possible ethical ramifications (a brief illustrative sketch of the overall decision rule follows the list):
JUSTICE: Same rules apply to all.
UTILITARIAN: Greatest good for greatest number, good outweighs bad.
SPIRITUAL VALUES: Such as the Golden Rule—do to others as you would want others to do to you.
TV RULE: Can you honestly explain your decision on TV with your family watching?
INFLUENCE: Consider what influence (if any) this action might have.
CORE VALUES: Deepest human values, the things really important in life.
EMERGENCY: Urgency of the decision, any requirement for immediate action.
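To make the decision rule concrete, here is a minimal sketch in Python. The names Verdict, LENSES, and justice_check are our own illustrative labels, not part of the Lau et al. [11] paper; each verdict would come from human judgment, not from computation.
```python
from enum import Enum

class Verdict(Enum):
    ETHICAL = "ethical"
    UNETHICAL = "unethical"
    UNCLEAR = "unclear"  # the lens gives no firm answer for this case

# The seven JUSTICE lenses, mirroring the list above (labels are illustrative).
LENSES = ["Justice", "Utilitarian", "Spiritual", "TV rule",
          "Influence", "Core values", "Emergency"]

def justice_check(verdicts):
    """Apply the decision rule: all ethical -> ethical; all unethical -> unethical;
    anything mixed or unclear -> additional consideration is needed."""
    values = [verdicts.get(lens, Verdict.UNCLEAR) for lens in LENSES]
    if all(v is Verdict.ETHICAL for v in values):
        return "ethical"
    if all(v is Verdict.UNETHICAL for v in values):
        return "unethical"
    return "mixed: additional consideration needed"

# Example: using a deepfake video to fool a lender.
example = {lens: Verdict.UNETHICAL for lens in LENSES}
example["Emergency"] = Verdict.UNCLEAR  # E ignores morality and asks only about urgency
print(justice_check(example))  # -> "mixed: additional consideration needed"
```
The sketch simply encodes the “all agree, all disagree, otherwise deliberate” rule; the substance of each verdict remains a matter of human reflection.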
Users have little trouble remembering the seven criteria and are able to define each; the JUSTICE acronym helps. One advantage of the framework is that it is a usable tool. However, it does not fit neatly into the various schools of thought about ethics. The model does not cover all approaches or theories, nor does it dictate whether it is best to teach ethical principles or ethical rules [
24]. We observe that business seminar participants and students prefer tools that can be used in real life and are less interested in academic arguments as to which ethics approach is “best.” The JUSTICE model does not take a position on whether one should look at ethics from a consequentialist (teleological) perspective or from an absolutist–moralist (deontological) point of view. By avoiding uncommon words such as teleological or deontological, the framework is more usable by decision makers today.
For example, the first letter J introduces JUSTICE. A major idea of Justice is that every person should be treated equally [
25]. Inequality might be permissible if that results in benefits for all and especially for those least advantaged [
26]. Noble’s [
27] research on search algorithms demonstrates how AI disproportionately harms marginalized groups, violating Rawls’ [
25] principle of justice. Benjamin [
28] extends this critique with her ‘New Jim Code’ framework, showing how technical neutrality often masks systemic racism—an ethical failure Lau et al.’s [
11] JUSTICE criterion should address.
The UTILITARIAN approach is common in any discussion of ethical decision making. However, users of the framework may think of ethical decision making not as the “greatest good for the greatest number,” but instead as “cost-benefit analysis”. There is evidence to suggest that AI tools promising ‘greatest good’ sometimes perpetuate inequality. O’Neil’s [
29] ‘weapons of math destruction’ reveal how utilitarian AI tools—for example in credit scoring—hurt systematically disadvantaged populations. This challenges the assumption that AI benefits the majority [
11], while Broussard’s [
19] work on ‘artificial unintelligence’ explains why these failures persist.
The letter S suggests that what Westerners call the “Golden Rule” is the essence of the SPIRITUAL approach to ethical decision making. People in the West can grasp this idea easily [
30] and we also find wide acceptance in Asia. However, the idea attributed to Jesus “do unto others as you would have them do unto you” is not an exclusively Christian belief. Asians enjoy hearing that some 500 years before Christ, Confucius advised followers not to impose on others what they would not desire others to impose on them [
31]. Why is this ethical decision-making approach labeled S not G for Golden Rule? The term “Golden Rule” is closely associated with a Western idea; however, the concept is contained in many religious teachings around the world. A less “Western” term might be preferred. One additional reason the word Spiritual is used is that Spiritual begins with the letter S, and S fits into the JUSTICE acronym. Bruton [
30] builds a strong case, showing how ethical theory can address potential issues by looking first at the Golden Rule. A reader of Bruton’s work might conclude that if one wishes to start with a single ethical decision-making tool, this might be it. The lens of Spiritual Values is both robust and deep.
Another candidate for the most useful measure of ethics is T, the TV Rule, or Transparency. This will be discussed further below, as it shows great promise in determining the ethicality of issues around Artificial Intelligence.
The letter I suggests INFLUENCE and draws on the idea, expressed by Aristotle more than two thousand years ago, that greater responsibility accompanies greater influence [
32]. More recently, the Hollywood movie Spider-Man (2002) popularized the words “with great power comes great responsibility.” It is true that individuals who take ethical shortcuts, for example, using ChatGPT-generated text without attribution in their writing [
33], will potentially influence, by example, the entire community. However, we do not find “I” always useful. The job market shows the complexity of everyday decisions having ethical overtones. Should one work as an accountant for British American Tobacco? Refusing the job will not influence people to stop smoking. Thus, by the Influence criterion it is not unethical to take the job. However, some point out that even working as an accountant for a tobacco company might Influence a young family member, say a younger sister. Thus “I” could say that it is not right to take a tobacco job.
One can also use the tobacco example to reflect on the CORE values idea. If “human life” is the most central core value, then one might say the ethical choice would be to refuse any job in the tobacco industry. Also, Benjamin [
28] argues that without intentional redesign, AI will replicate historical inequities through what she terms the ‘New Jim Code’. This violates the Core Value (C) of equity and echoes McCoy’s [
34] parable (described below), where systems prioritize efficiency over compassion.
Emergency, the seventh ethical decision criterion in the model, basically ignores the morality of an act. Instead E asks whether the situation requires speedy action. In such cases, normal deliberations may not be appropriate. Sometimes a decision has to be made in minutes if, for example, a life is at stake. It is acceptable to run a red traffic light when taking someone to a hospital emergency room.
As the above section (taken from the Lau et al. 2007 paper) shows [
11], there is not a single thematic answer. However, by looking at an issue using seven different lenses, we note that several of the approaches show varying degrees of promise. For example, Core and Emergency are not always useful for making decisions with ethical ramifications involving AI, although one can imagine cases where they might apply. Most agree, for example, that it is ethical to tell a lie to save a life. Kant once said that even with a murderer at the door one cannot lie [
35]. We do not see a situation where AI endangers or saves lives. However, Benjamin [
28] argues that AI systems may encode racial biases, thus violating Core Values (C). The four other approaches, J for Justice, U for Utilitarian, S for Spiritual values (golden rule, etc.), and T for TV rule (sunshine test, etc.), are now discussed briefly.
For an action or inaction to satisfy the Justice test, one should ask whether this policy or action harms one sector and/or benefits another. To be “just”, the same rules should apply to all evenly and fairly. In the USA there has been considerable discussion as to whether and to what extent law enforcement is applied evenly to white and black ethnic groups. The death of George Floyd, a black man, at the hands of a white police officer was followed by protests across the United States and around the world [
36] and is an illustration of possible violations of the Justice idea. Noble [
27] demonstrates how AI-driven search engines perpetuate racial stereotypes, underscoring the need for Justice (J) in AI design—a failure to ‘apply rules evenly’ [
11] (p. 5). To better understand how this J (Justice) test might help us see issues of AI ethics, we can look at the work of academics. Collectively, four scholars—Broussard [
19], Noble [
27], O’Neil [
29], and Benjamin [
28]—reframe AI ethics as a question of power, not just principles. Their work suggests that the JUSTICE framework asks not just whether AI is fair, but who gets to define fairness in algorithmic systems. For example, the principle of Justice requires equitable treatment for all stakeholders [
25], yet as Noble [
27] empirically demonstrates, AI systems routinely fail this test. Her analysis of search algorithms perpetuating racial stereotypes provides concrete evidence of what Benjamin [
28] later theorized as the ‘New Jim Code’—systemic racism rebranded as technical neutrality. These findings demand that users apply Lau et al.’s [
11] JUSTICE criterion carefully when evaluating AI tools.
Many ideas emerge when viewing an issue through the Utilitarian lens. As economists use that term, it often seems almost quantifiable: the greatest good for the greatest number. However, we prefer the commonly used idea “does good outweigh bad?” Academics and business practitioners often label this thought process “cost-benefit analysis” [
37].
Ethics classes in business seminars and in universities often use the “trolley dilemma” to help individuals appreciate that the simple use of numbers does not always assist decision makers. In that hypothetical trolley scenario, you, a bystander, see a streetcar on a path that will clearly lead to the death of five persons. You also observe that pushing one innocent obese person (who happens to be there) into the path of the trolley will without doubt lead to the death of that one person but will save the other five. In Killing, Letting Die, and the Trolley Problem, Judith Jarvis Thomson [
38] explains why most people would not kill one person to save five. This case, and variations of it, had been argued long before AI emerged. The trolley problem, as it came to be known, shows that U, Utilitarian thought, does not always help decision makers. Until Artificial Intelligence incorporates emotion and feeling, AI would fail to solve the trolley problem. A machine, responding as an automated calculating device, would conclude that five lives over one life is the best outcome. As this scenario shows, many situations cannot easily be reduced to quantifiable dimensions.
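As a purely illustrative sketch of our own (not drawn from any cited source), a naive utilitarian calculation can be written in a few lines, which is precisely why it fails the trolley case: it reduces the decision to a count of lives and ignores the moral difference between killing and letting die.
```python
def naive_utilitarian_choice(options):
    """Pick the option that saves the most lives; ties go to the first option listed."""
    return max(options, key=options.get)

# Trolley case: do nothing (five die, the bystander lives) vs. push (one dies, five live).
trolley = {
    "do nothing": 1,          # lives saved: only the bystander
    "push the bystander": 5,  # lives saved: the five on the track
}
print(naive_utilitarian_choice(trolley))  # -> "push the bystander"
# Most people, and Thomson's analysis, reject this answer: the arithmetic
# captures outcomes but not the act of killing an innocent person.
```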
Another ethics-teaching case, Jim and the Jungle [
39,
40], discusses a scenario in which, if an individual kills one prisoner, others who have also been sentenced to death would be freed. The discussion of Jim and the Jungle again reveals a flaw in utilitarian thinking. None of our seminar participants would kill one to save many even though this would clearly be the greatest good for the greatest number. What would Artificial Intelligence dictate here? The mathematical answer—kill one, save many—does not work.
Utilitarianism also fails to solve the Parable of the Sadhu as told by Buzz McCoy [
34]. In this true situation, a group of mountain climbers, unprepared for such a situation, must decide what to do when a lost religious pilgrim, a near-death Sadhu, is handed over to them. To help the Sadhu down the mountain to safety is the obvious ethical choice. But saving this one individual would mean ending the trip of a lifetime for Buzz McCoy. Students try to balance saving one life against ruining this dream for many others. Utilitarian ideas fail us, and they failed Buzz McCoy. The McCoy group rendered some aid but never found out whether the Sadhu survived. Again, this was long before AI emerged, but it is an example of the kind of dilemma raised in ethics discussions. We did not explore what solution ChatGPT might suggest for Buzz McCoy, but we may imagine some have asked exactly that question.
Mini cases can be powerful. They can trigger thinking on ethical issues in general and clearly show that utilitarianism can fail as an ethics decision-making tool in many cases. If a person uses AI to frame an essay on a topic, the user might say the use of AI harms no one and makes the essay better. On balance, it seems applying utilitarian thought with or without AI is of doubtful value.
For our main question, whether the Utilitarian approach helps one judge the ethicality of uses of AI, the answer appears to be no. While cases help individuals see issues using utilitarian approaches, these cases do not benefit from the use of Artificial Intelligence.
The Sadhu case also helps make clear what J (for Justice) might entail. The pilgrim was to blame for his life-threatening predicament. This fact might have led McCoy to decide to render some minimal aid, but only if rendering aid did not ruin the expedition. In the video of this case, after the Sadhu had been left to an unknown fate, a fellow mountain climber asked McCoy, “what would you have done if that were a Western woman?” Clearly the actions taken, or the lack of action, would not pass the J for Justice test. Every person should be treated equally. Inequality might be permissible only if it benefits all, and especially those least advantaged.
The JUSTICE model provides no single thematic answer. By looking at our issue with seven different lenses, however, it appears that most of these seven starting points show some promise. C (Core) and E (Emergency) do not seem directly relevant in most cases, though it is possible to imagine situations where these lenses might apply. Most people would likely agree that it can be ethical to tell a lie to save a life. For our present analysis we cannot imagine any emergency where using Artificial Intelligence would endanger or save a life. C (Core) and E (Emergency) therefore do not appear particularly significant in identifying ethical issues of Artificial Intelligence.
Copying the work of many others, when used with appropriate credit, is ethical and permissible in a research context. Copying material without giving credit is plagiarism. Certainly AI, with sources such as ChatGPT readily available, makes plagiarism easier and more efficient. In a study looking at plagiarism, written in the pre-Artificial Intelligence era, Nelms says “plagiarism does not bother me at all” [
41]. Individuals may say, “if I can help my friend by pointing out good things to quote, what’s the harm? Who is hurt?” But this example suggests that in such cases AI makes unethical behavior easier.
Our starting question was how each of these distinct sets of ethical principles would help assess ways Artificial Intelligence might be unethical. Much depends on what and where. Which body of ethics theory might best help us answer our question? Which of the seven lenses in J U S T I C E seem most useful? Probably not C, E, or I. J, U, and S are each useful in deciding questions of ethics in general but do not seem especially useful for solving or preventing ethical issues raised by Artificial Intelligence. What Lau and her colleagues [
11] called the TV Rule, or Transparency, seems best suited to addressing past, present, and future ethical criticisms of Artificial Intelligence.
The TV Rule has been described many ways using many terms. If you can honestly tell the world what you are doing it passes this “sunshine test.” Barends and Rousseau [
42] (p. 312) ask “does this pass the mother-in-law test?” Can you explain and justify your decisions to your mother-in-law? Trevino and Weaver [
43] use the term “smell test.” Hamilton, Knouse, and Hill [
44] show how the smell test can help identify cross-cultural ethical problems. Although these citations are pre-Artificial Intelligence, they illustrate that transparency, the TV Test, can be used in many areas. Such tests can be applied to issues of digitization in general and AI in particular. Consider this list of areas where unethical uses of Artificial Intelligence might occur (taken from Kelly [
45] with minor modifications):
Plagiarism: The use of the work of another person without giving appropriate credit for its use. This may involve using ideas or information made by others without acknowledgement, or insufficient or improper citation; examples of plagiarism include copying sections of text without quotation marks, submitting text purchased from a ghostwriter, or reusing work already submitted for earlier or other assignments.
Cheating: Acting dishonestly to create or gain an advantage; in the academic sphere it includes breaking rules during or in relation to examinations, such as giving or accepting assistance, copying from another student’s work, or unauthorized access to electronic devices.
Fabrication: An effort to invent or produce something lacking authenticity, such as the production of a fake document, or altering or forging a document.
Sabotage: Deliberate act to hinder or prevent any act of another, such as the theft or suppression of written information, laboratory or field experiments, computer files and so forth.
Collusion: An unauthorized collaboration or cooperation with others which confers an unfair advantage for some, which may include other forms of violation mentioned above.
Disregard of research/professional ethics: Knowingly breaching professional or ethical rules and standards governing principles of best practice (end of section taken from Kelly).
Full transparency, the TV Rule, would help eliminate or alleviate each of these potential problems listed above. Full disclosure may be more difficult than it appears but stands out as the best approach of the seven lenses in the J U S T I C E framework to answer whether a particular application of Artificial Intelligence is or is not ethical.
6. Significance of Artificial Intelligence (AI) for Humanity
All this must be seen in the light of the larger discussion. What is the significance of digital intelligence and artificial intelligence for humanity? South African-born American entrepreneur Elon Musk has often expressed fears in this regard, in widely publicized comments, that Artificial Intelligence might make “work” obsolete. Artificial Intelligence systems might replace humans, making our species irrelevant, echoing frightening comments made earlier by Stephen Hawking: “full artificial intelligence could spell the end of the human race…” [
46]. One statement signed by various experts made the doom prediction clear: “mitigating the risk of extinction from AI should be a global priority” (quoted in [
47]). Those warnings sound quite stark and also somewhat dark [
48]. That this anti-AI ‘extinction of humanity’ idea could be promulgated by Musk is surprising, as he was a cofounder of OpenAI, the originally not-for-profit firm that created ChatGPT. Musk left that firm after a dispute. In 2023 Musk (along with more than 30,000 cosigners) asked for a six-month moratorium on further developments in AI [
49]. Even before that proposed six-month moratorium would have concluded, in July 2023 Musk established a new firm, xAI, presumably to compete with OpenAI and the many other competitors in the AI space [
50]. New ways to put artificial intelligence to work seem to appear weekly. While AI promises transformative benefits, scholars like Broussard [
19] caution against ‘technochauvinism’—the assumption that technological solutions are inherently superior to human judgment, particularly in ethically fraught domains like education. Some of the more prominent early Artificial Intelligence platforms, chatbots, and systems were described by Rudolph and colleagues in a paper with extensive references [
51]. Within a few years, some of those on the Rudolph list had been replaced by new chatbots and some had disappeared. A recent quick query to one chatbot, Copilot, identified and described current AI tools, chatbots, as of late 2025:
Mainstream and Actively Used:
ChatGPT—Versatile, widely adopted;
Claude—Thoughtful, long-context;
Copilot—Integrated with Microsoft tools;
Google Gemini—Strong in real-time and mobile;
Perplexity—Research-focused, citation-rich;
Meta AI—Embedded in social apps (WhatsApp, Instagram);
Grok—Elon Musk’s chatbot, edgy and viral;
Duck.ai—Privacy-first, anonymous;
Mistral—Open-source, fast, developer-friendly;
OpenChat—Lightweight, open-source alternative.
Less well known, niche:
ChatSonic—Creative writing and voice features;
Jasperchat—Marketing and copywriting;
Geniechat—Smaller footprint, niche use;
DeepSeek—Fast and affordable, good for devs;
Pi.ai—Emotionally intelligent, life coaching.
This 2025 list, and the 2023 table in Rudolph et al. [
52], help illustrate the presence of a wide variety of AI tools usable in the mid-2020s, but they also remind us that things can change fast. A similar list a few years from now might show new names, and some of the present names may have disappeared. When looking for quick information today (or yesterday), a person might “Google it.” Now we are likely to consult one of the AI tools, and if we Google it, that may lead us to a chatbot on the list above, Google Gemini [
52]. Developments are moving at a great pace in this domain.
A person born before 1980 might remember Netscape, the web browser that ruled the internet before Google came along [
53]. In a few short years, Netscape went from zero to “the world’s most popular computer application” [
54] (p. 8). Those born after the year 1990 might not even recognize the word Netscape. Netscape went back to zero [
55]. Things change fast. There are new developments in Artificial Intelligence weekly if not daily. At present we can use the letters AI and expect everyone to know they stand for artificial intelligence. Two decades from now, will “AI” mean anything? Is this a passing fad? In 2023 Bloomberg TV headlined news from Intel this way: “Intel steps up bid to join AI gold rush: Intel unveils server, PC chips in bid to join AI craze” [
56]. Many exciting developments are coming from China and the rest of the world, not only the USA. DeepSeek created a buzz of excitement when it was released. Alibaba’s AI tools can also search and create images [
57]. Will terms such as ChatGPT and AI fade into insignificance, rarely used? Is what is happening a “gold rush” or is it just a passing “craze”? Often things get attention and then that generates more attention, at least for a time. As Mintzberg notes, sometimes new ideas are “greeted with great enthusiasm… then a few years later… quietly ushered out the back door” [
58] (p. 53). Those who know the name Charlie Munger may remember him as the quiet guy Warren Buffett always trusted for common-sense ideas. On this topic Munger said, “I am personally skeptical of some of the hype that has gone into artificial intelligence. I think old-fashioned intelligence works pretty well” (Munger quoted in [
59]).
The six-month moratorium called for (above) by Musk and the others did not stop anything, and it appears that we are not near an end. This has two important ramifications: (1) If AI is on a path to destroy humanity, a trend towards the “singularity” almost as in fictional Hollywood movies such as The Terminator, we had all better stay alert. A bit easier to handle is the other key ramification: (2) AI is on a path that will impact business and every other realm of human activity.
7. Will AI Bring About the End of the World?
Various fears are voiced, as in the pithy quotes from celebrities such as Elon Musk that are often repeated on social media. But academics voice concerns as well. Patulny, Lazarevic, and Smith [
60] explore what will happen when emotion is further digitized and analyzed: “‘Once more, with feeling,’ said the robot: AI, the end of work…” Both academics and journalists see potential dangers ([
61,
62,
63]). Gloom and doom make good reading but do not necessarily make sense. One example would be predicted loss of jobs. Melissa Valentine says “predictions of job loss in the ’90s haven’t played out the way the more cataclysmic predictions foretold” [
64]. Predictions about AI ending humanity appear to us far-fetched, both unclear and unlikely. We as citizens have a responsibility to ask the world not to fear the future. Rather, our job should be to excite the next generation about the possibilities. Specifically, calls to ban ChatGPT in academia are neither necessary nor helpful.
However, new developments often encounter fear and even resistance, which in retrospect seem unwarranted. Edison said that alternating current could bring unnecessary deaths, but his statements were attempts to win a commercial battle; Edison used direct current and competitors, such as Westinghouse, used alternating current [
65]. Worldwide in the 20th and 21st centuries, alternating current has been used, not Edison’s preferred direct current. Edison also said that books would become obsolete in schools, replaced by motion pictures [
66]. Visual media have certainly had an impact in education, but we still have books. Indeed, with the advent of e-books, humans even have access to the written word anywhere and everywhere. Predictions about new technologies have often been wrong in the past and are likely to be at least partly wrong in the future.
The advent of the horseless carriage brought numerous reactions. To impede the growth of this invention, laws were passed that in retrospect seem foolish or lacking in common sense. In some localities motor cars were required to “lumber along with a man walking in the front of them carrying a red flag to warn other traffic, so that it was impossible for the driver to exceed the flag man’s walking pace, namely 4 miles per hour” [
67] (p. 8). Not only did governments fear this new invention, the automobile, but academics also voiced alarm. Given the tragic numbers of people killed in automobile accidents, and the numerous communities divided and even destroyed by massive superhighways, critics may have had a point. Few would state these fears as bluntly as E. J. Mishan: “I once wrote that the invention of the automobile was one of the greatest disasters to have befallen mankind. I have had time since to reflect on this statement and to revise my judgment to the effect that the automobile is THE greatest disaster to have befallen mankind” [
68] (p. 41).
Only those born before about 1985 would remember the outpouring of warnings that disaster would befall the planet when we were ambushed by the year 2000, Y2K. The Y2K event would bring an endless list of life-threatening problems. Elevators in high-rise buildings might stop working, cash registers (which still existed in December 1999) would stop, and ATMs would not function as of one minute after midnight on the last day of the 20th century. Some even feared that air traffic control systems, having used two-digit year codes for decades, might not function properly, as 00 would be read as 1900, not as 2000. Some even raised fears of missiles launching. Those hysterical and nonsensical fears seem impossible to imagine now, only a couple of decades later [
69]. Sometimes hysterical warnings of impending disaster catch the attention of the public, partly thanks to publishers who see impending doom as a journalistic moneymaker. We should listen to critics of Artificial Intelligence, even those who are likely wrong about the singularity, but it would be wrong to let fear of the future prevent us from harnessing and guiding Artificial Intelligence for the betterment of humanity.
Another huge technological advance that brought massive changes to humanity was aviation. Commercial aviation certainly changed the world. We might guess that every person who reads this has taken a flight. But some of the warnings, as well as some of the promises, were unrealistic. Popular articles in the mid-20th century envisioned “everybody” flying to work in their own helicopters. The key point to remember here, though, is that the future of commercial aviation is, to most of the earth’s 8 billion people, irrelevant. Statistics are always subject to error, but some sources say fewer than half of all humans have ever boarded a plane [
70]. Others say 6 billion of the world’s 8 billion have never been in an airplane [
71]. To much more than half of the world, airplanes are irrelevant. One should carefully read and listen to those who say Artificial Intelligence means catastrophe. But to much of the world, is this talk relevant? Instead of taking drastic anti-AI action, perhaps we should learn, adapt, and take advantage of this important new set of capabilities [
72].
9. Artificial Intelligence and Humanity
The world is changing, and Artificial Intelligence is one of the factors driving change. These changes remind us that only 50 years ago we wrote on typewriters (using a liquid paint called white-out to correct errors) and located relevant material by going to a physical library. Today scholars use Google Scholar and Microsoft Word, and no one considers this unethical. Artificial Intelligence is bringing change. As the printing press and moveable type changed the world, so will AI. Similar changes will come to innumerable aspects of life, industry, commerce, and government. But how much change will occur, and when? If we are already in AI 2.0, why do we not see the dramatic change we were warned about? The popular news magazine The Economist put it this way: “Beyond America’s west coast, there is little sign AI is having much of an effect on anything” [
73] (p. 57). That journalist’s opinion is clearly just an opinion, similar to the opinion in this essay. A study involving 2525 knowledgeable AI decision makers suggests that gaining benefits from AI in real organizations presents real challenges, and “we need to learn much more… [if we are to] deliver on these [AI] promises.” One of those challenges is “responding to the increasing demand for trustworthy and ethical AI” [
74,
75] (p. 9). A more nuanced perspective might balance information about areas of little AI impact with areas of more impact. A paper published in Sloan Management Review reported that “seven out of 10 companies surveyed [in one particular survey] report minimal or no impact from AI so far… 40% of organizations making significant investments in AI do not report business gains from AI.” If 40% do not report gains, that suggests the other 60% did report gains [
76].
In addition to the significant organizational and legal attention, there are important industry-wide efforts underway to help build a trustworthy World Wide Web. A loose coalition of more than 5000 small and huge organizations, including Adobe, Alphabet, Google, and Microsoft, has signed on to, or at least on paper agreed with, steps to improve digital content transparency. This Coalition for Content Provenance and Authenticity, or C2PA, is probably a good thing, and it needs to be evaluated, with examples showing successes and failures. It is certainly worth studying. On the day of this writing, we saw a YouTube clip carrying the words WARNING CONTENT ALTERED. This was reassuring. It would have been more reassuring if the next YouTube clip we watched had carried a similar warning, but it did not. It was a heartwarming story about a wild animal entrusting her injured cub to humans. Heartwarming, but the clip is a 100% Artificial Intelligence creation, without a word of warning. Zero authenticity, zero warning. YouTube is owned by Google, a C2PA signatory.
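The kind of transparency C2PA aims at can be pictured with a small sketch: content carries a provenance record listing assertions such as “created by an AI system,” and a platform decides whether to show a warning. The record structure and field names below are simplified illustrations of our own, not the actual C2PA specification or any real library’s API.
```python
import json

# A simplified, hypothetical provenance record; the real C2PA format is a
# cryptographically signed manifest with a much richer assertion vocabulary.
clip_manifest = json.loads("""
{
  "title": "Wild animal entrusts injured cub to humans",
  "assertions": [
    {"label": "digital_source_type", "value": "trained_algorithmic_media"}
  ]
}
""")

# Hypothetical values that would mark content as AI-generated or AI-altered.
AI_SOURCE_VALUES = {"trained_algorithmic_media", "composite_with_trained_algorithmic_media"}

def needs_ai_label(manifest):
    """Return True if any assertion marks the content as AI-generated or AI-altered."""
    return any(a.get("value") in AI_SOURCE_VALUES
               for a in manifest.get("assertions", []))

if needs_ai_label(clip_manifest):
    print("WARNING: CONTENT ALTERED OR AI-GENERATED")
```
Whatever the exact format, the point of the paragraph stands: transparency only works if every platform that receives such a record actually surfaces the warning.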
Will Artificial Intelligence bring change? Yes. Will Artificial Intelligence take over the world and end humanity? That cannot be known now but does not cause us to panic. Will the humans on our earth adapt and change in the light of the possibilities and problems provided by Artificial Intelligence? Will the world change? Yes and yes. But as Hayes [
77] explains, change is never easy. Often new elements, new ideas, and new technology are met with apprehension if not fear. There may be tangible resistance to change. As Cadez points out, whenever confronted with requirements for change, we should expect “perseverance of old values and norms” [
78] (p. 6891). The foretellers of doom hastened by digital intelligence and AI may be exhibiting resistance to change and the perseverance of old norms. Those who see AI as hastening the end of humanity are unnecessarily pessimistic. The world adapted to the automobile, and academics will find ways to accommodate new possibilities in Artificial Intelligence. Change is happening, even at the micro level. In one university course taught by one of us, to keep up with the times and maximize the potential of the internet, exams that had previously been given in class were changed to a take-home, open-book format. With or without Artificial Intelligence, even pre-ChatGPT, there had been instances of plagiarism in exam answers. Now, as we navigate a world where Artificial Intelligence is ubiquitous, a qualitative shift is discernible. Some answers were great, almost too great given what we knew about the persons turning in the work. A few students used AI-enhanced answers. The new technology caused unintended collateral damage. We abandoned the take-home exams. The next semester, exams were written in the classroom, using the almost forgotten exam “blue books.” The adaptation to the new reality was enabled by massive quantities of very old, unused “blue books” that luckily had not been destroyed.
Returning to our starting point, we ask again the main question: Can the JUSTICE framework tell us if Artificial Intelligence is ethical? When? How? The typical academic response applies: yes and no. Looking at many applications of Artificial Intelligence through each of the seven JUSTICE lenses, one at a time, can indeed be illuminating. Different circumstances and different situations suggest using different evaluation criteria.
Indeed, if somehow Artificial Intelligence did result in human extinction, that would be assessed as unethical by each and every one of the seven JUSTICE tests. But does Artificial Intelligence mean the end of humanity? No. An important new technology requiring change in norms and practices? Yes. What changes? We do not yet know. As Ethan Mollick says, “the world has changed in fundamental ways, and … nobody can really tell you what the world will be like” [
78] (p. xii).