1. Introduction
Artificial intelligence (AI) is becoming increasingly important for the infrastructures that support many of society’s functions. Transportation, security, energy, education, the workplace, and government have all incorporated AI into their infrastructures for enhancement and/or protection. Not only is AI seen as a tool for augmenting existing infrastructures, but AI itself is becoming an infrastructure that many services of today and tomorrow will depend upon. There is a growing body of research addressing the impact of AI on the environment. This literature shows that developing and using AI requires an enormous amount of computational power, which in turn increases carbon emissions. The effects of AI on environmental justice will be vast, considering also the mining of precious minerals and the vulnerable populations exploited in these processes. This is deeply concerning given the grave situation the world finds itself in regarding the climate. The Intergovernmental Panel on Climate Change (IPCC) goes so far as to say that it is “code red for humanity” [
1]. Given that there is high confidence that climate change is to a large extent human-induced [
2], we should be asking more questions before introducing a new human-made carbon-emitting infrastructure powered by AI.
The field of sustainable AI has been put forward as a way of addressing the environmental justice issues associated with AI throughout its lifecycle [
3]. Sustainable AI is about more than applying AI to achieve climate goals (though much work in the field is devoted to this idea; see, e.g., [
4,
5,
6,
7,
8,
9,
10]); it is about understanding and measuring the environmental impact of developing and using AI. The little information we have on this impact is, to say the least, not encouraging [
11]. Thus, many of the questions surrounding the sustainability of AI remain unanswered. These answers are needed for society to make an informed choice regarding the use of AI in a particular context. This makes AI a substantial environmental risk: AI continues to be implemented across a broad range of contexts despite this opacity regarding its environmental consequences.
It may not be immediately clear why AI researchers and developers, in particular, must pay attention to issues of environmental sustainability. Should not every field consider such issues? In this paper, we argue that the environmental consequences associated with AI are essential issues of AI ethics. The way we choose to build and implement AI today will have profound consequences for future sustainability, and this warrants a specific focus on AI. This special attention is due to the connection between AI and the concept of infrastructure.
In what follows, we illustrate how AI has traditionally been understood as conceptually distinct from infrastructure. From this vantage point, AI can be used to enhance or protect existing infrastructures. We also point out that AI depends on vast infrastructures that are climate intensive: AI needs electricity, precious minerals, data transmission, and so on. AI is increasingly being used to power the next generation of digital services. That is, AI is now the infrastructure relied upon by digital services. Consider the Facebook outage of 2021, which showed that many businesses in Ghana were unable to function without the Facebook infrastructure. Facebook’s services are AI-powered services. Everything from how content is displayed, moderated, and sorted is powered by AI [
12]. Furthermore, the advertising ecosystem from which Facebook makes money is AI-powered [
13]. It is safe to say that without AI there is no Facebook. Consider also the business model of social networking companies generally, which rely on targeted advertising to generate revenue. The necessity of addressing AI alongside the concept of infrastructure points toward the phenomenon of carbon lock-in—whereby society’s ability to technologically, economically, politically, and socially reduce carbon emissions is constrained due to the inherent inertia created by entrenched technological, institutional, and behavioral norms [
14]. The negative outcomes that AI adoption creates may also give rise to innovative, environmentally sound solutions. However, without knowing the extent of the problem and giving that problem the attention it deserves, those solutions will never come about. Given these points, we must ask inconvenient questions regarding these environmental costs before becoming locked into this new AI infrastructure. No amount of convenience provided by AI can justify further decimating our planet.
2. AI Ethics and the Sustainability of AI
It is important here to note what we mean by AI. The concept is overused and can refer to many different things. For this article, AI refers to the methodology of creating algorithms driven by the rise of machine learning (ML). ML algorithms “use statistics and probability to ‘learn’ from large datasets” [
15]. This learning is not restricted to picking out features that humans could understand, which gives the resulting algorithms greater power than we have seen before. This is a pragmatic definition, as it excludes other methodologies that would otherwise fall under the definition of AI. For example, expert systems and decision trees were for decades the only AI algorithms available. However, they are not what is driving the rise of AI in our everyday lives, and their impact is similar to that of traditional software applications. ML is what has driven the need for more data, sensors, computing power, etc. We would prefer that everyone simply say ML rather than AI (because that is usually what is being referred to). However, as it stands, AI is the standard concept that people encounter in academic literature, popular culture, and the media. In what follows we use AI to refer to ML algorithms. This means that while autonomous cars and medical technologies are not themselves AI, more and more of these technologies are powered by ML algorithms.
As AI applications increase across society, AI ethicists have begun to uncover risks associated with the technology. Consider, for example, the risks of using historical data to train algorithms when such data embeds stereotypes and discriminatory assumptions about individuals and groups in society. The consequence of this practice is oftentimes further discrimination against said individuals and groups. AI ethics, in short, is dedicated to uncovering and understanding the ethical issues associated with the development and use (i.e., the entire life cycle) of AI-powered machines: how does AI threaten the ability of individuals and groups to live a “good life”? Once the risks have been identified, the goal is to prevent and/or mitigate them.
The field of AI ethics has grown in importance in the last decade, as seen in the increase of academic publications on the topic (for example, the Berkman Klein Center identified 36 “prominent AI principles” documents [
16]), the involvement of AI ethics in the policy forum (e.g., European Commission High-Level Expert Group on AI) [
17], and the adoption of AI ethics into the business and consulting space (see e.g., [
18,
19]). In each of these sectors, there are certain canonical ethical issues pertaining to AI that are being discussed, most often concerning particular AI methodologies. Machine learning, for example, has been described as a method that creates a kind of opacity, given that it is often impossible to know and/or understand the rules generated by the model used to make a prediction. Stemming from this technical feature come ethical issues related to transparency (e.g., should a particular technology be used if it is impossible to understand how it arrives at an output); responsibility (e.g., should a particular technology be used if this lack of transparency leads to confusion over who is responsible for consequences of a decision that were not known or understood by the programmer); and security (e.g., how can we ensure the security of a system when we do not entirely understand its functioning). To be sure, none of these concerns have been rectified.
Without diminishing the significance of the above issues, it is also important to note that little attention has been paid, to date, to the environmental consequences of making and using AI. A small group of researchers has begun to study carbon emissions [
11] and computing power [
20]; however, there is little incentive for academics and/or industry to incorporate this systematically into research and production methods. There is no regulation to demand an environmental assessment of the impacts of making and/or using AI/ML. The systematic accounting of these environmental impacts is necessary to have a better idea of the large-scale impact of making and using AI systems. Moreover, “accurate accounting of carbon and energy impacts aligns with energy efficiency [
21], raises awareness, and drives mitigation efforts, among other benefits” [
20]. It is this connection—between AI and environmental consequences—that drives the points made in this paper; namely, that we must know the specifics of this connection before we become (more) dependent on AI.
To be sure, the environmental costs of making and using AI do not end with direct carbon emissions or computing power. The systems used to create and run AI models require precious minerals that are mined in often horrible conditions for the individuals involved [
22]. Water is needed to cool computing centers. There will be electronic waste (e-waste) resulting from the updating of materials, computers, and data centers. Historically, e-waste has been dumped in underdeveloped countries, exposing inhabitants to toxic chemicals through their water supplies and agricultural land [
23]. These concerns are essentially issues of environmental justice, and while they focus on environmental consequences, they point to societal concerns that have been, to date, invisible in public discourse. As Hasselbalch describes, data ethics is not only about power but also is power [
24]. AI ethics is not only about power asymmetries but is power insofar as the loudest voices are the ones who determine the ethical issues of importance and priority. The movement to focus on sustainability is about revealing the hidden demographics who suffer and will continue to suffer as AI becomes more and more pervasive in our daily lives.
Sustainable AI was first defined by van Wynsberghe in 2021 as a “movement to foster change in the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice” [
3]. Given the high costs already identified, we suggest that AI researchers (both ethicists and computer scientists) along with AI practitioners (AI developers) and policymakers (those involved with drafting legislation concerning the governance of AI) ought to shift focus to explicitly, and quickly, address the hidden environmental costs associated with AI development and usage.
Reframing AI ethics discussions in terms of sustainability opens up novel insights. First, to use the phrase “sustainable AI” demands that one consider sustainability as a value within the AI ethics domain, one that is deserving of greater attention. Second, the label of sustainability invokes the recognition of the environment as a starting point for addressing AI ethics issues. The environment becomes a lens through which societal and economic issues are uncovered, conceptualized, and understood. Third, sustainability as a concept emphasizes issues of intra- and inter-generational justice. Attention to environmental consequences demands consideration of the impacts on younger generations as well as those yet to come, and of our responsibilities to mitigate said impacts.
Fourth, sustainability demands the recognition of AI on a larger scale rather than in one or two specific applications. To date, the focus of AI ethics has been on mitigating concerns of privacy, safety, and fairness, to name a few. With this narrow view of the impacts of AI, researchers run the risk of overlooking the larger structural issues of AI as infrastructure; they cannot see “the forest for the trees”. By this, we mean that in focusing on issues of design, or how to implement the technology, researchers to date have been unable to take a step back and understand the magnitude of AI development and use. AI is not one or two models that will be restricted to a particular sector or a particular application. Instead, AI is being promoted as an innovation suitable for any sector and any application. From our perspective, it is thus paramount to address AI alongside the notion of infrastructure.
3. AI and Infrastructure
We begin by asking: “What is the relationship between AI and infrastructure?” As Kate Crawford describes in her book “Atlas of AI”, there is a fascinating phenomenon concerning the materiality of AI/ML: the language used to describe the materials refers to algorithms and “the cloud”, making AI seem immaterial. However, in reality, there is a vast physical infrastructure behind the production of AI. Water is needed to cool computing centers, and the water obtained for this comes from public infrastructures. Electricity is needed to fuel computing centers, and the pipelines through which the electricity travels are often publicly funded networks. Minerals are required for batteries and microchips. These minerals are part of a long chain of procurement in which humans often work in slave-like conditions, and degradation of the environment results from the way the minerals are sourced (see e.g., [
25,
26,
27]). These realities are kept hidden to sustain enthusiasm for AI. Consequently, the hidden materiality of AI fosters a lack of understanding of the breadth of the physical infrastructures powering AI. This does not entail that AI and its materiality are worse than other industries regarding carbon emissions. Rather, the materiality of AI points to a non-negligible impact on the environment. This must be included in the cost-benefit analysis of specific AI-powered services and products.
Not only does the development and use of AI rely on existing infrastructures, but AI is also seen as a powerful tool to support, enhance, or protect infrastructure. AI was used by Google in 2016 to understand how to conserve electricity in its data centers, allowing the company to enhance its energy conservation efforts [
28]. AI is used in the banking sector to predict when/if a fraudulent transaction has occurred, allowing banks to react faster for their customers’ protection [
29,
30]. AI is used in both the public and private sectors to protect against spam and phishing schemes [
31,
32]. AI is used across the transportation sector in a variety of ways, from managing traffic lights [
33] to the idea of autonomous vehicles for the reduction of fatalities [
34].
As we see, AI can be understood as dependent on existing infrastructure and/or as enhancing existing infrastructure. Our aim now is to argue that AI should itself be understood as infrastructure. And it is this understanding that adds urgency to the environmental concerns. Infrastructure is not easily defined, and we do not attempt here to settle any debates on that subject. What we can do is take some properties of infrastructure and show how they relate to AI.
3.1. Infrastructure Properties
Susan Leigh Star [
35] lists nine such properties (which she calls dimensions). We highlight a few of them here as they concern AI. First, infrastructure has the feature of embeddedness. That is, it exists within other structures, social arrangements, and technologies [
35]. AI can clearly be said to have this property, as it is embedded into the technologies and structures that we interact with daily, e.g., simple tools such as Google Maps or the advertising shown to us whenever we are online. When AI is implemented, it often does not stand alone but interacts with the technologies we use and takes data from our social arrangements (and/or actions) to generate its outputs; e.g., advertisements require data from our search history and previous purchases, fed to an AI to predict what might be appealing.
Second is the property of transparency. Infrastructure is transparent in use. When we turn on a light switch, we do not see the infrastructure of wiring and power grids that enables the light to come on. We simply enjoy the convenience of light. Likewise, when we turn on Netflix, we do not see the infrastructure of cables, servers, and algorithms (often AI) that enables recommendations to populate the home screen. Our attention is drawn to the result that infrastructure enables, not the process that leads to the result. In our many daily interactions with AI, we could be excused for not even knowing that AI was driving what was happening.
Third, infrastructure becomes visible upon breakdown. When infrastructure ceases to function properly, our attention is directed toward it. When the light does not turn on upon flipping the switch, we direct our attention to the fuse box, and if that does not resolve the problem, we may have to call the company that runs the infrastructure providing our electricity. Much attention has been given to AI when it functions improperly. When Google’s AI-powered image labeling system incorrectly labeled people of color as gorillas, it quickly drew people’s attention to the algorithm and the data that serves as the infrastructure of that system.
Fourth, infrastructure is modular. Infrastructure does not simply grow from nothing. It is put on top of other infrastructure and must take on the benefits and drawbacks that come with it. The original wiring of the internet was done through existing phone lines. Only incrementally were these replaced with the fiber optic cables that power the internet we have today. This is because the infrastructure we have come to rely on has its own inertia: it has to work with the existing infrastructure because we depend on it. AI must also be placed on top of existing infrastructure. It interacts with platforms, algorithms, and the infrastructure that powers the internet. We see new phones with processors that enable AI features [
36]—thereby starting the modular process that slowly replaces old infrastructure.
3.2. AI as Infrastructure
This listing of infrastructure properties provides a basis for understanding how AI can already be considered infrastructure and how this will continue in the years to come. Currently, AI is evaluated in terms of its impact on infrastructure (i.e., as being conceptually distinct from infrastructure); in the (near) future, however, AI must be evaluated as infrastructure itself. Following this, any new infrastructure—because of its importance and resistance to change—should be an environmentally sustainable one. Consequently, evaluating AI requires insight into the environmental consequences of AI understood as infrastructure.
Part of the reason for writing this paper is that the environmental sustainability of AI is unknown—and for the reasons outlined above, this is an unacceptable situation. Furthermore, one cannot state the environmental sustainability of AI in broad strokes. Particular systems in particular contexts may be environmentally sustainable (e.g., those run on green servers) while others are not. The point here is that for governments and consumers to make informed decisions regarding AI-powered solutions, the environmental sustainability of those solutions must be known and factored in.
4. Locked in with AI
The crux of this paper boils down to this: in conceptualizing AI as infrastructure, we can recognize the risk of lock-in, not just carbon lock-in but lock-in as it relates to all the physical resources needed to realize the infrastructure of AI.
The phenomenon of lock-in is most often referenced in terms of carbon lock-in and the concern for greenhouse gas (GHG) emissions. Carbon lock-in refers to “the dynamic whereby prior decisions relating to GHG-emitting technologies, infrastructure, practices, and their supporting networks constrain future paths, making it more challenging, even impossible, to subsequently pursue more optimal paths toward low-carbon objectives” [
37]. Coal power plants are an oft-cited example of a carbon lock-in [
37,
38]. While coal power plants are expensive to build, they are cheap to operate. This creates political, economic, and social conditions that make it difficult to replace this high carbon-emitting infrastructure.
This points to the fact that the choices we make now regarding our new AI-augmented infrastructure not only determine the carbon emissions it will produce but also create constraints that will prevent us from changing course if that infrastructure is found to be unsustainable.
Self-driving vehicles require a large amount of energy to capture, store, and process the data needed to navigate their environments. One estimate puts this at roughly 2500 watts, enough to light 40 incandescent light bulbs. That is for just one car [
39]. Multiple studies have attempted to estimate the energy savings and costs of self-driving cars (see e.g., [
40,
41]). These studies factor in the energy consumed by the data-capturing sensors, the onboard computers and processors, and data transfer, as well as the efficiency gained by automating driving. However, a range of variables is not accounted for in such analyses, such as hardware production. Thus, we argue here that it is not enough to address a limited number of variables; rather, the entire system (from procurement to development to recycling) must be considered.
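To make the scale concrete, consider the following back-of-envelope sketch in Python, built on the 2500-watt figure cited above. It is a minimal illustration, not a reproduction of the cited studies; the daily driving time and grid carbon intensity are assumptions made only for the example.

# Back-of-envelope sketch of onboard compute energy for one autonomous
# vehicle, using the ~2500 W estimate cited above [39]. The driving time
# and grid carbon intensity are illustrative assumptions, not values
# taken from the cited studies.

COMPUTE_POWER_KW = 2.5       # ~2500 W of onboard compute
HOURS_DRIVEN_PER_DAY = 2.0   # assumption: average daily driving time
GRID_KG_CO2_PER_KWH = 0.4    # assumption: varies widely by grid

energy_kwh_per_year = COMPUTE_POWER_KW * HOURS_DRIVEN_PER_DAY * 365
co2_kg_per_year = energy_kwh_per_year * GRID_KG_CO2_PER_KWH

print(f"{energy_kwh_per_year:.0f} kWh/year per vehicle")    # ~1825 kWh
print(f"{co2_kg_per_year:.0f} kg CO2-eq/year per vehicle")  # ~730 kg
# This counts only onboard compute while driving; hardware production,
# data transfer, and cloud-side training are all additional costs.

Even this partial figure, multiplied across a national fleet, illustrates why the entire system must be accounted for.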
In what follows we highlight some of the major processes that come with the rise of AI. These point to what must be measured and accounted for when we evaluate the cost of a particular AI application: the costs of producing the hardware running the algorithms, the costs of collecting and transmitting the data used and processed by AI, the computational cost of training and using the model, the disposal of the network of hardware needed by AI, and the costs of ensuring that the algorithms are aligned with ethical principles. This list is not meant to be exhaustive; rather, it should point to the fact that a lot of work must be done before we even have the information necessary to make informed decisions regarding the use of a particular AI system.
4.1. Hardware Production
The hardware used in the AI lifecycle is, to say the least, non-negligible in terms of energy consumption. There are the obvious components, such as the servers and their parts (e.g., hard drives, GPUs, etc.), that are required to run the algorithms and store large amounts of data. However, there are also many devices used to collect data, such as video cameras, lidar sensors, motion detectors, and so on. It has been shown that the manufacturing of these devices, “as opposed to hardware use and energy consumption, accounts for most of the carbon output attributable to hardware systems” [
42]. The rise of “edge computing” is fueling the proliferation of these devices.
Edge computing has been defined as “the enabling technologies allowing computation to be performed at the edge of the network, on downstream data on behalf of cloud services and upstream data on behalf of IoT services” [
43]. This simply means that the processing of data happens at the edge of the network, closer to the source of the data, rather than in some central cloud server. This could, for example, mean that facial recognition processing happens on a smart CCTV camera rather than the video footage being sent to the cloud. This can save on the cost of transferring data and reduce the need for energy-intensive cloud servers; however, it increases the need for complex devices. It is estimated that the number of these devices will grow almost five-fold by 2030, to 7.8 billion [
44].
Many of these modern technological devices contain rare earth elements (REEs). For example, REEs are found in “hybrid vehicles, rechargeable batteries, wind turbines, mobile phones, flat-screen display panels, compact fluorescent light bulbs, laptop computers, disk drives, catalytic converters, etc.” [
45]. The production of these devices has a huge impact—not only on the environment but also on vulnerable populations that suffer human rights violations [
25]. The environmental impacts are not fully understood; however, they are known to be significant: “REE mining and refining generate significant amounts of liquid and solid wastes, with potentially deleterious effects on the environment, and it is expected to continue increasing in the future because they are irreplaceable in many technological sectors” [
45].
As we increasingly depend upon AI-powered technologies, we increase the need for REEs and the processes that produce them. Currently, these processes are terrible for the environment and the people who work in the mines. This cost cannot be ignored when we tally the benefits and consequences of delegating more and more to AI.
4.2. Data Collection and Transmission
AI applications require input data to be processed. This data can come from virtually any source. Security services use video feeds as input for facial recognition algorithms. Biometric sensors attached to people (e.g., smartwatches) collect and send data to healthcare AI algorithms used to detect, for example, heart problems. Smart cities use a vast network of sensors and devices (see
Section 4.1 above) to collect data as inputs for the many AI algorithms that promise to improve our cities. It has been calculated that “Internet use has a carbon footprint ranging from 28 to 63 g CO2 equivalent per gigabyte (GB)” [
46]. The power necessary to keep these sensors active as well as the energy required to transmit this data is non-negligible.
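As an illustration of how quickly such transmission costs accumulate, the following sketch applies the quoted 28–63 g CO2-equivalent per GB range to a single always-on smart camera; the stream bitrate is a hypothetical assumption.

# Illustrative sketch: data-transfer emissions for one always-on smart
# camera streaming video to the cloud, using the 28-63 g CO2-eq/GB
# range quoted above [46]. The bitrate is a hypothetical assumption.

BITRATE_MBPS = 2.0  # assumption: a modestly compressed video stream

# megabits/s -> megabytes/s -> GB per day
gb_per_day = BITRATE_MBPS / 8 * 86_400 / 1_000

low_kg = gb_per_day * 28 / 1_000
high_kg = gb_per_day * 63 / 1_000
print(f"{gb_per_day:.1f} GB/day -> {low_kg:.2f}-{high_kg:.2f} kg CO2-eq/day")
# ~21.6 GB/day, i.e., roughly 0.6-1.4 kg CO2-eq per day for one camera,
# before multiplying by the billions of devices projected in Section 4.1.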
The Shift Project estimates that the digital sector was responsible for 4% of greenhouse gas emissions in 2020 [
47]. This is similar to the emissions of commercial aviation at pre-COVID levels [
48]. The Shift Project further estimates an 8% rise year over year due to several factors, including the rise of the Internet of Things (IoT) and an explosion in data traffic [
47]. Increasingly relying upon AI will exacerbate these factors.
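A simple compound-growth calculation shows what an 8% annual rise implies. The sketch below holds total global emissions fixed, a simplifying assumption made only for illustration.

# Compound-growth sketch of The Shift Project's figures [47]: a 4% share
# of global GHG emissions in 2020, rising ~8% per year. Holding total
# global emissions fixed is a simplifying assumption for illustration.

share = 0.04
for year in range(2020, 2031):
    print(f"{year}: {share:.1%}")
    share *= 1.08
# At 8% per year, the digital sector's share roughly doubles within a
# decade, reaching about 8.6% by 2030 under these assumptions.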
The massive amounts of video, image, pollution, temperature, biometric, radar, lidar, and other data that must be transmitted to cloud servers for processing by AI algorithms take energy to move. By increasingly relying upon AI to run our society, we become locked into needing this vast network of data transmission. We should know more about its energy cost to responsibly evaluate whether or not certain AI applications are worth it.
4.3. AI Model Creation and Data Processing
The most often cited statistic regarding the creation of AI models is that common large AI models emit more than 626,000 pounds of carbon dioxide—equivalent to five times the lifetime emissions of an automobile [
11]. While this number may be far lower depending on the specific context—for example, when designers are simply fine-tuning a model that has already been trained—there is no question that AI requires an exponentially increasing amount of computing power [
49]. Once the model is trained and the algorithm is live, inputs must be given to that model for processing. Videos, images, text, sound, etc. all need to be classified using the model in question. This has its own associated cost, and with, for example, video input, this can be a large cost. Efforts are being made to reduce this cost by, for example, feeding the model only specific frames of a video rather than the whole thing. Other methods are also being explored [
50].
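The accounting behind such training estimates can be made explicit. The sketch below follows the general approach of studies such as [11,20], estimating emissions from hardware power draw, training time, datacenter overhead (PUE), and grid carbon intensity; it is a simplified illustration, not their exact formulas, and all parameter values are assumptions rather than measurements.

# Minimal sketch of the accounting approach used in studies such as
# [11,20] (simplified, not their exact formulas): emissions follow from
# hardware power draw, training time, datacenter overhead (PUE), and
# the carbon intensity of the local grid. All values are assumptions.

def training_co2_kg(gpu_power_kw: float, num_gpus: int, hours: float,
                    pue: float, grid_kg_co2_per_kwh: float) -> float:
    """Estimated kg CO2-eq emitted by a single training run."""
    energy_kwh = gpu_power_kw * num_gpus * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical example: 8 GPUs drawing 300 W each for two weeks.
print(training_co2_kg(gpu_power_kw=0.3, num_gpus=8, hours=14 * 24,
                      pue=1.58, grid_kg_co2_per_kwh=0.43))
# ~548 kg CO2-eq for one modest run. Hyperparameter search multiplies
# this by the number of runs, which is how figures in the hundreds of
# thousands of pounds arise for the largest models.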
Once the hardware is set up, the coding is done, the model is trained on the collected training data, and everything is running smoothly, there remains the problem that all of this will need updating. We learned that many AI systems failed during the COVID-19 pandemic simply because our behavior changed drastically, making many ML models useless [
51]. New behavior requires new models, which may require redoing some of the processes listed above, further increasing the environmental impact, in terms of carbon emissions, of these systems.
4.4. Hardware Disposal
Finally, the process of recycling and disposing of hardware must be accounted for. In 2019 the world generated “53.6 Mt [million metric tons] of e-waste…and is projected to grow to 74.7 Mt by 2030” [
52]. This, of course, factors in all types of e-waste, including appliances and personal devices, and not just AI devices alone. The point is that an increased reliance upon AI will require the disposal of more e-waste. While it may seem reasonable for anyone designing an AI application not to spend time thinking about this, ignoring this fact while setting up a society that depends more and more on AI would be a critical failure.
Not all computer hardware is used to power AI; however, AI requires an extreme amount of computational power, which requires not only more hardware but also new hardware. Anything that relies upon computer hardware should factor in the cost of the disposal and recycling of that hardware. Here we only want to point out that this is also a cost of using AI. Furthermore, “there is a growing demand for specialized hardware accelerators with optimized memory hierarchies that can meet the enormous compute and memory requirements of” machine learning [
53]. A McKinsey report found that “AI-related semiconductors will see growth of about 18 percent annually over the next few years—five times greater than the rate for semiconductors used in non-AI applications” [
54]. This shows that there is a rise in hardware specifically designed for AI.
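The growth rates cited above can be checked with a few lines of arithmetic:

# Quick arithmetic check of the growth figures cited above: e-waste
# growing from 53.6 Mt (2019) to 74.7 Mt (2030) [52], and AI-related
# semiconductors growing ~18% annually [54].

ewaste_cagr = (74.7 / 53.6) ** (1 / 11) - 1
print(f"Implied e-waste growth: {ewaste_cagr:.1%} per year")  # ~3.1%

ai_chip_multiple = 1.18 ** 5
print(f"AI chip market after 5 years at 18%/yr: {ai_chip_multiple:.2f}x")  # ~2.29x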
There must be a plan for the recycling of all of this hardware—and the environmental cost associated with such recycling must be factored in when setting up a society dependent on AI and the hardware it requires.
4.5. Ethics Alignment
The rise of AI has precipitated growing attention to the many ethical issues associated with the technology. Methods for overcoming these risks have been proposed and implemented. Some of these methods themselves come with a cost. For example, many contemporary AI methodologies (e.g., deep neural networks) are not explainable. That is, the considerations that contribute to the output are unknown even to the designers of the algorithm [
15,
55,
56,
57]. When we delegate certain decisions to AI, this lack of explanation will not be acceptable. Delegating judicial decisions [
58] or moral decisions [
59] to AI requires an explanation for the outputs generated.
Various methodologies have been developed to overcome this lack of explainability. For example, one proposal suggests that we can use counterfactual explanations. That is, an explanation can be provided by identifying the smallest change in the input that would yield a positive outcome [
60]. Visual methods that apply to specific models have also been proposed, such as gradient saliency maps and Guided Backpropagation. These yield visual explanations which may show us which features of an image contributed most to an output. Other methods are more general, for instance LIME and SHAP, which aim to highlight feature importance for a particular output (for a review of such methods see e.g., [
61]).
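To make the counterfactual idea concrete, the following toy sketch searches for the smallest change to a single input feature that flips a model’s decision. It is a simplified illustration in the spirit of [60], not the authors’ algorithm, and the loan model and applicant data are invented purely for the example.

# Toy sketch of a counterfactual explanation in the spirit of [60]
# (not the authors' algorithm). The loan "model" and applicant are
# hypothetical, invented purely for illustration.
import numpy as np

def model(x: np.ndarray) -> bool:
    """Hypothetical loan rule: approve if income - 2*debt > 10."""
    return x[0] - 2 * x[1] > 10

def counterfactual(x, feature: int, step: float = 0.5, max_steps: int = 200):
    """Smallest increase to `feature` that flips the decision to approve."""
    x = np.array(x, dtype=float)
    for _ in range(max_steps):
        if model(x):
            return x
        x[feature] += step
    return None  # no counterfactual found within the search budget

applicant = [20.0, 8.0]  # income 20, debt 8 -> 20 - 16 = 4: denied
print(counterfactual(applicant, feature=0))
# [26.5  8. ]: "had your income been 26.5, you would have been approved"

Even this toy version requires repeated queries of the model; production-grade explanation methods are far more computation-intensive.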
These methods require their own trained models, which exacerbates the environmental costs pointed out in the sections above. When the use of AI requires the use of more AI to overcome ethical issues, the environmental cost of these further models must also be calculated.
5. Conclusions
It is no secret that AI requires a vast amount of energy to accomplish its tasks. Any industry uses energy to accomplish its tasks. What we have shown to be special about AI is that AI is increasingly becoming the infrastructure required for society to function. Governments, schools, cars, hospitals, banking, etc. are all becoming dependent upon this AI-powered infrastructure. This is a choice that society is making. Choices as important as these cannot be made without thinking about the environmental consequences. And little is known about the breadth of environmental consequences associated with AI as infrastructure.
Choosing a path that leads to greater harm to the environment is unacceptable. Choosing a path out of ignorance of its impact on the environment is also unacceptable. So far, we are blindly moving forward with the creation of a relationship of dependence on a technology whose environmental impact, based on the little we do know, is extremely high. While there is much work being done to mitigate this impact, that work and its results should be known before creating this dependence. We run the risk of locking ourselves into a technological infrastructure that is energy-intensive in both its development and use, as well as energy-intensive to mitigate certain ethical concerns. This is precisely the aim of the Sustainable AI domain—to investigate and make clear that there is a plethora of environmental risks associated with AI and to argue that these risks ought to be the starting point in any ethical analysis of AI/ML.
The argument from large tech companies that most of the energy they use is renewable, and therefore has little impact on the environment, is frivolous. The use of energy, renewable or not, during a time that has been called “code red for humanity” is of great importance. The question before any AI model is created should be: is this worth the environmental cost that we will be locked into for decades? The answer will often be no.