Review

Energy-Aware Machine Learning Models—A Review of Recent Techniques and Perspectives

by Rafał Różycki *,†, Dorota Agnieszka Solarska and Grzegorz Waligóra
Institute of Computing Science, Poznan University of Technology, Piotrowo 2, 60-965 Poznan, Poland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Energies 2025, 18(11), 2810; https://doi.org/10.3390/en18112810
Submission received: 9 March 2025 / Revised: 12 May 2025 / Accepted: 21 May 2025 / Published: 28 May 2025
(This article belongs to the Section B: Energy and Environment)

Abstract

The paper explores the pressing issue of energy consumption in machine learning (ML) models and their environmental footprint. As ML technologies, especially large-scale models, continue to surge in popularity, their escalating energy demands and corresponding CO2 emissions are drawing critical attention. The article dives into innovative strategies to curb energy use in ML applications without compromising—and often even enhancing—model performance. Key techniques, such as model compression, pruning, quantization, and cutting-edge hardware design, take center stage in the discussion. Beyond operational energy use, the paper spotlights a pivotal yet often overlooked factor: the substantial emissions tied to the production of ML hardware. In many cases, these emissions eclipse those from operational activities, underscoring the immense potential of optimizing manufacturing processes to drive meaningful environmental impact. The narrative reinforces the urgency of relentless advancements in energy efficiency across the IT sector, with machine learning and data science leading the charge. Furthermore, deploying ML to streamline energy use in other domains like industry and transportation amplifies these benefits, creating a ripple effect of positive environmental outcomes. The paper culminates in a compelling call to action: adopt a dual-pronged strategy that tackles both operational energy efficiency and the carbon intensity of hardware production. By embracing this holistic approach, the artificial intelligence (AI) sector can play a transformative role in global sustainability efforts, slashing its carbon footprint and driving momentum toward a greener future.

1. Introduction

Artificial intelligence (AI) and machine learning (ML) are gaining prominence every day, and recent years have seen a major and rapid popularization of these technologies. The public release of ChatGPT (chatgpt.com) by OpenAI (openai.com) marked the starting point of mass usage of large language models by everyday users, and it is just a prelude to the commercial uses we may incorporate into our lives in the near future. It is known that large language models, along with other machine learning models, require enormous computational power to solve complex tasks. Consequently, the amount of energy consumed for training and inference is also very high, which results in higher costs of such solutions, along with a negative environmental impact. This paper reviews and consolidates the existing knowledge on energy saving in machine learning in order to identify promising solutions for general and specific use cases. The main objectives of the paper are summarized by the following points:
  • To review and analyze the existing literature on energy consumption in machine learning, identifying key areas where improvements can be made.
  • To evaluate various techniques, such as model compression, pruning, and quantization, focusing on their potential to reduce energy usage without compromising model performance.
  • To investigate the impact of different hardware and infrastructure choices on the energy efficiency of ML models, including the use of specialized hardware like GPUs and TPUs.
  • To assess the environmental impact of ML models throughout the entire ML process pipeline.
  • To explore the application of ML in optimizing energy use in other sectors, such as industrial processes and transportation, thereby comparing the positive and the negative impact of AI technologies.
  • To provide practical recommendations for ML practitioners and policymakers on implementing more sustainable practices within the AI pipeline.
The structure of this work is organized as follows. In Section 2, the theoretical foundations related to machine learning models and their energy consumption are discussed, providing a background of the global energy landscape and the specific contributions of the IT sector. Section 3 offers a comprehensive review of the current literature on energy savings in machine learning, highlighting key research areas and methodologies aimed at reducing energy consumption in ML applications. Section 4 focuses on the various methods and approaches used to enhance energy efficiency in machine learning. It includes a detailed analysis of training methods, operational optimizations, and the evaluation of different machine learning architectures. Finally, Section 5 provides a summary of the work, outlining the main conclusions drawn from the research and discussing the practical implications of the findings, particularly in the context of sustainable AI practices.

2. Theoretical Basics

2.1. Global Energy Consumption

In order to provide a comprehensive background for the paper, it is essential to understand the broader landscape of global energy consumption. Measuring energy consumption at a global scale is inherently difficult, so the figures and percentages cited below should be treated as approximations rather than exact values. Nevertheless, the ratios between them are informative.
Tracking global energy consumption by sector is a complex and challenging task due to several factors. Firstly, the diversity of energy sources and their varying usage across different regions complicate data collection and standardization. For instance, while some countries rely heavily on coal and oil, others have significant contributions from renewable energy sources, such as wind, solar, and hydroelectric power [1]. This heterogeneity necessitates a comprehensive and consistent methodology to accurately capture and compare energy consumption data across different sectors and regions.

Secondly, the lack of uniform reporting standards and the varying quality of data from different countries further exacerbate the difficulty. Many developing countries, which are significant contributors to global energy consumption, often lack the infrastructure and resources to accurately measure and report energy usage [2]. This leads to gaps and inconsistencies in the global energy database, making it challenging to obtain a precise global picture.

Thirdly, the dynamic nature of energy consumption, influenced by economic growth, technological advancements, and policy changes, adds another layer of complexity. For example, the rapid adoption of renewable energy technologies and the shift towards low-carbon energy sources are continuously altering the energy consumption landscape [2]. These changes require constant updates and adjustments to the tracking mechanisms to ensure that the data remain relevant and accurate.

Moreover, the overlapping and interconnected nature of different sectors can lead to double-counting or misallocation of energy consumption. For instance, energy used in the industrial sector for producing goods may also be counted under the commercial sector if those goods are used in commercial activities [3]. This overlap necessitates clear definitions and boundaries for each sector to avoid inaccuracies.

Lastly, geopolitical factors and international trade can influence energy consumption patterns, adding to the complexity of tracking. Energy imports and exports, along with varying energy policies across countries, can significantly impact the reported consumption figures [4]. These factors highlight the need for a coordinated international effort to develop standardized and transparent methodologies for tracking global energy consumption by sector.
In summary, the challenges in tracking global energy consumption by sector stem from the diversity of energy sources, the lack of uniform reporting standards, the dynamic nature of energy consumption, overlapping sector boundaries, and geopolitical influences. Addressing these challenges requires a concerted effort from international organizations, governments, and researchers to develop robust and standardized tracking mechanisms.
Note that global energy consumption grew by 2.2% in 2023 [5], which is faster than the average annual growth rate of 1.5% observed from 2010 to 2019 [6]. In terms of total consumption, the world uses over 20,000 (approaching 25,000) terawatt-hours of electricity annually [7]. Globally, the five sectors that consume the most energy in percentage terms are the following:
  • The Industrial Sector: This sector is the largest consumer of energy, accounting for approximately 37% of the world’s total delivered energy [8]. It includes various industries, such as chemicals, metals, cement, and paper and pulp.
  • The Transport Sector: The transport sector consumes about 25–30% of global energy. This includes energy used for all forms of transportation, such as road, rail, air, and maritime [9].
  • The Residential Sector: The residential sector, which includes energy consumption by households for heating, cooling, lighting, and appliances, accounts for around a quarter of global energy consumption (buildings in general account for about 30%) [10,11].
  • The Commercial Sector: This sector, which includes energy used by businesses and public services, accounts for about 8% of global energy consumption.
  • Agriculture, Forestry, and Fishing: This sector consumes around 3% of global energy, primarily for activities related to food production and resource extraction [12].
As for the ICT sector, it consumes about 4% of global electricity, which translates into approximately 2% of total global energy consumption from all sources, including fossil fuels and renewables. Breaking this down further, data centers are particularly energy-intensive, consuming between 240 and 340 terawatt-hours of electricity in 2022, which is about 1.0–1.3% of total global electricity use [13]. In comparison, the entire aviation sector, including both commercial and military aviation, accounts for approximately 2.5% of global energy consumption, while France's total energy consumption accounts for approximately 1.5% of the global total.

2.2. Foundations of Machine Learning

Machine learning, sometimes referred to as statistical learning, is a domain within computer science that concentrates on developing methods leveraging data to improve task performance [1]. It represents a branch of artificial intelligence that empowers software systems to generate more accurate predictions without being explicitly programmed. This is accomplished by training on a large set of examples, known as a training dataset.
Artificial Neural Networks (ANNs) are a class of machine learning algorithms that draw inspiration from the complex networks of biological neurons—specialized cells that transmit chemical and electrical signals, facilitating functions like memory storage and muscle coordination [14,15]. ANNs are structured with an input layer, one or more hidden layers, and an output layer, with neurons in each layer being interconnected. Each neuron is characterized by its threshold and weight, and it becomes activated when the input surpasses the threshold, passing the signal to subsequent neurons. The accuracy of these networks is enhanced by increasing the number of training examples and fine-tuning parameters in supervised learning scenarios.
Deep learning, a subset of machine learning, refers to a class of models that employ multiple layers of nonlinear processing units for feature extraction and transformation. Each layer in a deep learning model builds upon the representations learned by the previous layer, allowing for the automatic discovery of intricate patterns in large datasets. These models are particularly well-suited for tasks that involve high-dimensional data, such as image and speech recognition, natural language processing, and game playing. The advancement of deep learning has been driven by the availability of large-scale datasets, increased computational power, and the hybridization of training approaches (e.g., Reinforcement Learning from Human Feedback (RLHF) for large-scale Transformers, sparse activation for inference in deep neural networks (DNNs)). As a result, deep learning continues to push the boundaries of what is achievable with artificial intelligence, offering unprecedented capabilities in pattern recognition, predictive modeling, and decision-making processes.
Deep learning architectures, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), or the Transformer model, have demonstrated remarkable success in various applications by learning hierarchical representations. The depth of these networks enables them to model complex relationships within data, leading to state-of-the-art performance across a range of domains.
Convolutional Neural Networks (CNNs) [16] are a type of deep learning model designed to process data with a grid-like structure, such as images. They use convolutional layers to automatically detect patterns like edges, textures, and shapes directly from raw input data. CNNs are widely used in computer vision tasks, including image classification, object detection, and facial recognition.
Recurrent Neural Networks (RNNs) [17] are a specialized type of ANN engineered to process sequential and time-dependent data through the incorporation of feedback loops. In contrast to traditional feedforward networks, RNNs retain information from previous inputs, which influences the processing of current data. This mechanism negates the independence of inputs across different layers. Moreover, RNN neurons within a layer share identical weight parameters.
Transformer Architecture: The Transformer model, introduced by Google in 2017 [18], is a deep learning architecture designed for tasks involving sequential data, such as natural language processing, time series prediction, and music generation. A core component of the Transformer is the self-attention mechanism, which allows the model to assign varying degrees of importance to different parts of the input when making predictions. Through attention mechanisms, the model can evaluate the relevance of each input element in the context of the entire sequence. The Transformer architecture is particularly advantageous due to its ability to handle input sequences of varying lengths, making it highly effective for tasks like machine translation and text summarization. Additionally, its ability to capture long-range dependencies within sequences is crucial for many applications involving sequential data.
GPT-2: Language Modeling with Creativity—Generative Pre-Trained Transformer 2 (GPT-2), developed by OpenAI, is a deep learning model rooted in the Transformer architecture [19]. Functioning as a language model, GPT-2 is trained on vast datasets of internet text and capable of generating human-like text outputs. Unlike traditional Transformer models, GPT-2 employs a decoder-only structure, lacking an encoder component. This design allows GPT-2 to generate sequences of tokens, facilitating NLP tasks like text generation, sentiment analysis, and translation. However, the absence of an encoder limits its applicability to tasks requiring language understanding.

2.3. Performance Metrics and Evaluation Techniques

In the context of evaluating AI models, especially those deployed for environmental impact assessment, it is imperative to employ rigorous performance metrics and evaluation techniques. This ensures that the models not only achieve high predictive accuracy but also operate efficiently within resource constraints.
Performance metrics provide quantitative measures to assess the efficacy and efficiency of AI models. Key metrics include accuracy, precision, recall, the F1-score, the mean squared error and the root mean squared error, the area under the receiver operating characteristic curve, and the confusion matrix.
Evaluating AI models necessitates systematic techniques that ensure robust and reliable performance assessments.
  • Cross-Validation: This technique involves partitioning the dataset into multiple subsets and performing training and validation iteratively on different partitions. Common methods include k-fold cross-validation, stratified k-fold cross-validation, and leave-one-out cross-validation. This approach mitigates overfitting and provides a more generalized assessment of model performance.
  • Holdout Method: The dataset is divided into separate training and testing subsets. The model is trained on the training subset and evaluated on the testing subset, providing a straightforward yet less robust performance evaluation compared to cross-validation.
  • Bootstrapping: This resampling technique involves generating multiple subsets from the original dataset through replacement. Models are trained and evaluated on these subsets to estimate the accuracy and stability of predictions.
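To make these techniques concrete, the following minimal sketch shows the holdout method and k-fold cross-validation side by side. Python with scikit-learn is used purely as an illustrative choice; the dataset and model are placeholders, not taken from any study reviewed here.

```python
# Minimal sketch: holdout evaluation vs. k-fold cross-validation.
# Dataset and model are illustrative placeholders.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Holdout method: a single train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
holdout_acc = model.fit(X_tr, y_tr).score(X_te, y_te)

# k-fold cross-validation: k = 5 train/validate rounds, then averaged,
# giving a less split-dependent estimate than a single holdout.
cv_scores = cross_val_score(model, X, y, cv=5)

print(f"holdout accuracy: {holdout_acc:.3f}")
print(f"5-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
```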

2.4. Computational Efficiency and Resource Utilization

Computational complexity is a pivotal concept in computer science that aids in elucidating the efficiency of algorithms, particularly with respect to time and space requirements. It is predominantly categorized into two main types:
  • Time complexity: This metric evaluates the duration an algorithm requires to complete its execution as a function of the size of its input. It provides an upper bound on the running time, thereby facilitating performance prediction as input sizes escalate.
  • Space complexity: This metric assesses the amount of memory an algorithm utilizes relative to the size of its input.
Both time and space complexity are typically expressed using “Big O” notation; for example, an algorithm that scans each input element once runs in O(n) time, while one that compares every pair of elements runs in O(n²). Understanding computational complexity is crucial in the realm of machine learning, particularly when managing large datasets and sophisticated models. Algorithms with lower time and space complexities are generally more scalable and efficient, rendering them preferable for practical applications.
Resource utilization in machine learning pertains to the efficient employment of computational resources, such as CPUs, GPUs, memory, and energy, during the training and inference phases of models. Effective resource utilization ensures that machine learning tasks are executed efficiently, optimizing performance while minimizing costs and energy consumption.
  • CPU vs. GPU vs. TPU:
    CPU (Central Processing Unit): Versatile and suitable for general-purpose tasks, though often slower for parallelizable operations typical in machine learning.
    GPU (Graphics Processing Unit): Highly parallel and efficient for tasks involving large-scale matrix and vector operations, making it ideal for training deep neural networks.
    TPU (Tensor Processing Unit): Specialized hardware designed by Google specifically for accelerating machine learning workloads, providing significant speed-ups for tensor operations.
  • Memory Usage:
    Efficient memory usage is critical to handle large datasets and models. Techniques like memory swapping, gradient checkpointing, and efficient data pipelines can effectively manage memory usage.
  • Energy Consumption:
    Training and deploying machine learning models can be energy-intensive. Estimating and optimizing energy consumption are essential for sustainable AI practices. Techniques like model pruning, quantization, and the use of energy-efficient hardware can mitigate energy usage.
Optimization techniques are imperative to enhance the performance and efficiency of machine learning models. These techniques can be applied at various stages of model development and deployment.
  • Algorithmic Optimization:
    Gradient Descent Variants: Techniques like Stochastic Gradient Descent (SGD), Momentum, RMSprop, and Adam optimize the convergence speed and stability of training.
    Hyperparameter Tuning: Systematic approaches, such as grid search, random search, and Bayesian optimization, aid in identifying the optimal hyperparameters for the model.
  • Model Optimization:
    Pruning: Removing redundant parameters in neural networks to reduce model size and enhance inference speed without significantly affecting accuracy (see the code sketch after this list).
    Quantization: Reducing the precision of model parameters (e.g., from 32-bit to 8-bit) to decrease memory usage and increase computational speed.
    Knowledge Distillation: Training a smaller model (student) to replicate the performance of a larger model (teacher), thereby achieving a balance between efficiency and performance.
  • Hardware-Level Optimization:
    Parallel and Distributed Computing: Leveraging multiple processors or distributed systems to manage large-scale computations more efficiently.
    Hardware Acceleration: Utilizing specialized hardware, such as GPUs and TPUs, to accelerate specific machine learning operations.
  • Software Optimization:
    Efficient Libraries and Frameworks: Employing optimized libraries and frameworks (e.g., TensorFlow, PyTorch) designed to maximize the underlying hardware.
    Compiler Optimizations: Utilizing advanced compilers and settings that optimize the code for performance on specific hardware architectures.
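To illustrate two of the model optimization techniques above, the following sketch applies magnitude pruning and dynamic 8-bit quantization to a small network. PyTorch is an illustrative choice, and the model, pruning ratio, and layer sizes are arbitrary placeholders rather than recommendations.

```python
# Illustrative sketch: magnitude pruning and dynamic 8-bit quantization in
# PyTorch. The network, pruning ratio, and layer sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 30% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Quantization: store Linear weights as 8-bit integers for inference,
# reducing memory use and speeding up CPU execution.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```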
By comprehending and implementing these concepts, machine learning practitioners can design and deploy models that are not only accurate but also efficient in terms of computational resources and energy consumption, thereby paving the way for more sustainable and scalable AI solutions.

3. Literature Review

3.1. AI Energy Consumption and CO2 Emission

The environmental impact of artificial intelligence has become a significant area of research, with numerous studies highlighting the energy consumption and CO2 emissions associated with AI technologies. Wu et al. [20] provide a comprehensive overview of the environmental implications, challenges, and opportunities related to sustainable AI. Their work emphasizes the need for more energy-efficient AI models and the potential benefits of integrating renewable energy sources into AI infrastructure.
Lacoste et al. [21] quantify the carbon emissions of machine learning models, revealing that training large AI models can result in substantial CO2 emissions. They propose several strategies to mitigate these emissions, including optimizing model architectures, using more efficient hardware, and leveraging renewable energy sources. Their findings underscore the importance of considering the environmental costs of AI development and the need for sustainable practices in the field.
Recent studies have also explored various methods to reduce the environmental impact of AI. For instance, optimizing energy efficiency through AI-powered systems can significantly lower carbon emissions. These systems can analyze vast amounts of real-time data, enabling smarter grid management and optimizing energy distribution [22]. Additionally, AI algorithms can monitor and analyze energy consumption patterns, identify opportunities for efficiency improvements, and facilitate predictive maintenance to minimize energy waste [23].
Overall, the existing body of research highlights the dual role of AI as both a contributor to and a potential mitigator of environmental impact. By adopting sustainable practices and leveraging AI’s capabilities for energy optimization, it is possible to reduce the carbon footprint of AI technologies and contribute to global sustainability goals.

3.2. Estimation of Energy Consumption in Machine Learning

Energy consumption estimation in machine learning has increasingly become more relevant, as highlighted by García-Martín et al. in [24]. The research focuses on both the training and inference phases of machine learning models, particularly deep neural networks, and their distinct computational requirements. During training, models, often utilizing high-end GPUs, CPUs, or FPGAs, undergo parameter optimization and network architecture tuning. In contrast, the inference phase, typically executed on low-end embedded systems, such as smartphones and wearables, involves applying pre-trained models to new data.
Early energy estimation models used metrics like the number of multiply–accumulate (MAC) operations and memory accesses to approximate energy usage, with optimization techniques like pruning and compression aimed at reducing model weights and therefore energy consumption. Advanced models have since incorporated detailed energy costs across different memory hierarchy levels, optimizing the reuse of data to minimize energy use.
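The following back-of-the-envelope sketch illustrates this early style of estimation for a single convolutional layer, counting MAC operations and memory accesses. The per-event energy constants and the simplified access count are assumed placeholder values, not measurements from the cited work.

```python
# Back-of-the-envelope energy estimate for one convolutional layer, in the
# spirit of early MAC-and-memory-access models. The per-event energy costs
# and the simplified access counts are assumed placeholders, not measurements.
E_MAC_PJ = 4.6     # assumed picojoules per multiply-accumulate
E_DRAM_PJ = 640.0  # assumed picojoules per DRAM access

def conv_layer_energy_j(h, w, c_in, c_out, k):
    """Rough energy (joules) of a k x k convolution over an h x w x c_in input."""
    macs = h * w * c_in * c_out * k * k             # one MAC per kernel tap
    accesses = h * w * c_in + c_in * c_out * k * k  # inputs + weights, once each
    return (macs * E_MAC_PJ + accesses * E_DRAM_PJ) * 1e-12

print(f"~{conv_layer_energy_j(56, 56, 64, 64, 3):.4f} J for one 3x3 conv layer")
```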
For practical applications, methodologies like SyNERGY [24] utilize Performance Monitoring Counters (PMCs) to model energy consumption at the application level, capturing real-time power data and translating them into energy estimates. This approach has been validated across various Convolutional Neural Network (ConvNet) layers, showing reasonable accuracy in predicting energy usage.
There is a range of other ML methods utilizing PMCs for energy prediction. Linear regression models, guided by the Theory of Energy of Computing [25] (guiding the selection of PMCs), enhance prediction accuracy by ensuring that energy estimations reflect additive behaviors across applications, aligning with a principle similar to energy conservation. More complex infrastructures like MuMMI [26] incorporate multiple ML techniques to model power and performance on high-performance systems, offering low prediction error rates while also identifying key energy-related metrics. Ensemble learning approaches, such as Random Forests and Gradient Boosting, further improve robustness by capturing nonlinear interactions between PMCs and energy consumption. Deep learning models, particularly ANNs, have also shown promise in identifying subtle patterns within performance data, although with higher computational requirements [27].
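As a minimal illustration of this regression-based approach, the sketch below fits a linear model from synthetic performance-counter readings to a synthetic energy signal; the non-negativity constraint reflects the additive, conservation-like behavior described above. Counter names, per-event costs, and data are all hypothetical.

```python
# Minimal sketch: a PMC-based energy model fit with linear regression.
# Counter names, per-event costs, and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Columns: instructions retired, cache misses, DRAM accesses per interval.
pmcs = rng.uniform(1e6, 1e9, size=(200, 3))
true_costs = np.array([1e-9, 5e-8, 6e-7])  # assumed joules per event
energy_j = pmcs @ true_costs + rng.normal(0.0, 0.05, 200)  # "measured" energy

# positive=True enforces non-negative, additive per-event contributions.
model = LinearRegression(positive=True).fit(pmcs, energy_j)
print("estimated joules per event:", model.coef_)
```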
The literature also addresses the challenge of energy estimation for GPUs, a critical component for training large models. Tools like NeuralPower provide predictive models for GPU energy consumption considering application-level features without altering voltage and frequency settings.
These advancements in energy estimation methodologies are crucial for developing energy-efficient machine learning systems, particularly as models become more complex and hardware diversifies. Further research is needed to refine these models and extend their applicability to new neural network architectures and specialized hardware platforms.

3.3. Dynamic GPU Energy Optimization for Machine Learning Training Workloads

The increasing scale and complexity of modern machine learning models have led to longer training times and higher energy consumption, particularly when using GPUs.
GPOEO: A Framework for Online GPU Energy Optimization
The GPOEO (GPU Online Energy Optimization) framework, proposed by Wang et al. in [28], represents a significant advancement in reducing the energy consumption of GPUs during ML training workloads. GPOEO dynamically determines the optimal energy configuration by leveraging techniques for online measurement, multi-objective prediction modeling, and search optimization.
Key Components and Methodology
GPOEO utilizes GPU performance counters to characterize workload behavior. To minimize profiling overhead, it employs an analytical model that detects training iteration changes and only collects performance counter data during iteration shifts. This approach contrasts with traditional methods that often require extensive offline profiling or source code analysis, which can be inflexible and resource-intensive. GPOEO employs multi-objective models based on Gradient Boosting and a local search algorithm to balance execution time and energy consumption.
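The following toy sketch conveys the underlying search idea: predict (time, energy) for candidate GPU clock configurations and keep the feasible configuration with the lowest energy. Both the candidate clocks and the predictor are hypothetical placeholders standing in for GPOEO's learned multi-objective models.

```python
# Toy sketch of multi-objective energy configuration search: predict
# (time, energy) per candidate clock and minimize energy within a slowdown
# cap. Clocks, curves, and the tolerance are hypothetical placeholders.
CANDIDATE_CLOCKS_MHZ = [900, 1200, 1500, 1800, 2100]
SLOWDOWN_TOLERANCE = 1.25  # assumed: accept up to 25% longer runtime

def predict(freq):
    """Placeholder for a learned model mapping clock -> (time s, energy J)."""
    time_s = 100.0 * (2100 / freq) ** 0.5  # assumed slowdown curve
    power_w = 50 + 0.12 * freq             # assumed power curve
    return time_s, time_s * power_w

baseline_time, baseline_energy = predict(max(CANDIDATE_CLOCKS_MHZ))
feasible = [f for f in CANDIDATE_CLOCKS_MHZ
            if predict(f)[0] <= SLOWDOWN_TOLERANCE * baseline_time]
best = min(feasible, key=lambda f: predict(f)[1])
print(f"clock {best} MHz saves {1 - predict(best)[1] / baseline_energy:.0%} energy")
```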
Performance and Evaluation
The GPOEO framework was evaluated on 71 ML workloads from two AI benchmark suites running on an NVIDIA RTX 3080 Ti GPU. The results demonstrated a mean energy saving of 16.2%, with a modest average execution time increase of 5.1%, compared to the NVIDIA default scheduling strategy.
Comparative Analysis
The study distinguishes itself from other online GPU power optimization methods, such as the online dynamic power–performance (ODPP) [29] and dynamic power management methods for integrated APUs, by providing finer-grained runtime information and a more accurate prediction model. Existing offline techniques, which involve extensive profiling or specific ML algorithm modifications, lack the adaptability and efficiency of the GPOEO framework.
Future Directions
Future work aims to extend GPOEO by exploring model-free methods that do not require offline training. Additionally, integrating more advanced machine learning techniques for prediction and optimization could further enhance the framework’s efficiency and adaptability.
Implications and Applications
The GPOEO framework offers substantial implications for reducing the environmental impact of training large-scale ML models. By optimizing GPU energy consumption dynamically, it not only lowers operational costs but also enables the training of more complex models within existing energy budgets. This approach is particularly relevant for high-performance computing environments where energy efficiency is paramount.
The development of frameworks like GPOEO signifies a crucial step toward sustainable and efficient machine learning. The integration of online energy optimization techniques, informed by real-time performance data, represents a significant advancement in managing the energy demands of contemporary ML workloads.

3.4. Intelligent AI on the Edge

The rapid growth of Internet of Things (IoT) applications and Connected and Autonomous Vehicles (CAVs) has driven the need for efficient, real-time data processing on edge devices. This subsection reviews recent advancements in deploying intelligent AI on the edge, emphasizing energy-efficient machine learning techniques and the critical role of software optimization.
Energy-Efficient Machine Learning for Edge Devices
Edge devices are integral to the functionality of IoT and CAVs, handling vast amounts of data generated in real time. However, the computational intensity required for ML models to process these data poses significant challenges in terms of energy consumption and heat generation. Efficient hardware alone cannot address these challenges; software optimization is equally essential.
State-of-the-Art Techniques
Recent research has focused on making hardware more efficient for ML tasks. For example, Intel Nervana Neural Network Processors and Nvidia DGX-2 are designed to enhance performance and energy efficiency. However, without energy-efficient software, the potential benefits of such advanced hardware may not be fully realized. This necessitates the development of tools to help software developers optimize ML models for energy efficiency.
JEPO: Java Energy Profiler and Optimizer
One notable contribution to this field is the Java Energy Profiler and Optimizer (JEPO) [30], an Eclipse plugin designed to help developers profile and optimize Java-based ML code. JEPO measures energy consumption at the method level, providing suggestions for energy-efficient coding practices. These suggestions cover various aspects of Java programming, including data types, operators, control statements, strings, exceptions, objects, and arrays.
The effectiveness of JEPO was demonstrated through its application to the WEKA ML software, where it reduced energy consumption by up to 14.46% and execution time by 12.93%, with only a 0.48% drop in accuracy.
Challenges and Future Directions
While JEPO represents a significant advancement, it underscores the broader challenge of optimizing software for energy efficiency on edge devices. Future research must explore model-free methods that do not rely on extensive offline training and investigate more advanced ML techniques for predictive modeling and optimization.
Applications in IoT and CAVs
The deployment of ML models on edge devices enables a wide range of applications in IoT and CAVs. In IoT, edge computing supports smart homes, retail, travel, finance, healthcare, industry, social media, and research by processing data locally to meet latency, bandwidth, availability, and privacy constraints. For instance, EdgeBox leverages edge computing for real-time video analysis to detect safety threats, highlighting the need for energy-efficient software to avoid hardware overheating.
In the context of CAVs, edge devices process data from sensors, such as GPS, LiDAR, cameras, radar, and sonar, to enable perception, object recognition, and decision making. The energy efficiency of these processes is crucial due to the real-time processing requirements and limited computational resources of edge devices.
Impact on Environmental Sustainability
Optimizing the energy consumption of ML models on edge devices has significant implications for environmental sustainability. Reducing the energy footprint of IoT and CAVs not only lowers operational costs but also contributes to mitigating the environmental impact of large-scale data processing. This aligns with broader efforts to create more sustainable and efficient technology ecosystems.
The advancements in intelligent AI on the edge, particularly through tools like JEPO, mark a critical step towards achieving energy efficiency in ML applications. By integrating software and hardware optimization strategies, we can enhance the performance of edge devices, support the growing demands of IoT and CAVs, and promote sustainable technological development.

3.5. Energy-Efficient Practices in Deep Learning Training

Yarally et al. in [31] delve into strategies for enhancing the energy efficiency of deep learning models, which is particularly relevant for applications in the Internet of Things (IoT) and Connected and Autonomous Vehicles (CAVs). The study focuses on two main aspects: hyperparameter tuning strategies and the complexity of neural network architectures.
The authors explore three popular hyperparameter tuning strategies: grid search, random search, and Bayesian optimization. Their findings indicate that Bayesian optimization significantly outperforms the other two strategies in terms of energy efficiency. This method not only reduces the number of iterations needed to find optimal hyperparameters but also minimizes energy consumption during the training process. This is particularly beneficial for edge devices, which have limited computational resources and energy budgets.
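As a point of reference for these tuning strategies, the sketch below runs grid search and random search with an equal training budget using scikit-learn (an illustrative choice; dataset, model, and search spaces are placeholders). Bayesian optimization, found in [31] to be the most energy-efficient of the three, exposes a similar interface in libraries such as scikit-optimize.

```python
# Sketch: grid search vs. random search on an equal budget of 9 candidate
# configurations. Dataset, model, and search spaces are illustrative.
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

grid = GridSearchCV(
    SVC(), {"C": [0.1, 1, 10], "gamma": [1e-4, 1e-3, 1e-2]}, cv=3
)
rand = RandomizedSearchCV(
    SVC(),
    {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-5, 1e-1)},
    n_iter=9,  # same number of model trainings as the 3 x 3 grid above
    cv=3,
    random_state=0,
)
print("grid search best CV accuracy:  ", grid.fit(X, y).best_score_)
print("random search best CV accuracy:", rand.fit(X, y).best_score_)
```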
Moreover, the study examines the energy consumption of different neural network layers, specifically convolutional, linear, and ReLU layers. It was observed that convolutional layers are the most computationally expensive, thus contributing the most to the overall energy consumption of the model. The researchers suggest that by simplifying the network architecture, particularly by reducing the number of convolutional layers, it is possible to significantly cut down on energy consumption without severely compromising accuracy. This approach aligns with the principles of Green AI, which advocates for the consideration of energy consumption as an essential metric alongside model accuracy.
The research underscores the importance of adopting energy-efficient practices in the early stages of the deep learning pipeline. By prioritizing energy efficiency during hyperparameter tuning and model design, it is possible to develop AI models that are both effective and sustainable. This is especially crucial for edge computing environments, where energy efficiency directly impacts the feasibility and performance of deploying intelligent AI solutions.
In summary, ref. [31] highlights innovative strategies for reducing energy consumption in deep learning, offering valuable insights for developers and researchers aiming to create more sustainable AI systems. Their emphasis on energy-efficient hyperparameter tuning and simplified network architectures presents practical approaches to mitigating the environmental impact of AI, paving the way for more sustainable and efficient AI applications on the edge.

3.6. Parallelizing Deep Neural Networks: Data and Model Parallelism

In the realm of distributed machine learning, parallelization techniques play a pivotal role in scaling deep neural networks (DNNs) to handle vast amounts of data efficiently. There are two primary approaches to parallelizing DNNs: data parallelism and model parallelism.
Data Parallelism
Data parallelism involves distributing the training data across multiple machines or processors. Each machine computes the gradients independently using its subset of the data and then communicates these gradients to a central parameter server or master node, where they are aggregated and used to update the global model parameters. This approach allows for simultaneous processing of different subsets of data and is well-suited to deep–narrow networks where each training example is processed through multiple layers sequentially. Notable implementations of data parallelism include frameworks like TensorFlow and PyTorch, which facilitate efficient gradient aggregation across distributed systems.
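A conceptual sketch of this scheme is given below: each simulated worker computes a gradient on its own data shard, and the shard gradients are averaged into a single global update, mimicking a parameter server. The model (linear regression) and data are illustrative placeholders, and the workers run sequentially rather than concurrently.

```python
# Conceptual sketch of data parallelism: every worker computes a gradient on
# its own data shard; a parameter server averages them into one global update.
# Workers are simulated sequentially; real systems run them concurrently.
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1024, 8)), rng.normal(size=1024)
w = np.zeros(8)                              # global model parameters
shards = np.array_split(np.arange(1024), 4)  # 4 workers, disjoint data shards

for step in range(100):
    shard_grads = []
    for rows in shards:                      # each "worker" sees only its shard
        residual = X[rows] @ w - y[rows]
        shard_grads.append(X[rows].T @ residual / len(rows))
    w -= 0.1 * np.mean(shard_grads, axis=0)  # aggregate and update

print("final mean squared error:", float(np.mean((X @ w - y) ** 2)))
```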
Model Parallelism
Model parallelism, on the other hand, partitions the neural network itself across multiple devices or machines. Each machine is responsible for computing a portion of the network’s operations, typically within a single layer or a subset of layers. This approach is particularly beneficial for wide–shallow networks where the model’s size or complexity limits its fit into a single device’s memory or computational capacity. Model parallelism requires careful coordination and synchronization between devices to ensure consistent and accurate model updates.
The effective implementation of these parallelization strategies heavily depends on advancements in hardware designed to support distributed learning. Modern systems leverage multi-GPU setups, such as NVIDIA’s P100 and V100 GPUs, as well as specialized accelerators like Google’s Tensor Processing Units (TPUs) and Intel’s Knights Landing (KNL) chips. These hardware platforms are designed to handle the massive computational demands of training large-scale DNNs efficiently.
To fully exploit the potential of distributed hardware, asynchronous optimization algorithms have gained prominence. These algorithms allow workers to update model parameters independently and asynchronously, avoiding the synchronization overheads associated with traditional synchronous approaches. Two notable asynchronous solvers are the following.
  • Hogwild! SGD [32]: This method removes locks on shared parameters, enabling multiple threads to update the global model parameters concurrently. Despite potential conflicts during updates, it has been shown to converge under certain conditions (a minimal sketch appears after this list).
  • Elastic Averaging SGD (EASGD) [33]: EASGD is designed for distributed systems and balances local updates with global model synchronization. It allows each worker to compute gradients independently, and it periodically synchronizes with a central server using a round-robin strategy, ensuring convergence and scalability.
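The sketch below conveys the Hogwild! idea in miniature: several threads update one shared parameter vector with no locking, tolerating occasional overlapping writes. It is a conceptual illustration only; Python threads are constrained by the interpreter lock, whereas real Hogwild! runs truly parallel updates on shared memory.

```python
# Conceptual sketch of Hogwild!-style lock-free SGD: several threads update a
# shared parameter vector without locks, tolerating occasional overwrites.
import threading
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(4000, 8)), rng.normal(size=4000)
w = np.zeros(8)  # shared parameters, read and written with no locking

def worker(rows):
    for i in rows:
        grad = (X[i] @ w - y[i]) * X[i]  # single-example squared-error gradient
        w[:] = w - 0.01 * grad           # unsynchronized in-place update

threads = [threading.Thread(target=worker, args=(rows,))
           for rows in np.array_split(np.arange(4000), 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final mean squared error:", float(np.mean((X @ w - y) ** 2)))
```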
Practical implementations of asynchronous solvers often face challenges, such as parameter synchronization and communication overheads. Techniques like EA-wild [34] have been proposed to mitigate these issues by allowing asynchronous updates without explicit locking, thereby enhancing throughput and scalability in distributed environments.
In contrast, synchronous solvers rely on efficient communication patterns like all-reduce operations to synchronize gradients across multiple devices. This approach is crucial for maintaining consistency in model updates and is widely used in distributed optimization algorithms like synchronous SGD variants.
In conclusion, the choice between data and model parallelism depends on the specific characteristics of the neural network architecture and the available hardware infrastructure. Advances in hardware and algorithmic developments continue to drive the scalability and efficiency of parallelized deep learning, making it feasible to train state-of-the-art models on massive datasets across distributed environments.

3.7. Machine Learning for Reducing CO2 Emissions in Other Sectors

Machine learning has a dual impact on the environment. While it can consume significant amounts of energy and produce substantial CO2 emissions, it also offers powerful tools for optimizing processes to reduce energy use and emissions. This section explores the beneficial side of ML's environmental impact, focusing on its potential to improve energy efficiency and decrease carbon emissions across various other sectors.
Lifecycle assessment (LCA) systematically evaluates the environmental impacts of products and processes from start to finish. Integrating ML into LCA has greatly enhanced this field by improving accuracy and efficiency. Ghoroghi et al. [35] review how ML techniques have been applied to LCA, showing their potential to streamline processes and enhance predictive capabilities. Using ML algorithms, researchers can better predict CO2 emissions at different stages of a product’s lifecycle, leading to more informed decisions and sustainable practices.
Figure 1 illustrates the predictors and outcomes of machine learning methods utilized in lifecycle assessment (LCA) applications. It highlights characteristics as the most commonly used inputs and impact categories as the most frequently evaluated outcomes in these applications. The environmental impact categories include global warming, acidification, eutrophication, formation of air pollution, and ozone depletion.
Enhancements in Predictive Accuracy
A key benefit of incorporating ML into LCA is the significant improvement in predictive accuracy. Traditional LCA methods often rely on static models and historical data, which can be limited in scope and precision. In contrast, ML can process vast amounts of data and identify complex patterns, resulting in more accurate environmental impact predictions. This enhanced accuracy is crucial for developing effective strategies to reduce CO2 emissions, as it identifies the most critical areas for improvement and optimization.
Case Studies and Applications
Several case studies have demonstrated the effectiveness of ML in LCA. For example, ML models have been used to optimize product design and manufacturing processes, leading to reduced energy consumption and lower emissions. These applications highlight ML’s transformative potential in promoting sustainability across various industries. Examples include optimizing material usage in manufacturing, improving waste management practices, and enhancing supply chain efficiency, all contributing to lower energy consumption and CO2 emissions.
The industrial sector is a major source of global CO2 emissions; however, ML-driven optimizations can significantly reduce these emissions. Sayyah et al. [36] present a case study on the carbon dioxide methanation process (following the process shown in Figure 2), demonstrating how ML-based lifecycle optimization can achieve both environmental efficiency and productivity gains.
Process Optimization
ML models can optimize various industrial processes, from manufacturing to waste management. By analyzing data from these processes, ML algorithms can identify inefficiencies and suggest improvements, leading to reduced energy consumption and lower emissions. For example, ML can optimize machinery operations, reduce downtime, and improve production line efficiency. Continuous real-time monitoring and adjustment ensure industrial operations perform at their best, minimizing energy waste and emissions.
Real-Time Monitoring and Control
Real-time monitoring and control are essential for maintaining optimal performance in industrial processes. ML algorithms can analyze data in real time, providing insights and recommendations for immediate adjustments. This ensures that processes always operate at peak efficiency, minimizing energy waste and emissions. Additionally, real-time monitoring enables predictive maintenance, reducing the likelihood of unexpected equipment failures and further enhancing energy efficiency.
Delanoë et al. [37] provide a comprehensive evaluation of AI models designed to reduce CO2 emissions. Their research highlights the gains achieved through AI-driven optimizations in various applications, from transportation to energy production.
Algorithm Development
Developing efficient algorithms is crucial for using AI to reduce CO2 emissions. Delanoë et al. [37] discuss various approaches to algorithm development, emphasizing the importance of creating models that are both accurate and computationally efficient. These algorithms can be applied to a wide range of applications, from optimizing energy grids to improving transportation system efficiency. For instance, ML algorithms can optimize the routing and scheduling of delivery trucks, reducing fuel consumption and emissions.
Sustainable AI Practices
In addition to developing efficient algorithms, it is important to consider the sustainability of AI practices. Training AI models can be energy-intensive, and researchers are exploring ways to reduce the carbon footprint of AI development. This includes using renewable energy sources for data centers and developing more efficient training algorithms. Efforts are also being made to improve the energy efficiency of the hardware used in AI training, making it more environmentally friendly.

3.8. Quantifying the Carbon Emissions of Machine Learning

The rapid expansion of machine learning and artificial intelligence technologies has led to growing concerns regarding their environmental impact, particularly in terms of carbon emissions. Lacoste et al. [21] delve into this issue by conducting a comprehensive study that quantifies the carbon emissions associated with training ML models. Their work highlights the urgency of understanding and mitigating these emissions, which are significantly influenced by factors like the type of energy consumed, the computational infrastructure employed, and the duration of training processes.
Lacoste et al. [21] identify several critical factors that contribute to the carbon footprint of ML training, emphasizing the need for the ML community to consider these elements in their work.
Type of Energy Used
The energy source powering the data centers where ML models are trained is one of the most critical determinants of carbon emissions. The study notes that servers connected to grids powered by renewable energy sources result in significantly lower CO2-equivalent (CO2eq) emissions compared to those reliant on fossil fuels. For example, Brander et al. [38] provide a detailed analysis of electricity-specific emission factors, showing that carbon emissions can vary drastically depending on geographical location. In regions like Quebec, Canada, where hydroelectric power is prevalent, carbon emissions from data centers are as low as 20 g CO2eq/kWh. Conversely, in places like Iowa, USA, which rely more heavily on coal and natural gas, emissions can soar to 736.6 g CO2eq/kWh. This highlights the importance of choosing data center locations wisely to minimize environmental impact.
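A back-of-the-envelope calculation using the two regional factors quoted above shows how strongly location alone can shift training emissions. The 1000 kWh training-energy figure is an assumed example, not a value from [21].

```python
# Back-of-the-envelope emissions comparison using the regional carbon
# intensities quoted above (g CO2eq per kWh). The 1000 kWh training-energy
# figure is an assumed example, not a value reported in [21].
GRID_INTENSITY_G_PER_KWH = {"Quebec, Canada": 20.0, "Iowa, USA": 736.6}
TRAINING_ENERGY_KWH = 1_000  # hypothetical training run

for region, g_per_kwh in GRID_INTENSITY_G_PER_KWH.items():
    kg = TRAINING_ENERGY_KWH * g_per_kwh / 1_000
    print(f"{region}: {kg:,.0f} kg CO2eq")
# Identical workload, ~37x difference in emissions from location alone.
```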
Computing Infrastructure and Training Time
The choice of computational infrastructure, such as GPUs and TPUs, and the duration of the training process also significantly impact carbon emissions. Modern deep learning models often require vast computational resources and extended training times, which lead to substantial energy consumption. Jouppi et al. [39] discuss the energy efficiency of Tensor Processing Units (TPUs), noting that these specialized processors can offer significantly better performance per watt (FLOPS/W) compared to traditional GPUs. This makes TPUs a more sustainable choice for large-scale ML training tasks.
Furthermore, the optimization of training procedures is crucial for reducing energy consumption. Tajbakhsh et al. [40] demonstrate that fine-tuning pre-trained models, as opposed to training from scratch, can lead to significant reductions in the computational resources required without compromising model performance. This is particularly important in domains like medical imaging, where large datasets and complex models are common. Similarly, Howard and Ruder [41] show that Universal Language Model Fine-Tuning (ULMFiT) can achieve state-of-the-art results with a fraction of the computational cost compared to training new models from the ground up.
Cloud Providers and Data Center Location
The environmental impact of machine learning training and artificial intelligence is significantly influenced by the choice of cloud provider and the location of data centers. Different cloud service providers have varying levels of commitment to sustainability, which directly affects their environmental footprint. For example, Google has achieved carbon neutrality across its global operations through substantial investments in renewable energy and advanced cooling systems designed to minimize energy consumption [42]. Microsoft has also made significant strides by developing carbon-neutral data centers and investing heavily in renewable energy [43]. These initiatives set a high standard for the industry in terms of sustainability.
In contrast, providers that still rely predominantly on carbon-intensive energy sources contribute to higher emissions. The energy grid’s composition in different regions plays a crucial role in this regard. Data centers located in areas where the energy grid relies heavily on fossil fuels, such as in China and India, have greater carbon intensity due to their reliance on coal, leading to greater emissions per kilowatt-hour compared to regions with a higher share of renewable energy, such as Scandinavia [20].
The efficiency of data centers is also a critical factor. Data centers optimized for energy efficiency can substantially reduce their operational carbon footprints. The choice of cloud provider and their commitment to sustainability are essential considerations for minimizing the carbon footprint of ML and AI projects [21].
To assist researchers and practitioners in assessing and mitigating the carbon impact of their ML models, Lacoste et al. [21] developed the Machine Learning Emissions Calculator (https://mlco2.github.io/impact/ accessed on 13 January 2025). This tool provides estimates of the carbon emissions produced during the training of ML models, taking into account factors like the geographical location of the server, the type of computational hardware used (e.g., GPU, TPU), and the total training time. By using this calculator, researchers can make more informed decisions regarding their model training processes, potentially opting for more energy-efficient methods or choosing to train models in regions with lower carbon intensity.
One study [21] offers a critical examination of the carbon footprint associated with ML model training, underscoring the necessity of sustainable practices in the field of artificial intelligence. By providing practical tools and evidence-based recommendations, the work serves as a call to action for the ML community to actively reduce the environmental impact of their research. As AI continues to grow, integrating sustainability into its development and deployment becomes increasingly imperative.
In light of the findings, Lacoste et al. [21] provide several actionable recommendations for reducing the carbon emissions associated with ML training:
  • Choose Energy-Efficient Hardware: Selecting hardware with higher computing efficiency is crucial for reducing energy consumption. For instance, Jouppi et al. [39] suggest that TPUs, with their higher performance per watt, are a more sustainable choice compared to traditional GPUs, especially for large-scale deep learning tasks.
  • Optimize Training Procedures: Utilizing pre-trained models and efficient hyperparameter search methods can greatly reduce the computational resources required for training. Bergstra and Bengio [44] highlight the effectiveness of random search over grid search in hyperparameter optimization, significantly lowering the computation needed. Falkner et al. [45] further propose BOHB (Bayesian Optimization with HyperBand), which balances exploration and exploitation, providing a more efficient hyperparameter search mechanism.
  • Select Low-Carbon Data Centers: The choice of data center is another critical factor. Researchers should consider cloud providers with a strong commitment to renewable energy. Google’s [42] and Microsoft’s [43] data centers, for example, offer lower carbon options due to their investments in sustainable energy infrastructure.
  • Engage in Responsible Computing Practices: Awareness of the environmental implications of computing tasks is essential. Schwartz et al. [46] advocate for “Green AI”, encouraging the community to prioritize energy-efficient algorithms and reduce unnecessary computations. This includes adopting practices like thorough literature reviews to avoid redundant experiments, efficient code debugging, and leveraging optimized algorithms.

3.9. Sustainable AI

The unprecedented growth of artificial intelligence technologies has sparked a global revolution across various sectors, ranging from healthcare and finance to autonomous systems and beyond. However, this rapid expansion comes with significant environmental costs, particularly in terms of carbon emissions and energy consumption. In their comprehensive study, Wu et al. [20] examine these environmental implications, outlining the challenges and opportunities for making AI more sustainable. This section delves into the core issues identified in the paper, supported by key references that illustrate the broader context of AI’s environmental impact.
The proliferation of AI has been characterized by a super-linear growth trend, especially in data volume, model complexity, and computational infrastructure. Wu et al. [20] provide a striking example of this trend by highlighting a 2.4× increase in training data at Facebook between 2019 and 2021, which has driven a 3.2× rise in data ingestion bandwidth demand. This rapid escalation is not confined to data alone; AI model sizes have also expanded dramatically. For instance, recommendation models at Facebook grew 20× in size over the same period, necessitating significant increases in computational resources and leading to substantial environmental impacts.
This growth trajectory is mirrored across the AI industry, where large-scale models, such as OpenAI’s GPT-3 and Google’s BERT, require immense computational power, contributing to a substantial carbon footprint. The environmental cost of training such models is non-trivial; it involves both operational and embodied carbon footprints, the latter referring to the carbon emissions associated with the manufacture and lifecycle of the hardware used for AI computations.

3.9.1. Operational vs. Embodied Carbon Footprint and the Environmental Impact of AI Model Training

The carbon footprint of AI systems can be divided into two primary components: operational and embodied carbon footprints. The operational carbon footprint pertains to the energy consumed during the deployment and use of AI models. This includes the power required to run data centers and the ongoing energy demands of AI systems in production. Wu et al. [20] emphasize that while operational emissions are significant, the embodied carbon footprint—the carbon emissions from the production, transportation, and disposal of hardware—can be even more consequential.
The embodied carbon footprint is particularly relevant in the context of AI’s rapid infrastructure growth. For example, Facebook’s data centers experienced a 2.9× increase in AI training infrastructure capacity over a span of 1.5 years, highlighting the environmental burden of scaling AI infrastructure [20]. The production of GPUs, TPUs, and other accelerators involves energy-intensive processes, contributing to a high embodied carbon footprint. Studies like those by Anthony et al. [47] and Schneider et al. [48] provide a deeper understanding of the carbon costs associated with the hardware lifecycle, suggesting that the embodied emissions may account for a significant portion of the total carbon footprint in large-scale AI deployments.
The training of AI models is one of the most energy-intensive stages in the AI lifecycle. Strubell et al. [49] bring attention to the staggering energy requirements for training large natural language processing (NLP) models. For example, they estimate that training a large Transformer with neural architecture search can emit as much CO2 as five cars over their entire lifetimes, while training a single BERT model emits roughly as much as a trans-American flight. This energy-intensive process is exacerbated by the trend toward increasingly complex models, which require larger datasets and more computational power. The environmental impact of such training processes is not only a function of the model's size but also of the efficiency of the training algorithms and the underlying hardware.
Wu et al. [20] discuss various strategies to mitigate these environmental impacts, including the use of more energy-efficient hardware, such as Google’s TPUs, which offer better performance per watt compared to traditional GPUs. Furthermore, optimizing training procedures, such as by employing transfer learning, hyperparameter optimization, and model pruning, can significantly reduce the computational resources required, thereby lowering the associated carbon emissions.
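To make one of these optimizations more tangible, the sketch below illustrates magnitude-based weight pruning using PyTorch's built-in pruning utilities. This is our own minimal illustration, not code from the cited works; the architecture and the 30% sparsity level are arbitrary assumptions.

```python
# Minimal pruning sketch (illustrative model and sparsity level).
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 30% of weights with the smallest L1 magnitude in each layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# Sparsity-aware runtimes or hardware can skip the zeroed weights,
# lowering the number of multiply-accumulates and the energy per inference.
zeros = sum(int((p == 0).sum()) for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"Global weight sparsity: {zeros / total:.1%}")
```

Note that pruning lowers energy use only when the deployment stack can exploit the resulting sparsity; otherwise it mainly reduces model size.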

3.9.2. Challenges and Opportunities for Sustainable AI

Achieving sustainable AI is fraught with challenges. Wu et al. [20] outline several critical obstacles, including the lack of standardized metrics for measuring the environmental impact of AI and the complexity of balancing performance with energy efficiency. The rapid pace of AI innovation often outstrips the development of sustainable practices, leading to an increased carbon footprint that is not adequately accounted for in most AI research and development projects.
Moreover, the environmental cost of AI is often externalized, meaning that the true carbon footprint is not reflected in the cost of AI products and services. This externalization creates a disincentive for companies to invest in greener technologies, as the immediate financial benefits are not apparent. Henderson et al. [50] advocate for systematic reporting of the energy and carbon footprints of machine learning models, urging the AI community to adopt more transparent practices that account for environmental costs.
Wu et al. [20] underscore the urgent need for the AI community to address the environmental implications of its rapid growth. By recognizing the carbon footprint associated with AI model training and infrastructure, and by implementing strategies to mitigate these impacts, the field can progress towards more sustainable practices. The integration of resource-efficient algorithms, lifecycle assessments, and the adoption of renewable energy are critical steps in this journey. As AI continues to evolve, it is imperative that sustainability becomes a core consideration in its development and deployment, ensuring that the benefits of AI are not overshadowed by its environmental costs.
Despite the challenges, there are significant opportunities for making AI more sustainable. Wu et al. [20] suggest several strategies, including the following.
  • Development of Resource-Efficient Algorithms: There is a need for the AI community to prioritize the development of algorithms that are both computationally efficient and environmentally friendly. Techniques like model compression, knowledge distillation, and low-rank approximation can reduce the computational demands of AI models without compromising performance [51]; a minimal low-rank approximation sketch follows this list.
  • Lifecycle Assessments of AI Systems: Conducting comprehensive lifecycle assessments (LCAs) of AI systems can help identify the most significant sources of carbon emissions throughout the AI lifecycle, from hardware production to deployment and end-of-life disposal. This approach can guide the development of more sustainable AI systems by highlighting areas where carbon reductions are most feasible.
  • Optimizing AI Pipelines: Implementing best practices in AI pipeline management, such as avoiding redundant computations, optimizing data transfer, and utilizing energy-efficient hardware, can lead to substantial reductions in carbon emissions. Techniques like hyperparameter tuning and early stopping can also help minimize the computational resources required for model training.
  • Shift to Renewable Energy Sources: Encouraging the use of data centers powered by renewable energy sources is critical for reducing the operational carbon footprint of AI. Wu et al. [20] highlight the importance of geographical location in this regard, suggesting that AI workloads should be shifted to regions with a higher proportion of renewable energy in their power grids.
  • Collaboration Across Stakeholders: Addressing the environmental challenges of AI requires collaboration between AI developers, policymakers, and industry stakeholders. Wu et al. [20] call for a collective effort to establish industry-wide standards and policies that promote the development and deployment of sustainable AI technologies.
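As a concrete companion to the first strategy above, the following NumPy sketch shows low-rank approximation of a single weight matrix via truncated SVD. The matrix size and rank are illustrative assumptions; real trained weight matrices are often approximately low-rank, which the random matrix used here for shape illustration is not.

```python
# Minimal low-rank approximation sketch: replace a dense weight matrix W
# with two thin factors obtained from a truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))    # stand-in for a dense layer weight

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 64                              # chosen compression rank
A = U[:, :rank] * s[:rank]             # shape (512, 64)
B = Vt[:rank, :]                       # shape (64, 512)

x = rng.standard_normal(512)
y_full = W @ x                         # original layer: 512*512 MACs
y_low = A @ (B @ x)                    # factored layer: 2*512*64 MACs

compression = (A.size + B.size) / W.size
print(f"Parameter/compute ratio after factorization: {compression:.0%}")
```

Because both parameter count and multiply-accumulate operations shrink to the same ratio (25% in this sketch), the energy cost of each forward pass shrinks accordingly.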

3.10. Integrating Green AI Principles into Automated Machine Learning Systems

The environmental impact of machine learning practices, particularly within Automated Machine Learning (AutoML) systems, has become an increasingly important area of research. As AutoML systems automate the complex processes of model design and optimization, they also introduce significant computational demands, leading to heightened energy consumption and carbon emissions. Addressing these environmental concerns requires the integration of Green AI principles, which focus on reducing the carbon footprint of AI technologies while maintaining high levels of performance.
AutoML systems, by their nature, require extensive computational resources for tasks like hyperparameter optimization, model selection, and evaluation. These processes are inherently energy-intensive, contributing to a considerable carbon footprint. Recent research highlights the importance of examining the entire lifecycle of AutoML processes—from data generation and storage to model deployment—to identify opportunities for improving energy efficiency and reducing environmental impact [52].
The need for more energy-efficient AutoML systems is underscored by findings that highlight the substantial energy usage associated with pipeline evaluations and the storage of intermediate search results [49]. By considering the full environmental implications of these processes, researchers and practitioners can better address the sustainability challenges posed by the widespread adoption of AutoML technologies.
Incorporating Green AI metrics into the optimization of AutoML systems is a promising approach to mitigating their environmental impact. These metrics, which include energy consumption, carbon dioxide equivalent (CO2e) emissions, and runtime, provide a framework for evaluating the sustainability of different hyperparameter optimization strategies [50]. For example, proof-of-concept experiments using libraries like Scikit-learn have demonstrated the potential for Bayesian and random search strategies to optimize hyperparameters with a focus on minimizing energy consumption while maintaining competitive model performance [53].
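The following sketch conveys the flavor of such experiments. It is our own minimal reconstruction, not the cited authors' code; it assumes the codecarbon package for emission tracking, and the search space and trial budget are illustrative.

```python
# Proof-of-concept sketch: tracking the CO2e of a Scikit-learn random
# hyperparameter search with the codecarbon package.
from codecarbon import EmissionsTracker
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)
search = RandomizedSearchCV(
    RandomForestClassifier(),
    param_distributions={"n_estimators": range(10, 200),
                         "max_depth": range(2, 16)},
    n_iter=20,                       # a small budget bounds energy use
    cv=3,
)

tracker = EmissionsTracker()         # meters CPU/GPU/RAM energy use
tracker.start()
search.fit(X, y)
emissions_kg = tracker.stop()        # estimated kg CO2e of the search

print(f"Best CV score: {search.best_score_:.3f}, "
      f"estimated emissions: {emissions_kg:.6f} kg CO2e")
```

Logging emissions alongside the usual accuracy metric makes energy a first-class objective, so that competing search strategies can be compared on both axes.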
The application of these energy-aware strategies has shown that it is possible to significantly reduce CO2e emissions without compromising the accuracy or effectiveness of the models. This approach aligns with the broader goals of Green AI, which advocate for the development of AI technologies that are both environmentally sustainable and performance-efficient [54].
Several methodologies have been proposed to enhance the sustainability of AutoML, including the use of lightweight models, model compression, and energy-efficient algorithms [55]. Bayesian optimization, in particular, has been highlighted for its efficiency in exploring hyperparameter spaces with fewer evaluations, thereby reducing computational costs and energy consumption [53].
In addition to algorithmic innovations, the design of AutoML pipelines that prioritize energy efficiency is crucial. This includes selecting appropriate datasets and implementing processes that minimize unnecessary computational overhead [54]. By embedding these practices into the AutoML lifecycle—from data preprocessing to model deployment—researchers can achieve significant reductions in the environmental footprint of ML models.
The integration of Green AI principles into AutoML systems reflects a broader shift in AI development, where sustainability is becoming as important as performance. Traditionally, AI systems have been designed with a primary focus on maximizing accuracy and speed, often overlooking the environmental costs [20]. However, the growing awareness of the ecological implications of AI is driving a change towards more sustainable practices.
This shift is supported by ongoing research that emphasizes the importance of energy-efficient algorithms and sustainable computing infrastructures. Future directions in this field are likely to focus on the development of new optimization techniques and standardized metrics that further enhance the energy efficiency of AutoML processes [56].

4. Key Findings

4.1. Comprehensive View

Drawing conclusions from the researched sources, a logical graph can be constructed to represent the ways in which ML negatively influences the environment. Such a graph is shown in Figure 3.
On the low level of the machine learning pipeline, there are two main components that affect the carbon footprint: hardware manufacturing and data centers.
While the operational emissions during the use of ML hardware are often highlighted, the embodied emissions from manufacturing can be significantly larger. Estimates suggest that the embodied carbon footprint from the production of IT equipment can be over 70 times greater than the operational emissions from ML training, indicating that addressing manufacturing processes may yield more substantial environmental benefits than focusing solely on operational efficiencies [57].
When it comes to hardware manufacturing, we can distinguish two important components:
  • Lifecycle consideration—the longer the component is able to serve its purpose, the smaller the carbon footprint (assuming equal energy efficiency of components).
  • Material sourcing—material extraction, transportation, etc.
A data center’s environmental load can be subdivided into the following:
  • The center’s energy sources—renewable or of high environmental impact.
    Possibility of onsite power generation.
  • Location (climate and cooling needs, water availability, vulnerability to natural hazards).
  • The cooling system’s efficiency—more efficient in a naturally colder climate.
    Variable-speed fans—adjusting energy consumption to demand.
  • Monitoring and Management: Implementing Data Center Infrastructure Management (DCIM) software allows for real-time monitoring of energy consumption. These data help identify inefficiencies and opportunities for improvement, such as detecting underutilized servers or optimizing cooling systems.
  • Energy-Efficient Hardware: Upgrading to modern servers that use advanced chip architectures and smart power management can decrease energy consumption. Virtualization also allows for multiple workloads to run on fewer physical servers, improving overall resource utilization.
    Raising the ambient temperature of data centers can lead to immediate energy savings in cooling without negatively impacting server performance. Modern equipment can operate efficiently at higher temperatures.
  • Dynamic Power Management: Implementing advanced scheduling algorithms and dynamic power management techniques can optimize resource allocation and reduce energy waste by adjusting power usage based on workload demands.
On the higher level of the machine learning pipeline (in production), there are two other components that impact the carbon footprint.
  • Operational aspects
    Deployment and inference: Using optimizing inference runtime systems can significantly improve efficiency.
    Model shadowing: Often, when an upgraded or updated model enters production, the old and new models run in parallel for some time to verify that the new model behaves correctly. This doubles the energy consumption for that period, so such periods should be kept as short as possible.
  • Model training
    Computational complexity of the model: It is important not to over-complicate a model and to remove unnecessary layers. Another aspect is choosing an adequate pre-trained model.
    Training time: Practices like early stopping for underperforming models can help reduce unnecessary resource and time use (see the early-stopping sketch after this list).
    Hyperparameter tuning: Traditional methods like grid search for hyperparameter tuning can lead to excessive computational demands. More efficient tuning methods can help minimize energy use and emissions during the training phase.
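As referenced above, a minimal early-stopping sketch in Keras is shown below; the synthetic data, architecture, and patience value are illustrative assumptions.

```python
# Early stopping: halt training once validation loss stops improving,
# avoiding wasted epochs and the energy they would consume.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

stop_early = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",        # watch validation loss
    patience=3,                # tolerate 3 non-improving epochs
    restore_best_weights=True  # roll back to the best checkpoint
)
model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[stop_early])
```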

4.2. Environmental Impact of Machine Learning Methods

This section provides a comparative analysis of various machine learning optimization methods based on their environmental impact, focusing on energy consumption and carbon emissions. The methods are explained in terms of their environmental benefits and ranked based on their impact.

4.2.1. Overview of Methods

Various machine learning methods are designed to improve efficiency and reduce the environmental impact associated with training and deploying models. These methods are categorized into three primary groups: efficient training, operational optimizations, and model architecture and algorithms. Each category addresses different aspects of machine learning processes and their associated environmental footprint.
Efficient Training
Efficient training methods focus on reducing the computational resources and energy required during the training phase of machine learning models. This category includes techniques that optimize the training process, thereby minimizing energy consumption and lowering the carbon footprint associated with model development.
  • Knowledge Distillation: This involves training a smaller model, often referred to as the “student”, to replicate the behavior of a larger, more complex model, known as the “teacher”. The process of transferring knowledge from the teacher to the student allows the student model to achieve similar performance with significantly fewer computational resources. This technique enables the deployment of smaller, more efficient models that maintain high accuracy while consuming less energy. Studies have shown that it can reduce energy use and CO2 equivalent by a factor of 19 [58]. A schematic distillation loss is sketched after this list.
  • Quantization: This reduces the precision of the model’s parameters, such as converting 32-bit floating-point numbers to 8-bit integers. This reduction decreases both memory usage and computational demand, making it particularly useful for deploying models on edge devices with limited resources. This method can lead to savings in energy consumption and memory usage [59], with up to 16 times reduction in memory footprint [60], making it an effective strategy for reducing the environmental impact of machine learning models.
  • Early Stopping: This is a technique used during model training to halt the process once the model’s performance stabilizes or reaches a pre-defined threshold. By stopping training early, unnecessary computations are avoided, which not only reduces energy consumption but also prevents overfitting. Additionally, this method helps extend the lifespan of hardware by lowering computational demands (up to 80% reduction in energy consumption for model training) [61].
  • Data Parallelization: This method distributes data across multiple processors or computing nodes, enabling parallel computation during the training process. This method accelerates training by processing multiple data batches simultaneously, which in turn reduces training time and leads to proportional reductions in energy use and emissions. Effective parallelization also improves the utilization of computational resources by minimizing idle times, thereby enhancing overall energy efficiency [62,63].
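The schematic knowledge-distillation loss referenced in the first item above is sketched below; the temperature and weighting values are illustrative assumptions, not settings from [58].

```python
# Schematic Hinton-style distillation loss (illustrative hyperparameters).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    # Soft targets: the student mimics the teacher's softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage inside a training step: the large teacher runs in inference mode,
# and only the small student is updated.
# with torch.no_grad():
#     teacher_logits = teacher(x)
# loss = distillation_loss(student(x), teacher_logits, y)
```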
Operational Optimizations
Operational optimizations are strategies aimed at improving the efficiency of machine learning operations during both training and inference phases. These methods focus on optimizing the operational aspects of machine learning systems to minimize their environmental impact.
  • Renewable Energy-Powered Data Centers: These data centers are powered by renewable sources, such as solar, wind, or hydropower. By transitioning to renewable energy, they can achieve carbon reductions of up to 40% compared to centers relying on conventional fossil fuels [64]. This significant reduction is due to the elimination of carbon emissions from energy generation, making it a highly effective strategy for sustainable computing [65].
  • Optimizing Data Transfer: This focuses on reducing the volume of data transferred between nodes or stages in a machine learning pipeline. By minimizing data movement and optimizing communication protocols, energy consumption associated with data transfer can be significantly reduced. In addition to energy savings, this method reduces processing overhead and enhances overall system performance, making it a valuable operational optimization technique [20,66].
  • Lifecycle Assessment: This is a comprehensive evaluation method that assesses the environmental impact of a machine learning system throughout its entire lifecycle—from development and training to deployment and eventual decommissioning. By identifying the major sources of energy consumption and emissions at each stage, it helps in developing targeted strategies to reduce overall environmental impact.
  • Optimizing Java Code: This involves fine-tuning Java code for machine learning applications to improve runtime efficiency and memory management. Energy consumption can be reduced by 6.2% to 80% (in extreme cases) through optimized code execution. This optimization not only enhances resource utilization but also prolongs the lifespan of hardware by reducing the computational load required to run machine learning models [67,68].
  • Optimizing GPU Operations: This focuses on improving the efficiency of GPU operations, such as parallel execution and memory management. By ensuring that GPUs are used more efficiently, power consumption and emissions can be reduced by up to 75%. The GPU Accelerator can reduce a company’s carbon footprint by as much as 80% while delivering 5× average speedups and 4× reductions in computing costs. This method is particularly beneficial for energy-intensive tasks like training deep learning models, where the energy savings can be substantial [69]; a mixed-precision training sketch, one concrete GPU-efficiency technique, follows this list.
  • Carbon-Friendly Inference Techniques: These techniques optimize runtime systems for inference with an emphasis on reducing carbon emissions. By prioritizing energy-efficient hardware, rightsizing the hardware, restructuring code to maximize the usage of existing CPUs (reuse), and leveraging renewable energy sources, the individual design strategies save around 29% (reuse), 25% (rightsize), 34% (reduce), and 41% (recycle) of carbon emissions, respectively [70].
  • Energy-Efficient Hardware: This includes specialized devices, such as low-power GPUs or Tensor Processing Units (TPUs), designed to operate with reduced energy consumption. These devices can reduce energy use by up to 50% compared to traditional hardware. They often incorporate advanced power management features and are optimized for specific computational tasks, resulting in further reductions in energy consumption and heat generation during machine learning operations [71].
  • Reducing Redundant Computations: This aims to minimize unnecessary or duplicate computations within machine learning pipelines. By streamlining the process and eliminating redundancy, a 4.40× decrease in energy consumption can be observed while training CNNs [72]. Reducing redundant computations not only saves energy but also enhances the overall efficiency and speed of data processing and model training, contributing to a lower environmental footprint.
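As referenced in the GPU item above, the sketch below shows one concrete GPU-efficiency technique: automatic mixed precision in PyTorch, which reduces memory traffic and arithmetic cost by running most operations in 16-bit precision. The model, data, and training loop are illustrative assumptions, and a CUDA device is required.

```python
# Automatic mixed precision (AMP) training sketch.
import torch
import torch.nn as nn

device = "cuda"  # AMP as shown requires a CUDA GPU
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

for step in range(100):
    x = torch.randn(64, 128, device=device)
    y = torch.randint(0, 10, (64,), device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():     # run fp16 where numerically safe
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()       # scale loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```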
Model Architecture and Algorithms
This category includes methods that optimize model architectures or algorithms to enhance computational efficiency and reduce environmental impact. These techniques often involve rethinking the design of models to make them more efficient in terms of both performance and energy usage.
  • Bayesian Hyperparameter Tuning: This uses probabilistic models to explore hyperparameter spaces more efficiently than traditional methods, such as grid search or random search. By focusing on the most promising regions of the hyperparameter space, this method reduces the number of evaluations needed to find optimal parameters. This approach is particularly effective for large-scale models where hyperparameter tuning is computationally expensive [73,74]; an illustrative sketch follows this list.
  • Deleting Unnecessary Layers: This involves identifying and removing layers from neural networks that do not contribute significantly to the model’s performance. Simplifying the model in this way reduces computational complexity, leading to decreased energy usage and emissions [75]. In addition to energy savings, simplified models require less training and inference time, making them more efficient in deployment scenarios.
  • Model Parallelization: This divides a model into segments that can be processed concurrently across multiple processors. This technique is especially useful for managing large models that exceed the computational capacity of a single processor. Distributing the computational load in this way can reduce energy consumption by a factor of up to 26 [76]. Additionally, it optimizes the use of high-performance computing resources, further reducing the overall environmental impact.
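The hyperparameter-tuning sketch announced in the first item above uses Optuna, whose default TPE sampler is a sequential model-based (Bayesian-style) optimizer; the tool choice, search space, and trial budget are our own illustrative assumptions, not those of [73,74].

```python
# Sample-efficient hyperparameter search: far fewer trials are needed
# than exhaustive grid search, which is where the energy saving comes from.
import optuna
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def objective(trial):
    clf = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 10, 200),
        max_depth=trial.suggest_int("max_depth", 2, 16),
    )
    return cross_val_score(clf, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=25)             # 25 trials vs. a full grid
print(study.best_params, study.best_value)
```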

4.2.2. Description of Parameters Utilized for Environmental Impact Evaluation

To accurately assess the environmental impact of various machine learning methods, several parameters have been adopted. These parameters offer insights into how different methodologies affect energy consumption, computational efficiency, and overall carbon footprint; collectively, they ensure that the assessment of efficiency and environmental benefits is comprehensive and accurate.
The key parameters considered are as follows:
  • Environmental Impact: Quantifies the overall effect of a method on the environment, including reductions in energy consumption and carbon emissions. A high environmental impact means that the method significantly lowers energy use and emissions, contributing to greener AI practices.
  • Energy Efficiency Ratio: Measures the ratio of performance improvement (e.g., accuracy, speed) to the energy consumed. It helps assess how effectively a method balances performance with energy consumption.
  • Scalability Impact: Measures how the environmental impact scales with increasing data size, model complexity, or deployment scenarios. It indicates how changes in scale affect energy consumption and emissions.
  • Lifecycle Emissions: Evaluates the total emissions produced over the entire lifecycle of the method, including development, training, deployment, and maintenance phases. It provides a comprehensive view of the total environmental footprint.
  • Training Time Impact: Measures how the method influences the duration of training processes. Methods with a low training time impact reduce the time needed for training, thereby enhancing efficiency.

4.2.3. Optimization Techniques in Machine Learning for Minimizing Energy Consumption

Understanding the benefits and trade-offs of these techniques is crucial for practitioners aiming to implement more sustainable machine learning practices. By carefully selecting and applying the appropriate strategies, it is possible to significantly reduce the energy consumption and carbon footprint associated with AI technologies. This not only contributes to more environmentally friendly AI systems but also aligns with global efforts to mitigate the impact of technology on climate change. Table 1 below shows the energy efficiency capabilities of various optimization techniques used to reduce the environmental impact of ML.

4.3. Methodology for Energy Consumption Estimation

The environmental impact of machine learning methods varies with the complexity of the tasks they address. Below, for the purpose of this work, we define different levels of task complexity, along with examples.
Non-Complex Tasks
These tasks involve straightforward data processing with minimal computational demands. Examples include the following.
  • Basic Binary Classification: Spam detection in emails using a small set of features.
  • Simple Regression: Predicting house prices based on a few key attributes.
Moderate Tasks
Tasks at this level require more sophisticated models due to increased data or feature complexity. Examples include the following.
  • Intermediate Classification: Sentiment analysis of text reviews to determine if a review is positive or negative.
  • Moderate Regression: Predicting sales figures based on various market indicators.
Complex Tasks
These tasks involve advanced models to handle large datasets or high-dimensional features. Examples include the following.
  • Image Classification: Recognizing objects in images using CNNs.
  • Complex Pattern Recognition: Predicting stock market trends based on multiple economic indicators.
Very Complex Tasks
Highly intricate tasks that require cutting-edge models and substantial computational resources. Examples include the following.
  • Large-Scale Natural Language Processing: Using transformer models like BERT or GPT for language understanding across vast corpora.
  • High-Dimensional Data Processing: Processing and analyzing video data with DNNs or CNNs.
In this research, energy consumption for training machine learning methods was estimated based on the training times obtained from various references. These training times provide a range for each method depending on suitable task complexity. Energy consumption was obtained considering the following:
kWh = Power (in kW) × Time (in hours)
The power consumption of hardware components plays a crucial role in these estimations:
  • CPU Power Consumption: A typical CPU used for machine learning tasks consumes around 100 watts (0.1 kW) [79]. A CPU is suitable for non-complex and moderate tasks.
  • GPU Power Consumption: More complex tasks often rely on GPUs, which are more power-intensive. A GPU typically consumes between 300 watts (0.3 kW) [80] and 400 watts (0.4 kW) [81], depending on the task and model. A GPU is suitable for complex and very complex tasks.
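As a worked illustration of the formula above, the short sketch below converts training time and hardware power into energy and emissions; the grid carbon intensity of 0.4 kg CO2e/kWh is an illustrative assumption that varies widely by region and energy mix.

```python
# Worked example of the kWh formula above.
def training_energy_kwh(power_kw: float, hours: float) -> float:
    return power_kw * hours            # kWh = kW x h

GRID_KG_CO2E_PER_KWH = 0.4             # assumed average grid intensity

energy = training_energy_kwh(power_kw=0.3, hours=10)   # complex task, one GPU
print(f"{energy:.1f} kWh, ~{energy * GRID_KG_CO2E_PER_KWH:.1f} kg CO2e")
# -> 3.0 kWh, ~1.2 kg CO2e
```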
The energy consumption estimates presented in this work are based on a qualitative comparison across various machine learning methods. The referenced training times are derived from different research studies, each employing different hardware configurations and tasks. As a result, the estimates provide only a general guideline for expected energy consumption in real-world scenarios.
It is essential to recognize that actual energy consumption may differ significantly depending on several factors, including the dataset size, model complexity, the hardware used, and specific optimizations applied. Furthermore, while this comparison focuses on energy consumed during the training phase, the inference phase of machine learning models can also contribute to the overall energy footprint.
Below, we present brief descriptions of the ML methods compared in this work.
Logistic Regression/Linear Models. These models establish relationships between input features and the target variable through linear equations. They are highly efficient for low-complexity tasks and require minimal computational resources.
Naive Bayes. A probabilistic classifier based on Bayes’ theorem with independence assumptions between features. It is mainly used for text classification, and it is energy efficient.
Gradient Boosting Machines (GBMs). An ensemble technique that builds models sequentially to correct errors of previous models, optimizing performance through boosting.
Decision Trees (DTs). A model that predicts the value of a target variable by learning decision rules inferred from data features. They are easy to interpret and suitable for various levels of task complexity.
K-Nearest Neighbors (KNNs). A non-parametric algorithm that classifies data points based on proximity to other labeled points. It can be computationally expensive for large datasets.
Support Vector Machines (SVMs). SVMs find a hyperplane that best separates data into classes. They are well-suited for high-dimensional data and moderately complex tasks.
Random Forests. An ensemble method that constructs multiple decision trees and combines their outputs for improved accuracy. Suitable for moderate and complex tasks.
Deep Neural Networks (DNNs). DNNs consist of multiple layers and are used for complex tasks like image and speech recognition. They require significant energy for high-complexity tasks.
Bayesian Neural Networks (BNNs). BNNs incorporate Bayesian inference into traditional neural networks to estimate uncertainty. They require significant computational resources for complex tasks.
Convolutional Neural Networks (CNNs). A class of deep neural networks designed for processing grid-like data structures like images. Highly suitable for complex tasks, particularly in image and video recognition.
Reinforcement Learning (RL). RL models interact with an environment to make decisions that maximize cumulative rewards, requiring continuous learning and exploration. They are resource-intensive for complex tasks.
Transformer Models (BERT, GPT). Deep learning models designed for sequence-to-sequence tasks. They handle long-range dependencies using self-attention mechanisms and require substantial computational resources.
We conducted a comparative analysis of machine learning methods based on their environmental impact, specifically focusing on their energy consumption across different task complexities. Table 2 summarizes the maximum estimated energy usage for each method, categorized from the least to the most energy-intensive. This comparison is useful for understanding the relative sustainability of different machine learning approaches and making informed decisions regarding their use in practice.

5. Summary and Conclusions

This paper investigates the energy consumption associated with machine learning (ML) models, with a particular focus on minimizing their environmental impact. As ML technologies, particularly large-scale models, become increasingly popular and widely used, their significant energy demands have raised concerns. This study underscores the urgent need for energy-saving techniques to mitigate the environmental impact of these technologies.
Global Energy Consumption
Tracking global energy consumption is a complex task due to the diversity of energy sources and inconsistent reporting standards. Although the Information Technology (IT) sector is not the largest consumer of energy (it accounts for approximately 4% of global electricity use, or about 2% of total global energy consumption), it still represents a significant share of global energy usage, albeit a considerably smaller one than that of the industrial, transportation, or residential sectors.
Machine Learning Energy Efficiency
This paper explores various strategies to reduce energy consumption in ML models, such as model compression, pruning, quantization, and the use of specialized hardware. Continuous improvements in energy consumption practices within the IT sector, particularly in the rapidly growing branches of machine learning and data science, are both possible and necessary. Moreover, beyond enhancing the energy efficiency of ML itself, ML can also be leveraged to optimize other sectors, such as industrial processes and transportation, resulting in broader energy efficiency improvements across the economy.
Environmental Impact of AI
The training of large AI models can result in significant CO2 emissions. Notably, the embodied emissions from the manufacturing of ML hardware often exceed the operational emissions. Estimates suggest that the embodied carbon footprint from the production of IT equipment can be over 70 times greater than the operational emissions from ML training. This finding indicates that addressing manufacturing processes may yield more substantial environmental benefits than focusing solely on improving operational efficiencies.
Green AI Principles
This paper discusses the integration of Green AI principles into ML pipelines. Key recommendations include using energy-efficient hardware, optimizing training procedures, and selecting low-carbon data centers. Additionally, the importance of responsible computing practices, such as reducing redundant computations and employing efficient algorithms, is emphasized.
Below, we collect the main conclusions from the undertaken research.
Comparative Energy Impact
While the IT sector is not as energy-intensive as the industrial or transportation sectors, it still contributes significantly to global energy consumption. The embodied emissions from manufacturing IT equipment are substantially larger than the operational emissions, suggesting that focusing on reducing these embodied emissions could have a more significant environmental impact. Additionally, the research highlights the importance of considering both the direct and indirect energy impacts of machine learning technologies, including the energy used in data centers and the broader IT infrastructure supporting ML operations.
Potential for Improvement
Continuous improvements in energy efficiency within ML, particularly in model design and training processes, are essential. Additionally, the application of ML to optimize other sectors can lead to broader energy efficiency improvements, thereby amplifying the positive environmental impact of AI technologies. Future research should also focus on minimizing the energy use of the biggest and most popular models, like GPT-3 or GPT-4.
Dual Approach Needed
This paper advocates for a dual approach to mitigate the environmental impact of ML technologies by improving operational efficiencies and addressing the embodied emissions from manufacturing processes. Implementing this strategy would contribute positively to global sustainability efforts and reduce the overall carbon footprint of the growing AI sector. Moreover, integrating Green AI principles into ML pipelines can further enhance the sustainability of AI technologies by ensuring that energy consumption and environmental impact are minimized across the entire lifecycle of ML models.

Author Contributions

Conceptualization, D.A.S. and R.R.; methodology, D.A.S., R.R. and G.W.; validation, R.R. and G.W.; formal analysis, D.A.S.; investigation, D.A.S.; writing—original draft, D.A.S., R.R. and G.W.; visualization, R.R. and D.A.S.; supervision, G.W.; project administration, G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Poznan University of Technology (project number 0311/SBAD/0746).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Carbonell, J.G.; Michalski, R.S.; Mitchell, T.M. An overview of machine learning. In Machine Learning: An Artificial Intelligence Approach, 1st ed.; Carbonell, J.G., Michalski, R.S., Mitchell, T.M., Eds.; Springer: Berlin/Heidelberg, Germany, 1983; pp. 3–23. [Google Scholar]
  2. Kessides, I.N.; Toman, M. World Bank Blogs. The Global Energy Challenge. Available online: https://blogs.worldbank.org/en/developmenttalk/the-global-energy-challenge/ (accessed on 13 January 2025).
  3. Energy Alliance. The Global Challenge. Available online: https://energyalliance.org/powering-people-planet-2023/the-global-challenge/ (accessed on 13 January 2025).
  4. NS Energy. Profiling the World’s Top Five Countries in Electricity Consumption. Available online: https://www.nsenergybusiness.com/analysis/electricity-consuming-countries/ (accessed on 13 January 2025).
  5. IEA Executive Summary. Available online: https://www.iea.org/reports/electricity-2024/executive-summary (accessed on 1 May 2025).
  6. World Energy & Climate Statistics. Available online: https://yearbook.enerdata.net/total-energy/world-consumption-statistics.html (accessed on 1 May 2025).
  7. Electric Energy Consumption. Available online: https://en.wikipedia.org/wiki/Electric_energy_consumption (accessed on 1 May 2025).
  8. IEA. Energy System/Industry. Available online: https://www.iea.org/energy-system/industry (accessed on 1 May 2025).
  9. EBSCO, Energy-Efficient Modes of Transportation. Available online: https://www.ebsco.com/research-starters/power-and-energy/energy-efficient-modes-transportation (accessed on 1 May 2025).
  10. González-Torres, M.; Pérez-Lombard, L.; Coronel, J.F.; Maestre, I.R.; Bertoldi, P. Activity and efficiency trends for the residential sector across countries. Energy Build. 2022, 273, 112428. [Google Scholar] [CrossRef]
  11. IEA. Energy Efficiency 2023. Available online: https://iea.blob.core.windows.net/assets/dfd9134f-12eb-4045-9789-9d6ab8d9fbf4/EnergyEfficiency2023.pdf (accessed on 1 May 2025).
  12. REN21, Renewables Global Status Report (GSR) Collection 2023. Available online: https://www.ren21.net/wp-content/uploads/2019/05/GSR2023_Fact_Sheet_Agriculture.pdf (accessed on 1 May 2025).
  13. Frontier Group. Fact File: Computing Is Using More Energy than Ever. Available online: https://frontiergroup.org/resources/fact-file-computing-is-using-more-energy-than-ever/ (accessed on 13 January 2025).
  14. Hardesty, L. MIT News. Explained: Neural Networks. Available online: https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414/ (accessed on 13 January 2025).
  15. Genetic Science Learning Center. Neurons Transmit Messages in the Brain. Available online: https://learn.genetics.utah.edu/content/neuroscience/neurons/ (accessed on 13 January 2025).
  16. O’Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. arXiv 2015. [Google Scholar] [CrossRef]
  17. Dupond, S. A thorough review on the current advance of neural network structures. Annu. Rev. Control. 2019, 14, 200–230. [Google Scholar]
  18. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
  19. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language Models Are Unsupervised Multitask Learners. OpenAI. 2019. Available online: https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf (accessed on 17 January 2025).
  20. Wu, C.J.; Raghavendra, R.; Gupta, U.; Acun, B.; Ardalani, N.; Maeng, K.; Hazelwood, K. Sustainable AI: Environmental implications, challenges and opportunities. Proc. Mach. Learn. Syst. 2022, 4, 795–813. [Google Scholar]
  21. Lacoste, A.; Luccioni, A.; Schmidt, V.; Dandres, T. Quantifying the Carbon Emissions of Machine Learning. arXiv 2019. [Google Scholar] [CrossRef]
  22. AI Is An Energy Hog. This Is What It Means for Climate Change. Available online: https://www.technologyreview.com/2024/05/23/1092777/ai-is-an-energy-hog-this-is-what-it-means-for-climate-change/ (accessed on 13 January 2025).
  23. How AI Can Optimize Energy Efficiency and Reduce Carbon Emissions. 2023. Available online: https://energycentral.com/c/pip/how-ai-can-optimize-energy-efficiency-and-reduce-carbon-emissions/ (accessed on 13 January 2025).
  24. García-Martín, E.; Rodrigues, C.F.; Riley, G.; Grahn, H. Estimation of energy consumption in machine learning. J. Parallel Distrib. Comput. 2019, 134, 75–88. [Google Scholar] [CrossRef]
  25. Shahid, A. Towards Reliable and Accurate Energy Predictive Modelling Using Performance Events on Modern Computing Platforms. Ph.D. Thesis, University College Dublin, Dublin, Ireland, 2020. Available online: https://hcl.ucd.ie/system/files/%5B%5BFinal%5D%5D%20Towards%20Reliable%20and%20Accurate%20Energy%20Predictive%20Modelling%20using%20Performance%20Events%20on%20Modern%20Computing%20Platforms.pdf (accessed on 13 January 2025).
  26. Wu, X.; Taylor, V.; Lan, Z. Performance and Power Modeling and Prediction Using MuMMI and Ten Machine Learning Methods. arXiv 2020. [Google Scholar] [CrossRef]
  27. Awan, M.R.; Rojas, H.A.G.; Hameed, S.; Riaz, F.; Hamid, S.; Hussain, A. Machine Learning-Based Prediction of Specific Energy Consumption for Cut-Off Grinding. Sensors 2022, 22, 7152. [Google Scholar] [CrossRef]
  28. Wang, F.; Zhang, W.; Lai, S.; Hao, M.; Wang, Z. Dynamic GPU energy optimization for machine learning training workloads. IEEE Trans. Parallel Distrib. Syst. 2021, 33, 2943–2954. [Google Scholar] [CrossRef]
  29. Pengfei, Z.; Li, A.; Barker, K.; Ge, R. Indicator-directed dynamic power management for iterative workloads on GPU-accelerated systems. In Proceedings of the 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID), Melbourne, Australia, 11–14 May 2020; Available online: https://www.researchgate.net/publication/342929972_Indicator-Directed_Dynamic_Power_Management_for_Iterative_Workloads_on_GPU-Accelerated_Systems (accessed on 13 January 2025).
  30. Kumar, M.; Zhang, X.; Liu, L.; Wang, Y.; Shi, W. Energy-efficient machine learning on the edges. In Proceedings of the IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), New Orleans, LA, USA, 18–22 May 2020; Available online: https://weisongshi.org/papers/kumar20-EEML.pdf (accessed on 13 January 2025).
  31. Yarally, T.; Cruz, L.; Feitosa, D.; Sallou, J.; van Deursen, A. Uncovering Energy-Efficient Practices in Deep Learning Training: Preliminary Steps Towards Green AI. arXiv 2023. [Google Scholar] [CrossRef]
  32. Niu, F.; Recht, B.; Re, C.; Wright, S.J. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. Advances in neural information processing systems 24. In Proceedings of the 25th Annual Conference on Neural Information Processing Systems (NIPS), Granada, Spain, 12–17 December 2011; Available online: https://proceedings.neurips.cc/paper/2011/file/218a0aefd1d1a4be65601cc6ddc1520e-Paper.pdf (accessed on 13 January 2025).
  33. Sixin, Z.; Choromanska, A.E.; LeCun, Y. Deep learning with elastic averaging SGD. In Proceedings of the 29th International Conference on Neural Information Processing Systems—Volume 1, Montreal, QC, Canada, 7–12 December 2015; MIT Press: Cambridge, MA, USA, 2015; pp. 685–693. [Google Scholar]
  34. Wongpanich, A. Efficient Parallel Computing for Machine Learning at Scale. Technical Report of Univ. of California Berkeley. Available online: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-225.pdf (accessed on 18 December 2020).
  35. Ghoroghi, A.; Rezgui, Y.; Petri, I.; Beach, T. Advances in application of machine learning to life cycle assessment: A literature review. Int. J. Life Cycle Assess. 2022, 27, 433–456. [Google Scholar] [CrossRef]
  36. Sayyah, A.; Ahangari, M.; Mostafaei, J.; Nabavi, S.R.; Niaei, A. Machine learning-based life cycle optimization for the carbon dioxide methanation process: Achieving environmental and productivity efficiency. J. Clean. Prod. 2023, 426, 139120. [Google Scholar] [CrossRef]
  37. Delanoe, P.; Tchuente, D.; Colin, G. Method and evaluations of the effective gain of artificial intelligence models for reducing CO2 emissions. J. Environ. Manag. 2023, 331, 117261. [Google Scholar] [CrossRef]
  38. Brander, M.; Sood, A.; Wylie, C.; Haughton, A.; Lovell, J. Electricity-Specific Emission Factors for Grid Electricity. Ecometrica. Available online: https://www.scribd.com/document/386397121/Nur/ (accessed on 13 January 2025).
  39. Jouppi, N.P.; Young, C.; Patil, N.; Patterson, D.; Agrawal, G.; Bajwa, R.; Yoon, D.H. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture, Toronto, ON, Canada, 24–28 June 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1–12. [Google Scholar]
  40. Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312. [Google Scholar] [CrossRef]
  41. Howard, J.; Ruder, S. Universal Language Model Fine-Tuning for Text Classification. arXiv 2018. [Google Scholar] [CrossRef]
  42. Google. Google Environmental Report 2018. Available online: https://sustainability.google/reports/environmental-report-2018/#data-centers/ (accessed on 13 January 2025).
  43. Microsoft. Beyond Carbon Neutral. 2018. Available online: https://download.microsoft.com/download/6/7/0/6706756C-867B-4A53-BDDD-30D93650FED1/Microsoft_Beyond_Carbon_Neutral.pdf (accessed on 13 January 2025).
  44. Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
  45. Falkner, S.; Klein, A.; Hutter, F. BOHB: Robust and Efficient Hyperparameter Optimization at Scale. arXiv 2018. [Google Scholar] [CrossRef]
  46. Schwartz, R.; Dodge, J.; Smith, N.A.; Etzioni, O. Green AI. arXiv 2019. [Google Scholar] [CrossRef]
  47. Anthony, L.F.; Kanding, B.; Selvan, R. Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models. arXiv 2020. [Google Scholar] [CrossRef]
  48. Schneider, I.; Xu, H.; Benecke, S.; Patterson, D.; Huang, K.; Ranganathan, P.; Elsworth, C. Life-Cycle Emissions of AI Hardware: A Cradle-To-Grave Approach and Generational Trends. arXiv 2025. [Google Scholar] [CrossRef]
  49. Strubell, E.; Ganesh, A.; McCallum, A. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; pp. 3645–3650. [Google Scholar]
  50. Henderson, P.; Hu, J.; Romoff, J.; Brunskill, E.; Jurafsky, D.; Pineau, J. Towards the systematic reporting of the energy and carbon footprints of machine learning. J. Mach. Learn. Res. 2020, 21, 1–43. [Google Scholar]
  51. Sander, J.; Cohen, A.; Dasari, V.R.; Venable, B.; Jalaian, B. On Accelerating Edge AI: Optimizing Resource-Constrained Environments. arXiv 2025. [Google Scholar] [CrossRef]
  52. Castellanos-Nieves, D.; García-Forte, L. Strategies of Automated Machine Learning for Energy Sustainability in Green Artificial Intelligence. Appl. Sci. 2024, 14, 6196. [Google Scholar] [CrossRef]
  53. Castellanos-Nieves, D.; Garcia-Forte, L. Improving automated machine-learning systems through Green AI. Appl. Sci. 2023, 13, 11583. [Google Scholar] [CrossRef]
  54. Herzog, B.; Schubert, J.; Rheinfels, T.; Nickel, C.; Hönig, T. GreenPipe: Energy-Efficient Data-Processing Pipelines for Resource-Constrained Systems. Available online: https://ewsn.org/file-repository/ewsn2024/ewsn24-final133.pdf (accessed on 27 May 2024).
  55. Feurer, M.; Klein, A.; Eggensperger, K.; Springenberg, J.; Blum, M.; Hutter, F. Auto-sklearn: Efficient and robust automated machine learning. In Automated Machine Learning; Hutter, F., Kotthoff, L., Vanschoren, J., Eds.; Springer: Cham, Switzerland, 2019; pp. 113–134. [Google Scholar]
  56. Patterson, D.; Gonzalez, J.; Holzle, U.; Quoc, L.; Liang, C.; Munguia, L.-M.; Rothchild, D.; So, D.R.; Texier, M.; Dean, J. The carbon footprint of machine learning training will plateau, then shrink. Computer 2022, 55, 18–28. [Google Scholar] [CrossRef]
  57. Patterson, D.; Gilbert, J.M.; Gruteser, M.; Robles, E.; Sekar, K.; Wei, Y.; Zhu, T. Energy and emissions of machine learning on smartphones vs. the cloud. Commun. ACM 2024, 67, 86–97. [Google Scholar] [CrossRef]
  58. Rafat, K.; Islam, S.; Mahfug, A.A.; Hossain, M.I.; Rahman, F.; Momen, S.; Mohammed, N. Mitigating carbon footprint for knowledge distillation based deep learning model compression. PLoS ONE 2023, 18, e0285668. [Google Scholar] [CrossRef]
  59. Rokh, B.; Azarpeyvand, A.; Khanteymoori, A. A comprehensive survey on model quantization for deep neural networks. ACM Trans. Intell. Syst. Technol. 2023, 14, 1–50. [Google Scholar] [CrossRef]
  60. Bondarenko, Y.; Nagel, M.; Blankevoort, T. Understanding and overcoming the challenges of efficient transformer quantization. arXiv 2021. [Google Scholar] [CrossRef]
  61. MIT News. New Tools Are Available to Help Reduce the Energy that AI Models Devour. MIT News, 2023. Available online: https://news.mit.edu/2023/new-tools-available-reduce-energy-that-ai-models-devour-1005/ (accessed on 15 January 2025).
  62. Liu, Y.; Lu, H.; Luo, Y.; Memaripour, A.; Merritt, A.; Pillai, S.; Zhao, X. Scaling distributed machine learning with the parameter server. In Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation, Broomfield, CO, USA, 6–8 October 2014. [Google Scholar]
  63. Sze, V.; Chen, Y.-H.; Yang, T.-J.; Emer, J.S. Efficient Processing of Deep Neural Networks; Springer: Cham, Switzerland, 2020. [Google Scholar]
  64. Deepmind AI Reduces Energy Used for Cooling Google Data Centers by 40%. Available online: https://deepmind.google/discover/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-by-40/ (accessed on 13 January 2025).
  65. Sarkar, S.; Naug, A.; Luna, R.; Guillen-Perez, A.; Gundecha, V.; Ghorbanpour, S.; Mousavi, S.; Markovikj, D.; Ramesh Babu, A. Carbon Footprint Reduction for Sustainable Data Centers in Real-Time. In Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 22322–22330. [Google Scholar]
  66. Walsh, D.; Donti, P. Tackling Climate Change with Machine Learning. MIT Sloan. Available online: https://mitsloan.mit.edu/ideas-made-to-matter/tackling-climate-change-machine-learning (accessed on 13 January 2025).
  67. Pereira, R.; Couto, M.; Cunha, J.; Fernandes, J.P.; Saraiva, J. The Influence of the Java Collection Framework on Overall Energy Consumption. arXiv 2016. [Google Scholar] [CrossRef]
  68. Karamchandani, A.; Mozo, A.; Gómez-Canaval, S.; Pastor, A. A methodological framework for optimizing the energy consumption of deep neural networks: A case study of a cyber threat detector. Neural Comput. Appl. 2024, 36, 10297–10338. [Google Scholar] [CrossRef]
  69. Patterson, D.; Gonzalez, J.; Le, Q.; Liang, C.; Munguia, L.-M.; Rothchild, D.; So, D.; Texier, M.; Dean, J. Carbon Emissions and Large Neural Network Training. arXiv 2021. [Google Scholar] [CrossRef]
  70. Li, Y.; Hu, Z.; Choukse, E.; Fonseca, R.; Suh, G.E.; Gupta, U. EcoServe: Designing Carbon-Aware AI Inference Systems. arXiv 2025, arXiv:2502.05043. [Google Scholar] [CrossRef]
  71. Stanford AHA Retreat. Energy Efficiency and AI Hardware. Bill Dally. 2023. Available online: https://aha.stanford.edu/sites/g/files/sbiybj20066/files/media/file/aha-retreat-2023_dally_keynote_en_eff_ai_hw_0.pdf (accessed on 19 February 2025).
  72. Lew, J.S.; Liu, J.; Gong, W.; Goli, N.; Evans, R.D.; Aamodt, T.M. Anticipating and eliminating redundant computations in accelerated sparse training. In Proceedings of the 49th Annual International Symposium on Computer Architecture, New York, NY, USA, 18–22 June 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 536–551. [Google Scholar]
  73. Gomes Mantovani, R.; Horváth, T.; Rossi, A.L.D.; Cerri, R.; Barbon, S., Jr.; Vanschoren, J.; de Carvalho, A.C.P.L.F. Better trees: An empirical study on hyperparameter tuning of classification decision tree induction algorithms. Data Min. Knowl. Discov. 2024, 38, 1364–1416. [Google Scholar] [CrossRef]
  74. Dou, H.; Zhu, S.; Zhang, Y.; Chen, P.; Zheng, Z. HyperTuner: A cross-layer multi-objective hyperparameter auto-tuning framework for data analytic services. J. Supercomput. 2024, 80, 1682–1691. [Google Scholar] [CrossRef]
  75. Gromov, A.; Tirumala, K.; Shapourian, H.; Glorioso, P.; Roberts, D.A. The Unreasonable Ineffectiveness of the Deeper Layers. arXiv 2024. [Google Scholar] [CrossRef]
  76. Wu, X.; Brazzle, P.; Cahoon, S. Performance and Energy Consumption of Parallel Machine Learning Algorithms. arXiv 2023. [Google Scholar] [CrossRef]
  77. Chandler, J. Saving Green: Accelerated Analytics Cuts Costs and Carbon. NVIDIA Blog, 2023. Available online: https://blogs.nvidia.com/blog/spark-rapids-energy-efficiency/ (accessed on 15 January 2025).
  78. Helen Victoria, A.; Maragatham, G. Automatic tuning of hyperparameters using Bayesian optimization. Evol. Syst. 2021, 12, 217–223. [Google Scholar] [CrossRef]
  79. Xia, Y.; Zhu, M.; Kuang, L.; Ma, X. Applications classification and scheduling on heterogeneous HPC systems using experimental research. J. Digit. Inf. Manag. 2011, 9, 227–232. [Google Scholar]
  80. Gao, Y.; Iqbal, S.; Zhang, P.; Qiu, M. Performance and power analysis of high-density multi-GPGPU architectures: A preliminary case study. In Proceedings of the 2015 IEEE 17th International Conference on High Performance Computing and Communications, 2015 IEEE 7th International Symposium on Cyberspace Safety and Security, and 2015 IEEE 12th International Conference on Embedded Software and Systems, New York, NY, USA, 24–26 August 2015; pp. 66–71. [Google Scholar]
  81. Luccioni, A.S.; Viguier, S.; Ligozat, A.-L. Estimating the carbon footprint of BLOOM, a 176B parameter language model. J. Mach. Learn. Res. 2023, 24, 1–15. [Google Scholar]
  82. Ngufor, C.; Wojtusiak, J. Extreme logistic regression. Adv. Data Anal. Classif. 2016, 10, 27–52. [Google Scholar] [CrossRef]
  83. Lim, T.-S.; Loh, W.-Y.; Shih, Y.-S. A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Mach. Learn. 2000, 40, 203–228. [Google Scholar] [CrossRef]
  84. Dixit, M.; Sharma, R.; Shaikh, S.; Muley, K. Internet traffic detection using naive Bayes and k-nearest neighbors (KNN) algorithm. In Proceedings of the International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 15–17 May 2019; pp. 1153–1157. [Google Scholar]
  85. Analytics Vidhya. Naive Bayes Explained. 2017. Available online: https://www.analyticsvidhya.com/blog/2017/09/naive-bayes-explained/ (accessed on 13 January 2025).
  86. Mohammadi, A.M.; Mahmood Fathy, M. The empirical comparison of the supervised classifiers performances in implementing a recommender system using various computational platforms. Int. J. Intell. Syst. Appl. 2020, 15, 11–20. [Google Scholar] [CrossRef]
  87. Jankowski, D.; Jackowski, K.; Cyganek, B. Learning decision trees from data streams with concept drift. Procedia Comput. Sci. 2016, 80, 1682–1691. [Google Scholar] [CrossRef]
  88. Bolchini, C.; Cassano, L. Machine learning-based techniques for incremental functional diagnosis: A comparative analysis. In Proceedings of the 2014 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT), Amsterdam, The Netherlands, 1–3 October 2014; pp. 246–251. [Google Scholar]
  89. Saadatfar, H.; Khosravi, S.; Joloudari, J.H.; Mosavi, A.; Shamshirband, S. A new k-nearest neighbors classifier for big data based on efficient data pruning. Mathematics 2020, 8, 286. [Google Scholar] [CrossRef]
  90. Dong, J.-X.; Krzyżak, A.; Suen, C.Y. A fast SVM training algorithm. In Pattern Recognition with Support Vector Machines; Lee, S.W., Verri, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2388, pp. 53–67. [Google Scholar]
  91. Hansch, R.; Hellwich, O. Faster trees: Strategies for accelerated training and prediction of random forests for classification of polsar images. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 105–112. [Google Scholar] [CrossRef]
  92. Cai, L.; Barneche, A.M.; Herbout, A.; Foo, C.S.; Lin, J.; Chandrasekhar, V.R.; Aly, M.M.S. TEA-DNN: The quest for time-energy-accuracy co-optimized deep neural networks. In Proceedings of the 2019 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED), Lausanne, Switzerland, 29–31 July 2019; pp. 1–6. [Google Scholar]
  93. Peng, Y.; Zhu, Y.; Chen, Y.; Bao, Y.; Yi, B.; Lan, C.; Guo, C. A generic communication scheduler for distributed DNN training acceleration. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, Huntsville, ON, Canada, 27–30 October 2019; pp. 16–29. [Google Scholar]
  94. Surrisyad, H. A fast military object recognition using extreme learning approach on CNN. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 211–220. [Google Scholar] [CrossRef]
  95. Haryanto, T.; Wasito, I.; Suhartanto, H. Convolutional neural network for gland images classification. In Proceedings of the 11th International Conference on Information & Communication Technology and System (ICTS), Surabaya, Indonesia, 31 October 2017; pp. 55–60. [Google Scholar]
  96. Taylor, M.E.; Stone, P. Representation transfer for reinforcement learning. In Proceedings of the AAAI Fall Symposium: Computational Approaches to Representation Change During Learning and Development, Arlington, TX, USA, 9–11 November 2007; pp. 78–85. [Google Scholar]
  97. You, Y.; Li, J.; Reddi, S.; Hseu, J.; Kumar, S.; Bhojanapalli, S.; Hsieh, C.J. Large batch optimization for deep learning: Training BERT in 76 minutes. arXiv 2019. [Google Scholar] [CrossRef]
  98. Izsak, P.; Berchansky, M.; Levy, O. How to train BERT with an academic budget. arXiv 2021. [Google Scholar] [CrossRef]
Figure 1. Relationship between the inputs and outcomes of ML methods in LCA [35].
Figure 2. Neural network topology for system features [36].
Figure 3. Sources of carbon footprint throughout the machine learning model usage life cycle.
Table 1. Optimization techniques to reduce the environmental impact of ML.

| Technique | Energy Efficiency Characteristics |
| --- | --- |
| Early Stopping | Up to 80% reduction in the energy used for model training [61]. |
| Knowledge Distillation | Reduces energy use and CO2-equivalent emissions by a factor of 19 [58]. |
| Optimizing GPU Operations | Up to 75% reduction in emissions [69]; the GPU Accelerator can reduce a company's carbon footprint by as much as 80% while delivering 5× average speedups and 4× reductions in computing costs [49,77]. |
| Data and Model Parallelization | Energy consumption reduced by up to 26× through parallel processing [76]. |
| Lifecycle Assessment | Identifies opportunities to reduce energy use and emissions throughout the model's lifecycle. |
| Quantization | Reduces energy consumption and memory usage [59], with up to a 16× reduction in memory footprint [60]. |
| Renewable Energy-Powered Data Centers | Substantial reduction in carbon footprint through the use of renewable energy sources; applying DeepMind's machine learning to Google data centers cut the energy used for cooling by up to 40% [64]. |
| Energy-Efficient Hardware | Uses low-power hardware to minimize energy consumption. |
| Deleting Unnecessary Layers | Simplifies architectures, reducing computational complexity and energy use. |
| Optimizing Java Code | Energy consumption reduced by 6.2% [67] and by up to 80% [68] through optimized code execution. |
| Optimizing Data Transfer | Reduces energy consumption [20] and carbon footprint by optimizing data transfer [66]. |
| Bayesian Hyperparameter Tuning | Significant reduction in energy consumption [78] and carbon emissions [73,74] through efficient hyperparameter tuning. |
| Carbon-Friendly Inference Techniques | Optimizes the inference phase to minimize carbon emissions. |
| Reducing Redundant Computations | Eliminates unnecessary computations, improving efficiency and reducing energy use. |
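To make the first technique in Table 1 concrete, the following minimal Python sketch implements a generic early-stopping loop: training halts once the validation loss stops improving for a fixed number of epochs, so the remaining epochs, and the energy they would consume, are skipped entirely. The `train_one_epoch` and `validate` functions are hypothetical placeholders standing in for a real training pipeline, and the simulated plateauing loss is an assumption made purely for demonstration.

```python
import random

def train_one_epoch(epoch: int) -> None:
    """Placeholder for one pass over the training data."""
    pass

def validate(epoch: int) -> float:
    """Placeholder validation loss: improves early, then plateaus with noise."""
    return max(0.1, 1.0 / (epoch + 1)) + random.uniform(0.0, 0.01)

def train_with_early_stopping(max_epochs: int = 100, patience: int = 5) -> int:
    """Stop once validation loss fails to improve for `patience` consecutive
    epochs, avoiding the energy cost of the remaining budgeted epochs."""
    best_loss = float("inf")
    stale_epochs = 0
    for epoch in range(max_epochs):
        train_one_epoch(epoch)
        val_loss = validate(epoch)
        if val_loss < best_loss:
            best_loss, stale_epochs = val_loss, 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:
                print(f"Stopped at epoch {epoch + 1}; best val loss {best_loss:.4f}")
                return epoch + 1
    return max_epochs

if __name__ == "__main__":
    used = train_with_early_stopping()
    print(f"Trained {used} of 100 budgeted epochs")
```

With `patience = 5`, this loop typically terminates after a small fraction of the 100-epoch budget, which illustrates how early stopping can avoid most of a fixed training schedule's energy cost.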
Table 2. Energy consumption estimation for machine learning models.

| Method | Task Complexity | Training Time Range | Energy Consumption Estimation (kWh) |
| --- | --- | --- | --- |
| Logistic Regression/Linear Models | Non-Complex | Seconds [82] to a few minutes [83] | ~0.002 |
| Naive Bayes | Non-Complex | Seconds [84] to 1 min [85] | ~0.002 |
| Gradient Boosting Machines (GBMs) | Moderate | 1–3 min [86] | ~0.002–0.006 |
| Decision Trees (DTs) | Non-Complex/Moderate | 5–300 s [87,88] | ~0.0001–0.008 |
| K-Nearest Neighbors (KNNs) | Non-Complex/Moderate | 30 s [84] to 300 s [88,89] | ~0.001–0.008 |
| Support Vector Machines (SVMs) | Moderate | 25 s [88] to 2 h [90] | ~0.001–0.2 |
| Random Forests | Moderate/Complex | A few minutes to 3 h [91] | ~0.01–0.3 |
| Deep Neural Networks (DNNs) | Complex/Very Complex | 35 min [92] to 115 min [83,93] | ~0.2–0.6 |
| Bayesian Neural Networks (BNNs) | Complex | Minutes to several hours [88] | ~0.05–1 |
| Convolutional Neural Networks (CNNs) | Complex/Very Complex | 1 h [94] to 8 h [95] | ~0.3–2.5 |
| Reinforcement Learning (RL) | Very Complex | 14 h to 2 days [96] | ~4–15 |
| Transformer Models (BERT, GPT) | Very Complex | 1–4 days [97,98] | ~10 to >40 |
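Estimates of this kind follow from the elementary relation energy (kWh) = mean power draw (W) × wall-clock training time (h) / 1000. The short Python sketch below shows the back-of-envelope calculation; the ~100 W and ~400 W power draws are illustrative assumptions (a modest workstation versus a single-GPU node), not measurements reported by the cited studies.

```python
def training_energy_kwh(avg_power_watts: float, training_hours: float) -> float:
    """Estimate training energy in kWh from mean power draw and duration."""
    return avg_power_watts * training_hours / 1000.0

if __name__ == "__main__":
    # SVM upper bound from Table 2: ~2 h on an assumed ~100 W machine.
    print(f"SVM:         {training_energy_kwh(100, 2):.2f} kWh")   # 0.20 kWh
    # Transformer upper bound: ~4 days (96 h) on an assumed ~400 W GPU node.
    print(f"Transformer: {training_energy_kwh(400, 96):.1f} kWh")  # 38.4 kWh
```

Plugging the table's training time ranges into plausible hardware wattages in this way reproduces the order of magnitude of each estimate, from fractions of a kWh for classical models to tens of kWh for transformer training.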