Article

Investigating Supercomputer Performance with Sustainability in the Era of Artificial Intelligence

Department of Computer Science Technology, College of Computer Science and Engineering Technology, Applied College, University of Hafr Al Batin, Hafr Al Batin 31991, Saudi Arabia
Appl. Sci. 2025, 15(15), 8570; https://doi.org/10.3390/app15158570
Submission received: 31 May 2025 / Revised: 15 July 2025 / Accepted: 23 July 2025 / Published: 1 August 2025

Abstract

The demand for high-performance computing (HPC) continues to grow, driven by its critical role in advancing innovations in the rapidly evolving field of artificial intelligence. HPC has now entered the era of exascale supercomputers, introducing significant challenges related to sustainability. Balancing HPC performance with environmental sustainability presents a complex, multi-objective optimization problem. To the best of the author’s knowledge, no recent comprehensive investigation has explored the interplay between supercomputer performance and sustainability over a multi-year period. This paper addresses this gap by examining the balance between these two aspects over five years (2020–2024), collecting and analyzing multi-year data on supercomputer performance and energy efficiency. The findings indicate that supercomputers pursuing higher performance often struggle to maintain top energy efficiency, while those focusing on energy efficiency tend to fall short of top performance. The analysis reveals that both the performance and the power consumption of supercomputers have increased rapidly over the last five years, and that the performance of the most computationally powerful supercomputers is directly proportional to their power consumption. The energy efficiency gains achieved by some top-performing supercomputers become challenging to maintain in the pursuit of higher performance. The findings also highlight the ongoing race toward zettascale supercomputers. This study can provide policymakers, researchers, and technologists with foundational evidence for rethinking supercomputing in the era of artificial intelligence.

1. Introduction

Supercomputers are the main components driving innovations in the artificial intelligence (AI) boom, particularly in the development and inference of large language models (LLMs). Companies adopting supercomputers aiming for AI innovations typically gain a competitive edge, positioning themselves to lead in the race to dominate the AI industry. With the emergence of NVIDIA as the leader in AI, many organizations are increasingly shifting focus to high-performance computing (HPC) systems as a catalyst for rapid AI advancement. However, not all supercomputers are optimized for AI. Some supercomputers are specifically designed for handling both AI workloads and other scientific computations. Additionally, some supercomputers have the flexibility to be upgraded or extended to support AI capabilities [1].
For example, NVIDIA’s Eos delivered an impressive 18.4 exaflops (Eflops) of AI computational performance, advancing the AI research field. NVIDIA’s Selene extends its capabilities to support research in robotics, data analytics, and autonomous vehicle technology. Fujitsu’s Fugaku operates at the exascale level. The integration of HPC and AI workloads has been instrumental in advancing AI research and development; notably, the Japanese LLM is a product of Fugaku’s capabilities. Aurora is another powerful HPC system designed for scientific computations, including brain mapping [1]. In addition, other supercomputers with the potential to accommodate AI capabilities include Leonardo, Frontier, the Large Unified Modern Infrastructure (LUMI), Sierra, Perlmutter, and Summit [1]. The HPC community has entered the era of exascale supercomputers. However, with the rapid evolution of AI technology, exascale supercomputers may struggle to keep pace with the increasing demands of future AI models. To address this challenge, Japan has announced a five-year project for the development of zettascale supercomputers [2]. Powering a supercomputer typically requires an enormous amount of energy [3], in the range of tens of thousands of kilowatts up to almost 50,000 kW; for example, Aurora consumes 38,698 kW of power. LLMs also consume substantial power across different phases, such as pre-training, storage, data transfer, inference, and hardware manufacturing.
The workload demand in data centers for the year 2023 was estimated at 1088 million computer instances, with an estimated power consumption of 411 terawatt-hours (TWh) [4]. For years, power consumption in data centers had remained stable despite the growing workload [5]. However, the evolution of AI has sparked an increasing demand for energy to power supercomputers in data centers. It is estimated that data center power demand will increase by 160% by 2030, requiring roughly 200 TWh per year [4]. Europe is estimated to require USD 1 trillion to upgrade its power grid to cope with the increasing AI energy demand [4]. For example, GPT-3 was pre-trained on 10,000 graphical processing units (GPUs) at a Microsoft data center, and GLaM was pre-trained on TPUv4s at the Oklahoma data center [6]. The data center power usage effectiveness (PUE) for BLOOM, GPT-3, OPT, and Gopher is 1.2, 1.1, 1.092, and 1.08, respectively [7].
Currently, companies are investing billions of dollars in AI projects to gain a competitive edge and win the race to lead the industry. For example, Workday injected USD 500 million into AI, focusing on generative AI. In another major commitment, Meta significantly increased its AI investment from USD 30 billion in 2023 to USD 40 billion in 2024. A consortium called the Global AI Infrastructure Investment Partnership, comprising Microsoft, MGX, Global Infrastructure Partners, and BlackRock, with NVIDIA as a technical adviser, is raising an ambitious amount ranging between USD 80 and USD 100 billion to invest in the construction of data centers and the energy-efficient infrastructure for powering them [8]. Alphabet, the parent company of Google, has invested a substantial amount in AI. In the last quarter of 2023, IBM pumped USD 500 million into enterprise AI projects. In the second quarter of 2024, CISCO injected USD 1 billion as a fund for AI investment, of which USD 200 million was allocated for investment in generative AI companies such as Mistral AI, Cohere, and Scale AI. Microsoft has pledged to inject USD 3.2 billion over a period of three years. In addition, Microsoft backed OpenAI with an investment of USD 13 billion, leading to the integration of ChatGPT into the Bing search engine. Sapphire Ventures has announced an investment of USD 1 billion to support startups focusing on AI technology [8].
The significant investments in AI, amounting to hundreds of billions of US dollars, have the potential to significantly increase demand for faster supercomputers, scale up supercomputer operations, and exacerbate power consumption issues. This, in turn, could lead to higher carbon dioxide emissions, particularly if the power sources are not renewable. However, giant companies like Amazon, Meta, Google, Microsoft, and Apple have pledged to power their data centers with renewable energy [9,10]. As mentioned previously, the world has entered the era of exascale supercomputing and generative AI [11], thus drawing significant attention to sustainability [12]. Supercomputers contribute to carbon dioxide footprints, particularly in the regions where their data centers are located [13], with HPC applications like large-scale modeling adding to emissions [14]. The carbon dioxide footprint from computation is substantial and poses a threat to the climate [15], with HPC and data centers emitting approximately 100 megatonnes of carbon dioxide annually, significantly contributing to climate change [5]. By 2030, the carbon footprint of the IT industry is projected to increase ten-fold, with power consumption estimated at 200 TWh per year [5]. Supercomputers emit greenhouse gases [16], whose atmospheric concentration profoundly influences climate change. The consequences of climate change include rising sea levels, severe wildfires (e.g., in Australia), destructive typhoons (e.g., in the Pacific), devastating droughts (e.g., in Africa), and adverse impacts on human health [5]. The growing demand for faster supercomputers with powerful computational capabilities has significantly increased power consumption and environmental impact [16]. Therefore, it is critical for the scientific community to consider both environmental impact and the large-scale computational operations running on supercomputers [17]. The rapid development of machine learning poses a challenge for the environment because model pre-training requires extensive computational resources, materials, and energy [7]. This creates a challenge in balancing supercomputer performance with sustainability, resulting in a multi-objective optimization problem.
Many studies have analyzed the performance, power consumption, and carbon footprint of supercomputers. However, most research has extensively examined the issues of energy efficiency and performance in isolation. Only limited studies have attempted to explore the relationship between performance and energy efficiency (EE), and these are typically restricted to one or two supercomputers within a given year. Such studies often fail to provide insights into trends over time. Another issue is that previous research has overlooked the behavior of groups of top-performing supercomputers and top-energy-efficiency supercomputers in terms of performance and sustainability simultaneously over time. The relationship between the top-performing and most energy-efficient supercomputers has frequently been ignored in past studies.
This paper’s intent is to comprehensively analyze the interplay between performance and EE in supercomputers by examining year-over-year trends over a five-year period. The study also examines the behavior of top-performing and top-EE supercomputers. The aim is to provide insights into the sustainability and performance trade-offs, addressing the lack of longitudinal and group-based analyses in the published literature.
A summary of the study contributions is presented as follows:
  • The article compares top-performing supercomputers with top-energy-efficiency supercomputers, revealing a consistent trade-off: in most cases, high performance comes at the cost of lower energy efficiency.
  • The paper identifies rare supercomputers, such as Frontier, LUMI, MareNostrum, and Adastra, that show the feasibility of achieving both top performance and top energy efficiency.
  • A detailed statistical analysis of power usage, core counts, and EE shows that top-performing supercomputers have core counts in the millions. Energy-efficient supercomputers typically operate with thousands of cores. Frontier achieved an exceptional balance but faced a challenge in maintaining it over the five-year period under study, indicating an unstable trade-off.
  • The research provides empirical evidence that supercomputers with higher performance tend to consume significantly more power, thereby having the potential to contribute more to carbon emissions.
  • The study systematically analyzes five years of data from 2020 to 2024 collected from the TOP500 rankings; such an analysis is rarely found in the literature for such a long period of time.
  • The article highlights the multi-objective challenges of balancing performance with sustainability.
  • The study highlights a shift from CPU-dominated supercomputers to GPU-dominated supercomputers as a result of AI workloads, especially LLMs. Also, it discusses how AI evolution is driving demand for GPUs, increasing power needs, and affecting the sustainability of supercomputers.
  • The research can provide policymakers, researchers, and technologists with foundational evidence for rethinking supercomputing in the era of AI.
The rest of this article is organized as follows: Section 2 introduces the fundamental concepts required to understand the study. Section 3 reviews related work, placing the current research within the context of the existing literature. Section 4 describes the methodology used in the study, and Section 5 presents the results along with a detailed discussion. Finally, Section 6 offers concluding remarks.

2. Fundamentals

To make the article self-contained, particularly for readers new to the field, this section provides basic background information to facilitate easy understanding of the article.

2.1. Performance

Measuring the performance of supercomputers is a complex task because there is no universally accepted metric that accounts for every aspect of their operation. Instead, the capabilities and behavior of supercomputers are typically measured using multiple metrics. The number of operations performed and the computational time taken under prescribed conditions within a specific context are the two fundamental parameters for evaluating the performance of supercomputers. These metrics can be used either in isolation or in combination, depending on the context. The most widely used metric for measuring supercomputer performance is FLOPS. A floating-point operation is the multiplication or addition of real numbers represented in a machine-readable format, allowing for flexible manipulation. However, the true performance of a supercomputer lies in its ability to solve real-world problems and produce useful results with societal impact, such as simulating physical phenomena. To facilitate performance comparisons across supercomputers, the HPC community has adopted standardized benchmarks [18]. Supercomputers typically consist of hundreds of thousands to millions of processor cores. Unlike cores in conventional small-scale computers (e.g., laptops), the cores in a supercomputer are either physical CPU cores or physical GPU cores with interconnected communication systems. However, CPUs and GPUs are often combined into a single supercomputing system, so both types of cores or other accelerators can be found in one machine (e.g., the El Capitan supercomputer has 11,039,616 combined CPU and GPU cores, and Frontier has a combined 8,699,904 CPU and GPU cores). These cores work collaboratively to solve complex problems. The number of cores significantly influences the computational power, performance, and power consumption of a supercomputer. Currently, the most powerful and fastest supercomputer is El Capitan, with 1742.00 Pflops and over 11 million cores.
The performance of supercomputers is measured using benchmarks. Benchmarking involves testing and measuring a supercomputer’s performance by evaluating the time taken to run a standard program. Two key metrics, Rmax (flops) and Rpeak (flops), are typically used to compare performance. Rpeak represents the theoretical peak performance of a supercomputer, calculated by multiplying the peak performance of its cores by the total number of cores. This metric is based on the hardware specifications and assumes ideal conditions for achieving maximum efficiency. Rmax refers to the maximal performance achieved during the execution of the LINPACK benchmark. LINPACK is a benchmark that performs mathematical computations on a large square matrix (Nmax × Nmax) of floating-point numbers, with the flexibility to choose a large Nmax. Rmax and Rpeak are used by the TOP500 list to rank the performance of supercomputers. Rmax is the actual performance of a supercomputer, which is typically lower than Rpeak due to inefficiencies or unforeseen issues encountered during the execution of the benchmark in real-world environments [19].
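As a hypothetical worked example (the numbers below are illustrative and do not describe any ranked system), Rpeak follows directly from the hardware specification, while the ratio Rmax/Rpeak expresses how much of that theoretical peak the LINPACK run actually achieves:

$$R_{\mathrm{peak}} = \underbrace{10^{6}}_{\text{cores}} \times \underbrace{2\times 10^{9}\ \tfrac{\text{cycles}}{\text{s}}}_{\text{clock}} \times \underbrace{16\ \tfrac{\text{flops}}{\text{cycle}}}_{\text{per core}} = 3.2\times 10^{16}\ \text{flop/s} = 32\ \text{PFlop/s}$$

If such a machine then sustains Rmax = 24 PFlop/s on LINPACK, its efficiency is Rmax/Rpeak = 0.75, i.e., 75% of the theoretical peak.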

2.2. Power Consumption and Energy Efficiency

Supercomputers consume a tremendous amount of power (kW) for their computational operations, driven by the large number of interconnected cores and other factors. In some cases, powering a supercomputer can require up to 30 megawatts [20].
The total amount of energy $\gamma$ required to execute a given task is expressed as [17]

$$\gamma = \int_{t_a}^{t_b} \beta(t)\, dt$$

where $t_a$ and $t_b$ denote the start and end times of the task, respectively, and $\beta(t)$ is the power consumption at time $t$. The average power drawn, $\bar{\beta}$, is expressed as

$$\bar{\beta} = \frac{\gamma}{t_b - t_a}$$

In practice, power is sampled at discrete times $t_n$ during execution. For instance, NVIDIA SMI collects samples every second, which is sufficient for computing the average power over the samples, expressed as

$$\bar{\beta}_{\mathrm{GPU}} \approx \frac{1}{N} \sum_{n=1}^{N} \beta_{\mathrm{GPU}}(t_n)$$

where $t_1 = t_a$, $t_N = t_b$, and $N$ is the number of samples. Rack monitoring samples are collected at a lower frequency, at intervals of approximately 7 s (this is not generally true for all systems; it varies with the hardware integrator), and the spacing between consecutive samples can vary by a few seconds. The energy is therefore computed using the trapezoidal rule, which accounts for measurement irregularities and smooths the samples, as follows [17]:

$$\gamma_{\mathrm{rack}} \approx \frac{1}{2} \sum_{n=1}^{N-1} (t_{n+1} - t_n)\left[\beta_{\mathrm{rack}}(t_{n+1}) + \beta_{\mathrm{rack}}(t_n)\right]$$

With $\bar{\beta}_{\mathrm{rack}}$ denoting the average rack power, the average power consumption of the non-GPU components is expressed as [17]

$$\bar{\beta}_{\mathrm{non\text{-}GPU}} = \bar{\beta}_{\mathrm{rack}} - \bar{\beta}_{\mathrm{GPU}}$$
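The short Python sketch below (illustrative only; the sample values are made up rather than measured on any real system, and NumPy is assumed to be available) shows how the sampled-average and trapezoidal-rule estimates above can be computed in practice:

```python
import numpy as np

def average_gpu_power(samples_kw):
    """Sampled average of per-second GPU power readings (kW), as in the GPU formula above."""
    return float(np.mean(samples_kw))

def rack_energy_kwh(times_s, power_kw):
    """Trapezoidal-rule energy (kWh) from irregularly spaced rack power samples."""
    times_s = np.asarray(times_s, dtype=float)
    power_kw = np.asarray(power_kw, dtype=float)
    # 0.5 * sum((t[n+1] - t[n]) * (p[n+1] + p[n])) gives kW*s; divide by 3600 for kWh
    energy_kws = 0.5 * np.sum(np.diff(times_s) * (power_kw[1:] + power_kw[:-1]))
    return energy_kws / 3600.0

# Illustrative rack samples taken roughly every 7 s (times in seconds, power in kW)
t = [0, 7, 14, 22, 29]
p_rack = [220.0, 228.0, 231.0, 226.0, 224.0]
p_gpu = [150.0, 155.0, 158.0, 154.0, 152.0]   # GPU samples would be denser (per second) in practice

energy = rack_energy_kwh(t, p_rack)
avg_rack = energy * 3600.0 / (t[-1] - t[0])   # back out the average rack power in kW
avg_non_gpu = avg_rack - average_gpu_power(p_gpu)
print(f"rack energy = {energy:.3f} kWh, average non-GPU power = {avg_non_gpu:.1f} kW")
```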
To promote EE in supercomputers, the Green500 list was created to rank the most energy-efficient supercomputers globally. The Green500 ranks supercomputers based on the amount of computational performance delivered on the High-Performance LINPACK benchmark per watt of power consumed by the HPC system. The EE is measured in gigaflops per watt (GFlops/watt) [19].
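For a hypothetical illustration of this metric (the figures are invented, not taken from any Green500 entry), a system sustaining an Rmax of 1 EFlop/s while drawing 20,000 kW would be rated at

$$\mathrm{EE} = \frac{R_{\mathrm{max}}}{P} = \frac{10^{9}\ \text{GFlop/s}}{2\times 10^{7}\ \text{W}} = 50\ \text{GFlops/watt}$$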

3. Literature Review

3.1. Technology Trend Shift from Central-Processing-Unit-Dominated to Graphical-Processing-Unit-Dominated Supercomputers

The era of AI has significantly disrupted the dominance of CPU-based supercomputers, triggering a shift toward GPU-dominated systems. GPUs have become the foundational technology for today’s generative AI, providing the infrastructure required for large-scale model pre-training and inference. As a result, GPUs have recently emerged as the dominant computing core, driven by the growing demand fueled by the rapid evolution of AI.
In today’s rapidly evolving AI landscape, GPUs are widely adopted for the pre-training and evaluation of deep neural networks such as the transformer architecture [21,22,23]. The computation of a deep neural network, based on mathematical operations, is parallelized. Parallelization succeeds because the large number of cores on GPUs allows computational work to be distributed across many cores, speeding up operations. In the parallelization process, the processing units are required to operate independently. Parallelization on a PC with average hardware comprising four or eight cores cannot match parallelization on GPUs comprising thousands of cores; the degree of parallelization is directly proportional to the number of cores. Operational speed increases with parallel processing, and GPUs are often used for this purpose. The parallelization capabilities of GPUs are higher than those of CPUs because GPUs have significantly more cores than CPUs [24]. The parallelization capabilities, high throughput, and thousands of cores of GPUs make them fit for the pre-training of deep neural networks on large volumes of data where concurrent processing is required, unlike conventional CPUs optimized for sequential operations [25]. GPUs deliver high performance with greater EE compared to CPUs [26]. The 2019 investment of USD 1 billion in OpenAI led to the development and deployment of the Azure AI supercomputer, designed for the large-scale pre-training of deep neural networks. The Azure AI supercomputer is equipped with more than 285,000 CPU cores and over 10,000 GPUs, making it one of the most powerful AI supercomputers in the world [22].
GPU-based systems can be scaled up effectively to meet the requirements of supercomputing [26]. An increase in workload means increasing the number of GPUs in the system, which raises the computational power available to handle large models and high volumes of data and allows the system to be configured for model pre-training or inference. AI supercomputing architectures designed to handle the complex computations of AI applications rely heavily on GPUs to provide massive capabilities for large-volume data analytics [25]. The largest machine learning models in this era of AI are LLMs, which have billions of parameters and consume millions of GPU hours during pre-training, emitting carbon dioxide. For example, the pre-training of BLOOM consumed 1,082,990 GPU hours and 433,196 kWh of power. GPT-3 consumed 1287 MWh of energy and emitted 502 tonnes of carbon dioxide. Gopher consumed 1066 MWh of power and emitted 352 tonnes of carbon dioxide, and OPT consumed 324 MWh of energy and emitted 70 tonnes of carbon dioxide. It is estimated that the pre-training of the BLOOM model emitted 24.7 tonnes of carbon dioxide for dynamic power consumption and 50.5 tonnes of carbon dioxide over the lifecycle of BLOOM [7].
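As a rough back-of-the-envelope consistency check on the BLOOM figures above (a derived estimate, not a value reported in [7]), dividing the energy by the GPU hours gives the average draw per GPU:

$$\frac{433{,}196\ \text{kWh}}{1{,}082{,}990\ \text{GPU-hours}} \approx 0.40\ \text{kW per GPU}$$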
Most of the LLMs in the world today, such as ChatGPT, and most of the AI projects of Microsoft, Amazon, and Meta are powered by GPU systems [27]. The demand for GPUs, driven by the evolution of AI, has enabled Nvidia (the maker of GPU accelerators) to overtake Amazon in market capitalization. Nvidia’s market capitalization now stands at USD 1.83 trillion, narrowly surpassing Google’s USD 1.82 trillion valuation. This development makes Nvidia the fourth most valuable company in the world during this AI boom, behind Microsoft, Apple, and Saudi Aramco [28].
It has been reported that GPU performance has improved dramatically since 2003, with an estimated 7000-fold increase in performance and a 5600-fold improvement in the price–performance ratio. This dramatic growth is attributed to fast-evolving AI technology [25]. The TOP500 list is dominated by GPU systems and AI computing technology, as revealed by the latest rankings. The embedding of AI into conventional supercomputing creates a new paradigm that integrates the raw power of GPUs with the demands of AI algorithms. GPUs are at the center of this evolution, powering most new supercomputing installations. The current TOP500 list indicates that, of the 53 new supercomputers on the list, 85% are GPU-based, reflecting the growing demand for AI-driven technology to execute modern scientific research. Similarly, the Green500 list indicates that eight of the 10 most energy-efficient supercomputers use Nvidia accelerated computing [29]. The demand for powerful computing resources is expected to continue to rise as AI evolves, and the architecture of GPUs is designed to suit these unique demands. Therefore, GPUs are indispensable resources for AI supercomputing [25].

3.2. Related Works

In this section, the article presents only the most recent and highly relevant works to situate the current study within the existing literature. Also, insight into the differences between the current study and previously published works is provided.
For example, the energy performance of the Eurora supercomputer (top-ranked in the 2013 Green500) was analyzed under different workloads. The findings indicated that workload variability significantly impacts EE [30]. However, this study primarily focused on a single supercomputer and its EE. In another study, real-time power demand data for the Frontier supercomputer, along with waste heat measurements from the HPC center, were collected over the course of one year in 2023. The data were reported to guide research on waste heat optimization [31]. However, the work by Sun et al. [31] is limited to Frontier and one year of real-time data; it therefore covers only one supercomputer and does not account for long-term variability. Similarly, the power consumption levels of the Cori and Perlmutter supercomputers were compared. The analysis revealed that power usage within the same application domain is consistent and that power consumption levels for the same application run by different users are similar [32]. However, the findings by Rrapaj et al. [32] are limited to Cori and Perlmutter and do not address performance metrics. Similarly, a study analyzed the trends in computational nodes of exascale supercomputers by examining their architecture and key technologies, highlighting the EE challenges faced by exascale supercomputers [33]. However, that study examines EE in isolation, without relating it to performance or broader sustainability.
An algorithm was developed to estimate carbon dioxide emissions from large-scale computations without interfering with the machine’s source code while accounting for hardware configurations. The goal was to create carbon awareness and provide recommendations to minimize carbon emissions, thereby promoting green computing [15]. However, the study is limited to carbon awareness and does not consider the interplay between carbon dioxide emissions, power consumption, and performance. Another study analyzed the 59th edition of the TOP500, focusing on Rmax, Rpeak, and high-performance conjugate gradient metrics. The study examined energy demands driven by increasing AI operations and found that AI workloads are computationally expensive, with EE gains slowing geometrically due to scaling [34]. However, this study is limited to the 59th edition of the TOP500 and, therefore, cannot provide trends across multiple years. Unlike the work of Lannelongue et al. [15], a separate study conducted a power consumption and performance analysis of legacy applications in the scientific domain on a single two-socket HPC node. It found that energy consumption increases as molecular size grows [35]. Similarly, Silva et al. [36] explored the decarbonization of HPC centers, while Cooper et al. [37] focused on performance comparisons between unified memory and explicit memory management, providing insights into performance challenges.
A study was conducted on the carbon footprint of HPC systems with a focus on sustainability within the region where an HPC system is located. The study modeled the quantification of the carbon footprint within the region [38]. Another study measured EE in data centers and proposed a renewable framework for sustainable energy efficiency. The investigation revealed that the proposed framework reduced energy consumption [39]. However, both studies [38,39] overlooked the interplay between carbon footprint, EE, and performance over a long period of time.
A study investigated instantaneous power consumption and EE in future supercomputers, taking coding optimization as a case study. The study demonstrated the impact of coding optimization on different computational stages [40]. Another study evaluated four supercomputers based on the HPCC benchmark rather than limiting performance evaluation to the LINPACK benchmark, introducing computing efficiency and EE for a more comprehensive assessment. The findings showed that the efficiency of supercomputers is significantly influenced by the interconnection network [41]. In another study, Tan et al. [42] examined the interplay between HPC EE and resilience by developing energy-saving undervolting techniques; the framework was found to improve EE. In a separate study, Wu et al. [43] classified energy and power conservation into the following categories: reducing power consumption and runtime, reducing runtime while increasing power consumption, and increasing runtime while reducing power consumption. The study developed models for performance and power consumption. Data on performance counters, system power, CPU power, runtime, and memory power were collected and analyzed using statistical tools, such as correlation, regression, and principal component analysis. The survey by Nazaré et al. [44] on the interplay between computing performance and sustainability is generalizable to the broader IT industry, unlike our current study, which specifically focuses on supercomputing.
A model for carbon emissions was developed to measure both operational and embodied emissions of HPC systems [45]. The interplay between performance and sustainable AI, with a focus on the carbon footprint of large models, is explored by Oyewole and Joseph [46]. Huang et al. [47] analyze the ecological impact of HPC on the environment using three machine learning algorithms. Yu et al. [48] employ a logistic model to evaluate the environmental impact of HPC in terms of power consumption and carbon emissions.

4. Methodology

The detailed procedure of the methodology adopted in this study is presented in this section. It outlines the systematic stages followed to achieve the research objectives, as shown in Figure 1, including data source selection, data collection, and data analysis strategies. Each component of the methodology is carefully described to ensure transparency and reproducibility. The section also highlights the statistical tools and techniques employed. By detailing these stages, this section provides a comprehensive understanding of how the study was carried out.
The three most relevant HPC ranking lists in the HPC community are the TOP500, Green500, and Graph500 [44,49]. This study focuses on the TOP500 and Green500 to cover the scope of performance and sustainability. Supercomputers must first be listed in the TOP500 before qualifying for the Green500 ranking, encouraging designers to prioritize both high performance and EE.
The study utilized TOP500 and Green500 as data sources because they are well-established and widely accepted reference points for information within the HPC community [30,32,34]. The primary source for data collection is the official TOP500 website, and the data is publicly available. The TOP500 releases rankings of the 500 most powerful supercomputers worldwide two times a year, in June and November. The most recent data at the time of this investigation was from November 2024. For this study, we focused on the top 10 supercomputers for each year over a span of five years. This approach can account for variations in rankings driven by technological advancements and the deployment of new systems over time. The study considered five years of data to provide a recent perspective in this era of AI. Data on Rmax (PFlop/s), Rpeak (PFlop/s), cores, and power consumption (kW) for the top 10 supercomputers in terms of performance were collected from June 2020 to November 2024, resulting in a total of 10 lists.
Unlike TOP500, Green500 ranks the 500 most energy-efficient supercomputers globally, promoting the balance of performance and sustainability. The data is released simultaneously with the TOP500 rankings. Data on the top-EE supercomputers were extracted for the same period, from June 2020 to November 2024. The variables collected include cores, power consumption (kW), Rmax, Rpeak, and EE rating (EER) (GFlops/watts). The data focuses on the top 10 supercomputers in terms of EE, as the study addresses both top performance and sustainability. This approach creates two groups of supercomputers: the top-performing category and the top-EE category, enabling an in-depth analysis of both performance and sustainability. In this study, the format MonthYear[index measure] is used for clearer representation during analysis. For example, Nov23E refers to the performance of the top 10 supercomputers in terms of EE in November 2023, while June23P represents the top 10 supercomputers in terms of performance in June 2023.
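As an illustration of how such a dataset can be assembled (a minimal sketch only: the file names and column names below are assumptions for the example, not the official TOP500/Green500 schema, and the values must be filled in from the published lists), the two groups and the MonthYear[index] labels used in this study could be constructed as follows:

```python
import pandas as pd

# Assumed layout: one CSV per list edition, e.g. "top500_2023_11.csv" or "green500_2023_11.csv",
# each holding the top-10 rows with the (hypothetical) columns named below.
EDITIONS = [(year, month) for year in range(2020, 2025) for month in (6, 11)]

def load_top10(kind: str, year: int, month: int) -> pd.DataFrame:
    df = pd.read_csv(f"{kind}_{year}_{month:02d}.csv")
    cols = ["name", "cores", "rmax_pflops", "rpeak_pflops", "power_kw"]
    if kind == "green500":
        cols.append("eer_gflops_per_watt")
    out = df[cols].head(10).copy()
    # Build labels such as "June23P" or "Nov23E", matching the convention used in the text
    out["period"] = f"{'June' if month == 6 else 'Nov'}{year % 100}{'P' if kind == 'top500' else 'E'}"
    return out

top_performing = pd.concat(load_top10("top500", y, m) for y, m in EDITIONS)   # top-performance group
top_ee = pd.concat(load_top10("green500", y, m) for y, m in EDITIONS)         # top-EE group
```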

Data Analytics

The data collected as described in the preceding section were analyzed using statistical tools, with the SPSS 8 statistical package serving as the platform for the analysis. Descriptive statistics were employed to summarize the data, providing insights into central tendency, variability, distribution, core trends, power consumption trends, yearly comparisons, and EE trends. Correlation analysis was employed to examine the relationships between variables in the two groups of supercomputers, with the aim of identifying potential trade-offs. The relationships analyzed include the following (an illustrative sketch of this analysis step is given after the list):
  i. Performance (Rmax) across years;
  ii. EE across years;
  iii. Power consumption across years;
  iv. Performance (Rmax) of top-performing supercomputers vs. performance (Rmax) of top-EE supercomputers.
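The following Python sketch (illustrative only; the study itself used SPSS, and the wide-format column names follow the MonthYear[index] convention but are otherwise assumptions) shows how the descriptive statistics and period-to-period correlations reported in Section 5 can be reproduced from the collected top-10 values:

```python
import pandas as pd

def summarize_and_correlate(wide: pd.DataFrame):
    """wide: one column per period (e.g. "June20P", ..., "Nov24P"), ten ranked values per column."""
    descriptive = wide.describe().T[["count", "min", "max", "mean", "std"]]  # cf. Tables 1, 3, 5
    pearson = wide.corr(method="pearson")                                    # cf. Tables 2, 4, 6
    return descriptive, pearson

# Example usage, assuming perf_rmax was built beforehand from the collected top-10 data:
# stats, corr = summarize_and_correlate(perf_rmax)
# print(stats)
# print(corr.round(2))
```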

5. Results and Discussion

This section presents the results of the data analysis conducted over a five-year period, examining supercomputers from two perspectives: top-performing systems and those with the highest EE. The analysis focuses on key performance indicators that distinguish these two groups and provides insights into characteristics and trends. The evaluation covers four main dimensions: computational performance, power consumption, core count, and EER. By comparing these metrics across the two groups, the study highlights how priorities differ between achieving raw performance and optimizing for EE.

5.1. Core Counts

Figure 2 illustrates the stark difference in the number of cores between the top-performing supercomputers in Figure 2A,C and the top-EE supercomputers in Figure 2B,D. The top-performing supercomputers boast core counts in the millions, whereas the core counts of the top-energy-efficiency supercomputers are typically limited to the thousands. Notably, in 2024, none of the supercomputers categorized as top-EE supercomputers has more than 500,000 cores. On the contrary, among the top-performing supercomputers, nearly every system has at least 1 million cores; the only exceptions are Eos and MareNostrum, which fall below the 1-million-core threshold.
Figure 3 shows the core counts of supercomputers listed in the top-performing and top-energy-efficiency categories for the year 2023. With the exception of MareNostrum and Eos, the top-performing supercomputers have core counts ranging from at least 1 million to over 10 million, as seen in Sunway. In contrast, the top-EE supercomputers consistently maintain core counts in the thousands, similar to the trend observed in 2024. Interestingly, Frontier and LUMI, despite having millions of cores, managed to secure spots in the top 10 EE rankings, appearing in both the top 10 performance (Figure 3A,C) and top 10 EE (Figure 3B,D) categories. In addition, MareNostrum, with 680,960 cores, just under 1 million, also achieved positions in both rankings, highlighting its versatility in balancing performance with sustainability.
Figure 4 presents the core counts of supercomputers listed in both the top performance rankings (Figure 4A,C) and the top EER rankings (Figure 4B,D) for 2022. Similar to the trends observed in 2023 and 2024, the core counts of top-performing supercomputers are generally in the millions, with the exceptions of Perlmutter, Selene, and Adastra. On the contrary, the core counts for supercomputers in the top EE rankings remain in the thousands, except for LUMI and Frontier, which stand out with millions of cores. Notably, Frontier and LUMI demonstrate exceptional balance by securing positions in both the top 10 supercomputers in terms of performance and the top 10 EE rankings, despite the large core counts. Additionally, Adastra, with fewer than 500,000 cores, is listed in both rankings, showcasing how a mix of hundreds of thousands of cores and millions of cores can achieve a balance between top computational performance and sustainability. A particularly remarkable observation is Frontier’s position as the top-performing supercomputer, holding the number one spot, while ranking second in the top-EE category. This achievement highlights an impressive balance between unparalleled computational power and sustainability, marking a significant milestone in the history of supercomputing for its focus on balancing performance with sustainability.
Figure 5 presents the core counts of the top-performing supercomputers (Figure 5A,C) and top-energy-efficiency supercomputers (Figure 5B,D) for 2021. Among the top-performing supercomputers, the core counts exhibit a balanced distribution: 50% have core counts in the millions, while the other 50% have core counts in the hundreds of thousands. This pattern is consistent in both November and June, where half of the supercomputers have fewer than 1 million cores. Despite changes in the top performance rankings with new entries, this 50%–50% ratio in core counts remains a consistent trend.
In contrast, the core counts of supercomputers in the top EE rankings are much lower, consistently remaining in the thousands. None of the top-EE supercomputers in 2021 had core counts of 1 million or higher. Notably, MN-3, which secured the number 1 spot on the list of the 10 most energy-efficient supercomputers, had only 1664 cores, a stark contrast to the millions of cores seen in the top 10 systems in terms of performance. Interestingly, Perlmutter and Juwels, both having fewer than a million cores, achieved recognition in both the top 10 for performance and the top 10 for EE. This demonstrates the ability to balance high computational performance with sustainability, challenging the notion that large core counts are essential for achieving peak performance.
Figure 6 illustrates the core counts of supercomputers ranked in the top 10 for both performance and EE in 2020. Similar to the trend observed in 2021, the core counts of the top-performing supercomputers show a balanced distribution: 50% have core counts in the millions, while the remaining 50% have core counts in the hundreds of thousands. For the EE supercomputers, most core counts remain in the thousands, with the notable exceptions of Fugaku, Summit, and NA-1 with core counts in the millions. In addition, Juwels, Selene, HPC5, Fugaku, and Marconi-100 are notable for appearing in both the top 10 for performance and the top 10 for EER.
The year 2020 stands out as a significant milestone for achieving a balance between core counts for powerful computation and sustainability. Five supercomputers, comprising a combination of systems with hundreds of thousands and millions of cores, achieved placement in both the top performance and top energy efficiency rankings. This highlights the ability to balance exceptional computational power with sustainable energy use.

Observations and Implications

A closer look at the 2024 trend reveals that Frontier and LUMI find it challenging to maintain positions in the top 10 EE rankings achieved in 2023 and 2022 but retained spots among the top 10 supercomputers in terms of performance. This shift is likely because new supercomputing systems that are more energy efficient entered the Green500 list. Therefore, it could be concluded that more work is being performed to improve the energy efficiency of new machines than to improve the performance of the machines. It has been observed that supercomputers consistently securing the number 1 spot in the top 10 EE rankings typically have a very small number of cores. The only exception is Frontier TDS, which once achieved the top position with 120,832 cores. However, in most cases, the number of cores for the top-EE supercomputers remains below 20,000 throughout the period under study. The disparity in number of cores between top-performing supercomputers and top-EE supercomputers highlights a significant trade-off in achieving peak computational performance. Conversely, supercomputers optimized for EE appear to have lower core counts, possibly to reduce energy consumption and improve power utilization ratios. This trend likely underscores the strategy of minimizing core counts as a critical factor in achieving and maintaining sustainability. The cases of Frontier and LUMI demonstrate that it is possible to achieve a balance between powerful computational performance and EE with millions of cores, challenging the conventional notion that a lower core count is necessary to attain top EE. Similarly, the inclusion of MareNostrum suggests that balancing performance and sustainability does not always require millions of cores; a lower core count can also succeed in bridging this gap. In this subset, there is usually very little overlap between the Top10 and Green10 lists. This is mainly because the systems on these lists are very different in size. To achieve the highest performance, a computer system needs to be very large, with many cores. However, the larger a system is, the harder it becomes to achieve sustainability. As a result, the most energy-efficient systems on Green500 are typically smaller with a small number of cores compared to top-performing systems on TOP500, except in rare cases, as already discussed.

5.2. Power Consumption

Figure 7 illustrates the power consumption of the top-performing supercomputers and the top-EE supercomputers. Among the top-performing systems, Tuolumne has the lowest power consumption at 3387 kW, whereas Venado has the highest power consumption among the top-EE supercomputers at 1662 kW. This highlights the significant disparity in power consumption between these two groups of supercomputers. It is evident that the top-performing supercomputers generally consume much more power than the top-EE supercomputers. Notably, there is no power consumption data available for Eagle. In 2023 (Figure 8), a similar pattern to 2024 is observed. The power consumption of the top-performing supercomputers remains significantly higher, except for Frontier and LUMI, which were listed in both ranking categories.
The data presented in Figure 9 reveals that the pattern of power consumption of top-performing supercomputers is similar to the pattern observed in the years 2024 and 2023. Frontier occupies the number one spot among the top-performing supercomputers while ranking second in EE ratings. The power consumption of these supercomputers varies, as indicated by the differing lengths of the bars in the chart (Figure 9). However, a direct relationship between performance and energy consumption cannot be deduced. For instance, Fugaku, ranked second in performance, consumes more power than Frontier, which holds the top spot. A similar discrepancy is observed between LUMI and Summit, where power consumption does not align with performance ranking. However, in June 2022, LUMI exhibited both higher performance and greater power consumption compared to Leonardo, ranked fourth. This inconsistency highlights the complexity of the relationship between performance and energy consumption. Interestingly, the patterns observed in 2024, 2023, and 2022 (Figure 9) are consistent with those seen in 2021 (Figure 10) and 2020 (Figure 11), suggesting a recurring trend over this five-year period.
Top-performing supercomputers do not necessarily consume more power. Frontier’s top performance combined with lower power consumption relative to Fugaku indicates efficiency rather than raw power usage as a critical factor. The recurring patterns across the five-year period suggest that advancements in supercomputing technology are facing a challenge in balancing top EE with increasing raw power usage. Differences in power consumption among supercomputers of similar rankings, such as Fugaku and Frontier, suggest that supercomputer architecture and energy management strategies play significant roles in determining overall efficiency. The differences observed among Top10 and Green10 (e.g., Fugaku, ranked second in performance, consumes more power than Frontier, LUMI, Summit, etc.) are most likely due to the comparison of supercomputers of different generations (Fugaku is older than Frontier, and Summit is older than LUMI).

Observation and Implication

Balancing high computational performance with top EE remains a critical challenge in the pursuit of sustainability. Although advanced supercomputers are enabling major breakthroughs in artificial intelligence and supporting complex simulations across different scientific domains, the substantial power requirements raise serious sustainability concerns. These systems consume significant amounts of energy, often resulting in elevated carbon dioxide emissions, especially when powered by non-renewable energy sources. Top-performing supercomputers typically consume significantly more power than top-EE systems, which are specifically designed to operate with minimal energy usage. Due to the direct correlation between power consumption and carbon dioxide emissions, this suggests that high-performance supercomputers contribute considerably more to the carbon footprint compared to the EE counterparts during the period under study.

5.3. Top-Performing Supercomputers’ Power Consumption

Table 1 presents the analysis of the power consumption (kW) of the top-performing supercomputers. Note that the data for some supercomputers had not been released at the time of data collection, which explains why the “N” column contains values below 10 for certain periods. The minimum (Min) column indicates the lowest power consumption among the top-performing supercomputers. Observing the Min column reveals an initial increase in power consumption, which stabilizes at 1764 kW and then drops to 921 kW before increasing again, with the exception of a drop in Nov24. Similarly, the maximum (Max) power consumption shows an increase in Nov20, stabilizes for a period, and then rises again before stabilizing in June24 and Nov24.
In general, both the Min and Max power consumption show an upward trend over the five-year period under investigation. The mean power consumption consistently increases throughout the period, indicating a growing demand for power among the most powerful supercomputers. The standard deviation remains high across the years. However, June22 shows an increase, followed by a reduction in Nov22 and June23 before another sharp increase. This reflects the wide variability in the capabilities and efficiency of the supercomputers. Periods of stabilization in power consumption may be attributed to the same system being on the lists for 2 or 3 consecutive years.
Table 2 presents the correlation for the power consumption of top-performing supercomputers. Strong correlations are observed between Nov23 and June24, Nov22 and June23, and June21 and Nov21, indicating robust similarities in power consumption among the top-performing supercomputers during these periods. The correlation result suggests consistency in power consumption, likely because of sustained efforts to improve performance. Moderate correlations (e.g., June22 and Nov23, Nov21 and Nov22) indicate meaningful but weaker relationships. These suggest some diversity in power consumption, reflecting variations in supercomputer configurations among the powerful supercomputers. Weak correlations (e.g., June23 and Nov24, Nov22 and Nov24) highlight weak relationships, likely reflecting significant changes in the list over the period under investigation. Additionally, the weak correlation observed between the earlier periods and later periods clearly suggests an evolution in power consumption trends caused by new entries in the ranked list over the five-year period.

5.4. Analysis of Energy Efficiency

Table 3 presents a statistical analysis for the top-EER supercomputers. The Min EER refers to the supercomputer with the lowest EER among the top-EE supercomputers, typically ranked in the 10th position. The Max EER represents the supercomputer with the highest EER, typically occupying the number 1 position in the top 10 EER rankings.
The Min and Max EER columns indicate improvements in EER, particularly in recent periods (Nov24, June24, and Nov23) compared to the earlier periods (June20, Nov20, and June21). In the early periods, the EER values were relatively low, especially for the Min EER, with only a slight increase observed for both Min and Max values. However, EER has shown continuous, gradual improvement over the five-year period. The increasing Min EER values suggest that even the supercomputers with the lowest EE within the top-performing group have improved significantly. Notably, a significant improvement in EER begins from June22, particularly for the Max EER, with a continued upward trend extending to the current period. The standard deviation values highlight trends in diversity among EE supercomputers. Lower standard deviation values reflect limited diversity, whereas higher values indicate increasing diversity within the top-EER group. The mean EER exhibits a consistent upward trend, signifying persistent efforts by the top-EE supercomputers to achieve sustainability over the years.
Table 4 presents the correlation values for top-EER supercomputers. The strong correlations observed throughout the study period reveal that supercomputers with similar characteristics dominated the top 10 spots in the EER rankings. The consistent high correlation across the periods suggests uniform progress in EE among the top-performing energy-efficient supercomputers. Observations from Table 4 further indicate that correlations steadily increase from the early periods (June20–Nov21) to the recent periods (Nov23–Nov24), reaching extremely high values. This trend likely reflects global efforts toward sustainability. Adjacent periods consistently maintain strong correlations, likely due to gradual improvement in sustainability, across the periods under study.

5.5. Performance (Rmax) Analysis

Table 5 presents a statistical analysis of the performance of top-performing supercomputers. The worst-case scenario refers to the supercomputer with the lowest performance, typically ranked in the 10th position, whereas the best-case scenario represents the top-performing supercomputer, consistently occupying the 1st position in the top 10. An analysis of the worst-case scenario suggests performance improvement. A similar trend is observed in the best-case scenario, where performance shows a rapid increase, particularly in recent periods compared to the earlier periods (June20, Nov20, June21 and Nov21).
In the early periods, the performance levels were relatively low, especially in the worst case, with slight improvements in both the worst case and the best case. The performance continued to improve modestly with stable variability, as indicated by the standard deviation. The worst-case performance improvement suggests improvements in the top lower-performing supercomputers. A significant jump in performance is observed starting from June22, especially in the best case. The performance keeps increasing continuously. The likely reason for this surge is the entry of the first exascale system, Frontier, into the list in June 2022. The performance for the top system does not evolve in a smooth, linear manner, but rather in “jumps” each time a new top machine is deployed. The values remain constant for a period, as shown in Table 5, because it takes time for the next best system to be deployed. Moreover, for a new system to claim the top position, it must outperform the previous state-of-the-art top performer. The standard deviation remains high, highlighting the diversity in supercomputing performance levels. The mean performance exhibits a consistent improvement throughout the observed period, highlighting the relentless race for higher performance. The period under study indicates that exascale supercomputers dominated the number one spot in the top-performing group. Exascale supercomputers have held the number one position since June22, indicating a new benchmark in high-performance computing. This trend suggests that, moving forward, any supercomputer aiming for the top spot must operate at the exascale level. The substantial leap in performance also signals the beginning of the race toward zettascale computing.
Table 6 presents the correlation analysis for the performance of top-performing supercomputers across the five-year period. It is observed that the correlation relationship is strong, with all correlations exceeding 0.8, indicating a robust relationship between the performances of top-performing supercomputers during the study period. The strong correlations suggest a uniform trajectory in performance improvements, highlighting cohesive progress across the top-performing supercomputers. This consistency likely reflects a global trend of prioritizing comparable performance enhancements among supercomputers. Furthermore, the strong relationships indicate that mostly the same supercomputers with similar characteristics dominated the top performance rankings within the period under study.

5.6. Performance Analysis of Top-Performing and Most Energy-Efficient Supercomputers

An analysis of the performance (Rmax) of top-performing supercomputers and the performance (Rmax) of top-EE supercomputers has been conducted to determine the relationship between the two groups. Generally, Table 7 indicates a negative correlation between the performance of top-performing supercomputers and the performance of top-energy-efficiency supercomputers, with the exception of three weak positive correlations. This inverse relationship suggests that as the performance of top-performing supercomputers increases, the performance of energy-efficient supercomputers tends to decrease. This trend has been consistently observed over the five-year period from 2020 to 2024.
In other words, achieving higher computational power among top-performing supercomputers often comes at the expense of EE. Consequently, supercomputers with the highest computational performance may not necessarily operate at peak EE, highlighting a trade-off between maximizing computational power and optimizing EE. The likely reasons for the differences in performance and EE between the two groups of supercomputers are different system sizes, different types of processors and accelerators (CPU vs. GPU or other accelerators), different energy systems, and different hardware generations (i.e., “age” of systems leading to different silicon sizes).

6. Conclusions and Future Research

In this era of AI, the race toward faster and more energy-efficient supercomputers has brought critical challenges, especially in balancing computational performance with sustainability. This article has comprehensively examined the interplay between the performance of the top 10 highest-performing systems and the EE of the top 10 most energy-efficient supercomputers over a period of five years, providing rare longitudinal insight into the trade-offs that define modern HPC. The findings reveal that while performance has increased rapidly, EE has not kept pace at the same rate, revealing a persistent tension between maintaining performance and sustainability. Through comparative analyses of the TOP500 and Green500 rankings, the study shows that supercomputers with the top performance typically operate at significant power costs, with core counts in the millions and power demands reaching tens of megawatts. In contrast, the most energy-efficient supercomputers tend to be smaller in scale, highlighting a structural limitation in current HPC architecture. Notably, a few exceptional supercomputers, such as Frontier, LUMI, and MareNostrum, demonstrated the rare capability to appear in the top 10 of both rankings, signaling that it is feasible to achieve a balance between top performance and energy efficiency, though this balance is difficult to maintain. As AI continues to evolve rapidly and increase global energy demands, the sustainability of supercomputing infrastructures becomes a critical concern. The projected increase in carbon emissions from AI-powered data centers underscores the need for green innovation in hardware design, cooling systems, and renewable energy integration. This study recommends a paradigm shift toward evaluating performance not only in terms of FLOPS but also through the lens of ecological responsibility and long-term sustainability. This research can provide policymakers, researchers, and technologists with foundational evidence for rethinking supercomputing in the era of AI. As the HPC community and industries prepare for the next phase of supercomputing, zettascale computing, there is an urgent need to ensure that the future of HPC not only emphasizes high performance but is also greener and more sustainable. It would be interesting for future research to quantify the carbon footprint of the top-performing supercomputers over comparable study periods. The study also suggests that similar research should be conducted in the future using data from the Graph500. The effects of the architectures and cooling technologies of different supercomputers (e.g., CPUs vs. GPUs) on energy and performance could likewise be studied in future work.

Funding

This research was funded by the Deanship of Research and Innovation at the University of Hafr Al Batin through project number 0164-1446-S.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

TOP500: https://www.top500.org/ (accessed on 12 December 2024).

Conflicts of Interest

The author declares that there are no conflicts of interest.

Figure 1. Stages involved in the procedure for implementing the methods.
Figure 2. Supercomputer core counts for 2024: core counts of the top-performing supercomputers for (A) November and (C) June; core counts of the most energy-efficient supercomputers for (B) November and (D) June.
Figure 3. Supercomputer core counts for 2023: core counts of the top-performing supercomputers for (A) November and (C) June; core counts of the most energy-efficient supercomputers for (B) November and (D) June.
Figure 4. Supercomputer core counts for 2022: core counts of the top-performing supercomputers for (A) November and (C) June; core counts of the most energy-efficient supercomputers for (B) November and (D) June.
Figure 5. Supercomputer core counts for 2021: core counts of the top-performing supercomputers for (A) November and (C) June; core counts of the most energy-efficient supercomputers for (B) November and (D) June.
Figure 6. Supercomputer core counts for 2020: core counts of the top-performing supercomputers for (A) November and (C) June; core counts of the most energy-efficient supercomputers for (B) November and (D) June.
Figure 7. Supercomputer power consumption for 2024: power consumption of the top-performing supercomputers for (A) November and (C) June; power consumption and energy efficiency rating of the most energy-efficient supercomputers for (B) November and (D) June.
Figure 8. Supercomputer power consumption for 2023: power consumption of the top-performing supercomputers for (A) November and (C) June; power consumption and energy efficiency rating of the most energy-efficient supercomputers for (B) November and (D) June.
Figure 9. Supercomputer power consumption for 2022: power consumption of the top-performing supercomputers for (A) November and (C) June; power consumption and energy efficiency rating of the most energy-efficient supercomputers for (B) November and (D) June.
Figure 10. Supercomputer power consumption for 2021: power consumption of the top-performing supercomputers for (A) November and (C) June; power consumption and energy efficiency rating of the most energy-efficient supercomputers for (B) November and (D) June.
Figure 11. Supercomputer power consumption for 2020: power consumption of the top-performing supercomputers for (A) November and (C) June; power consumption and energy efficiency rating of the most energy-efficient supercomputers for (B) November and (D) June.
Table 1. Statistical analysis for the supercomputers’ power consumption.
List | N | Min Power (kW) | Max Power (kW) | Mean (kW) | Std. Dev. (kW)
Nov24 | 9 | 3387 | 38,698 | 17,373.11 | 13,210.231
June24 | 8 | 4159 | 38,698 | 15,679.12 | 13,076.048
Nov23 | 8 | 2560 | 29,899 | 13,986.75 | 10,162.440
June23 | 10 | 2589 | 29,899 | 12,264.40 | 9098.486
Nov22 | 10 | 2589 | 29,899 | 11,924.70 | 9028.653
June22 | 10 | 921 | 29,899 | 11,148.40 | 9744.542
Nov21 | 9 | 1764 | 29,899 | 10,059.67 | 9618.801
June21 | 9 | 1764 | 29,899 | 10,052.89 | 9624.743
Nov20 | 8 | 1764 | 29,899 | 10,993.50 | 9837.126
June20 | 9 | 1344 | 28,335 | 9686.44 | 9409.253
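As a rough illustration of how descriptive statistics such as those in Table 1 (and, analogously, Tables 3 and 5) can be computed, the sketch below assumes a hypothetical CSV export of a top-10 sublist with a "Power (kW)" column; the file name and column label are assumptions for illustration, not the actual TOP500 export format.

```python
# Minimal sketch: descriptive statistics of reported power consumption for one
# list edition. "top10_nov24.csv" and the "Power (kW)" column are hypothetical.
import pandas as pd

df = pd.read_csv("top10_nov24.csv")
power = df["Power (kW)"].dropna()  # keep only systems that report power

print("N:", power.count())
print("Min Power (kW):", power.min())
print("Max Power (kW):", power.max())
print("Mean (kW):", round(power.mean(), 2))
print("Std. Dev. (kW):", round(power.std(), 3))  # sample standard deviation (ddof=1)
```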
Table 2. Correlation analysis of power consumption for top-performing supercomputers.
List | Nov24 | June24 | Nov23 | June23 | Nov22 | June22 | Nov21 | June21 | Nov20
June24 | 0.522
Nov23 | 0.644 | 0.928
June23 | 0.156 | 0.694 | 0.473
Nov22 | 0.139 | 0.658 | 0.407 | 0.997
June22 | 0.379 | 0.688 | 0.640 | 0.501 | 0.488
Nov21 | 0.269 | 0.459 | 0.617 | 0.599 | 0.548 | 0.205
June21 | 0.270 | 0.459 | 0.617 | 0.599 | 0.548 | 0.205 | 1.000
Nov20 | 0.631 | 0.390 | 0.547 | 0.342 | 0.285 | 0.600 | 0.607 | 0.607
June20 | 0.310 | 0.397 | 0.518 | 0.316 | 0.257 | 0.307 | 0.582 | 0.581 | 0.549
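Correlation matrices like those in Tables 2, 4, and 6 can be assembled by placing the ranked values of each list edition in columns and correlating the columns pairwise. The following is a minimal sketch under the assumption that each column holds the ten reported power values of one edition aligned by rank; the column names and values are placeholders rather than study data.

```python
# Minimal sketch: pairwise Pearson correlations across list editions.
# Each column holds ten power values (MW) of one edition, aligned by rank;
# the values below are placeholders, not the data analyzed in the study.
import pandas as pd

editions = pd.DataFrame({
    "Nov24":  [38.7, 29.6, 24.6, 29.9, 8.5, 7.5, 7.1, 7.1, 3.4, 2.9],
    "June24": [38.7, 24.6, 29.9, 7.5, 7.1, 6.2, 4.2, 4.2, 2.9, 2.6],
    "Nov23":  [29.9, 24.6, 22.7, 7.5, 7.1, 6.2, 5.6, 4.9, 2.6, 2.6],
})

corr = editions.corr(method="pearson")  # symmetric matrix of pairwise r values
print(corr.round(3))
```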
Table 3. Statistical description of energy efficiency.
List | Min EER (Gflops/W) | Max EER (Gflops/W) | Mean (Gflops/W) | Std. Deviation (Gflops/W)
Nov24 | 62.803 | 72.733 | 67.57050 | 3.183307
June24 | 56.983 | 72.733 | 63.82320 | 4.945468
Nov23 | 45.117 | 65.396 | 55.12390 | 6.331100
June23 | 41.411 | 65.396 | 54.28460 | 7.193580
Nov22 | 38.555 | 65.091 | 52.37460 | 9.320747
June22 | 29.926 | 62.684 | 42.51240 | 11.222643
Nov21 | 26.195 | 39.379 | 30.20030 | 4.027269
June21 | 24.058 | 29.700 | 26.36860 | 2.139020
Nov20 | 15.418 | 26.195 | 20.53800 | 4.868887
June20 | 14.661 | 21.108 | 16.85790 | 2.389266
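The energy-efficiency rating (EER) summarized in Table 3 expresses sustained performance per watt. As a minimal sketch, assuming Rmax is reported in Tflop/s and power in kW (so that their ratio is already in Gflops/W), the rating can be derived as follows; the system names and figures are illustrative only.

```python
# Minimal sketch: energy-efficiency rating (EER) in Gflops/W.
# Rmax[Tflop/s] / power[kW] equals Rmax[Gflop/s] / power[W], since numerator
# and denominator are both scaled by 1000. Names and values are illustrative.
systems = {
    "System A": {"rmax_tflops": 515_000.0, "power_kw": 7_100.0},
    "System B": {"rmax_tflops": 45_400.0, "power_kw": 640.0},
}

for name, s in systems.items():
    eer = s["rmax_tflops"] / s["power_kw"]  # Gflops/W
    print(f"{name}: {eer:.3f} Gflops/W")
```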
Table 4. Correlation analysis of most energy-efficient supercomputers.
List | Nov24 | June24 | Nov23 | June23 | Nov22 | June22 | Nov21 | June21 | Nov20
June24 | 0.984
Nov23 | 0.987 | 0.975
June23 | 0.971 | 0.966 | 0.991
Nov22 | 0.939 | 0.967 | 0.923 | 0.922
June22 | 0.972 | 0.974 | 0.952 | 0.949 | 0.930
Nov21 | 0.904 | 0.945 | 0.909 | 0.899 | 0.950 | 0.863
June21 | 0.928 | 0.962 | 0.933 | 0.930 | 0.963 | 0.932 | 0.934
Nov20 | 0.858 | 0.892 | 0.871 | 0.890 | 0.915 | 0.907 | 0.826 | 0.926
June20 | 0.904 | 0.947 | 0.917 | 0.904 | 0.936 | 0.892 | 0.954 | 0.987 | 0.872
Table 5. The performance (Rmax) of top-performing supercomputers.
List | Worst Case Rmax (Pflop/s) | Best Case Rmax (Pflop/s) | Mean (Pflop/s) | Std. Deviation (Pflop/s)
Nov24 | 208.10 | 1742.00 | 685.2010 | 512.92329
June24 | 121.40 | 1206.00 | 455.7410 | 373.37786
Nov23 | 94.64 | 1194.00 | 390.3790 | 336.75663
June23 | 61.44 | 1194.00 | 271.5830 | 347.76592
Nov22 | 61.44 | 1102.00 | 255.9830 | 321.98705
June22 | 46.10 | 1102.00 | 227.4030 | 328.35762
Nov21 | 30.05 | 442.01 | 108.3650 | 122.27634
June21 | 23.52 | 442.01 | 107.0840 | 122.98281
Nov20 | 22.40 | 442.01 | 102.8650 | 125.30433
June20 | 21.23 | 415.53 | 94.2640 | 120.53683
Table 6. Performance analysis with correlation for top-performing supercomputers.
List | Nov24 | June24 | Nov23 | June23 | Nov22 | June22 | Nov21 | June21 | Nov20
June24 | 0.984
Nov23 | 0.943 | 0.938
June23 | 0.917 | 0.909 | 0.967
Nov22 | 0.930 | 0.919 | 0.965 | 0.998
June22 | 0.899 | 0.902 | 0.931 | 0.989 | 0.989
Nov21 | 0.880 | 0.874 | 0.945 | 0.993 | 0.989 | 0.990
June21 | 0.883 | 0.876 | 0.945 | 0.993 | 0.989 | 0.990 | 1.000
Nov20 | 0.890 | 0.885 | 0.955 | 0.995 | 0.991 | 0.990 | 0.999 | 0.999
June20 | 0.903 | 0.903 | 0.967 | 0.998 | 0.994 | 0.989 | 0.995 | 0.994 | 0.997
Table 7. Performance (Rmax) analysis for top-performing (TP) and most energy-efficient (EP) supercomputers.
TP vs. EP | r | TP vs. EP | r
June24TP vs. June24EP | −0.481 | Nov22TP vs. June22EP | 0.224
Nov24TP vs. Nov24EP | −0.289 | June22TP vs. Nov22EP | 0.224
Nov24TP vs. June24EP | −0.42 | June21TP vs. June21EP | −0.21
June24TP vs. Nov24EP | −0.271 | Nov21TP vs. Nov21EP | −0.199
June23TP vs. June23EP | −0.23 | Nov21TP vs. June21EP | −0.218
Nov23TP vs. Nov23EP | −0.35 | June21TP vs. Nov21EP | −0.197
Nov23TP vs. June23EP | −0.216 | June20TP vs. June20EP | −0.302
June23TP vs. Nov23EP | −0.281 | Nov20TP vs. Nov20EP | −0.265
June22TP vs. June22EP | 0.232 | Nov20TP vs. June20EP | −0.305
Nov22TP vs. Nov22EP | −0.224 | June20TP vs. Nov20EP | −0.25