1. Introduction
Supercomputers are the main engines driving innovation in the artificial intelligence (AI) boom, particularly in the development and inference of large language models (LLMs). Companies that adopt supercomputers for AI innovation typically gain a competitive edge, positioning themselves to lead the race to dominate the AI industry. With the emergence of NVIDIA as the leader in AI, many organizations are increasingly turning to high-performance computing (HPC) systems as a catalyst for rapid AI advancement. However, not all supercomputers are optimized for AI: some are designed to handle both AI workloads and other scientific computations, while others offer the flexibility to be upgraded or extended to support AI capabilities [1].
For example, NVIDIA’s Eos delivers an impressive 18.4 exaflops (EFlops) of AI computational performance, advancing AI research, while NVIDIA’s Selene extends its capabilities to support research in robotics, data analytics, and autonomous vehicle technology. Fujitsu’s Fugaku operates at the exascale level, and its integration of HPC and AI workloads has been instrumental in advancing AI research and development; notably, a Japanese LLM is a product of Fugaku’s capabilities. Aurora is another powerful HPC system designed for scientific computations, including brain mapping [1]. In addition, other supercomputers with the potential to accommodate AI capabilities include Leonardo, Frontier, the Large Unified Modern Infrastructure (LUMI), Sierra, Perlmutter, and Summit [1]. The HPC community has entered the era of exascale supercomputers. However, with the rapid evolution of AI technology, exascale supercomputers may struggle to keep pace with the increasing demands of future AI models. To address this challenge, Japan has announced a five-year project to develop zettascale supercomputers [2]. Typically, an enormous amount of energy is required to power supercomputers [3]. The power draw ranges from tens of thousands of kilowatts to almost 50,000 kW; for example, Aurora consumes 38,698 kW. LLMs consume substantial power across multiple phases, including pre-training, storage, data transfer, inference, and hardware manufacturing.
The workload demand in data centers for 2023 was estimated at 1088 million compute instances, with an estimated power consumption of 411 terawatt-hours (TWh) [4]. For years, power consumption in data centers remained stable despite growing workloads [5]. However, the evolution of AI has sparked an increasing demand for energy to power supercomputers in data centers: power demand is estimated to increase by 160% by 2030, requiring 200 TWh, and Europe is estimated to require USD 1 trillion to upgrade its power grid to cope with the increasing energy demand of AI [4]. For example, GPT-3 was pre-trained on 10,000 graphics processing units (GPUs) at a Microsoft data center, and GLaM was pre-trained on TPUv4 chips at a data center in Oklahoma [6]. The power usage effectiveness (PUE) of the data centers used for BLOOM, GPT-3, OPT, and Gopher is 1.2, 1.1, 1.092, and 1.08, respectively [7].
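To put these PUE figures in context, PUE relates a facility's total energy draw to the energy consumed by its IT equipment alone; the closer the ratio is to 1, the less overhead is spent on cooling and power delivery. A brief worked example follows (the 10 GWh figure is purely illustrative, not a reported value):

```latex
\mathrm{PUE} = \frac{E_{\text{facility}}}{E_{\text{IT}}}
\quad\Longrightarrow\quad
E_{\text{facility}} = \mathrm{PUE} \times E_{\text{IT}}
= 1.2 \times 10\ \mathrm{GWh} = 12\ \mathrm{GWh}.
```

At BLOOM's reported PUE of 1.2, every 10 GWh consumed by the computing hardware implies roughly 12 GWh drawn by the facility overall.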
Currently, companies are investing billions of dollars into AI projects to gain a competitive edge over their competitors and win the race to lead the industry. For example, Workday injected USD 500 million into AI, focusing on generative AI. Meta significantly increased its AI investment from USD 30 billion in 2023 to USD 40 billion in 2024. A consortium called the Global AI Infrastructure Investment Partnership, comprising Microsoft, MGX, Global Infrastructure Partners, and BlackRock, with NVIDIA as a technical adviser, is raising an ambitious amount of between USD 80 and 100 billion to invest in the construction of data centers and energy-efficient infrastructure to power them [8]. Alphabet, the parent company of Google, has invested a substantial amount in AI. In the last quarter of 2023, IBM pumped USD 500 million into enterprise AI projects. In the second quarter of 2024, CISCO established a USD 1 billion AI investment fund, of which USD 200 million was allocated to generative AI in companies such as Mistral AI, Cohere, and Scale AI. Microsoft pledged to inject USD 3.2 billion over a period of three years; in addition, Microsoft backed OpenAI with an investment of USD 13 billion, leading to the integration of ChatGPT into the Bing search engine. Sapphire Ventures has announced an investment of USD 1 billion to support startups focusing on AI technology [8].
The significant investments in AI, amounting to hundreds of billions of US dollars, have the potential to substantially increase demand for faster supercomputers, scale up supercomputer operations, and exacerbate power consumption issues. This, in turn, could lead to higher carbon dioxide emissions, particularly if the power sources are not renewable. However, giant companies like Amazon, Meta, Google, Microsoft, and Apple have pledged to power their data centers with renewable energy [9,10]. As mentioned previously, the world has entered the era of exascale supercomputing and generative AI [11], thus drawing significant attention to sustainability [12]. Supercomputers contribute to the carbon dioxide footprint, particularly in the regions where their data centers are located [13], with HPC applications like large-scale modeling adding to emissions [14]. The carbon dioxide footprint from computation is substantial and poses a threat to the climate [15], with HPC and data centers emitting approximately 100 megatonnes of carbon dioxide annually, significantly contributing to climate change [5]. By 2030, the carbon footprint of the IT industry is projected to increase ten-fold, with power consumption estimated at 200 TWh per year [5]. Supercomputers emit greenhouse gases [16], whose atmospheric concentration profoundly influences climate change. The consequences of climate change include rising sea levels, severe wildfires (e.g., in Australia), destructive typhoons (e.g., in the Pacific), devastating droughts (e.g., in Africa), and adverse impacts on human health [5]. The growing demand for faster supercomputers with powerful computational capabilities has significantly increased power consumption and environmental impact [16]. Therefore, it is critical for the scientific community to consider both the environmental impact and the large-scale computational operations running on supercomputers [17]. Rapid developments in machine learning pose a challenge for the environment because model pre-training requires extensive computational resources, materials, and energy [7]. This creates a challenge in balancing supercomputer performance with sustainability, giving rise to a multi-objective optimization problem.
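One simple way to make this trade-off concrete, under our own illustrative framing rather than a formulation taken from the literature, is as a bi-objective optimization over candidate system configurations s:

```latex
\max_{s}\; R_{\max}(s)
\qquad \text{and} \qquad
\max_{s}\; \mathrm{EER}(s) = \frac{R_{\max}(s)}{P(s)},
```

where R_max(s) is sustained performance and P(s) is average power draw. Because raising R_max typically requires more cores and hence more power, no single configuration generally maximizes both objectives, and the best systems trace out a Pareto front between performance and efficiency.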
Many studies have analyzed the performance, power consumption, and carbon footprint of supercomputers. However, most research has extensively examined the issues of energy efficiency and performance in isolation. Only limited studies have attempted to explore the relationship between performance and energy efficiency (EE), and these are typically restricted to one or two supercomputers within a given year. Such studies often fail to provide insights into trends over time. Another issue is that previous research has overlooked the behavior of groups of top-performing supercomputers and top-energy-efficiency supercomputers in terms of performance and sustainability simultaneously over time. The relationship between the top-performing and most energy-efficient supercomputers has frequently been ignored in past studies.
This paper comprehensively analyzes the interplay between performance and EE in supercomputers by examining year-over-year trends. The study also examines the behavior of top-performing and top-EE supercomputers, aiming to provide insights into the trade-offs between sustainability and performance and to address the lack of longitudinal and group-based analyses in the published literature.
A summary of the study contributions is presented as follows:
The article compares top-performing supercomputers with top-energy-efficiency supercomputers, revealing a consistent trade-off: high performance typically comes at the cost of lower energy efficiency.
The paper identifies rare supercomputers, such as Frontier, LUMI, MareNostrum, and Adastra, that demonstrate the feasibility of achieving both top performance and top energy efficiency.
A detailed statistical analysis of power usage, core counts, and EE shows that top-performing supercomputers have core counts in the millions. Energy-efficient supercomputers typically operate with thousands of cores. Frontier achieved an exceptional balance but faced a challenge in maintaining it over the five-year period under study, indicating an unstable trade-off.
The research provides empirical evidence that supercomputers with higher performance tend to consume significantly more power, thereby having the potential to contribute more to carbon emissions.
The study systematically analyzes five years of data, from 2020 to 2024, collected from the TOP500 rankings; analyses spanning such a long period are rare in the literature.
The article highlights the multi-objective challenges of balancing performance with sustainability.
The study highlights a shift from CPU-dominated supercomputers to GPU-dominated supercomputers as a result of AI workloads, especially LLMs. Also, it discusses how AI evolution is driving demand for GPUs, increasing power needs, and affecting the sustainability of supercomputers.
The research can provide policymakers, researchers, and technologists with foundational evidence for rethinking supercomputing in the era of AI.
The rest of this article is organized as follows: Section 2 introduces the fundamental concepts required to understand the study. Section 3 reviews related work, placing the current research within the context of the existing literature. Section 4 describes the methodology used in the study, and Section 5 presents the results along with a detailed discussion. Finally, Section 6 offers concluding remarks.
4. Methodology
This section presents the detailed procedure of the methodology adopted in this study. It outlines the systematic stages, shown in Figure 1, that were followed to achieve the research objectives, including data source selection, data collection, and data analysis strategies. Each component of the methodology is carefully described to ensure transparency and reproducibility. The section also highlights the statistical tools and techniques employed, providing a comprehensive understanding of how the study was carried out.
The three most relevant HPC ranking lists in the HPC community are TOP500, Green500, and Graph500 [44,49]. This study focuses on TOP500 and Green500 to cover the scope of performance and sustainability. Supercomputers must first be listed in TOP500 before qualifying for the Green500 ranking, encouraging designers to prioritize both high performance and EE.
The study utilized TOP500 and Green500 as data sources because they are well-established and widely accepted reference points within the HPC community [30,32,34]. The primary source for data collection is the official TOP500 website, where the data is publicly available. TOP500 releases rankings of the 500 most powerful supercomputers worldwide twice a year, in June and November; the most recent data at the time of this investigation was from November 2024. For this study, we focused on the top 10 supercomputers for each release over a span of five years. This approach accounts for variations in rankings driven by technological advancements and the deployment of new systems over time, and the five-year window provides a recent perspective in this era of AI. Data on Rmax (PFlop/s), Rpeak (PFlop/s), cores, and power consumption (kW) for the top 10 supercomputers in terms of performance were collected from June 2020 to November 2024, resulting in a total of 10 lists.
Unlike TOP500, Green500 ranks the 500 most energy-efficient supercomputers globally, promoting a balance of performance and sustainability. The data is released simultaneously with the TOP500 rankings. Data on the top-EE supercomputers were extracted for the same period, from June 2020 to November 2024. The variables collected include cores, power consumption (kW), Rmax, Rpeak, and the EE rating (EER) in GFlops/watt. The data focuses on the top 10 supercomputers in terms of EE, as the study addresses both top performance and sustainability. This approach creates two groups of supercomputers, the top-performing category and the top-EE category, enabling an in-depth analysis of both performance and sustainability. In this study, the format MonthYear[index measure] is used for clearer representation during analysis. For example, Nov23E refers to the top 10 supercomputers in terms of EE in November 2023, while June23P represents the top 10 supercomputers in terms of performance in June 2023.
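To illustrate how such a dataset can be organized for analysis, the sketch below arranges one metric as one column per period/group using the naming convention above. This is our own illustrative structure, not the authors' actual workflow, and the values are placeholders rather than real TOP500 figures.

```python
import pandas as pd

# Each column holds the Rmax values (PFlop/s) of one top-10 list,
# labeled with the MonthYear[index] convention (P = performance group).
# All numbers are placeholders, not actual TOP500 values.
rmax = pd.DataFrame({
    "June23P": [1200.0, 450.0, 310.0, 240.0, 180.0,
                95.0, 94.0, 71.0, 64.0, 61.0],
    "Nov23P":  [1200.0, 590.0, 560.0, 450.0, 380.0,
                310.0, 240.0, 180.0, 95.0, 94.0],
})

# Descriptive statistics per period, as reported in the results tables.
print(rmax.describe().loc[["min", "max", "mean", "std"]])
```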
Data Analytics
The data collected as described in the preceding section was analyzed using statistical tools, with the SPSS 8 statistical package serving as the platform for the analysis. Descriptive statistics were employed to summarize the data, providing insights into central tendency, variability, distribution, core trends, power consumption trends, yearly comparisons, and EE trends. Correlation analysis was employed to examine the relationships between variables in the two groups of supercomputers, with the aim of identifying potential trade-offs. The relationships analyzed include the following (a minimal computational sketch of this correlation analysis appears after the list):
i. Performance (Rmax) across years;
ii. EE across years;
iii. Power consumption across years;
iv. Performance (Rmax) of top-performing supercomputers vs. performance (Rmax) of top-EE supercomputers.
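The sketch below shows how the pairwise correlations could be reproduced with pandas/SciPy; the study itself used SPSS, so this is an illustrative equivalent, and the values are placeholders. Entries are paired by rank position (1st with 1st, 2nd with 2nd, and so on), which is how two top-10 lists of equal length can be correlated.

```python
import pandas as pd
from scipy.stats import pearsonr

# Placeholder Rmax values (PFlop/s), paired by rank position 1..10.
top_perf_rmax = pd.Series([1200.0, 450.0, 310.0, 240.0, 180.0,
                           95.0, 94.0, 71.0, 64.0, 61.0])   # e.g., June23P
top_ee_rmax   = pd.Series([2.0, 1200.0, 380.0, 61.0, 34.0,
                           19.0, 10.0, 3.0, 2.4, 2.0])      # e.g., Nov23E

# Pearson correlation between the two top-10 lists (relationship iv above).
r, p = pearsonr(top_perf_rmax, top_ee_rmax)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```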
5. Results and Discussion
This section presents the results of the data analysis conducted over a five-year period, examining supercomputers from two perspectives: top-performing systems and those with the highest EE. The analysis focuses on key performance indicators that distinguish these two groups and provides insights into characteristics and trends. The evaluation covers four main dimensions: computational performance, power consumption, core count, and EER. By comparing these metrics across the two groups, the study highlights how priorities differ between achieving raw performance and optimizing for EE.
5.1. Core Counts
Figure 2 illustrates the stark difference in the number of cores between the top-performing supercomputers (Figure 2A,C) and the top-EE supercomputers (Figure 2B,D). The top-performing supercomputers boast core counts in the millions, whereas the core counts of the top-energy-efficiency supercomputers are typically limited to the thousands. Notably, in 2024, none of the supercomputers categorized as top-EE has more than 500,000 cores. On the contrary, nearly every top-performing system has at least 1 million cores; the only exceptions are Eos and MareNostrum, which fall below the 1-million-core threshold.
Figure 3 shows the core counts of supercomputers listed in the top-performing and top-energy-efficiency categories for the year 2023. With the exception of MareNostrum and Eos, the top-performing supercomputers have core counts ranging from at least 1 million to over 10 million, as seen in Sunway. In contrast, the top-EE supercomputers consistently maintain core counts in the thousands, similar to the trend observed in 2024. Interestingly, Frontier and LUMI, despite having millions of cores, managed to secure spots in the top 10 EE rankings, appearing in both the top 10 performance (Figure 3A,C) and top 10 EE (Figure 3B,D) categories. In addition, MareNostrum, with 680,960 cores (just under 1 million), also achieved positions in both rankings, highlighting its versatility in balancing performance with sustainability.
Figure 4 presents the core counts of supercomputers listed in both the top performance rankings (Figure 4A,C) and the top EER rankings (Figure 4B,D) for 2022. Similar to the trends observed in 2023 and 2024, the core counts of top-performing supercomputers are generally in the millions, with the exceptions of Perlmutter, Selene, and Adastra. On the contrary, the core counts of supercomputers in the top EE rankings remain in the thousands, except for LUMI and Frontier, which stand out with millions of cores. Notably, Frontier and LUMI demonstrate exceptional balance by securing positions in both the top 10 in terms of performance and the top 10 in EE, despite their large core counts. Additionally, Adastra, with fewer than 500,000 cores, is listed in both rankings, showcasing how both hundreds of thousands and millions of cores can achieve a balance between top computational performance and sustainability. A particularly remarkable observation is Frontier's position as the top-performing supercomputer, holding the number one spot, while ranking second in the top-EE category. This achievement highlights an impressive balance between unparalleled computational power and sustainability, marking a significant milestone in the history of supercomputing.
Figure 5 presents the core counts of the top-performing supercomputers (Figure 5A,C) and top-energy-efficiency supercomputers (Figure 5B,D) for 2021. Among the top-performing supercomputers, the core counts exhibit a balanced distribution: 50% have core counts in the millions, while the other 50% have core counts in the hundreds of thousands. This pattern is consistent in both November and June, where half of the supercomputers have fewer than 1 million cores. Despite changes in the top performance rankings with new entries, this 50%–50% ratio in core counts remains a consistent trend.
In contrast, the core counts of supercomputers in the top EE rankings are much lower, consistently remaining in the thousands. None of the top-EE supercomputers in 2021 had core counts of 1 million or higher. Notably, MN-3, which secured the number 1 spot on the list of the 10 most energy-efficient supercomputers, had only 1664 cores, a stark contrast to the millions of cores seen in the top 10 systems in terms of performance. Interestingly, Perlmutter and Juwels, both having fewer than a million cores, achieved recognition in both the top 10 for performance and the top 10 for EE. This demonstrates the ability to balance high computational performance with sustainability, challenging the notion that large core counts are essential for achieving peak performance.
Figure 6 illustrates the core counts of supercomputers ranked in the top 10 for both performance and EE in 2020. Similar to the trend observed in 2021, the core counts of the top-performing supercomputers show a balanced distribution: 50% have core counts in the millions, while the remaining 50% have core counts in the hundreds of thousands. For the EE supercomputers, most core counts remain in the thousands, with the notable exceptions of Fugaku, Summit, and NA-1, whose core counts are in the millions. In addition, Juwels, Selene, HPC5, Fugaku, and Marconi-100 are notable for appearing in both the top 10 for performance and the top 10 for EER.
The year 2020 stands out as a significant milestone for achieving a balance between core counts for powerful computation and sustainability. Five supercomputers, comprising a combination of systems with hundreds of thousands and millions of cores, achieved placement in both the top performance and top energy efficiency rankings. This highlights the ability to balance exceptional computational power with sustainable energy use.
Observations and Implications
A closer look at the 2024 trend reveals that Frontier and LUMI were unable to maintain the positions in the top 10 EE rankings that they achieved in 2023 and 2022, though they retained spots among the top 10 supercomputers in terms of performance. This shift is likely because newer, more energy-efficient systems entered the Green500 list; it therefore appears that more effort is being devoted to improving the energy efficiency of new machines than to improving their performance. It has also been observed that supercomputers consistently securing the number 1 spot in the top 10 EE rankings typically have a very small number of cores. The only exception is Frontier TDS, which once achieved the top position with 120,832 cores. In most cases, however, the number of cores for the top-EE supercomputers remains below 20,000 throughout the period under study.

The disparity in core counts between top-performing and top-EE supercomputers highlights a significant trade-off in achieving peak computational performance. Supercomputers optimized for EE appear to have lower core counts, possibly to reduce energy consumption and improve power utilization ratios. This trend likely underscores the strategy of minimizing core counts as a critical factor in achieving and maintaining sustainability. The cases of Frontier and LUMI demonstrate that it is possible to achieve a balance between powerful computational performance and EE with millions of cores, challenging the conventional notion that a lower core count is necessary to attain top EE. Similarly, the inclusion of MareNostrum suggests that balancing performance and sustainability does not always require millions of cores; a lower core count can also succeed in bridging this gap.

Overall, there is usually very little overlap between the Top10 and Green10 lists, mainly because the systems on these lists differ greatly in size. To achieve the highest performance, a system needs to be very large, with many cores; however, the larger a system is, the harder it becomes to achieve sustainability. As a result, the most energy-efficient systems on Green500 are typically much smaller, with far fewer cores, than the top-performing systems on TOP500, except in the rare cases already discussed.
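A simple way to surface the rare overlap systems discussed above is a set intersection of the two top-10 lists per period. The sketch below is illustrative; the membership shown approximates the 2022 lists described in the text rather than quoting them exactly.

```python
# Systems in the top 10 by performance (TOP500) and by efficiency (Green500)
# for one period; names approximate the 2022 situation described in the text.
top10_performance = {"Frontier", "Fugaku", "LUMI", "Summit", "Sierra",
                     "Leonardo", "Perlmutter", "Selene", "Sunway", "Adastra"}
top10_efficiency = {"Frontier TDS", "Frontier", "LUMI", "Adastra", "MN-3",
                    "Champollion", "SSC-21", "PreE", "HiPerGator", "Tethys"}

# The intersection yields the "balanced" systems appearing in both rankings.
balanced = sorted(top10_performance & top10_efficiency)
print(balanced)  # ['Adastra', 'Frontier', 'LUMI']
```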
5.2. Power Consumption
Figure 7 illustrates the power consumption of the top-performing supercomputers and the top-EE supercomputers. Among the top-performing systems, Tuolumne has the lowest power consumption at 3387 kW, whereas Venado has the highest power consumption among the top-EE supercomputers at 1662 kW. This highlights the significant disparity in power consumption between these two groups of supercomputers. It is evident that the top-performing supercomputers generally consume much more power than the top-EE supercomputers. Notably, there is no power consumption data available for Eagle. In 2023 (Figure 8), a similar pattern to 2024 is observed. The power consumption of the top-performing supercomputers remains significantly higher, except for Frontier and LUMI, which were listed in both ranking categories.
The data presented in Figure 9 reveals that the pattern of power consumption of top-performing supercomputers is similar to that observed in 2024 and 2023. Frontier occupies the number one spot among the top-performing supercomputers while ranking second in EE ratings. The power consumption of these supercomputers varies, as indicated by the differing lengths of the bars in the chart (Figure 9). However, a direct relationship between performance and energy consumption cannot be deduced. For instance, Fugaku, ranked second in performance, consumes more power than Frontier, which holds the top spot. A similar discrepancy is observed between LUMI and Summit, where power consumption does not align with performance ranking. In June 2022, however, LUMI exhibited both higher performance and greater power consumption compared to Leonardo, ranked fourth. This inconsistency highlights the complexity of the relationship between performance and energy consumption. Interestingly, the patterns observed in 2024, 2023, and 2022 (Figure 9) are consistent with those seen in 2021 (Figure 10) and 2020 (Figure 11), suggesting a recurring trend over this five-year period.
Top-performing supercomputers do not necessarily consume more power than their peers. Frontier's top performance combined with its lower power consumption relative to Fugaku indicates that efficiency, rather than raw power usage, is the critical factor. The recurring patterns across the five-year period suggest that advancements in supercomputing technology face a challenge in balancing top EE with increasing raw power usage. Differences in power consumption among supercomputers of similar rankings, such as Fugaku and Frontier, suggest that supercomputer architecture and energy management strategies play significant roles in determining overall efficiency. The discrepancies observed between the Top10 and Green10 groups (e.g., Fugaku, ranked second in performance, consuming more power than Frontier) are most likely due to comparisons across hardware generations: Fugaku is older than Frontier, and Summit is older than LUMI.
Observations and Implications
Balancing high computational performance with top EE remains a critical challenge in the pursuit of sustainability. Although advanced supercomputers are enabling major breakthroughs in artificial intelligence and supporting complex simulations across scientific domains, their substantial power requirements raise serious sustainability concerns. These systems consume significant amounts of energy, often resulting in elevated carbon dioxide emissions, especially when powered by non-renewable energy sources. Top-performing supercomputers typically consume significantly more power than top-EE systems, which are specifically designed to operate with minimal energy usage. Given the direct correlation between power consumption and carbon dioxide emissions, this suggests that high-performance supercomputers contributed considerably more to the carbon footprint than their EE counterparts during the period under study.
5.3. Top-Performing Supercomputers’ Power Consumption
Table 1 presents the analysis of the power consumption (kW) of top-performing supercomputers. Note that the data for some supercomputers had not been released at the time of data collection, which explains why the "N" column contains values below 10 for certain periods. The minimum (Min) column indicates the lowest power consumption among the top-performing supercomputers. The Min column reveals an initial increase in power consumption, which stabilizes at 1764 kW and then drops to 921 kW before increasing again, with the exception of a dip in Nov24. Similarly, the maximum (Max) power consumption increases in Nov20, stabilizes for a period, and then rises again before stabilizing in June24 and Nov24.

In general, both the Min and Max power consumption show an upward trend over the five-year period under investigation. The mean power consumption consistently increases throughout the period, indicating a growing demand for power among the most powerful supercomputers. The standard deviation remains high across the years; however, June22 shows an increase, followed by a reduction in Nov22 and June23 before another sharp increase. This reflects the wide variability in the capabilities and efficiency of the supercomputers. Periods of stabilization in power consumption may be attributed to the same systems remaining on the lists for two or three consecutive years.
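A summary of this kind can be produced directly from per-period power columns, with missing power figures left as NaN so that the "N" count reflects only reporting systems. The sketch below uses placeholder values, not the actual Table 1 data.

```python
import pandas as pd
import numpy as np

# Placeholder power consumption (kW) for two periods' top-10 lists;
# NaN marks a system whose power figure was not released.
power_kw = pd.DataFrame({
    "June24P": [22800.0, 38700.0, 29900.0, 7100.0, np.nan,
                6000.0, 2600.0, 7400.0, 5600.0, 4500.0],
    "Nov24P":  [29600.0, 38700.0, 22800.0, 3400.0, 7100.0,
                np.nan, 6000.0, 7400.0, 5600.0, 4500.0],
})

# count() ignores NaN, so "N" < 10 where data is missing (as in Table 1).
summary = power_kw.agg(["count", "min", "max", "mean", "std"]).T
summary.columns = ["N", "Min", "Max", "Mean", "Std"]
print(summary.round(1))
```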
Table 2 presents the correlations for the power consumption of top-performing supercomputers. Strong correlations are observed between Nov23 and June24, Nov22 and June23, and June21 and Nov21, indicating robust similarities in power consumption among the top-performing supercomputers during these periods. These results suggest consistency in power consumption, likely because of sustained efforts to improve performance. Moderate correlations (e.g., June22 and Nov23, Nov21 and Nov22) indicate meaningful but weaker relationships, suggesting some diversity in power consumption that reflects variations in configurations among the powerful supercomputers. Weak correlations (e.g., June23 and Nov24, Nov22 and Nov24) highlight weak relationships, likely reflecting significant changes in the list over the period under investigation. Additionally, the weak correlations observed between the earlier and later periods clearly suggest an evolution in power consumption trends driven by new entries in the ranked list over the five-year period.
5.4. Analysis of Energy Efficiency
Table 3 presents a statistical analysis for the top-EER supercomputers. The Min EER refers to the supercomputer with the lowest EER among the top-EE supercomputers, typically ranked in the 10th position. The Max EER represents the supercomputer with the highest EER, typically occupying the number 1 position in the top 10 EER rankings.
The Min and Max EER columns indicate improvements in EER, particularly in recent periods (Nov24, June24, and Nov23) compared with the earlier periods (June20, Nov20, and June21). In the early periods, the EER values were relatively low, especially for the Min EER, with only a slight increase observed for both Min and Max values. However, EER has shown continuous, gradual improvement over the five-year period. The increasing Min EER values suggest that even the supercomputers with the lowest EE within the top-EE group have improved significantly. Notably, a marked improvement in EER begins in June22, particularly for the Max EER, with a continued upward trend extending to the current period. The standard deviation values highlight trends in diversity among EE supercomputers: lower values reflect limited diversity, whereas higher values indicate increasing diversity within the top-EER group. The mean EER exhibits a consistent upward trend, signifying persistent efforts by the top-EE supercomputers to achieve sustainability over the years.
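For reference, the Green500 EER is sustained HPL performance divided by average power draw. As a worked example, using Frontier's publicly reported June 2022 figures of roughly 1102 PFlop/s at 21,100 kW:

```latex
\mathrm{EER} = \frac{R_{\max}}{P}
= \frac{1102 \times 10^{6}\ \mathrm{GFlop/s}}{21{,}100 \times 10^{3}\ \mathrm{W}}
\approx 52.2\ \mathrm{GFlops/W},
```

which is consistent with Frontier's second-place Green500 ranking noted in Section 5.1.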
Table 4 presents the correlation values for top-EER supercomputers. The strong correlations observed throughout the study period reveal that supercomputers with similar characteristics dominated the top 10 spots in the EER rankings, and the consistently high correlations across periods suggest uniform progress in EE among the most energy-efficient supercomputers. Table 4 further indicates that correlations steadily increase from the early periods (June20–Nov21) to the recent periods (Nov23–Nov24), reaching extremely high values. This trend likely reflects global efforts toward sustainability. Adjacent periods consistently maintain strong correlations, likely due to gradual improvements in sustainability across the periods under study.
5.5. Performance (Rmax) Analysis
Table 5 presents a statistical analysis of the performance of top-performing supercomputers. The worst-case scenario refers to the supercomputer with the lowest performance, typically ranked in the 10th position, whereas the best-case scenario represents the top-performing supercomputer, consistently occupying the 1st position in the top 10. The worst-case scenario shows steady performance improvement, and a similar trend is observed in the best case, where performance rises rapidly, particularly in recent periods compared with the earlier periods (June20, Nov20, June21, and Nov21).

In the early periods, performance levels were relatively low, especially in the worst case, with slight improvements in both the worst and best cases and stable variability, as indicated by the standard deviation. The worst-case improvement reflects gains among the lower-ranked systems within the top 10. A significant jump in performance is observed starting from June22, especially in the best case, and performance has continued to increase since. The likely reason for this surge is the entry of the first exascale system, Frontier, into the list in June 2022. The performance of the top system does not evolve in a smooth, linear manner but rather in "jumps" each time a new top machine is deployed. The values remain constant for a period, as shown in Table 5, because it takes time for the next best system to be deployed; moreover, for a new system to claim the top position, it must outperform the previous state-of-the-art top performer. The standard deviation remains high, highlighting the diversity in supercomputing performance levels. The mean performance improves consistently throughout the observed period, highlighting the relentless race for higher performance. Exascale supercomputers have dominated the number one spot in the top-performing group since June22, setting a new benchmark in high-performance computing. This trend suggests that, moving forward, any supercomputer aiming for the top spot must operate at the exascale level. The substantial leap in performance also signals the beginning of the race toward zettascale computing.
Table 6 presents the correlation analysis for the performance of top-performing supercomputers across the five-year period. All correlations exceed 0.8, indicating a robust relationship between the performances of top-performing supercomputers during the study period. The strong correlations suggest a uniform trajectory in performance improvements, highlighting cohesive progress across the top-performing supercomputers. This consistency likely reflects a global trend of prioritizing comparable performance enhancements, and it indicates that largely the same supercomputers, with similar characteristics, dominated the top performance rankings throughout the period under study.
5.6. Performance Analysis of Top-Performing and Most Energy-Efficient Supercomputers
An analysis of the performance (Rmax) of top-performing supercomputers and the performance (Rmax) of top-EE supercomputers has been conducted to determine the relationship between the two groups. Generally, Table 7 indicates a negative correlation between the performance of top-performing supercomputers and the performance of top-energy-efficiency supercomputers, with the exception of three weak positive correlations. This inverse relationship suggests that as the performance of top-performing supercomputers increases, the performance of energy-efficient supercomputers tends to decrease. This trend has been consistently observed over the five-year period from 2020 to 2024.
In other words, achieving higher computational power among top-performing supercomputers often comes at the expense of EE. Consequently, supercomputers with the highest computational performance may not necessarily operate at peak EE, highlighting a trade-off between maximizing computational power and optimizing EE. The likely reasons for the differences in performance and EE between the two groups are differing system sizes, processor and accelerator types (CPU vs. GPU or other accelerators), energy systems, and hardware generations (i.e., the "age" of systems, which implies different silicon process sizes).
6. Conclusions and Future Research
In this era of AI, the race toward faster and more energy-efficient supercomputers has brought critical challenges, especially in balancing computational performance with sustainability. This article has comprehensively examined the interplay between the top 10 highest-performing systems and the top 10 most energy-efficient supercomputers over a period of five years, providing rare longitudinal insight into the trade-offs that define modern HPC. The findings reveal that while performance has increased rapidly, EE has not kept pace, exposing a persistent tension between performance and sustainability. Through comparative analyses of the TOP500 and Green500 rankings, the study shows that the top-performing supercomputers typically operate at significant power costs, with core counts in the millions and power demands reaching tens of megawatts, whereas the most energy-efficient supercomputers tend to be smaller in scale, highlighting a structural limitation in current HPC architecture. Notably, a few exceptional supercomputers, such as Frontier, LUMI, and MareNostrum, demonstrated the rare capability to appear in the top 10 of both rankings, signaling that a balance between top performance and energy efficiency is feasible, though difficult to maintain.

As AI continues to evolve rapidly and increase global energy demands, the sustainability of supercomputing infrastructure becomes a critical concern. The projected increase in carbon emissions from AI-powered data centers underscores the need for green innovation in hardware design, cooling systems, and renewable energy integration. This study recommends a paradigm shift in which performance metrics are evaluated not only in terms of FLOPS but also through the lens of ecological responsibility and long-term sustainability. This research can provide policymakers, researchers, and technologists with foundational evidence for rethinking supercomputing in the era of AI. As the HPC community and industry prepare for the next phase of supercomputing, zettascale computing, there is an urgent need to ensure that the future of HPC emphasizes not only high performance but also greener, more sustainable operation.

Future research could quantify the carbon footprint of the top-performing supercomputers over comparable study periods. Similar research could also be conducted using data from Graph500, and the effects of different architectures and cooling technologies (e.g., CPUs vs. GPUs) on energy and performance merit further study.