
A Comparison of Energy Consumption and Quality of Solutions in Evolutionary Algorithms

by Francisco Javier Luque-Hernández 1, Sergio Aquino-Britez 1, Josefa Díaz-Álvarez 2 and Pablo García-Sánchez 1,*

1 Department of Computer Engineering, Automation and Robotics, E.T.S. de Ingenierías Informática y de Telecomunicación, Universidad de Granada, 18071 Granada, Spain
2 Department of Computer and Communications Technology, Centro Universitario de Mérida, Universidad de Extremadura, 06800 Mérida, Spain
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(9), 593; https://doi.org/10.3390/a18090593
Submission received: 31 July 2025 / Revised: 2 September 2025 / Accepted: 11 September 2025 / Published: 22 September 2025

Abstract

Evolutionary algorithms are extensively used to solve optimisation problems. However, it is important to consider and reduce their energy consumption, bearing in mind that programming languages also significantly affect energy efficiency. This research work compares the execution of four frameworks—ParadisEO (C++), ECJ (Java), DEAP and Inspyred (Python)—running on two different architectures: a laptop and a server. The study follows a design that combines three population sizes (2^6, 2^10, 2^14 individuals) and three crossover probabilities (0.01, 0.2, 0.8) applied to four benchmarks (OneMax, Sphere, Rosenbrock and Schwefel). This work makes a relevant methodological contribution by providing a consistent implementation of the metric η = fitness/kWh. This metric has been systematically applied to four different frameworks, thereby setting up a standardised and replicable protocol for evaluating the energy efficiency of evolutionary algorithms. The CodeCarbon software was used to estimate energy consumption, which was measured using RAPL counters. This unified metric also indicates algorithmic productivity. The experimental results show that the server speeds up the number of generations by a factor of approximately 2.5, but energy consumption increases four- to sevenfold. Therefore, on average, the energy efficiency of the laptop is five times higher. The results support two conclusions: computing power does not guarantee sustainability, and population size is a key factor in balancing quality and energy.

1. Introduction

Currently, data centres and high-performance computing infrastructures contribute massively to climate change, emitting over 100 megatons of CO2 per year—a figure comparable to that of American commercial aviation [1]. Despite the exponentially increasing computational and energy demands of modern machine learning (ML), most ML research articles do not regularly report energy metrics or CO2 emissions [2].
Within the field of Computational Intelligence, evolutionary algorithms (EAs) are population-based optimisation algorithms that mimic the process of natural biological evolution to find high-quality solutions to complex problems. They apply evolutionary principles to iteratively improve a set of potential solutions through selection and variation. Even though EAs are not as massive as Large Language Models (LLMs), they are widely applied in network optimisation, hyperparameter tuning, drug design, and industrial simulation [3]. For decades, the ongoing improvement of performance has been the main motivation behind research efforts in the domain of EAs [4,5,6]. Research works are usually evaluated using two metrics: the quality of the solution to the problem and the time to convergence.
Two distinctive characteristics of evolutionary algorithms exacerbate the problem: their stochastic nature forces many iterations to be run to achieve robust results, and hyperparameter exploration multiplies the number of executions. Moreover, some studies reveal that handheld devices such as Raspberry Pis and tablets require an order of magnitude less energy to run evolutionary algorithms than standard computers such as laptops and iMacs, and even parameters such as the population size (the number of potential solutions maintained during the evolutionary process) have an impact on the energy requirements [7].
Consequently, it is essential to bring the energy awareness that has already been discussed in the context of machine learning [8] to the specific domain of evolutionary algorithms. In this specific case, two main questions must be considered. First, what is the difference in consumption when running the same EA on hardware devices with different computing characteristics, using the same configurations and stopping criterion? Second, how does this consumption relate to the quality of the solution obtained and to the internal configuration of the algorithm itself?
The following research questions (RQs) arise from these issues. RQ1 (hardware-energy): does a server consume significantly more energy than a laptop when running the same EA with identical settings and a time-based stopping criterion? RQ2 (configuration-efficiency): how do the parameters impact energy efficiency (η = fitness/kWh), and is this effect consistent across the assessed frameworks? RQ3 (framework-performance): are the compiled frameworks (C++ and Java) more energy efficient than the interpreted ones (Python) when solving identical problems with EAs?
The rest of the paper is organised as follows. Section 2 reviews the state of the art on the energy footprint of EAs and current monitoring tools. Section 3 details the experimental methodology, including the benchmarks chosen, the frameworks compared, the common genetic setting, the execution environments, and the energy consumption instrumentation. Section 4 outlines the quantitative results, supported by statistical and graphical analysis. In Section 5, the implications and limitations of these findings are discussed in light of the research questions posed and the previous literature. Finally, the main conclusions are summarised and suggestions for future research lines are provided in Section 6.

2. Background

The concept of green computing has emerged in the last decade to accurately address the issue of energy consumption, which is inextricably linked to computing, especially in large infrastructures such as data centres [7]. This growing interest is particularly evident in large-scale infrastructures, where increasing physical resources inevitably leads to higher energy consumption [9].
The relevance of the energy issue in evolutionary computation has grown considerably. Henderson et al. [2] pointed out that, with the compute and energy demands of modern machine learning methods growing exponentially, ML systems have the potential to contribute significantly to carbon emissions. However, the majority of research articles do not regularly report energy or carbon emission metrics. Data centres and high-performance computing infrastructure contribute 100 megatons of CO2 emissions per year, similar to US commercial aviation [1].
In this context, EAs exhibit characteristics that exacerbate the problem: their stochastic nature forces many iterations to be run to achieve robust results, and hyperparameter exploration multiplies the number of executions.
First, quantifying energy consumption in EAs has evolved from descriptive studies to standardized measurement protocols. Ref. [7] carried out the first cross-platform trials and demonstrated that handheld devices such as Raspberry Pi and tablets require an order of magnitude less energy to run the same EA compared to standard computers such as laptops and iMacs, while population size also has a significant impact on energy requirements.
Notwithstanding, energy measurement instrumentation has advanced significantly: ref. [10] implemented considerably precise measurements using the Pinpoint tool, which accesses the RAPL API and facilitates the collection of consistent measurements of a system's energy consumption while a process is running, achieving 5% precision with hardware counters. However, ref. [11] warns that accuracy depends on the instrumented code representing a significant proportion of CPU time; otherwise, system 'noise' may mask subtle differences. Furthermore, the influence of the programming language on energy consumption has generated extensive research with unexpected results. Ref. [12] established that there is no clear category of superior languages, since performance varies according to the specific operation, although Java is almost always among the fastest, along with C and Go.
Secondly, comparative experiments convey additional complexities: interpreted languages such as PHP may outperform C++ in particular operations, blurring the line between fast compiled languages and slow scripting languages [12]. More recent studies have delved deeper into this research line using more accurate methodologies. Merelo et al. [10] evaluated three representative languages at different levels of abstraction using computationally intensive fitness functions, such as HIFF (Hierarchical If-and-only-IF). Their results showed that Kotlin, which compiles to JVM bytecode, achieved the best energy efficiency at 131.02 operations per Joule, followed by Zig (114.40 ops/J) as a natively compiled low-level language, and JavaScript running on bun as an interpreted language (93.92 ops/J). These findings showed that the differences in energy consumption can be small enough to fall below per-generation or per-chromosome variability in specific scenarios, suggesting that optimisations at the compiler and virtual machine level are more decisive than the language's level of abstraction.
Furthermore, the granularity of the implementation also influences energy consumption significantly. The authors of [13] documented that fixed-size data structures consume approximately one-eighth of the energy required by variable-size structures. Moreover, experiments with transprecision computing using customised 8- and 16-bit formats reduced energy consumption by up to 30%.
Thirdly, the explicit consideration of energy as an optimisation objective has evolved from specific applications to generalised, multi-objective approaches. Ref. [14] developed a simulation–optimisation approach using evolutionary programming for the effective management of the energy used by HVAC systems, achieving 7% energy savings compared to existing operational settings.
Lee et al. [15] extended this approach and demonstrated that a Differential Evolution algorithm identified combinations of parallelisation that, at low loads (40%), reduce energy consumption by up to 23.4% compared to the Lagrangian method. More recent applications include the management of residential demand. In this context, ref. [16] proposed an improved differential evolution algorithm that can simultaneously reduce costs and increase user comfort.
Finally, recent literature explores strategies that adjust algorithmic parameters during execution to optimise energy efficiency. Cotta et al. [11] demonstrated that introducing short pauses between executions can reduce energy consumption by up to 9%. Paradoxically, ref. [7] documented that population size can influence energy consumption, with larger populations sometimes requiring less total energy due to automatic adjustments of the processor frequency. The use of surrogate models has also shown a significant energy impact: ref. [17] reported that, in cache optimisation for embedded systems, a combination of NSGA-II and the Dinero IV simulator achieved an average reduction of 92% in kWh compared to an exhaustive search.

3. Methodology

This section outlines the methodology used. This includes details of the benchmarks used for the analysis, the frameworks implemented, the algorithms’ configurations, and the characteristics of the hardware on which the experiments were performed.

3.1. Problems

In order to evaluate the energy efficiency of the different frameworks, a diverse set of optimisation problems is required. These problems cover a range of search-space characteristics while keeping the computational cost reasonable. The recommendation in many publications is to use test functions that incur a moderate cost and to impose a limit on the number of evaluations, so that algorithms and configurations can be compared in a reproducible way [11]. Four test functions were chosen as representative of the complete set of BBOB functions from the point of view of energy consumption [13]: OneMax, Sphere, Rosenbrock and Schwefel. Table 1 summarises these functions.
OneMax is the most basic problem in binary optimisation: the objective is to maximise the number of activated bits in a string of length n. Its unimodal nature makes it an excellent benchmark for comparing framework performance, and as the basic pattern for binary GAs it acts as a measure of implementation efficiency [18].
Sphere represents the most basic continuous function, where the aim is to minimise the sum of the squares of all variables. Its key feature is complete separability, which makes it possible to isolate the energy cost of floating-point fitness functions [13]. This property facilitates the analysis of how frameworks handle the optimisation of real variables in the most favourable scenario.
Rosenbrock introduces complexity by linking consecutive variables. Through its famous curved valley leading to the global optimum, it reveals the energy impact of coupling among variables [11]. This function evaluates the ability of the frameworks to balance exploration and exploitation in non-separable problems.
Schwefel brings the multimodal component to the test set. Its landscape is riddled with multiple local optima that emphasise diversification and highlight consumption peaks [13]. The irregular nature of this function generates fluctuating patterns of CPU usage, which in turn allow us to study how architectures respond to such loads.
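For reference, the four benchmarks can be expressed compactly in code. The following is a minimal Python sketch of the fitness functions as defined in Table 1; the n = 1024 encodings, normalisation, and framework-specific wrappers used in the experiments are not shown:

```python
import math

def onemax(bits):
    """OneMax: number of activated bits; maximised, optimum = len(bits)."""
    return sum(bits)

def sphere(x):
    """Sphere: fully separable sum of squares; minimised, optimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def rosenbrock(x):
    """Rosenbrock: non-separable curved valley; minimised, optimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def schwefel(x):
    """Schwefel: highly multimodal; minimised, optimum 0 at x_i ≈ 420.9687."""
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)
```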

3.2. Frameworks to Compare

Four frameworks representing different compilation paradigms were selected: ParadisEO 3.0.0 (C++ 11.4.0), ECJ v.27 (Java, OpenJDK 17.0.15), DEAP 1.4.2 and Inspyred 1.0.3 (Python 3.12.4). This allows a cross-platform analysis of the energy impact of the programming language used. DEAP is designed as a toolbox that combines simplicity with algorithmic transparency, facilitating the explicit implementation of evolutionary algorithms [19]. Its architecture enables energy instrumentation to be seamlessly integrated with CodeCarbon 3.0.0 with no impact on the algorithmic logic [8].
Inspyred uses a lighter 'observer' system than DEAP, enabling us to examine whether energy efficiency depends on the structural overhead of each library. Multilingual experiments demonstrate that interpreted languages such as Python can reach competitive performance in specific operations [18].
ECJ represents the ecosystem of bytecode compilation on the JVM and acts as an intermediate point between interpreted code and native binaries. Its status as the standard in Java evolutionary research [20] and its lower energy consumption compared to C++ for lower dimensions make it a suitable candidate [11]. This selection is based on recent empirical evidence, which demonstrates that languages compiling to bytecode, such as Java and Kotlin, can efficiently take advantage of JIT (Just-In-Time) optimisations. This is particularly relevant in highly parallel architectures, where the JVM can benefit from architectural optimisations that mitigate the environmental impact of scalability [10].
ParadisEO is a template-based C++ framework that separates algorithmic components from problem-dependent ones [21]. Comparative studies show that implementations based on arrays consume approximately half the energy of those based on vectors [22].

3.3. Experimental Setup

A common genetic configuration was established to guarantee a fair comparison between frameworks. According to some authors [23], stopping criteria such as the number of evaluations required to achieve a solution can be misleading depending on the studied scenario. Therefore, we set a time limit of two minutes for each execution. The population size was set to N ∈ {2^6, 2^10, 2^14} individuals. OneMax individuals are represented as a binary string of length n = 1024, whereas continuous problems use a vector of real values of the same length. To ensure a fair comparison between programming languages, the implementations are as similar as possible in logical terms. This means that any differences observed can be attributed to fundamental aspects of the language itself, rather than to high-level implementation decisions [11].
The selection uses binary tournament without replacement, thereby maintaining stable selective pressure regardless of N and minimising the necessary arithmetic operations. Controlled variability is introduced by applying Simulated Binary Crossover (SBX) to continuous problems with a distribution index of η_c = 0.1 and three crossover probabilities p_c ∈ {0.01, 0.2, 0.8}. This covers a range from an almost asexual regime to highly intense recombination. The MOODY ontology details the rules that validate parameter configurations, such as the distribution index of the SBX operator, to reduce configurations that produce poor results [24]. Stochastic variability is controlled by performing 10 independent executions per configuration, giving a total of 90 executions per framework and benchmark. All results from all 2880 possible configurations (10 runs × 3 population sizes × 3 crossover probabilities × 4 frameworks × 4 benchmarks × 2 computers) are available in our GitHub repository: https://github.com/SEECS-Project/Energy-Comsumption-of-EA-frameworks (accessed on 31 July 2025).
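To make this shared configuration concrete, here is a minimal sketch of one cell of the design in DEAP (chosen only because it is one of the four frameworks compared; the published implementations live in the repository above). The variable bounds and mutation parameters are illustrative assumptions not specified in the text, and DEAP's built-in tournament selection samples with replacement, so the without-replacement variant used in the study would need a custom operator:

```python
import random
import time

from deap import base, creator, tools

# One cell of the experimental design (Section 3.3): population size,
# crossover probability, genome length and the two-minute time budget.
N, P_C, GENOME_LEN, TIME_LIMIT = 2**10, 0.8, 1024, 120
LOW, UP = -5.12, 5.12  # assumed variable bounds for the Sphere benchmark

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("attr", random.uniform, LOW, UP)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr, GENOME_LEN)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", lambda ind: (sum(x * x for x in ind),))  # Sphere
toolbox.register("select", tools.selTournament, tournsize=2)  # binary tournament
toolbox.register("mate", tools.cxSimulatedBinary, eta=0.1)    # SBX, eta_c = 0.1
toolbox.register("mutate", tools.mutPolynomialBounded, eta=20.0,
                 low=LOW, up=UP, indpb=1.0 / GENOME_LEN)      # params assumed

pop = toolbox.population(n=N)
for ind in pop:
    ind.fitness.values = toolbox.evaluate(ind)

start, generations = time.time(), 0
while time.time() - start < TIME_LIMIT:  # wall-clock stopping criterion
    offspring = [toolbox.clone(ind) for ind in toolbox.select(pop, len(pop))]
    for a, b in zip(offspring[::2], offspring[1::2]):
        if random.random() < P_C:
            toolbox.mate(a, b)
            a[:] = [min(max(x, LOW), UP) for x in a]  # clamp SBX offspring
            b[:] = [min(max(x, LOW), UP) for x in b]  # back into the domain
    for ind in offspring:
        toolbox.mutate(ind)
        ind.fitness.values = toolbox.evaluate(ind)  # re-evaluate after variation
    pop, generations = offspring, generations + 1

print(generations, min(ind.fitness.values[0] for ind in pop))
```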

3.4. Hardware and Energy Measurements

Experiments were conducted on two intentionally different machines: a laptop with a 15 W TDP Intel Core i7-7500U (2 cores/4 threads, 7.7 GiB RAM), and a server with an Intel Core i9-12900KF with a 241 W maximum power draw (16 cores/24 threads, 125 GiB RAM). This setup enables us to examine the energy elasticity of the genetic algorithm when confronted with different architectures. These machines have distinct thermal profiles, which can result in energy consumption that varies by orders of magnitude between portable devices and standard computers, even when executing the same algorithm [7].
Comparing the two machines enables us to evaluate the portability of the frameworks. Python and Java use interpreters and virtual machines to optimise code 'on the fly', whereas C++ produces a single binary dependent on the compiler's decisions. The methodology states that benchmarks should include a wide variety of sizes, because performance does not always scale linearly with size and is affected by technical details such as loop implementation and memory management [18].
Both machines share an identical distribution (Ubuntu 22.04 LTS) and kernel (6.8.0-59-generic) to minimise differences not related to the language when comparing results [11]. Electricity consumption is measured with RAPL counters, with CodeCarbon used to estimate it [25]. Algorithmic productivity is expressed using the unified metric η = fitness/kWh, defined as the ratio of the fitness achieved to the energy consumed in kWh, which condenses the relationship between the solution found and the energy expended into a single value.
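To illustrate how this measurement pipeline fits together, the sketch below wraps a run with CodeCarbon's EmissionsTracker and derives η from the energy_consumed column (in kWh) that CodeCarbon writes to its emissions.csv output file; run_one_experiment is a hypothetical placeholder for a single two-minute execution returning the normalised best fitness:

```python
import csv

from codecarbon import EmissionsTracker


def run_one_experiment() -> float:
    """Hypothetical stand-in for one two-minute EA run; returns best fitness in [0, 1]."""
    return 0.8


tracker = EmissionsTracker(output_dir=".", log_level="error")
tracker.start()
best_fitness = run_one_experiment()
tracker.stop()  # appends one row to ./emissions.csv, returns estimated kg CO2-eq

# The energy_consumed column holds the RAPL-based estimate of the run's
# total energy in kWh; the last row corresponds to the run just finished.
with open("emissions.csv", newline="") as f:
    energy_kwh = float(list(csv.DictReader(f))[-1]["energy_consumed"])

eta = best_fitness / energy_kwh  # the unified metric: fitness per kWh
print(f"energy = {energy_kwh:.6f} kWh, eta = {eta:.1f} fitness/kWh")
```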

4. Results and Analysis

The primary quantitative findings derived from the experimental analysis are presented below, organised by key metrics and concluding with the overall ranking of frameworks. For better generalisation, in this Section we have grouped the results at the framework and benchmark levels.

4.1. Estimated Energy Consumption

Table 2 shows the average energy consumption on the laptop platform. ParadisEO emerges as the most energy-efficient framework, with an average consumption of 0.000331 kWh, and DEAP comes second with 0.000355 kWh.
This closeness, with a difference of only 7%, demonstrates that optimised implementations in Python can achieve energy efficiency comparable to native compilation in C++ under moderate computational loads. Conversely, ECJ shows the highest consumption (0.000373 kWh), indicating that the JVM overhead imposes a substantial penalty in environments with limited resources.
The energy hierarchy is completely reversed on the server architecture (see Table 3). With a consumption of 0.001706 kWh, ECJ emerges as the most efficient framework, leveraging its substantial processing capabilities and JIT optimisation to the fullest extent. The values of the frameworks are similar to each other (only 7.3% variation), which is very different from the 13.6% variation observed on the laptop. This shows that high-performance architectures can even out the differences in how languages are implemented.
The scaling factor of the server/laptop (see Table 4) reveals systematic patterns that defy common assumptions about computational efficiency. DEAP demonstrates the most aggressive scaling in OneMax (7.15×), but exhibits more moderate scaling in other benchmarks, resulting in an overall factor of 5.08×. ParadisEO shows the most balanced scaling, with an overall factor of 5.20× and relatively steady ratios ranging from 4.04× (Rosenbrock) to 6.90× (Sphere). ECJ yields the lowest overall scaling factor (4.58×), which is due to its architecture being optimized for parallelism. This benefit is particularly evident in OneMax (4.17×). Inspyred maintains an intermediate scaling factor of 5.03×, with little variability between benchmarks. These results show that the server consistently uses 4–6 times more energy than the laptop, suggesting that there is no direct relationship between computational power and energy efficiency with the same amount of time allocated.
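For clarity, the scaling factors in Table 4 are simply the element-wise ratios of the averages in Tables 3 and 2. A minimal pandas sketch reproduces them from the published values; small discrepancies with Table 4 arise because the published averages are themselves rounded:

```python
import pandas as pd

cols = ["OneMax", "Sphere", "Rosenbrock", "Schwefel"]
idx = ["DEAP", "ParadisEO", "Inspyred", "ECJ"]

# Average consumption in kWh, copied from Tables 2 (laptop) and 3 (server).
laptop = pd.DataFrame([[0.000254, 0.000458, 0.000348, 0.000361],
                       [0.000255, 0.000254, 0.000410, 0.000405],
                       [0.000350, 0.000336, 0.000377, 0.000392],
                       [0.000409, 0.000258, 0.000412, 0.000412]],
                      index=idx, columns=cols)
server = pd.DataFrame([[0.001820, 0.001750, 0.001840, 0.001810],
                       [0.001714, 0.001752, 0.001656, 0.001764],
                       [0.001781, 0.001743, 0.001854, 0.001945],
                       [0.001705, 0.001656, 0.001756, 0.001707]],
                      index=idx, columns=cols)

ratio = server / laptop  # element-wise scaling factors, as in Table 4
print(ratio.round(3))    # note: Table 4's Global column is the ratio of the
                         # two Global averages, not the mean of these rows
```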

4.2. Computing Performance Measured by the Maximum Number of Generations

The computational performance on the laptop (see Table 5) highlights significant differences between the frameworks. ECJ dominates categorically with an average of 32,685 generations, reaching extraordinary peaks in Sphere (43,335) and Rosenbrock (45,158), where its compiled loop efficiency is fully demonstrated. ParadisEO holds a solid middle position with an average of 8700 generations, showing particular strength in Schwefel (11,626). The Python frameworks are severely limited: Inspyred reaches an average of merely 964 generations, while DEAP achieves an average of just 975. This 33× disparity between ECJ and the Python frameworks reflects the fundamental differences between JIT-compiled bytecode and dynamic interpretation in loop-intensive applications.
The server (see Table 6) dramatically amplifies the computational differences. ECJ achieves an extraordinary average of 91,252 generations, with outstanding performance in the Rosenbrock test (112,902). This demonstrates the superior scalability of the JVM in massively parallel environments. ParadisEO maintains a competitive position with 25,251 generations, with especially good results in Schwefel (35,284). The Python frameworks show modest improvements: DEAP achieves 2522 generations and Inspyred obtains 2392, representing accelerations of just 2.6× and 2.5×, respectively, compared to the laptop. Meanwhile, ECJ and ParadisEO achieve accelerations of 2.8× and 2.9×. This difference in scalability demonstrates that compiled frameworks are better suited to take advantage of parallel architectures.
The server/laptop acceleration ratios (Table 7) reveal the inherent scaling capabilities of each framework. ECJ leads in OneMax with a factor of 4.06×, with solid accelerations across all benchmarks. ParadisEO shows the most consistent acceleration, with factors between 2.40× and 3.38×, demonstrating a design robust to parallelisation. The Python frameworks demonstrate more modest acceleration: DEAP achieves factors of between 2.3 and 2.7, while Inspyred remains within a similar range. The superior scaling of the compiled frameworks (ECJ and ParadisEO) over the interpreted ones suggests that, when selecting a framework, it is important to consider not only absolute performance but also the ability to leverage additional resources.

4.3. Maximum Fitness Reached

All fitness results in this section have been normalised to the range 0 to 1, with 1 representing the optimal value. Table 8 shows the average fitness values obtained on the laptop platform, which reveal that computational speed does not automatically translate into better search-space exploration. ParadisEO leads with an overall fitness score of 0.797, demonstrating the effectiveness of its SBX operator and polynomial mutation, with exceptional convergence in Rosenbrock (0.993). Inspyred ranks closely behind with a score of 0.794, exhibiting outstanding performance in Rosenbrock (0.984). This confirms that well-designed Python implementations can compete in terms of quality. Despite its computational superiority, ECJ ranks third (0.785), demonstrating particular strength in Sphere (0.893). DEAP exhibits the most variability in fitness (0.756 overall), excelling in Sphere (0.856) but showing weakness in Schwefel (0.589).
As depicted in Table 9, Inspyred emerges as the leader in convergence quality on the server (with an overall score of 0.815), leveraging the greater computational capacity, which enables it to execute more sophisticated exploration strategies. Its performance on Rosenbrock (0.991) and Sphere (0.867) is particularly noteworthy. ParadisEO maintains a competitive position with an overall score of 0.808 and consistent convergence across benchmarks. ECJ improves slightly on its laptop score (0.789), but still fails to capitalise fully on its computational advantage in terms of solution quality. DEAP demonstrates the most significant relative improvement (0.782), particularly in the Sphere benchmark (0.893), indicating that its architecture leverages the parallelism available on the server.
As shown in Table 10, the server/laptop fitness ratios demonstrate remarkable stability, with variations of less than 6% across all frameworks. This confirms that changes in architecture primarily affect execution speed without fundamentally altering the dynamics of the algorithmic search when spending the same amount of time. DEAP exhibits the greatest variability, with ratios ranging from 1.01 to 1.06, suggesting architectural sensitivity in its convergence. Conversely, ParadisEO, ECJ and Inspyred demonstrate platform-independent algorithmic robustness with very stable ratios (1.01–1.03). This consistency across platforms is crucial for reproducible results, suggesting that high-quality findings obtained on one architecture can be transferred to another.

4.4. η Metric

Table 11 depicts the computed values of the metric η = fitness/kWh on the laptop. This score reveals the real balance between solution quality and energy consumption. ParadisEO is the most efficient framework overall (2520.472 fitness/kWh), particularly standing out in Sphere (3087.423), where it combines excellent convergence with low energy use. ECJ achieves the best Sphere score (3460.072), but performs worse in the other benchmarks, resulting in modest overall efficiency (2230.042). Inspyred maintains competitive efficiency (2206.367), demonstrating particular strength in Rosenbrock (2615.694). DEAP exhibits significant variability, achieving its highest score in OneMax (2782.618) but performing poorly in Schwefel (1656.521). This dispersion between frameworks highlights that energy efficiency is not a uniform property, but rather depends critically on the type of problem and the specific implementation.
As shown in Table 12, the compression of efficiency differences on the server is dramatic, with only 8% variation between frameworks. ParadisEO retains first place (471.511 fitness/kWh), excelling particularly in Rosenbrock (600.927), where its genetic operators make optimal use of the available parallelism. ECJ ranks second with a score of 462.718, demonstrating consistent performance across benchmarks. Inspyred (447.962) and DEAP (434.594) exhibit comparable efficiency, primarily being penalised in Schwefel. This convergence on the server suggests that massively parallel architectures homogenise efficiency, reducing the specific competitive advantages of each implementation and rendering framework selection less critical from an energy perspective.
Table 13 depicts the laptop/server energy efficiency ratios, which reveal the 'power paradox': the laptop is consistently 4–7 times more efficient than the server. DEAP presents the most extreme ratios, which suggests that its architecture is particularly susceptible to the server's overhead. ParadisEO shows more consistent ratios (4.08–6.78×), averaging 5.346×, which demonstrates its ability to adapt across platforms. ECJ exhibits relatively stable ratios (4.078–6.400×), benefiting from architectural optimisations that partially offset the server's energy penalty. This paradox highlights a key trade-off: although the server can accelerate computation by 2.5–3 times, measured in the number of generations completed in the same amount of time, it reduces energy efficiency by 4–7 times, raising questions about the sustainability of high-performance computing for evolutionary algorithms.
Figure 1, Figure 2, Figure 3 and Figure 4 graphically depict the results for η = fitness/kWh for each benchmark (OneMax, Sphere, Rosenbrock, and Schwefel) across all the experimental configurations carried out in this research work. In all graphs, the X-axis represents the individual experimental configurations and the Y-axis the η metric value. As stated previously, there are evident differences between server and laptop results, but population size is also a critical factor: Fernández de Vega et al. [7] detected a general trend of increasing energy costs as population size increases, an effect that may be due to problems related to memory management.
Figure 5 presents a thorough two-dimensional evaluation that combines energy efficiency and solution quality, revealing the holistic performance of each framework. ParadisEO emerges as the most balanced framework, positioning itself in the upper-right quadrant, which combines high energy efficiency with competitive convergence quality. This dominant position reflects the advantages of native C++ compilation and optimised memory management, which achieve the best balance between sustainability and algorithmic effectiveness.
Inspyred ranks second, demonstrating that Python can compete with compiled languages when optimised effectively. Its favourable position indicates an excellent balance between ease of implementation and effective performance, making it particularly valuable for researchers who prioritise results and sustainability.
DEAP ranks third, excelling in terms of energy efficiency on laptops, but being penalised by its lower scalability on servers. While its declarative architecture favours experimental reproducibility, it introduces significant overhead in large configurations, restricting its applicability in computationally intensive environments.
Despite showing the highest raw performance in terms of generations per second, ECJ ranks last in the overall classification. Its high energy consumption in most experimental scenarios, particularly on the laptop, leads to an unfavourable speed/energy ratio. This suggests that ECJ is better suited to problems where pure speed is prioritised over energy efficiency, such as real-time optimisation or problems involving extremely costly evaluations.
This comprehensive ranking demonstrates that computational power alone is insufficient to ensure overall efficiency. It highlights the need to select frameworks carefully, balancing speed, consumption and quality against the specific needs of the application. The results suggest that, for most evolutionary algorithm research applications where sustainability and reproducibility are important, frameworks such as ParadisEO and Inspyred offer the best overall compromise.

5. Discussion

The experimental results reveal four key findings that challenge conventional assumptions about the computational efficiency of evolutionary algorithms. Firstly, an analysis of 2880 controlled experiments identified a clear trend of increased energy consumption and a 'computational paradox': increasing processing power from 15 W (an i7-7500U laptop) to 241 W (an i9-12900KF server) yields only an approximately 2.5× improvement in the number of generations completed, but increases energy consumption by between four and seven times. According to the metric η = fitness/kWh, this means that the laptop is systematically 5–7 times more energy efficient, which fundamentally undermines assumptions about the relationship between computing power and energy efficiency.
Secondly, the systematic influence of population size on energy efficiency is a key factor in achieving a balance between solution quality and energy consumption. This finding remains consistent for both laptops and servers, suggesting a fundamental trade-off between population diversity and computational cost that is independent of implementation and hardware differences.
Thirdly, the data disclose specialisation by context, with no dominant universal framework. ParadisEO stands out for its consistency and balance (with an average η of 2.408 on the laptop), while ECJ stands out for its pure processing capacity (with an average of 91,251 generations on the server). DEAP and Inspyred, meanwhile, show greater variability, but reach peaks of efficiency in specific configurations. A comprehensive analysis combining energy efficiency and solution quality through a two-dimensional evaluation confirms that ParadisEO is the most balanced framework, as it is positioned in the quadrant combining high energy efficiency and competitive convergence quality.
Finally, energy performance varies significantly depending on the topology of the problem. The greatest disparities between frameworks are observed in OneMax (up to a 40% difference in η ), whereas in continuous benchmarks such as Sphere and Rosenbrock, the differences are reduced to 20–25%. Due to its multimodal nature, Schwefel amplifies implementation differences, confirming that framework selection must consider the specific characteristics of the search space.
The experimental validation of the research questions provides robust quantitative evidence. RQ1 (hardware-energy) is strongly confirmed: for identical configurations, the server consumes between 4.3 and 7.4 times more energy than the laptop over the same amount of time, while accelerating generations by only 2.3 to 2.9 times. This finding corroborates the observations of [7], who stated that portable devices such as Raspberry Pis and tablets require an order of magnitude less energy to run evolutionary algorithms than standard computers such as laptops and iMacs. Beyond the numerical results, these differences can be explained by the server's higher baseline power requirements and greater number of active resources compared to the laptop, as well as differences in memory access and instruction throughput, which lead to higher energy needs even under the same time budget. This is a relevant point that should be explored in more detail. Estimating the resources needed to run specific computing experiments will, in turn, contribute to increased energy efficiency and a reduced environmental footprint.
RQ2 (configuration-efficiency) is also answered: population size systematically influences energy efficiency (η) in all analysed cases, establishing a robust empirical pattern that transcends differences in implementation and hardware. This corroborates the observation of [7] that population size also significantly influences energy requirements.
RQ3 (framework-performance) is confirmed, but with important nuances: while compiled frameworks offer advantages in specific contexts, the relationship is not linear. ParadisEO (C++) leads in overall efficiency, ECJ (Java) in raw speed, but DEAP (Python) achieves higher η peaks in optimised configurations. This supports the “no fast lunch” principle of [26]: there is no clear category of superior languages, as performance varies depending on the specific operation.
The findings' contributions to the field are twofold. Firstly, they provide a systematic quantification of the computational power paradox in evolutionary algorithms. Secondly, they fundamentally challenge assumptions about computational efficiency. The homogeneous implementation of the metric η = fitness/kWh across four frameworks establishes a reproducible protocol that the community can adopt in response to the recommendation of [1] for 'carbon impact statements' for computational experiments.
The results are consistent with the emerging green computing paradigm described by [7]. They extend this concept to the level of algorithm and framework selection and demonstrate that implementation decisions have a measurable environmental impact. Confirmation that DEAP and Inspyred can compete with compiled frameworks in terms of energy usage validates [13]’s assertion that ’some interpreted languages, such as Python, can achieve competitive speeds in certain specific operations’.
Finally, while this study provides a comparative analysis of several frameworks, benchmarks, and parameter configurations, it is crucial to acknowledge the inherent limitations of our approach that prevent broad, universal conclusions. Our goal was not to declare a single `best’ framework universally, but rather to provide a methodological illustration and a controlled investigation into how specific implementations behave under defined conditions.

6. Conclusions

This research work presents the first systematic evaluation of energy efficiency in evolutionary algorithms using the unified metric η = fitness/kWh, comparing four representative frameworks on two different architectures. The study reveals an interesting computational power behaviour: increasing processing power from 15 W to 241 W results in only 2.5 times greater progress (measured in completed generations and in the improvement in fitness obtained), while energy consumption increases by a factor of 4 to 7. This confirms that lower-power devices are, in some cases, more efficient.
The results confirm that there is no universally dominant framework; rather, there is contextual specialisation. ParadisEO stands out for its overall consistency, ECJ for its pure computational speed and Python frameworks (DEAP and Inspyred) for their efficiency in specific configurations. Population size emerges as a crucial factor in achieving a balance between solution quality and energy consumption, exerting a consistent influence that surpasses differences in implementation and hardware.
The research sets out a step-by-step plan for assessing sustainability in evolutionary computing, and shows that the decisions made during implementation have a quantifiable effect on the environment. For research applications where sustainability is a priority, the selection of a framework must carefully balance speed, consumption and quality according to the specific context, with ParadisEO and Inspyred being the options that offer the best overall compromise between energy efficiency and algorithmic effectiveness.
It is important to mention that our study is limited to the specific set of benchmark problems, algorithmic parameters, frameworks, and hardware configurations tested. Therefore, our findings should be interpreted as a robust methodological illustration rather than a definitive, global performance ranking of the experimental configurations or frameworks.
This study paves the way for future research. Potential areas for exploration include the analysis of more parameters and the evaluation of high-cost functions, such as the most demanding variants of the BBOB set [27,28] or even the CEC benchmark industrial cases [29]; this will help to verify whether the η metric maintains its discriminatory power when fitness evaluation time dominates energy consumption. Secondly, exploring other hardware configurations, including GPU accelerators, and systematically comparing them with CPUs will reveal the circumstances under which massive parallelisation reduces or increases consumption per unit of solution. Thirdly, energy-aware stopping criteria could be devised that take into account solution quality, the time and energy already spent, and the sustainability of the process, allowing the algorithm itself to decide when to stop. Finally, real-time adaptive strategies, such as self-adjusting crossover intensity, should be explored to optimise energy efficiency during execution.

Author Contributions

Conceptualization, P.G.-S. and J.D.-Á.; methodology, P.G.-S. and S.A.-B.; software, F.J.L.-H.; validation, F.J.L.-H., S.A.-B. and P.G.-S.; formal analysis, F.J.L.-H.; investigation, F.J.L.-H.; resources, P.G.-S.; data curation, F.J.L.-H.; writing—original draft preparation, F.J.L.-H.; writing—review and editing, F.J.L.-H., P.G.-S. and J.D.-Á.; visualization, F.J.L.-H.; supervision, P.G.-S. and S.A.-B.; project administration, P.G.-S. and J.D.-Á.; funding acquisition, P.G.-S. and J.D.-Á. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been funded by the Spanish Ministry of Science, Innovation and Universities MICIU/AEI/10.13039/501100011033 and FEDER/UE under project numbers PID2023-147409NB-C21 and PID2023-147409NB-C22.

Data Availability Statement

All obtained results and the source code of the experiments are available at https://github.com/SEECS-Project/Energy-Comsumption-of-EA-frameworks (accessed on 31 July 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lannelongue, L.; Grealey, J.; Inouye, M. Green algorithms: Quantifying the carbon footprint of computation. Adv. Sci. 2021, 8, 2100707. [Google Scholar] [CrossRef] [PubMed]
  2. Henderson, P.; Hu, J.; Romoff, J.; Brunskill, E.; Jurafsky, D.; Pineau, J. Towards the systematic reporting of the energy and carbon footprints of machine learning. J. Mach. Learn. Res. 2020, 21, 10039–10081. [Google Scholar]
  3. Slowik, A.; Kwasnicka, H. Evolutionary algorithms and their applications to engineering problems. Neural Comput. Appl. 2020, 32, 12363–12379. [Google Scholar] [CrossRef]
  4. Doerr, B.; Neumann, F. A survey on recent progress in the theory of evolutionary algorithms for discrete optimization. Acm Trans. Evol. Learn. Optim. 2021, 1, 16. [Google Scholar] [CrossRef]
  5. Bartz-Beielstein, T.; Branke, J.; Mehnen, J.; Mersmann, O. Evolutionary algorithms. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2014, 4, 178–195. [Google Scholar] [CrossRef]
  6. Mayer, D.G.; Kinghorn, B.; Archer, A.A. Differential evolution–an easy and efficient evolutionary algorithm for model optimisation. Agric. Syst. 2005, 83, 315–328. [Google Scholar] [CrossRef]
  7. de Vega, F.F.; Chávez, F.; Díaz, J.; García, J.; Castillo, P.A.; Merelo, J.J.; Cotta, C. A Cross-Platform Assessment of Energy Consumption in Evolutionary Algorithms: Towards Energy-Aware Bioinspired Algorithms. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Edinburgh, UK, 17–21 September 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 548–557. [Google Scholar]
  8. Aquino-Brítez, S.; García-Sánchez, P.; Ortiz, A.; Aquino-Brítez, D. Towards an Energy Consumption Index for Deep Learning Models: A Comparative Analysis of Architectures, GPUs, and Measurement Tools. Sensors 2025, 25, 846. [Google Scholar] [CrossRef] [PubMed]
  9. Maryam, K.; Sardaraz, M.; Tahir, M. Evolutionary algorithms in cloud computing from the perspective of energy consumption: A review. In Proceedings of the 2018 14th International Conference on Emerging Technologies (ICET), Islamabad, Pakistan, 21–22 November 2018; pp. 1–6. [Google Scholar]
  10. Merelo-Guervós, J.J.; García-Valdez, M. How evolutionary algorithms consume energy depending on the language and its level. In Proceedings of the International Conference on Learning and Intelligent Optimization, Ischia Island, Italy, 9–13 June 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 254–268. [Google Scholar]
  11. Cotta, C.; Martínez-Cruz, J. Evaluating the impact of hysteretic phenomena and implementation choices on energy consumption in evolutionary algorithms. In Proceedings of the International Conference on the Applications of Evolutionary Computation (Part of EvoStar), Trieste, Italy, 23–25 April 2025; Springer: Berlin/Heidelberg, Germany, 2025; pp. 227–239. [Google Scholar]
  12. Merelo, J.; Castillo, P.; Blancas, I.; Romero, G.; García-Sanchez, P.; Fernández-Ares, A.; Rivas, V.; García-Valdez, M. Benchmarking languages for evolutionary algorithms. In Proceedings of the European Conference on the Applications of Evolutionary Computation, Porto, Portugal, 30 March–1 April 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 27–41. [Google Scholar]
  13. Merelo-Guervós, J.J.; Romero López, G.; García-Valdez, M. Measuring Energy Consumption of BBOB Fitness Functions. In Proceedings of the International Conference on the Applications of Evolutionary Computation (Part of EvoStar), Trieste, Italy, 23–25 April 2025; Springer: Berlin/Heidelberg, Germany, 2025; pp. 240–254. [Google Scholar]
  14. Fong, K.F.; Hanby, V.I.; Chow, T.T. HVAC system optimization for energy management by evolutionary programming. Energy Build. 2006, 38, 220–231. [Google Scholar] [CrossRef]
  15. Lee, W.S.; Chen, Y.T.; Kao, Y. Optimal chiller loading by differential evolution algorithm for reducing energy consumption. Energy Build. 2011, 43, 599–604. [Google Scholar] [CrossRef]
  16. Essiet, I.O.; Sun, Y.; Wang, Z. Optimized energy consumption model for smart home using improved differential evolution algorithm. Energy 2019, 172, 354–365. [Google Scholar] [CrossRef]
  17. Álvarez, J.D.; Risco-Martin, J.L.; Colmenar, J.M. Evolutionary design of the memory subsystem. Appl. Soft Comput. 2018, 62, 1088–1101. [Google Scholar] [CrossRef]
  18. Merelo-Guervós, J.J.; Blancas-Álvarez, I.; Castillo, P.A.; Romero, G.; Rivas, V.M.; García-Valdez, M.; Hernández-Águila, A.; Román, M. A comparison of implementations of basic evolutionary algorithm operations in different languages. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 1602–1609. [Google Scholar]
  19. Fortin, F.A.; De Rainville, F.M.; Gardner, M.A.G.; Parizeau, M.; Gagné, C. DEAP: Evolutionary algorithms made easy. J. Mach. Learn. Res. 2012, 13, 2171–2175. [Google Scholar]
  20. Scott, E.O.; Luke, S. ECJ at 20: Toward a general metaheuristics toolkit. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Prague, Czech Republic, 13–17 July 2019; pp. 1391–1398. [Google Scholar]
  21. Liefooghe, A.; Jourdan, L.; Talbi, E.G. A software framework based on a conceptual unified model for evolutionary multiobjective optimization: ParadisEO-MOEO. Eur. J. Oper. Res. 2011, 209, 104–112. [Google Scholar] [CrossRef]
  22. Alba, E.; Ferretti, E.; Molina, J.M. The influence of data implementation in the performance of evolutionary algorithms. In Proceedings of the International Conference on Computer Aided Systems Theory, Las Palmas de Gran Canaria, Spain, 12–16 February 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 764–771. [Google Scholar]
  23. García-Sánchez, P.; Ortega, J.; González, J.; Castillo, P.A.; Guervós, J.J.M. Distributed multi-objective evolutionary optimization using island-based selective operator application. Appl. Soft Comput. 2019, 85, 105757. [Google Scholar] [CrossRef]
  24. Aldana-Martín, J.F.; del Mar Roldán-García, M.; Nebro, A.J.; Aldana-Montes, J.F. MOODY: An ontology-driven framework for standardizing multi-objective evolutionary algorithms. Inf. Sci. 2024, 661, 120184. [Google Scholar] [CrossRef]
  25. Aquino-Brítez, S.; García-Sánchez, P.; Ortiz, A.; Aquino-Brítez, D. Energy Efficiency Evaluation of Frameworks for Algorithms in Time Series Forecasting. Eng. Proc. 2024, 68, 30. [Google Scholar] [CrossRef]
  26. Merelo-Guervós, J.J.; Blancas-Álvarez, I.; Castillo, P.A.; Romero, G.; García-Sánchez, P.; Rivas, V.M.; García-Valdez, M.; Hernández-Águila, A.; Román, M. Ranking programming languages for evolutionary algorithm operations. In Proceedings of the European Conference on the Applications of Evolutionary Computation, Amsterdam, The Netherlands, 19–21 April 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 689–704. [Google Scholar]
  27. Hansen, N.; Auger, A.; Finck, S.; Ros, R. Real-Parameter Black-Box Optimization Benchmarking 2010: Experimental Setup; Technical Report; INRIA: Paris, France, 2010. [Google Scholar]
  28. Garden, R.W.; Engelbrecht, A.P. Analysis and classification of optimisation benchmark functions and benchmark suites. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1641–1649. [Google Scholar]
  29. Sharma, P.; Raju, S. Metaheuristic optimization algorithms: A comprehensive overview and classification of benchmark test functions. Soft Comput. 2024, 28, 3123–3186. [Google Scholar] [CrossRef]
Figure 1. η metric (fitness/energy) for each experimental configuration and framework for the OneMax benchmark. The X-axis represents the values of population size and crossover probability, whereas the Y-axis denotes the η metric value (fitness/kWh).
Figure 2. η metric (fitness/energy) for each experimental configuration and framework for the Sphere benchmark. The X-axis represents the values of population size and crossover probability, whereas the Y-axis denotes the η metric value (fitness/kWh).
Figure 3. η metric (fitness/energy) for each experimental configuration and framework for the Rosenbrock benchmark. The X-axis represents the values of population size and crossover probability, whereas the Y-axis denotes the η metric value (fitness/kWh).
Figure 4. η metric (fitness/energy) for each experimental configuration and framework for the Schwefel benchmark. The X-axis represents the values of population size and crossover probability, whereas the Y-axis denotes the η metric value (fitness/kWh).
Figure 5. Average energy consumption versus fitness value for each framework. The X-axis denotes the average energy consumption in kWh and the Y-axis represents the fitness values.
Table 1. Benchmark description.

Function | Equation | Reason for Inclusion
OneMax | $f(\mathbf{x}) = \sum_{i=1}^{n} b_i$ | Basic pattern for binary EAs and a prime candidate to measure the efficiency of the implementation.
Sphere | $f(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$ | It isolates the energy cost of floating-point fitness functions [13].
Rosenbrock | $f(\mathbf{x}) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | It reveals the energy impact of the coupling between variables [11].
Schwefel | $f(\mathbf{x}) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin(\sqrt{|x_i|})$ | It stresses diversification and highlights consumption peaks [13].
Table 2. Average energy consumption in laptop platform (in kWh).

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 0.000254 | 0.000458 | 0.000348 | 0.000361 | 0.000355
ParadisEO | 0.000255 | 0.000254 | 0.000410 | 0.000405 | 0.000331
Inspyred | 0.000350 | 0.000336 | 0.000377 | 0.000392 | 0.000364
ECJ | 0.000409 | 0.000258 | 0.000412 | 0.000412 | 0.000373
Table 3. Average energy consumption in server platform (in kWh).

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 0.001820 | 0.001750 | 0.001840 | 0.001810 | 0.001804
ParadisEO | 0.001714 | 0.001752 | 0.001656 | 0.001764 | 0.001721
Inspyred | 0.001781 | 0.001743 | 0.001854 | 0.001945 | 0.001831
ECJ | 0.001705 | 0.001656 | 0.001756 | 0.001707 | 0.001706
Table 4. Energy consumption rate—server/laptop.

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 7.150 | 3.826 | 5.276 | 5.020 | 5.080
ParadisEO | 6.736 | 6.898 | 4.039 | 4.354 | 5.202
Inspyred | 5.089 | 5.186 | 4.917 | 4.958 | 5.033
ECJ | 4.170 | 6.419 | 4.264 | 4.144 | 4.576
Table 5. Total number of generations on the laptop platform.

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 1553.290 | 1287.256 | 785.533 | 272.756 | 974.708
ParadisEO | 6986.467 | 10,703.511 | 5484.167 | 11,625.878 | 8700.005
Inspyred | 1892.167 | 719.367 | 947.022 | 298.533 | 964.272
ECJ | 21,277.489 | 43,335.344 | 45,158.189 | 20,968.056 | 32,684.769
Table 6. Total number of generations on the server platform.

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 4250.611 | 3312.111 | 1842.289 | 682.744 | 2521.939
ParadisEO | 23,641.978 | 28,924.100 | 13,151.756 | 35,284.411 | 25,250.561
Inspyred | 4389.056 | 2280.467 | 2162.922 | 735.256 | 2391.925
ECJ | 86,430.144 | 113,179.400 | 112,902.356 | 52,295.800 | 91,251.925
Table 7. Generations reached rate—server/laptop.

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 2.737 | 2.573 | 2.346 | 2.503 | 2.588
ParadisEO | 3.384 | 2.702 | 2.398 | 3.035 | 2.902
Inspyred | 2.319 | 3.170 | 2.264 | 2.463 | 2.481
ECJ | 4.062 | 2.616 | 2.500 | 2.484 | 2.792
Table 8. Fitness reached in laptop platform.

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 0.702 | 0.856 | 0.876 | 0.689 | 0.756
ParadisEO | 0.724 | 0.784 | 0.993 | 0.666 | 0.797
Inspyred | 0.762 | 0.838 | 0.884 | 0.589 | 0.793
ECJ | 0.632 | 0.993 | 0.922 | 0.691 | 0.795
Table 9. Fitness reached in server platform.

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 0.747 | 0.893 | 0.891 | 0.598 | 0.782
ParadisEO | 0.740 | 0.797 | 0.895 | 0.702 | 0.808
Inspyred | 0.803 | 0.867 | 0.991 | 0.598 | 0.815
ECJ | 0.645 | 0.895 | 0.923 | 0.692 | 0.789
Table 10. Fitness reached rate—server/laptop.

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 1.064 | 1.044 | 1.015 | 1.014 | 1.034
ParadisEO | 1.022 | 1.016 | 1.002 | 1.023 | 1.015
Inspyred | 1.053 | 1.034 | 1.007 | 1.014 | 1.027
ECJ | 1.022 | 1.002 | 1.001 | 1.002 | 1.006
Table 11. Metric η and standard deviation in laptop platform.

Framework | Benchmark | Mean ± SD
DEAP | OneMax | 2782.618 ± 298.501
DEAP | Sphere | 1873.979 ± 261.300
DEAP | Rosenbrock | 2531.792 ± 326.729
DEAP | Schwefel | 1656.521 ± 199.491
ParadisEO | OneMax | 2846.125 ± 147.925
ParadisEO | Sphere | 3087.423 ± 112.255
ParadisEO | Rosenbrock | 2450.638 ± 122.079
ParadisEO | Schwefel | 1697.700 ± 58.067
Inspyred | OneMax | 2187.985 ± 436.273
Inspyred | Sphere | 2507.641 ± 279.936
Inspyred | Rosenbrock | 2615.694 ± 139.371
Inspyred | Schwefel | 1514.149 ± 131.291
ECJ | OneMax | 1545.622 ± 61.710
ECJ | Sphere | 3460.072 ± 196.769
ECJ | Rosenbrock | 2240.322 ± 117.889
ECJ | Schwefel | 1674.151 ± 338.117
Table 12. Metric η and standard deviation in server platform.

Framework | Benchmark | Mean ± SD
DEAP | OneMax | 411.214 ± 34.243
DEAP | Sphere | 509.279 ± 54.061
DEAP | Rosenbrock | 486.606 ± 65.570
DEAP | Schwefel | 331.276 ± 27.006
ParadisEO | OneMax | 431.777 ± 24.300
ParadisEO | Sphere | 455.437 ± 18.120
ParadisEO | Rosenbrock | 600.927 ± 7.307
ParadisEO | Schwefel | 397.902 ± 16.288
Inspyred | OneMax | 450.523 ± 67.406
Inspyred | Sphere | 497.604 ± 39.695
Inspyred | Rosenbrock | 534.549 ± 13.209
Inspyred | Schwefel | 309.171 ± 28.291
ECJ | OneMax | 379.023 ± 19.400
ECJ | Sphere | 540.659 ± 10.833
ECJ | Rosenbrock | 525.546 ± 23.252
ECJ | Schwefel | 405.645 ± 88.971
Table 13. Metric η rate—laptop/server.

Framework | OneMax | Sphere | Rosenbrock | Schwefel | Global
DEAP | 6.767 | 3.680 | 5.203 | 5.000 | 5.088
ParadisEO | 6.592 | 6.779 | 4.078 | 4.267 | 5.346
Inspyred | 4.857 | 5.039 | 4.893 | 4.897 | 4.925
ECJ | 4.078 | 6.400 | 4.263 | 4.127 | 4.819