1. Introduction
The third decade of the twenty-first century is characterized by the rapid development and global proliferation of information technologies, driving the continuous creation of tools for automating and optimizing processes across numerous domains of human activity. Depending on the domain and specific application, these processes may exhibit complex structures, which significantly complicates building automation or optimization tools of the required efficiency. For instance, many intelligent predictive and decision support systems based on machine learning employ optimization algorithms to solve parameter search problems, i.e., to find parameter values that achieve an extremum of the objective function. For complex models, such as deep neural networks, hyperparameter tuning can be non-trivial due to the high computational cost of the model, the high dimensionality of the search space, the nonlinearity and multimodality of the objective function, the stochasticity of the training process, constraints on the feasible domain, and other associated factors [1]. Given the combination of these factors, reaching an extremum of the objective function within reasonable time and with the required accuracy poses a significant challenge for many classical optimization algorithms, such as evolutionary algorithms, particularly in the case of multiobjective optimization, where extrema of multiple objective functions are sought simultaneously. However, an alternative approach based on simulating quantum computations offers the potential to substantially accelerate the discovery of high-quality solutions. This approach is known as «quantum-inspired».
Quantum-inspired computing enables the solution of standard problems on classical computing devices and systems by leveraging concepts and methods from quantum mechanics and quantum computation. According to the study [2], which investigates the practical potential of quantum-inspired algorithms for recommendation systems and for solving linear systems of equations, such algorithms exhibit a polylogarithmic dependence on data dimensionality up to a certain threshold. This provides an asymptotically exponential speedup compared to classical methods for problems involving low-rank matrices. However, their complexity is upper-bounded by a polynomial dependence, which remains significantly higher than that of true quantum computations.
The application of quantum-inspired computing to numerical and combinatorial single-objective optimization [3,4] has yielded algorithms that outperform classical counterparts in execution time, convergence speed, and solution accuracy. However, since quantum-inspired methods run on classical hardware, they cannot fully exploit genuine quantum advantages, such as superposition and entanglement, that enable exponential or polynomial speedups in true quantum algorithms like Grover’s or Shor’s. These speedups typically require structured problems or compact data representations allowing efficient use of quantum parallelism via components like quantum oracles. While quantum-inspired computing simulates selected quantum concepts on classical systems, thereby improving computational logic and performance, it remains fundamentally limited compared to actual quantum devices [5].
In existing quantum-inspired single-objective optimization algorithms [3,4,6,7,8], the qubit, characterized by the two basis states |0⟩ and |1⟩, serves as the core element. Solutions are encoded in superposition states, and their probability amplitudes are iteratively updated via unitary transformations, with the highest-amplitude state ultimately selected as the solution. While this approach yields strong performance in terms of accuracy, convergence speed, and execution time for low-dimensional problems, its effectiveness diminishes as dimensionality grows, often matching or even falling behind classical methods.
Another approach employs the qudit [9], a multilevel quantum system, as the core logical unit. In ref. [10], a qudit-based adaptation of the quantum-inspired genetic algorithm [11] was tested on benchmark functions of several dimensionalities, as well as on real-world problems in manufacturing scheduling and machine learning. Results showed consistent superiority over both classical methods and the original qubit-based version in terms of accuracy, convergence speed, execution time, and solution distribution density.
Despite promising results in single-objective settings, quantum-inspired optimization algorithms struggle in high-dimensional multiobjective problems. As shown in refs. [12,13], methods like quantum-inspired NSGA-II/III or particle swarm optimization suffer rapid performance degradation with increasing numbers of decision variables due to the «curse of dimensionality»: the exponentially growing search space overwhelms qubit-based encodings, causing premature convergence and a poor exploration-exploitation balance. Although QIEAs (Quantum-Inspired Evolutionary Algorithms) initially boost diversity via superposition, they often fail to maintain it or to navigate complex, non-convex Pareto fronts. Ref. [13] notes that both classical and quantum-inspired approaches lack efficiency in such scenarios, prompting interest in specialized frameworks like VQAs (Variational Quantum Algorithms), while ref. [12] highlights that QIEAs’ theoretical advantages are offset by high computational overhead and sensitivity to parameter tuning, limiting their real-world scalability.
Moreover, current hybrid quantum-classical architectures face significant practical limitations. As noted in ref. [14], integrating emerging quantum processors with existing classical IT infrastructure is complex, often requiring custom middleware and non-standard APIs that hinder interoperability. This fragmentation introduces latency, especially in iterative algorithms that repeatedly transfer data between classical and quantum subsystems, eroding potential quantum advantages. Additionally, these systems demand rare dual expertise in quantum information and classical distributed computing. Most frameworks are rigid, monolithic, or tightly bound to specific quantum backends, lacking the modularity and adaptability needed for evolving hardware or real-world dynamic optimization tasks.
These limitations reveal a critical research gap: existing quantum-inspired multiobjective algorithms lack a principled mechanism to sustain diversity, adaptively balance exploration and exploitation, and scale efficiently in high-dimensional dynamic environments, and they rarely leverage physically grounded, self-regulating dynamics to encode such adaptivity. To address this gap, the present study introduces a novel quantum-inspired algorithm for numerical multiobjective optimization that uniquely integrates multilevel qudit-based agents with physics-inspired dynamics from controlled thermonuclear fusion. By leveraging the qudit’s higher-dimensional state space and explicitly modeling particle interaction, energy release, and plasma cooling, the algorithm establishes a self-regulating mechanism that effectively navigates complex, high-dimensional Pareto landscapes. A hybrid quantum-classical variant is also presented, along with explicit quantum circuit designs for key operations—qudit initialization, phase encoding of objective functions, and Grover-based amplitude amplification. Rigorous testing on dynamic multiobjective benchmarks demonstrates consistent, statistically significant performance advantages over state-of-the-art methods.
2. Related Studies
Quantum-inspired multiobjective optimization is a topical research area due to its potential to solve global optimization problems characterized by nonlinear constraints, exponential complexity of the search space, and correlated objectives with ill-defined priorities—features that are particularly relevant in modern applications such as machine learning, logistics, energy systems, and bioinformatics. Unlike classical optimization methods, quantum-inspired approaches offer fundamentally new ways of information processing based on quantum phenomena such as superposition and entanglement, enabling more efficient exploration of the solution space. However, adapting these quantum phenomena to classical computing devices and systems is a complex and multifaceted challenge that requires careful consideration of numerous specific aspects of quantum mechanics. This has led to a wide range of research efforts exploring diverse methods for simulating quantum particles and novel methodologies for solving optimization problems.
In the context of quantum-inspired modeling for multiobjective optimization algorithms, ref. [15] proposes the DMQSSA (Decomposition-Based Quantum-Inspired Salp Swarm Algorithm), which combines quantum-inspired principles with the salp swarm algorithm [16]. DMQSSA employs a delta-potential well model [17] to enhance convergence, a decomposition-based approach for the crossover operator, and an intelligent strategy for maintaining an archive of non-dominated solutions, thereby ensuring population diversity and generating a well-distributed set of Pareto-optimal solutions. The QCCEA (Quantum-Inspired Competitive Coevolution Algorithm) [18], a quantum-inspired coevolutionary approach, employs a quantization technique based on the Gaussian distribution, quantum rotation gates applied to qubits on the Bloch sphere for solution diversification, and a «Hall of Fame» strategy [19] for selecting the best solutions. The study presented in ref. [20] describes a quantum-inspired optimization algorithm, MOQSOA (Multiobjective Quantum-Inspired Seagull Optimization Algorithm), which combines quantum principles with the natural behavior of seagulls. The algorithm employs quantum state encoding as a linear superposition of positive and deceptive states to represent solutions, an opposition-based learning approach [21] to enhance the initial population, a grid-based density ranking method [22] for maintaining an archive of non-dominated solutions, a correction mechanism to prevent premature convergence, and behavioral patterns of seagulls, such as migration and attacking, for searching for optimal solutions.
Further enriching the landscape of quantum-inspired optimization, recent studies have introduced novel algorithms that leverage quantum principles in distinct and innovative ways. One study [23] presents a pioneering application of the QAOA (Quantum Approximate Optimization Algorithm) to the IDP (Independent Domination Problem), a combinatorial optimization challenge with practical implications in network design. This work is significant as it marks the first application of QAOA to the IDP, demonstrating efficacy in finding optimal solutions with performance surpassing that of classical methods. The transformation of the IDP into the QUBO (Quadratic Unconstrained Binary Optimization) model, followed by its encoding into a Hamiltonian, enables solution via the hybrid quantum-classical QAOA framework. Robustness testing reveals a strong dependence of the algorithm’s performance on parameter tuning, such as the number of layers and penalty coefficients, offering valuable insights for future QAOA applications in similar discrete optimization problems.
Another recent study [24] proposes the QPPA (Quantum Predator-Prey Algorithm), a metaheuristic designed for real-parameter optimization. QPPA uniquely fuses fundamentals of quantum mechanics, specifically the delta-potential well model [17], with the dynamics of a predator-prey ecological system. Unlike many quantum-inspired algorithms focusing on qubit-based representations, QPPA employs the quantum model to mathematically derive movement equations for «predator» agents as they pursue «prey» agents in the search space. This quantum formulation governs the exploration phase, enabling effective escape from local optima. QPPA demonstrates superior performance and rapid convergence on benchmark functions, outperforming established algorithms such as PSO, GA, and GWO, particularly in high-dimensional settings. This underscores the versatility of quantum-inspired principles, illustrating successful adaptation not only to discrete problems via QAOA but also to continuous optimization through novel metaheuristic frameworks like QPPA.
The qudit has found application not only in quantum-inspired genetic algorithms for single-objective optimization [10], but also in solving the classical combinatorial optimization problem of graph coloring [25]. The authors of that study proposed using qudits as graph nodes, parameterized by multidimensional spherical coordinates. In ref. [25], two strategies are considered: one involving initialization of qudits in random states combined with qudit-based gradient descent to minimize the cost function, and an adapted strategy of local quantum annealing based on qudits, which implements an adiabatic transition from a simple initial function to a problem-specific target function.
The application of physical principles of nuclear reactions is reflected in the study presenting the classical optimization algorithm NRO (Nuclear Reaction Optimization) [26]. This algorithm simulates the behavior of atomic nuclei in a confined space, including the processes of nuclear fission and nuclear fusion. Specifically, the nuclear fission process decomposes complex solutions into simpler ones to explore new regions of the solution space, while the nuclear fusion process combines simple solutions to improve current results. The NRO algorithm operates in three stages. In the first stage, a large number of «small nuclei» are generated, representing initial solutions, which ensures exploration of the new solution space. Then, the «nuclei» interact with each other through nuclear fission and fusion processes. In the final stage, «stable nuclei» are formed, corresponding to optimal solutions. During testing, the NRO algorithm demonstrated superior capability in locating global optima in problems with numerous local minima, adaptability to various problem conditions, and applicability to both single-objective and multiobjective optimization.
In parallel with quantum-inspired developments, significant progress has been made in classical DMOEAs (Dynamic Multiobjective Evolutionary Algorithms), which specifically address the challenges posed by time-varying objectives and constraints. A recent study [27] proposes a DMOEA based on the classification of environmental change intensity and a collaborative prediction strategy to enhance adaptation in dynamic environments. The algorithm first refines the static optimization phase to improve the spatial resolution of individuals in the objective space, thereby increasing the sensitivity and accuracy of change detection. Upon detecting an environmental shift, it employs a mutual information-based metric to classify the intensity of the change as low, medium, or high, and accordingly adapts the velocity update rules of the particle swarm to avoid misleading evolutionary directions. Furthermore, a collaborative prediction mechanism is introduced to generate a forecasted population that closely approximates the true Pareto set in the new environment. This is followed by a dual individual screening strategy that intelligently combines promising candidates from both the predicted population and the pre-change archive to initialize the new generation. Experimental validation on 20 benchmark DMOPs (Dynamic Multiobjective Optimization Problems) demonstrates that this approach consistently outperforms state-of-the-art DMOEAs in terms of convergence, diversity, and responsiveness to environmental dynamics, highlighting the critical importance of explicitly modeling change intensity and leveraging predictive mechanisms in dynamic multiobjective settings.
Addressing the challenges posed by highly constrained optimization environments, ref. [28] introduces the MOACO-DCE (Multiobjective Ant Colony Optimization Algorithm Based on a Dynamic Constraint Evaluation Strategy). This method is specifically designed for HCMOPs (Highly Constrained Multiobjective Problems), such as vehicle routing and shop scheduling, where feasible regions are typically small, fragmented, and difficult to locate. To overcome the limitations of conventional evolutionary approaches in such landscapes, MOACO-DCE employs a dynamic constraint violation metric that quantifies the degree to which solutions violate problem constraints. Based on this metric, the population is adaptively partitioned into two subpopulations: one with an evolutionary advantage (relatively low constraint violation) and another with severe constraint violations. For the former, a dynamic transfer probability-based evolutionary strategy is applied to accelerate convergence toward high-quality feasible solutions. For the latter, a Gaussian variation operator is introduced to enhance diversity and facilitate escape from infeasible regions. Furthermore, the algorithm features a collaborative pheromone updating mechanism that enables information exchange between subpopulations, along with a constraint-aware pheromone update rule that explicitly incorporates constraint violation levels into the reinforcement process. Experimental comparisons demonstrate that MOACO-DCE achieves superior performance on HCMOP benchmarks, particularly in maintaining feasibility while preserving solution diversity, highlighting the effectiveness of dynamically balancing constraint handling and evolutionary search in severely restricted search spaces.
Further advancing the field of dynamic multiobjective optimization, a recent study [29] introduces MOEA/D-MDDM, a prediction-based evolutionary algorithm that leverages an MDDM (Multi-Directional Difference Model) to anticipate shifts in the Pareto-optimal solution set. Recognizing that rapid adaptation to environmental changes hinges on accurate initialization near the new Pareto-optimal front, the authors propose a dedicated Pareto-optimal solution set estimation strategy that generates multiple candidate populations based on historical trajectory data. These estimates are then fused through the MDDM to produce a refined prediction of the next Pareto-optimal solution set location, which guides the re-initialization of the population. To enhance robustness across diverse problem landscapes, the algorithm incorporates an adaptive crossover-rate mechanism that dynamically adjusts genetic diversity in response to the geometric structure of the evolving Pareto-optimal solution set, which is particularly effective for problems with single-modality characteristics and continuous manifolds. Comprehensive experiments on 19 benchmark DMOPs show that MOEA/D-MDDM outperforms six state-of-the-art DMOEAs in tracking accuracy and convergence stability, underscoring the value of combining directional prediction with adaptive variation operators in dynamic environments.
3. Materials and Methods
3.1. The Process of Controlled Thermonuclear Fusion
Controlled thermonuclear fusion is a process of producing heavy atomic nuclei through the fusion of lighter nuclei, aimed at energy generation. Unlike the synthesis of heavy atomic nuclei in thermonuclear explosive devices [30], this process is controlled and sustained under engineered conditions.
Atomic nuclei consist of two types of nucleons, protons and neutrons, held together by the strong nuclear force. The binding energy of each nucleon to the rest of the nucleus depends on the total number of nucleons in the nucleus. This dependence is quantitatively described by the semi-empirical mass formula, which expresses the total binding energy as a sum of volume, surface, Coulomb, asymmetry, and pairing terms:

E_b = a_V·A − a_S·A^(2/3) − a_C·Z(Z − 1)/A^(1/3) − a_A·(A − 2Z)²/A + δ(A, Z),   (1)

where E_b is the nuclear binding energy; A is the mass number (total number of nucleons); Z is the number of protons; a_V, a_S, a_C, and a_A are the coefficients of the volume, surface, Coulomb, and asymmetry energies (according to the liquid drop model [31]); and δ(A, Z) is the pairing correction term.
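As a concrete illustration of Eq. (1), the liquid-drop terms can be evaluated numerically. The coefficient values below are commonly quoted approximations (in MeV) and are an assumption of this sketch, not values taken from this paper:

```python
import math

# Liquid-drop coefficients in MeV (commonly quoted approximate values;
# the exact numbers depend on the fit and are an assumption here).
A_V, A_S, A_C, A_A, A_P = 15.8, 18.3, 0.714, 23.2, 12.0

def binding_energy(A: int, Z: int) -> float:
    """Semi-empirical binding energy E_b (MeV), Eq. (1), for mass
    number A and proton number Z."""
    delta = 0.0
    if A % 2 == 0:  # pairing: + for even-even, - for odd-odd nuclei
        delta = A_P / math.sqrt(A) if Z % 2 == 0 else -A_P / math.sqrt(A)
    return (A_V * A
            - A_S * A ** (2 / 3)
            - A_C * Z * (Z - 1) / A ** (1 / 3)
            - A_A * (A - 2 * Z) ** 2 / A
            + delta)

# Binding energy per nucleon peaks near iron, which is why fusing
# light nuclei (and splitting heavy ones) releases energy.
per_nucleon_he4 = binding_energy(4, 2) / 4
per_nucleon_fe56 = binding_energy(56, 26) / 56
```

The comparison reproduces the trend described above: the per-nucleon binding energy of a mid-mass nucleus such as Fe-56 exceeds that of a light nucleus such as He-4.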
For light nuclei, the binding energy per nucleon increases with the number of nucleons, whereas for heavy nuclei, it decreases. Adding nucleons to light nuclei or removing them from heavy nuclei results in a difference in binding energy. This energy difference is released as net energy, manifesting as the difference between the energy input required to initiate the reaction and the kinetic energy of the emitted particles.
Since protons in a nucleus carry a positive electric charge, nuclei must possess significant kinetic energy, sufficient to overcome the Coulomb barrier, in order to approach each other closely enough for nuclear forces to become effective. The Coulomb potential between two nuclei with charges Z_1·e and Z_2·e is given by:

U_C(r) = Z_1·Z_2·e² / (4πε_0·r),   (2)

where U_C(r) is the Coulomb repulsion potential at the distance r between the nuclei; Z_1 and Z_2 are the atomic numbers of the colliding nuclei; e is the elementary charge; and ε_0 is the vacuum permittivity.
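Eq. (2) is easy to check numerically. The snippet below estimates the barrier height for two protons at a separation of about 1 fm; the choice of separation is illustrative:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge e, C
EPS0 = 8.8541878128e-12     # vacuum permittivity eps_0, F/m

def coulomb_potential(z1: int, z2: int, r: float) -> float:
    """Coulomb repulsion energy U_C(r) in joules, Eq. (2), for atomic
    numbers z1, z2 at separation r in meters."""
    return z1 * z2 * E_CHARGE ** 2 / (4 * math.pi * EPS0 * r)

# Barrier for two protons at ~1 fm (roughly the range of the strong
# force): on the order of 1 MeV, which is why quantum tunneling and
# extreme temperatures are needed to initiate fusion.
barrier_keV = coulomb_potential(1, 1, 1e-15) / E_CHARGE / 1e3
```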
Nuclear fusion reactions can occur with low probability even under thermodynamically non-equilibrium conditions, for example, by accelerating nuclei of one or more reaction components to high velocities and directing them onto a target containing nuclei of another component [32]. At low energies, fusion is made possible by the quantum tunneling effect. Under conditions of extremely high matter density, picoscale nuclear reactions may occur, driven by zero-point oscillations of nuclei at the lattice points of a crystalline structure. Nevertheless, substantial energy release is achieved primarily through thermonuclear reactions occurring in a medium heated to extremely high temperatures, where overcoming the Coulomb barrier becomes a collective, large-scale process (with matter in the plasma state). For successful thermonuclear fusion, the high-temperature plasma must be confined for a sufficient duration to ensure effective reaction rates. This requirement is quantitatively expressed by the Lawson criterion [33], which defines the minimum product of plasma density, confinement time, and temperature necessary to achieve net energy gain:

n·τ_E·T ≥ (nτT)_min,   (3)

where n is the plasma density; τ_E is the energy confinement time; T is the plasma temperature; and (nτT)_min is the threshold value (for the deuterium-tritium reaction, (nτT)_min ≈ 3 × 10²¹ keV·s/m³, corresponding to the minimum triple product required for net energy gain).
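A minimal sketch of the Lawson check in Eq. (3); the D-T threshold of roughly 3 × 10²¹ keV·s/m³ and the sample plasma parameters below are illustrative assumptions:

```python
# Lawson triple-product check, Eq. (3): n * tau_E * T >= threshold.
# The D-T threshold of ~3e21 keV*s/m^3 is a commonly cited figure.
DT_THRESHOLD = 3.0e21  # keV * s / m^3

def satisfies_lawson(n: float, tau_e: float, t_keV: float,
                     threshold: float = DT_THRESHOLD) -> bool:
    """True if plasma density n (m^-3), energy confinement time
    tau_e (s), and temperature t_keV (keV) reach net energy gain."""
    return n * tau_e * t_keV >= threshold

# Representative tokamak-like parameters (illustrative values only).
ok = satisfies_lawson(n=1e20, tau_e=3.0, t_keV=15.0)   # 4.5e21 -> gain
bad = satisfies_lawson(n=1e20, tau_e=0.5, t_keV=10.0)  # 5.0e20 -> no gain
```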
In general, the thermonuclear fusion process can be described as follows: at sufficiently high temperatures, light atomic nuclei approach each other to distances where a strong but short-range interaction becomes significant. This interaction overcomes the Coulomb barrier between the positively charged nuclei, leading to their fusion and the formation of a new, heavier nucleus. During this process, part of the nucleon mass is converted into binding energy, and a large amount of energy is released due to the strong nuclear force.
In practice, for achieving controlled thermonuclear fusion, fusion reactions that occur at moderate temperatures and exhibit the highest cross-section values, which characterize the probability of particle reactions upon collision, are considered. Particular interest is focused on the reaction between the nuclei of the heavy hydrogen isotopes deuterium and tritium. Studies [34,35] show that initiating the fusion reaction of a deuterium-tritium mixture (Figure 1) requires less energy than the amount released during the reaction. This net energy gain is explicitly quantified by the deuterium-tritium fusion reaction:

D + T → ⁴He (3.5 MeV) + n (14.1 MeV),   (4)

which releases a total of 17.6 MeV per reaction.
The most promising systems for achieving controlled thermonuclear fusion are those operating in steady-state or quasi-steady-state regimes. Among such systems are magnetic traps, which confine high-temperature plasma using a magnetic field. This field restricts the motion of charged particles, providing a magnetic insulation effect. The most widely used type of magnetic trap is the tokamak, a toroidal system in which the magnetic configuration is formed by external coils and an electric current flowing through the plasma [36]. Although tokamaks theoretically allow for indefinite confinement of individual charged particles, particle collisions and the development of plasma turbulence lead to plasma losses.
In the context of a quantum-inspired multiobjective optimization algorithm, the concept of controlled thermonuclear fusion is proposed to be utilized in terms of the following aspects:
- Each algorithm agent represents a particle in a state corresponding to a solution of the optimization problem. The ensemble of all agents forms a unified system, a plasma.
- Each agent possesses a certain energy; lower energy corresponds to a better solution.
- During algorithm execution, energy is released through interactions between agents. This interaction is modeled as a process of information fusion or information exchange, which may improve solution quality.
- The agent system has a temperature that controls the intensity of interactions. The initial temperature sets the initial energy level of the system.
- System cooling is modeled as a gradual reduction in temperature, allowing the system to stabilize and converge toward optimal solutions. At high temperature, the system actively explores the solution space. At low temperature, solutions «condense» around local or global optima.
3.2. Qudit as a Multilevel Quantum Agent
A qudit [9] is a multilevel unit of quantum information that, unlike a conventional qubit, can exist in a discrete set of states of dimensionality d > 2. In general, a qudit can be expressed as:

|ψ⟩ = Σ_{k=0}^{d−1} α_k|k⟩,   (5)

where |ψ⟩ is the state of a quantum system existing simultaneously in more than two basis states; |k⟩ is an orthonormal basis vector; α_k are complex probability amplitudes such that Σ_{k=0}^{d−1} |α_k|² = 1; and k = 0, 1, …, d − 1.
To simplify the treatment of the qudit as a physical system, its description in terms of the density matrix ρ is proposed [37]. In general, the density matrix is an operator that characterizes the quantum state of a physical system, since any physical observable can be represented as the expectation value of a corresponding Hermitian operator. If the quantum system is in a pure state described by a single wave function, the density matrix takes the following form:

ρ = |ψ⟩⟨ψ|,   (6)

where |ψ⟩ is the wave function.
Substituting (5) into (6) yields:

ρ = Σ_{j=0}^{d−1} Σ_{k=0}^{d−1} α_j·α_k* |j⟩⟨k|,   (7)

where α_k* denotes the complex conjugate of the amplitude α_k; j, k = 0, 1, …, d − 1; and ρ_jk = α_j·α_k* is the element of the density matrix corresponding to the transition between the states |j⟩ and |k⟩.
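The construction ρ = |ψ⟩⟨ψ| of Eqs. (5)-(7) is straightforward to verify numerically. A small sketch using NumPy, with arbitrary sample amplitudes:

```python
import numpy as np

def density_matrix(amplitudes) -> np.ndarray:
    """Pure-state density matrix rho = |psi><psi| (Eqs. (6)-(7)):
    rho[j, k] = alpha_j * conj(alpha_k)."""
    psi = np.asarray(amplitudes, dtype=complex)
    psi = psi / np.linalg.norm(psi)       # enforce sum |alpha_k|^2 = 1
    return np.outer(psi, psi.conj())

# A d = 4 qudit in an arbitrary superposition.
rho = density_matrix([1 + 1j, 0.5, 0.0, 2.0])
trace_ok = np.isclose(np.trace(rho).real, 1.0)   # Tr(rho) = 1
hermitian_ok = np.allclose(rho, rho.conj().T)    # rho = rho^dagger
pure_ok = np.allclose(rho @ rho, rho)            # rho^2 = rho (pure state)
```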
Since the interpretation of states with dimensionality d > 2 using the Bloch sphere is impossible, and Formulation (7) does not allow for an explicit analysis of the system’s coherence and dynamics, an alternative approach based on Heisenberg-Weyl operators [38] is proposed. These operators are defined as unitary transformations acting on a d-dimensional Hilbert space H_d:

D_{l,m} = Σ_{k=0}^{d−1} ω^{km} |(k + l) mod d⟩⟨k|,  l, m = 0, 1, …, d − 1,   (8)

where ω = e^{2πi/d}; |(k + l) mod d⟩ is a cyclically shifted basis vector (modulo d); ω^{km} is a phase factor; for every pair (l, m) the property D_{l,m}·D_{l,m}† = I holds, with I being the identity matrix and D_{l,m}† the Hermitian conjugate of D_{l,m}. Moreover, Tr(D_{l,m}·D_{l′,m′}†) = d·δ_{ll′}·δ_{mm′}, where δ_{ll′} and δ_{mm′} are Kronecker delta symbols.
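The operators of Eq. (8) and their unitarity and trace-orthogonality properties can be verified directly. A sketch for d = 3:

```python
import numpy as np

def hw_operator(d: int, l: int, m: int) -> np.ndarray:
    """Heisenberg-Weyl operator of Eq. (8):
    D_{l,m} = sum_k omega^{k*m} |(k+l) mod d><k|, omega = exp(2*pi*i/d)."""
    omega = np.exp(2j * np.pi / d)
    D = np.zeros((d, d), dtype=complex)
    for k in range(d):
        D[(k + l) % d, k] = omega ** (k * m)
    return D

d = 3
D = hw_operator(d, 1, 2)
# Unitarity: D D^dagger = I.
unitary_ok = np.allclose(D @ D.conj().T, np.eye(d))
# Trace orthogonality: Tr(D_{l,m} D_{l',m'}^dagger) = d * delta * delta.
self_tr = np.trace(hw_operator(d, 1, 2) @ hw_operator(d, 1, 2).conj().T)
cross_tr = np.trace(hw_operator(d, 1, 2) @ hw_operator(d, 0, 1).conj().T)
```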
The expansion of a bounded density matrix operator ρ in H_d in the Heisenberg-Weyl operator basis takes the following form:

ρ = (1/d) Σ_{l=0}^{d−1} Σ_{m=0}^{d−1} c_{l,m}·D_{l,m},   (9)

where c_{l,m} are the expansion coefficients, determined based on the fact that the Heisenberg-Weyl operators form a complete orthogonal basis in the d²-dimensional operator space:

c_{l,m} = Tr(ρ·D_{l,m}†).   (10)

The Hermitian conjugate of the Heisenberg-Weyl operators D_{l,m}† can be expressed, taking into account (8), as:

D_{l,m}† = Σ_{k=0}^{d−1} ω^{−km} |k⟩⟨(k + l) mod d|.   (11)
Then, using the linearity property of the trace and substituting (11) into (10), we obtain:

c_{l,m} = Σ_{k=0}^{d−1} ω^{−km} Tr(ρ |k⟩⟨(k + l) mod d|).   (12)

The trace of the product of operators can be written in terms of matrix elements as Tr(ρ |k⟩⟨j|) = ⟨j|ρ|k⟩ = ρ_jk. Re-defining the index j to align with the state ⟨(k + l) mod d| by substituting j = (k + l) mod d leads to the transformation Tr(ρ |k⟩⟨(k + l) mod d|) = ρ_{(k+l) mod d, k}. As a result, the expression for the expansion coefficients c_{l,m} reduces to the following equation:

c_{l,m} = Σ_{k=0}^{d−1} ω^{−km} ρ_{(k+l) mod d, k}.   (13)
Substituting the expansion coefficients (13) and the operators (8) into (9) results in the final density matrix ρ taking the form:

ρ = Σ_{j=0}^{d−1} Σ_{k=0}^{d−1} ρ_jk |j⟩⟨k|,   (14)

where (1/d) Σ_{m=0}^{d−1} ω^{(k−k′)m} = δ_{kk′}, since the exponential term completes a full cycle when k ≠ k′, and ρ_jk denotes the elements of the density matrix.
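The round trip of Eqs. (9)-(14), computing the coefficients c_{l,m} and summing the expansion back, can be verified numerically. A sketch for a random pure state with d = 4:

```python
import numpy as np

def hw(d, l, m):
    """Heisenberg-Weyl operator D_{l,m}, Eq. (8)."""
    omega = np.exp(2j * np.pi / d)
    D = np.zeros((d, d), dtype=complex)
    for k in range(d):
        D[(k + l) % d, k] = omega ** (k * m)
    return D

d = 4
rng = np.random.default_rng(0)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Coefficients c_{l,m} = Tr(rho D_{l,m}^dagger), Eqs. (10)/(13), then
# reconstruction rho = (1/d) sum_{l,m} c_{l,m} D_{l,m}, Eqs. (9)/(14).
recon = np.zeros((d, d), dtype=complex)
for l in range(d):
    for m in range(d):
        c_lm = np.trace(rho @ hw(d, l, m).conj().T)
        recon += c_lm * hw(d, l, m)
recon /= d
reconstruction_ok = np.allclose(recon, rho)
```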
Since, in the context of a quantum-inspired multiobjective optimization algorithm, a qudit represents an agent, and a collection of such agents forms a unified system simulating a plasma, there arises a need to describe the interaction characteristics of agents in terms of the concept of controlled thermonuclear fusion.
Let ρ_i be the density matrix of the i-th qudit, satisfying the conditions ρ_i = ρ_i†, Tr(ρ_i) = 1, and ρ_i ⩾ 0. The number of qudits in the system is N. Then, the probability P_{i,i+1} of interaction between the adjacent qudits i and i + 1 is determined by their energies:

E_i = Tr(ρ_i·H),  E_{i+1} = Tr(ρ_{i+1}·H),   (15)

where E_i and E_{i+1} denote the energies of the i-th and (i + 1)-th qudits, respectively, with H being the Hamiltonian.
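The qudit energies entering the interaction probability can be sketched as expectation values E_i = Tr(ρ_i·H). The random Hermitian matrix below is an illustrative stand-in for the algorithm’s Hamiltonian, which is not specified at this point in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_qudits = 3, 4

# Random Hermitian matrix H (illustrative stand-in for the
# problem-specific Hamiltonian used by the algorithm).
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2

def random_density_matrix(dim: int) -> np.ndarray:
    """Random pure-state density matrix of dimension dim."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

rhos = [random_density_matrix(d) for _ in range(n_qudits)]

# Qudit energies E_i = Tr(rho_i H); real-valued because both rho_i
# and H are Hermitian.
energies = [np.trace(r @ H).real for r in rhos]
imag_parts = [abs(np.trace(r @ H).imag) for r in rhos]
```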
If the condition r < P_{i,i+1} is satisfied, where r ∈ [0, 1) is a random number modeling the stochasticity of the quantum process, then the new density matrices are calculated as a linear combination of the current ones:

ρ_i′ = λ·ρ_i + (1 − λ)·ρ_{i+1},  ρ_{i+1}′ = λ·ρ_{i+1} + (1 − λ)·ρ_i,   (16)

where λ is a coefficient with a random value drawn uniformly from the interval [0, 1], determining the contribution of each density matrix to the new state. This allows the creation of new states described by the density matrices ρ_i′ and ρ_{i+1}′, which represent a «mixture» of the original states. Specifically, the state described by ρ_i′ is formed as a weighted sum of the states represented by ρ_i and ρ_{i+1}, where the weight coefficient for ρ_i is λ and that for ρ_{i+1} is 1 − λ. Similarly, the state described by ρ_{i+1}′ is formed as the analogous weighted combination of ρ_{i+1} and ρ_i with the same weights.
The new density matrices ρ_i′ and ρ_{i+1}′ preserve the Hermitian property, since a linear combination of Hermitian matrices with real coefficients is also Hermitian:

(ρ_i′)† = λ·ρ_i† + (1 − λ)·ρ_{i+1}† = λ·ρ_i + (1 − λ)·ρ_{i+1} = ρ_i′.   (17)
After updating, the density matrices ρ_i′ and ρ_{i+1}′ are normalized to preserve the property Tr(ρ) = 1:

ρ_i′ ← ρ_i′ / Tr(ρ_i′),  ρ_{i+1}′ ← ρ_{i+1}′ / Tr(ρ_{i+1}′).   (18)
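A minimal sketch of the interaction update, assuming the mirrored-weight form for the second offspring state (one reading of the description above), followed by trace renormalization:

```python
import numpy as np

rng = np.random.default_rng(7)

def mix_states(rho_a: np.ndarray, rho_b: np.ndarray, lam: float):
    """Interaction update: each new state is a convex mixture of the
    two parents (mirrored weights), then renormalized to unit trace."""
    new_a = lam * rho_a + (1 - lam) * rho_b
    new_b = lam * rho_b + (1 - lam) * rho_a
    return new_a / np.trace(new_a).real, new_b / np.trace(new_b).real

def pure_rho(dim: int) -> np.ndarray:
    """Random pure-state density matrix."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

rho_i, rho_j = pure_rho(3), pure_rho(3)
lam = rng.uniform()                  # random mixing weight in [0, 1]
new_i, new_j = mix_states(rho_i, rho_j, lam)
hermitian_ok = np.allclose(new_i, new_i.conj().T)  # Hermiticity kept
trace_ok = np.isclose(np.trace(new_i).real, 1.0)   # unit trace kept
```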
When agents in the system interact, energy is released. This process is simulated through local optimization of solutions from the Pareto front obtained by the quantum-inspired algorithm. Here, the Pareto front represents a set of non-dominated solution vectors {x_p*}, where each non-dominated solution vector x_p* with p = 1, 2, …, P corresponds to a solution vector x_s ∈ R^D (D is the total number of variables) from the set of all solutions X = {x_1, x_2, …, x_S} with s = 1, 2, …, S (S is the total number of solutions). Each non-dominated solution vector x_p* is characterized by a vector of objective function values F(x_p*) = (f_1(x_p*), …, f_M(x_p*)) (M is the number of objectives) and belongs to the feasible region defined by the constraints g_c(x) ≤ 0 (g_c is an element of the set of constraints, c = 1, 2, …, C, with C the total number of constraints). For each solution vector, the non-domination condition is checked. For example, in a minimization problem, for two solution vectors x_1, x_2 ∈ X, we say that x_1 dominates x_2 if the following conditions are satisfied:

f_m(x_1) ≤ f_m(x_2) for all m = 1, 2, …, M, and f_m(x_1) < f_m(x_2) for at least one m.   (19)

Similarly, in a maximization problem, for two solution vectors x_1, x_2 ∈ X, we say that x_1 dominates x_2 if the following conditions are satisfied:

f_m(x_1) ≥ f_m(x_2) for all m = 1, 2, …, M, and f_m(x_1) > f_m(x_2) for at least one m.   (20)
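The dominance tests of Eqs. (19)-(20) translate directly into code; a small helper covering both minimization and maximization:

```python
def dominates(f_a, f_b, minimize: bool = True) -> bool:
    """Pareto dominance, Eqs. (19)-(20): f_a dominates f_b if it is no
    worse in every objective and strictly better in at least one."""
    if minimize:
        no_worse = all(a <= b for a, b in zip(f_a, f_b))
        strictly = any(a < b for a, b in zip(f_a, f_b))
    else:
        no_worse = all(a >= b for a, b in zip(f_a, f_b))
        strictly = any(a > b for a, b in zip(f_a, f_b))
    return no_worse and strictly

# Minimization: (1, 2) dominates (2, 2); (1, 3) and (3, 1) are
# mutually non-dominated (incomparable), so both stay on the front.
d1 = dominates((1, 2), (2, 2))
d2 = dominates((1, 3), (3, 1))
incomparable = not d2 and not dominates((3, 1), (1, 3))
```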
During the physical process of thermonuclear fusion, plasma inevitably cools down due to energy losses through radiation, absorption by the reactor walls, plasma instabilities, and other factors [39]. By analogy with physical plasma, cooling in the agent system is modeled as a gradual reduction in the parameter T, which controls the system temperature. This leads to stabilization of the probabilities of quantum states, and the system evolves toward a state of minimum energy.

During the cooling process, for each qudit the density matrix ρ_i (i = 1, 2, …, N, with N being the number of qudits in the system) is rescaled using the current temperature T, where T > 0, yielding an intermediate matrix ρ̃_i.
The resulting matrix ρ̃_i is normalized such that its trace equals 1:

ρ_i ← ρ̃_i / Tr(ρ̃_i).   (21)
After updating all density matrices, the system temperature T is decreased according to the formula:

T_new = α·T_cur,   (22)

where T_cur is the current temperature; T_new is the new temperature; and α ∈ (0, 1) is the cooling coefficient.
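The geometric cooling rule can be sketched as follows; the initial temperature, cooling coefficient, and step count are illustrative choices:

```python
def cooling_schedule(t0: float, alpha: float, steps: int):
    """Geometric cooling, T_new = alpha * T_cur with alpha in (0, 1);
    yields a temperature sequence that decays toward zero."""
    t = t0
    for _ in range(steps):
        yield t
        t *= alpha

temps = list(cooling_schedule(t0=100.0, alpha=0.9, steps=20))
monotone = all(a > b for a, b in zip(temps, temps[1:]))
final_temp = temps[-1]  # 100 * 0.9**19, roughly 13.5
```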
3.3. Description of the Algorithm
In general, the structure of the quantum-inspired multiobjective optimization algorithm based on thermonuclear fusion (hereinafter referred to as TF-QIMOA) consists of a set of interconnected components: initialization of density matrices of quantum states, evaluation of solution fitness, simulation of quantum interactions between qudits as particles, local optimization of solutions from the Pareto front, and simulation of the cooling process of the qudit system.
The execution of the algorithm for initializing density matrices of quantum states (Algorithm 1) begins with the construction of the generalized Hadamard gate matrix H_d with elements H_d[j, k] = ω^{jk}/√d, ω = e^{2πi/d}, which is required to create a uniform superposition of all basis states of the qudit in d-dimensional space [10]. Next, for each qudit q in the system of N qudits, the initial state is set to |ψ_0⟩ = |0⟩. The generalized Hadamard gate is applied to the initial state, transforming it into the superposition state |ψ⟩ = H_d|ψ_0⟩. Then, a random noise term ε is added to the superposition state |ψ⟩ to simulate quantum fluctuations in the system, where ε is defined as a noise vector generated from a normal distribution with zero mean and variance σ². The resulting state is normalized to obtain the state |ψ̃⟩ = |ψ⟩/‖|ψ⟩‖, where ‖·‖ denotes the Euclidean norm. Afterwards, the density matrix ρ_q is constructed for the current qudit q, computed according to (14) as ρ_q = Σ_j Σ_k α_j·α_k* |j⟩⟨k|, taking into account that ρ_jk = α_j·α_k*, where α_j is the probability amplitude of the state |j⟩ and α_k* is the complex conjugate of the probability amplitude of the state |k⟩. The resulting density matrix ρ_q is added to the list R. After the list R is populated with density matrices for all N qudits, it is returned as the output of Algorithm 1.
| Algorithm 1. Initialization of Density Matrices |
| Input: | ▷ dimension of the qudit (number of basic states). |
| | ▷ number of qudits in the system. |
| | ▷ list for storing density matrices. |
| 1. | ▷ define the generalized Hadamard gate matrix |
| 2. | . |
| 3. |
do: |
| 4. | |
| 5. | ▷ apply generalized Hadamard gate to initial state. |
| 6. | to the superposition state |
| 7. | . |
| 8. | ▷ normalize the state. |
| 9. | ▷ construct the density matrix for the |
| 10. | current qudit. |
| 11. | . |
| 12. | End loop. |
| 13. | ▷ list of density matrices. |
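Algorithm 1 can be summarized by the following Python sketch, where the generalized Hadamard gate is taken to be the unitary discrete-Fourier-transform matrix (a standard qudit generalization; an assumption here) and the noise standard deviation `sigma` is an illustrative parameter:

```python
import numpy as np

def init_density_matrices(d, N, sigma=0.05, seed=None):
    """Sketch of Algorithm 1: build one density matrix per qudit.

    The generalized Hadamard gate is taken here to be the unitary DFT
    matrix F[j, k] = exp(2*pi*1j*j*k/d) / sqrt(d); sigma (noise standard
    deviation) and seed are illustrative parameters.
    """
    rng = np.random.default_rng(seed)
    j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    H = np.exp(2j * np.pi * j * k / d) / np.sqrt(d)
    rhos = []
    for _ in range(N):
        psi = H @ np.eye(d)[0]                  # uniform superposition of |0>
        psi = psi + rng.normal(0.0, sigma, d)   # additive noise (fluctuations)
        psi = psi / np.linalg.norm(psi)         # renormalize the state
        rhos.append(np.outer(psi, psi.conj()))  # rho = |psi><psi|
    return rhos
```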
Fitness evaluation (Algorithm 2) begins with performing the Cartesian product of the set of qudit indices
and the set of state sample indices
, where
is the number of qudits in the system,
is the number of state samples per qudit. From the resulting set
after computing the Cartesian product, the qudit index
is iteratively selected and passed to the function
for mapping the qudit to the feasible region. This function can be written as:
where
is the lower bound of the variable’s value range,
is the upper bound of the variable’s value range;
,
is the number of variables.
Based on the mapping
, a random solution vector from the feasible region
is generated as
, where
denotes the uniform distribution. If the condition of satisfying all constraints is met, i.e.,
(
is the indicator function,
), then the values of all objective functions
are calculated, and the set of non-dominated solution vectors
is updated as:
where
is the vector of the current candidate solution;
is a solution vector from the set of already found non-dominated solutions;
is the index of the current solution;
is the index of a solution in the non-dominated set; the operator
denotes the dominance relation.
Then, the new solution vector is added to the list of all solution vectors . At the end of Algorithm 2, the set of non-dominated solution vectors and the list of all solution vectors are returned.
| Algorithm 2. Fitness Evaluation |
| Input: | ▷ list of all solution vectors. |
| | ▷ set of non-dominated solution vectors. |
| | ▷ number of qudits in the system. |
| | ▷ number of samples per qudit. |
| 1. | ▷ Cartesian product of the set of |
| 2. | qudit indices and the set of sample indices. |
| 3. |
do: |
| 4. | ▷ map qudit to feasible region. |
| 5. | . |
| 6. |
then: |
| 7. | ▷ evaluate all objective functions. |
| 8. | ▷ update the |
| 9. | set of non-dominated solutions. |
| 10. | ▷ add new solution vector to the list of all |
| 11. | solution vectors. |
| 12. | End loop. |
| 13. | ▷ updated set of non-dominated solution vectors and list of |
| 14. | all solution vectors. |
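The update of the non-dominated set in Algorithm 2 can be sketched as follows; representing the archive as a list of `(x, f)` pairs is an illustrative choice, not the paper's data structure:

```python
import numpy as np

def update_archive(archive, candidate, objectives, minimize=True):
    """Sketch of the non-dominated-set update in Algorithm 2.

    archive: list of (x, f) pairs; candidate: new solution vector;
    objectives: list of callables f_k. Archive members dominated by the
    candidate are removed; the candidate is added only if no archive
    member dominates it. All names are illustrative.
    """
    f_new = np.array([f(candidate) for f in objectives], dtype=float)

    def dom(a, b):  # does objective vector a dominate b?
        if minimize:
            return np.all(a <= b) and np.any(a < b)
        return np.all(a >= b) and np.any(a > b)

    if any(dom(f_old, f_new) for _, f_old in archive):
        return archive                                   # candidate is dominated
    kept = [(x, f) for x, f in archive if not dom(f_new, f)]
    kept.append((candidate, f_new))
    return kept
```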
Algorithm 3 simulates quantum interaction between qudits in the system. The algorithm takes as input the list of density matrices . In a loop, the current qudit is iteratively selected under the condition , and the interaction probability between and is computed using Formula (15). If the condition , where , is satisfied, the following steps are performed sequentially: calculation of the new density matrices and according to Equation (16), normalization of the density matrices and using Equation (18), update of the original density matrices and . Upon completion of the algorithm, the updated list of density matrices is returned.
| Algorithm 3. Quantum Interaction |
| Input: | . |
| | ▷ number of qudits in the system. |
| 1. |
do: |
| 2. | ▷ calculate the probability of interaction between |
| 3. | . |
| 4. |
then: |
| 5. | . |
| 6. | . |
| 7. | . |
| 8. | . |
| 9. | . |
| 10. | . |
| 11. | End loop. |
| 12. | ▷ updated list of density matrices. |
Algorithm 4 implements local optimization of solutions from the Pareto front. In the first step, the weight vector
is initialized. If
, then the elements
(
) of the vector
are set as
. Then, an empty set
is populated with discrete grids generated as:
where
is the range of variable values,
and
is
the number of variables.
After that, the Cartesian product of all discrete grids is calculated as:
where
.
The result is converted into a list of grid point combination vectors:
where
,
.
The second step of the algorithm begins with the creation of a list
, into which the vectors of diagonal elements of the density matrices
are added. Each diagonal element represents the probability of the quantum system being in the corresponding basis state. Next, a unified vector of averaged probability values is created:
where
,
,
.
Then, a list
is generated, containing tuples of solutions and the corresponding products of the scalar function value
and a randomly selected probability
for each feasible point
. Here, the value of the function
is calculated as:
where
;
;
is the set of objective functions;
is
the number of objective functions.
If the list remains empty, the values of all objective functions are computed, and a tuple containing the solution vector and the corresponding objective function value vector is returned.
At the end of the second step, the list is sorted according to an order determined by the optimization type : if , the sorting is performed in ascending order as , if , in descending order as , where denotes the sorting permutation. The first solutions from the resulting sorted list are then selected and placed into the list .
In the third step of the algorithm, a set
is created, containing tuples of candidate solutions
and perturbed point vectors [
40,
41,
42], calculated as:
where
denotes the operation of clamping the values of
within the bounds
,
is a random perturbation vector.
Then, for each perturbed point vector from , if the feasibility condition is satisfied—i.e., ( is the indicator function, )—the value is calculated as the product of the scalar function and a randomly selected probability . The scalar function is calculated analogously to (30). Subsequently, if the condition or holds, the best solution vector is updated to , and the best value is replaced with .
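The generation of perturbed points with clamping can be sketched as follows; the Gaussian form of the perturbation and the `scale` parameter are illustrative assumptions, since the paper's exact perturbation distribution is given in its own formula:

```python
import numpy as np

def perturb_and_clamp(x, lb, ub, scale, rng=None):
    """Sketch of perturbed-point generation with clamping.

    A random perturbation (here Gaussian with standard deviation
    `scale`, an illustrative assumption) is added to the candidate
    solution x, and the result is clamped to the bounds [lb, ub].
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    perturbed = x + rng.normal(0.0, scale, size=x.shape)  # random perturbation
    return np.clip(perturbed, lb, ub)                     # clamp to bounds
```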
In the fourth step of the algorithm, the condition
is checked. If this condition holds, the values of all objective functions
are calculated, and a tuple containing the solution vector
and the objective function value vector
is returned. Otherwise, the values of all objective functions
are calculated and the set of non-dominated solution vectors
is updated as:
where
is the current solution vector.
The updated set of non-dominated solutions is then returned at the end of Algorithm 4.
| Algorithm 4. Local Optimization of Solutions |
| Input: | , where |
| | . |
| | —number of constraints. |
| | ▷ set of objective functions. |
| | . |
| | ▷ set of discrete grids for variables. |
| | ▷ list of tuples of solutions and function values. |
| | . |
| | ▷ vector storing the best solution values. |
| | ▷ number of qudits in the system. |
| | ▷ number of objective functions. |
| | ▷ qudit dimension (number of basic states). |
| | ▷ number of best solutions. |
| | ▷ optimization type. |
| | ▷ best value. |
| 1. | . |
| 2. |
do: |
| 3. |
then: |
| 4. |
then: |
| 5. | . |
| 6. | ▷ generate a discrete grid for |
| 7. | ) and add it to the set of all |
| 8. | discrete grids. |
| 9. | ▷ Cartesian product of all discrete grids. |
| 10. | into a list of grid point vectors, |
| 11. | . |
| 12. | ▷ create a list of vectors containing the diagonal |
| 13. | . |
| 14. | ▷ calculate the vector of mean probability |
| 15. | . |
| 16. | |
| 17. | and the |
| 18. | scaled by |
| 19. | . |
| 20. |
then: |
| 21. | ▷ create the objective function |
| 22. | . |
| 23. | ▷ a tuple containing the original solution and |
| 24. | its corresponding objective function value vector. |
| 25. | ▷ |
| 26. | , ordered according to |
| 27. | . |
| 28. | ▷ select the first |
| 29. | . |
| 30. | |
| 31. |
do: |
| 32. | ▷ create a set of tuples |
| 33. | containing candidate solutions and perturbed point vectors, |
| 34. | . |
| 35. |
End loop. |
| 36. |
End loop. |
| 37. |
do: |
| 38. |
then: |
| 39. | as the product of the |
| 40. | and a randomly selected probability |
| 41. | . |
| 42. |
then: |
| 43. | . |
| 44. | . |
| 45. |
End loop. |
| 46. |
then: |
| 47. | ▷ create the objective function |
| 48. | . |
| 49. | ▷ a tuple containing the original solution and |
| 50. | its corresponding objective function value vector. |
| 51. | ▷ create the objective |
| 52. | . |
| 53. | ▷ update the set of |
| 54. | . |
| 55. | End loop. |
| 56. | ▷ updated set of non-dominated solution vectors. |
Algorithm 5 describes the cooling process of the qudit system. The algorithm takes as input the list of density matrices and the current system temperature . In a loop, each density matrix is scaled using the current temperature according to Equation (21). The resulting matrix is then normalized using Equation (22), and the result is written back to . After all matrices in have been updated, the temperature is decreased according to Equation (23). At the end of the algorithm, the updated list of density matrices and the updated temperature are returned.
| Algorithm 5. Qudit System Cooling |
| Input: | . |
| | ▷ number of qudits in the system. |
| | ▷ initial temperature of the system. |
| 1. |
do: |
| 2. | . |
| 3. | . |
| 4. | End loop. |
| 5. | ▷ reduce the system temperature. |
| 6. | ▷ updated list of density matrices and system temperature. |
The TF-QIMOA algorithm runs until at least one of two termination conditions is met. The first condition is satisfied when the current system temperature falls below the minimum allowable temperature, i.e., T < T_min. The second condition is met when convergence is achieved, based on the average change in objective function values for neighboring solutions.
Convergence assessment involves measuring the difference between the current Pareto front P_t and the previous Pareto front P_{t−1}. If P_t = ∅ or P_{t−1} = ∅, convergence assessment is not performed. If P_t ≠ ∅ and P_{t−1} ≠ ∅, then for each pair of solution vectors x ∈ P_t and x′ ∈ P_{t−1}, the difference δ between their objective function value vectors is calculated, where x is a solution vector from the current Pareto front P_t; x′ is a solution vector from the previous Pareto front P_{t−1}; F(x) is the objective function value vector for x; F(x′) is the objective function value vector for x′.
Next, the total change is computed as the sum Δ of all pairwise differences, and the number of matches is calculated as the number of pairs whose difference does not exceed ε, where ε is a small positive number defining the acceptable deviation threshold (by default set to 1 × 10⁻⁶). Convergence is considered achieved if:
Δ̄ < ε_c,
where Δ̄ is the average change; ε_c is a small positive number defining the threshold value (by default set to 1 × 10⁻³).
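A minimal sketch of this convergence test follows. Two assumptions are made for illustration: the skipped cases are taken to be empty or unequal-sized fronts, and solutions are paired positionally; the paper's exact pairing rule and the combination of total change and match count are not reproduced:

```python
import numpy as np

def converged(front_now, front_prev, eps_match=1e-6, eps_conv=1e-3):
    """Sketch of the convergence test between consecutive Pareto fronts.

    Assumed behavior: assessment is skipped (returns False) for empty or
    unequal-sized fronts; objective vectors are paired positionally,
    pairwise differences are accumulated, pairs closer than eps_match
    count as matches, and the average change is compared with eps_conv.
    """
    if len(front_now) == 0 or len(front_prev) == 0:
        return False
    if len(front_now) != len(front_prev):
        return False
    deltas = np.array([np.linalg.norm(np.asarray(a, dtype=float)
                                      - np.asarray(b, dtype=float))
                       for a, b in zip(front_now, front_prev)])
    matches = int(np.sum(deltas <= eps_match))
    avg_change = float(deltas.sum()) / len(deltas)
    return bool(avg_change < eps_conv or matches == len(deltas))
```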
During the execution of the TF-QIMOA algorithm, the number of non-dominated solutions may exceed the population size. To prevent unrestricted growth of the archive of non-dominated solutions and maintain computational efficiency, a truncation procedure based on the density distribution of solutions in the objective function space is applied. The essence of this approach is as follows: first, for each solution, a so-called «crowding distance» is calculated—a measure indicating how close the nearest neighbors are to the given solution [
43]. Solutions located in densely populated areas of the Pareto front receive smaller values of this distance, whereas solutions at the edges or in sparse regions receive larger values. After evaluating all distances, the most «crowded» solutions are selected and sequentially removed until the total number of non-dominated solutions equals the original population size. This mechanism not only limits the size of the archive but also preserves solution diversity, including extreme (boundary) points and uniformly distributed internal solutions, which is critically important for comprehensive coverage of the entire Pareto front.
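The truncation procedure can be sketched as follows. The crowding distance follows the standard formulation of [43]; recomputing the distances after each removal is a simplification choice, not necessarily the paper's exact procedure:

```python
import numpy as np

def truncate_by_crowding(front, target_size):
    """Sketch of archive truncation via crowding distance [43].

    front: (n, M) array of objective vectors. Boundary solutions get
    infinite distance and are always kept; the most crowded member is
    removed repeatedly until target_size solutions remain.
    """
    front = np.asarray(front, dtype=float)
    keep = list(range(len(front)))
    while len(keep) > target_size:
        pts = front[keep]
        n, M = pts.shape
        dist = np.zeros(n)
        for m in range(M):
            order = np.argsort(pts[:, m])
            dist[order[0]] = dist[order[-1]] = np.inf  # boundary points kept
            span = pts[order[-1], m] - pts[order[0], m]
            if span == 0.0:
                span = 1.0
            for i in range(1, n - 1):
                dist[order[i]] += (pts[order[i + 1], m]
                                   - pts[order[i - 1], m]) / span
        keep.pop(int(np.argmin(dist)))                 # drop most crowded
    return front[keep]
```

Boundary points of the front receive infinite distance, so extreme solutions survive truncation, while interior points in dense regions are removed first.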
3.4. Adaptation of the Algorithm to a Hybrid Quantum-Classical System
The value of existing quantum-inspired algorithms lies in their practical applicability and accessibility to any industry [
44], due to their ability to operate without quantum computing devices and systems. In the future, with the emergence of quantum hardware on the market possessing sufficient power and robustness of qubits against environmental noise [
10], it will become possible to seamlessly transfer quantum-inspired algorithms to such hardware [
44]. However, research in the implementation of quantum-inspired algorithms based on quantum logic circuits [
45,
46,
47,
48] indicates the need to adapt these algorithms for quantum-classical computing systems. Therefore, the task of interpreting the TF-QIMOA algorithm as a set of quantum logic circuits integrated with classical algorithms that process solutions and prepare data for quantum encoding becomes increasingly relevant, forming a unified quantum-classical algorithmic system.
Since the qudit is used as the primary logical unit in the TF-QIMOA algorithm, and states of dimensionality d > 2 cannot be represented using the Bloch sphere, the implementation of a qudit in a quantum logic circuit is proposed to be realized using qubits. The number of qubits required per qudit can be calculated using the formula:
n = ⌈log₂ d⌉,
where n is the number of qubits; d is the number of basis states of the qudit.
Then the qudit state |s⟩ (where s ∈ {0, 1, …, d − 1}) is encoded into the basis states of n qubits:
|s⟩ = |b_{n−1} … b_1 b_0⟩,
where b_l is a classical bit (with b_l ∈ {0, 1}); l is the bit position (with l = 0, 1, …, n − 1).
For example, if a qudit of dimensionality d = 4 is required, 2 qubits are needed, encoding the qudit’s basis states as follows:
|0⟩_q = |00⟩, |1⟩_q = |01⟩, |2⟩_q = |10⟩, |3⟩_q = |11⟩,
where the subscript q indicates that the state belongs to the qudit.
It is important to note that when d is not a power of two, the number of required qubits is still n = ⌈log₂ d⌉. However, in this case, some states available in the qubit space remain unused. For example, for d = 3, two qubits are still required, but the state |11⟩ is ignored.
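The qubit-count formula and basis-state encoding can be sketched directly (the function names are illustrative):

```python
import math

def qubits_per_qudit(d):
    """Number of qubits needed for a d-level qudit: n = ceil(log2(d))."""
    return math.ceil(math.log2(d))

def encode_basis_state(s, d):
    """Bit-string label of the qudit basis state |s> on n qubits.

    States outside 0 <= s < d are invalid; when d is not a power of
    two, the remaining qubit basis states stay unused.
    """
    if not 0 <= s < d:
        raise ValueError("invalid qudit basis state")
    n = qubits_per_qudit(d)
    return format(s, "0{}b".format(n))
```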
To create a superposition of all possible states in a system of
qubits
, the tensor product of
Hadamard gates is applied:
where
is the Hadamard gate (with
);
is the qudit state encoded in the basis states of
qubits.
After creating the superposition of states in the system
, it is necessary to retain only those states that are valid according to the number
of qudit basis states. To this end, an ancillary qubit in the state
is added to the system
. The system now becomes:
A CNOT gate is applied between the first main qubit and the ancillary qubit in the system. This operation transforms the first states in the system into values corresponding to the basis states of the qubits. In general, this can be expressed as:
CNOT = |0⟩⟨0| ⊗ I + |1⟩⟨1| ⊗ X,
where the first factor acts on the control qubit and the second on the target qubit; |0⟩⟨0| and |1⟩⟨1| are projectors onto the control qubit’s states; I is the identity operator; X is the Pauli-X operator.
The Pauli-X operator is applied to the ancillary qubit, which inverts its state to the opposite one.
The CNOT gate is now applied again to the first main qubit and the ancillary qubit, but with the roles reversed: the main qubit acts as the target and the ancillary qubit as the control. This operation sets the state of the ancillary qubit to the same fixed value for all states of the system.
Then, the set of valid qudit states is defined as
and a quantum oracle
is applied, which transforms the ancillary qubit into the state
if the main qubits are in one of the valid states:
After applying the quantum oracle
, the resulting state of the system is:
Next, the ancillary qubit is measured. If its state is
, the main qubits collapse into the subspace of valid states [
49]:
where
is the number of valid states.
The complete process of qudit initialization via qubits on a quantum logic circuit is illustrated in
Figure 2.
Taking (43) into account, the overall state of the qudit system consisting of
qudits is given by the tensor product of the states of all individual qudits:
The encoding of objective function values
can be achieved through phase shift. Let
,
,
is the number of variables,
is the number of objectives. For each state
(
) in the superposition
the objective function values
are calculated based on the given constraints and the variable values assigned in
, with the values in
varying for each state
. Then, the phase encoding of the values for all objective functions in
for the state
can be written as:
where
is the weight coefficient for the
-th objective function,
.
The encoding is performed using the
gate, which is applied to each qubit of the state
as follows:
where
is the state of the qubit.
The state of the qudit system after phase encoding can be written as follows:
Next, the quantum oracle
acts on the system
, flipping the phase of the state
if it corresponds to a non-dominated solution, while leaving all other states unchanged. The quantum oracle
operates according to the rule:
where
is a non-dominated solution; the operator
denotes the correspondence relation.
The Grover diffusion operator, or «inversion about the mean» [50,51], amplifies the amplitudes of the «marked» states, thereby enhancing the identified solutions from (48), which are encoded in these «marked» states, among all possible states of the system.
According to the «inversion about the mean» method, the average amplitude ⟨a⟩ of the probability amplitudes of the quantum states in the system is calculated using the formula:
⟨a⟩ = (1/K) Σᵢ aᵢ,
where aᵢ is the probability amplitude of the i-th state and K is the number of states.
Each amplitude aᵢ is transformed under the action of the Grover diffusion operator as follows:
aᵢ → 2⟨a⟩ − aᵢ.
The transformation of the state amplitudes of the system by the Grover diffusion operator is implemented on a quantum logic circuit, according to [51], by applying a Hadamard gate, a Pauli-X gate, and a multi-qubit controlled-Z gate to each qubit forming the multi-qudit state.
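The «inversion about the mean» can be sketched as a direct amplitude transformation (a statevector-level illustration, not a circuit implementation):

```python
import numpy as np

def grover_diffusion(amplitudes):
    """Inversion about the mean: a_i -> 2*<a> - a_i.

    amplitudes: 1-D array of state amplitudes; states whose phase was
    flipped by the oracle come out with amplified magnitude.
    """
    a = np.asarray(amplitudes, dtype=complex)
    return 2 * a.mean() - a
```

For example, with four states of uniform amplitude 0.5 and one oracle-marked state flipped to −0.5, the mean is 0.25, so the marked amplitude becomes 1.0 while the others vanish.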
The implementation of the process of phase encoding of objective function values and amplitude amplification of states on a quantum logic circuit is shown in
Figure 3.
Local optimization of solutions from the Pareto front
is performed by generating a neighborhood of candidate solutions for each solution
,
, and selecting among the new solutions those that are non-dominated. To this end, the phase of qudits in the system
is updated by sequentially applying the
gate to each qubit of the multi-qudit state
. In this case, the new phase
is formed by adding a small perturbation to the original phase
from (45) [
52,
53]:
where
is a random perturbation drawn from a uniform distribution over the interval
,
is
a parameter controlling the scale of the perturbation.
Then, the application of the gate to each qubit of the state is expressed analogously to (46), with replaced by .
Then, the quantum oracle is applied again to the system , flipping the phase of the state if it corresponds to a non-dominated solution, while all other states remain unchanged. Subsequently, the Grover diffusion operator is applied to amplify the amplitudes of the «marked» states.
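The phase update for local search can be sketched as follows; drawing the perturbation uniformly from a symmetric interval [−γ, γ] is an illustrative reading of the scale parameter, since the paper's exact interval is given in its own formula:

```python
import numpy as np

def perturb_phases(phases, gamma, rng=None):
    """Sketch of the local-search phase update: phi_new = phi + delta.

    delta is drawn uniformly from [-gamma, gamma]; gamma is an
    illustrative name for the parameter controlling the perturbation
    scale.
    """
    rng = np.random.default_rng() if rng is None else rng
    phases = np.asarray(phases, dtype=float)
    return phases + rng.uniform(-gamma, gamma, size=phases.shape)
```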
The implementation of the process of local optimization of solutions from the Pareto front on a quantum logic circuit is presented in
Figure 4.
The overall structure of the quantum-classical algorithmic system implementing the TF-QIMOA algorithm is shown in
Figure 5.
The quantum-classical algorithmic system is executed in the following order.
Initially, a system of quantum logic circuits is constructed, each implementing a qudit using qubits. The set of valid basis states in the qubit space is then defined. Next, the resulting qudit system is acted upon by gates, which encode the values of the objective functions , computed on the classical computing device, into the phases (where ). Subsequently, on the classical computing device, solutions are checked for non-domination. If a solution is non-dominated, the quantum oracle flips the phase of the corresponding multi-qudit quantum state , and the Grover diffusion operator amplifies the amplitude of this state. Non-dominated solutions are then stored in the Pareto front on the classical computing device.
In the second stage, Hadamard gates are applied to the resulting system , creating a superposition of all possible states. The application of gates to the system updates the phases to new values , which are computed for each state on the classical computing device. As a result, a neighborhood of candidate solutions is generated around each solution in the Pareto front . Subsequently, these new solutions are evaluated on the classical computing device to check for non-domination, while the quantum oracle and the Grover diffusion operator identify and amplify each multi-qudit quantum state corresponding to a non-dominated solution from the neighborhood. Finally, the new non-dominated solutions are used to update the Pareto front .
4. Experiments
The performance evaluation of the implemented TF-QIMOA algorithm is proposed to be conducted based on the IEEE CEC 2025 benchmark suite [
54], specifically using the test problems from the competition on dynamic multiobjective optimization [
55].
Multiobjective optimization problems with time-varying properties (Dynamic Multiobjective Optimization Problems, hereinafter—DMOPs) are more complex than static multiobjective problems, which creates significant difficulties for their solution by optimization algorithms [
56,
57,
58]. This is due to several factors. First, changes in the environment of the optimized system are difficult to detect, and failure to detect such changes may lead to disruption of the Pareto front formation process due to inconsistency of non-dominated solutions for two different optimized environments [
59]. Second, the dynamic nature of DMOPs is characterized by irregularity of environmental changes, multimodality, and discreteness of the Pareto front, which greatly complicates the optimization process [
60]. Third, the reaction time of optimization algorithms to changes in the environment of the optimized system is an important factor. Time constraints in solving DMOPs require optimization algorithms to maintain a balance between diversity and convergence, enabling them to quickly respond to changes in the environment of the optimized system [
61].
The following constrained dynamic multiobjective optimization problems [
55] were selected for testing:
subject to the constraint:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
.
- 2.
DMOP
subject to the constraints:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
.
- 3.
DMOP
subject to the constraints:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
.
- 4.
DMOP
subject to the constraint:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
.
- 5.
DMOP
subject to the constraints:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
.
- 6.
DMOP
subject to the constraint:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
;
.
- 7.
DMOP
subject to the constraint:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
.
- 8.
DMOP
subject to the constraints:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
.
- 9.
DMOP
subject to the constraints:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
.
- 10.
DMOP
subject to the constraint:
where
;
is the number of variables;
where
is the number of current fitness function evaluations,
is the population size,
is the frequency of environmental changes,
is the intensity of environmental changes;
.
For the purpose of conducting a comparative analysis of the performance of the quantum-inspired algorithm TF-QIMOA, the following algorithms also participate in the testing: the classical multiobjective optimization algorithms NSGA-III, MOEA/D, GDE3 [
62,
63,
64], the quantum-inspired multiobjective optimization algorithm QI-NSGA-III [
65], the dynamic multiobjective evolutionary algorithms DMOEA, MOACO-DCE, MOEA/D-MDDM [
30,
31,
32,
33,
34], and the hybrid quantum-classical version of the TF-QIMOA algorithm implemented as a quantum-classical algorithmic system (hereinafter—Hybrid TF-QIMOA).
The test results of the optimization algorithms are compared based on the average values of the solution distribution measure (hereinafter—DM), the Pareto dominance indicator (hereinafter—NR) [
65], the mutual dominance rate (hereinafter—MDR), the normalized deviation measure (hereinafter—E), the radial coverage metric (hereinafter—
) [
66], and the execution time
.
For each optimization algorithm, the population size, the maximum number of generations and the number of variables are set to 100. The frequency of environmental changes for the test problems is set to 20. The number of sectors for calculating the values of the metric is equal to 12. The optimization algorithms are executed iteratively over 30 independent runs.
All experiments are conducted on a classical computing device equipped with an Intel Core i9-13980HX processor and 32 GB of RAM. In the Hybrid TF-QIMOA algorithm, the computation of objective function values, the calculation of quantum state phases, the check for solution non-domination, and the storage and update of solutions in the Pareto front are performed on the classical computing device, while all quantum logic circuits are executed on a quantum computing device based on eight superconducting qubits [
67].
The results of testing the optimization algorithms on DMOP problems
–
with environmental change intensities
,
and
are presented in
Table 1,
Table 2 and
Table 3, respectively. The best metric values are highlighted in bold.
Bar charts of the optimization algorithm test results from
Table 1,
Table 2 and
Table 3 are presented in
Figure 6,
Figure 7 and
Figure 8. The
-axis represents the performance metrics used to evaluate the optimization algorithms and the
-axis represents the average values of these metrics.
Based on the test results presented in
Table 1, it can be seen that the TF-QIMOA algorithm achieves the best values in terms of the DM metric in problems
,
,
, in terms of the NR metric in problems
–
and
–
, in terms of the MDR metric in problems
,
,
,
,
,
, in terms of the E metric in problem
, in terms of the
metric in problems
,
,
,
,
,
. The Hybrid TF-QIMOA algorithm achieves the best values in terms of the NR metric in problems
,
,
,
, in terms of the MDR metric in problems
,
, in terms of the
metric in problems
,
, in terms of execution time
in problems
,
,
,
. These observations indicate that, when using TF-QIMOA, solutions on the Pareto front are distributed most uniformly in 30% of experiments, a larger number of unique non-dominated solutions are identified in 70% of experiments, significant progress in solution dominance across generations is observed in 60% of experiments, solutions are closest to the ideal solution in 10% of experiments, and the largest portion of the objective space of non-dominated solutions is covered in 60% of experiments. In the case of Hybrid TF-QIMOA, a larger number of unique non-dominated solutions is identified in 40% of experiments, significant progress in solution dominance across generations is observed in 20% of experiments, the largest portion of the objective space is covered in 20% of experiments, and Hybrid TF-QIMOA outperforms the other algorithms in terms of average execution time
in 40% of experiments. It should also be noted that the GDE3 algorithm achieves the best values in terms of the DM metric in problems
,
,
,
, thus demonstrating better solution distribution uniformity in 40% of experiments. However, due to its low performance in identifying unique non-dominated solutions and covering only about one-third of the objective space, this result may indicate GDE3’s limited ability to effectively explore the overall solution space and discover diverse solutions. At the same time, the MOEA/D algorithm achieves the best values in terms of the E metric in 80% of experiments (problems
–
,
–
,
), demonstrating the smallest distance between the solutions found by MOEA/D and the ideal solution.
The test results of the optimization algorithms under increased environmental change intensity to
(
Table 2) show that the TF-QIMOA algorithm outperforms the other algorithms in terms of the DM metric in problems
,
,
(30% of experiments), in terms of the NR metric in problems
–
(80% of experiments), in terms of the MDR metric in problems
,
–
(60% of experiments), in terms of the E metric in problems
,
,
,
(40% of experiments), in terms of the
metric in problems
–
,
,
(70% of experiments). Meanwhile, the Hybrid TF-QIMOA algorithm achieves the best values in terms of the NR metric in problems
,
,
(30% of experiments), in terms of the MDR metric in problem
(10% of experiments), in terms of the
metric in problem
(10% of experiments), and in terms of average execution time
in problems
,
,
(30% of experiments). The GDE3 algorithm outperforms the other algorithms in terms of the DM metric in problems
,
,
(30% of experiments) and in terms of average execution time
in problems
,
,
,
(40% of experiments). However, consistent with the results in
Table 1, GDE3 demonstrates poor performance in identifying unique non-dominated solutions and insufficient coverage of the objective space of non-dominated solutions. At the same time, the MOEA/D algorithm shows superior convergence performance according to the E metric in problems
–
,
,
,
(50% of experiments).
On the test problems with the third environmental change intensity level (
Table 3), the TF-QIMOA algorithm outperforms the other algorithms in terms of the DM metric in three problems (30% of experiments), the NR metric in seven problems (70% of experiments), the MDR metric in six problems (60% of experiments), the E metric in four problems (40% of experiments), and the radial coverage metric in eight problems (80% of experiments). The Hybrid TF-QIMOA algorithm achieves the best values for the DM metric in one problem (10% of experiments), the NR metric in three problems (30% of experiments), the MDR metric in two problems (20% of experiments), the E metric in two problems (20% of experiments), and average execution time in two problems (20% of experiments). The GDE3 algorithm outperforms the other algorithms in terms of the DM metric in three problems (30% of experiments) and average execution time in six problems (60% of experiments). However, GDE3 continues to exhibit poor performance in identifying unique non-dominated solutions and insufficient coverage of the objective space by its non-dominated solutions, consistent with its performance on the test problems of
Table 1 and
Table 2. Furthermore, in terms of convergence quality (E metric), MOEA/D is dominant, achieving the best values in five problems (50% of experiments).
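The NR and MDR comparisons above hinge on identifying unique non-dominated solutions. As an illustrative sketch only (not the authors' implementation; function names are hypothetical), Pareto dominance for a minimization problem and a simple non-dominated filter can be expressed as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the unique non-dominated objective vectors among `points`."""
    unique = list({tuple(p) for p in points})  # deduplicate first
    return [p for p in unique
            if not any(dominates(q, p) for q in unique if q != p)]
```

An algorithm that repeatedly produces dominated or duplicate vectors, as observed here for GDE3, contributes few elements to this filtered set.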
It should be noted, however, that the experimental results of the DMOEA, MOACO-DCE, and MOEA/D-MDDM algorithms demonstrate performance patterns closely resembling those of the TF-QIMOA algorithm across all metrics and environmental change intensities. This similarity can be attributed to the fact that all three methods incorporate adaptive mechanisms inspired by dynamic response strategies, such as memory-based archives and decomposition-based guidance, which align conceptually with the quantum-inspired thermonuclear simulation principles underpinning TF-QIMOA. Nevertheless, despite these structural and behavioral parallels, the quantitative results consistently show that DMOEA, MOACO-DCE, and MOEA/D-MDDM remain slightly inferior to TF-QIMOA in terms of the solution distribution measure, the Pareto dominance indicator, the mutual dominance rate, the normalized deviation measure, and the radial coverage metric.
Table 4,
Table 5 and
Table 6 present the results of the non-parametric Wilcoxon statistical test applied to the Pareto dominance indicator distributions for the DMOP problems at the three environmental change intensity levels, respectively, obtained using the TF-QIMOA algorithm, as well as to the distributions obtained by the Hybrid TF-QIMOA, QI-NSGA-III, NSGA-III, MOEA/D, GDE3, DMOEA, MOACO-DCE, and MOEA/D-MDDM algorithms. The significance threshold for the p-value is set at 0.05.
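The pairwise comparisons in Tables 4, 5 and 6 rest on the two-sided Wilcoxon signed-rank test at the 0.05 level. A minimal pure-Python sketch of that test, using the normal approximation and a hypothetical function name, might look like:

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test via the normal approximation.
    Returns the p-value for paired samples a and b."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    n = len(diffs)
    if n == 0:
        return 1.0  # identical samples: no evidence of a difference
    # Rank absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```

A returned p-value below 0.05 would reject the hypothesis that the two algorithms' indicator distributions coincide; in practice a library routine such as SciPy's `scipy.stats.wilcoxon` would be used instead of this sketch.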
The results of the non-parametric Wilcoxon statistical test performed on the DMOP problems with the first environmental change intensity level (
Table 4) demonstrate the superiority of the TF-QIMOA algorithm over the other algorithms, except for the QI-NSGA-III algorithm in one problem and the Hybrid TF-QIMOA algorithm in three problems. In one further problem, the TF-QIMOA algorithm shows performance identical to that of the Hybrid TF-QIMOA and MOACO-DCE algorithms. For the DMOP problems with the second environmental change intensity level (
Table 5), the TF-QIMOA algorithm yields worse results than the Hybrid TF-QIMOA algorithm in two problems and than the MOEA/D-MDDM algorithm in one problem; similar effectiveness to the DMOEA algorithm is observed in one problem. For the DMOP problems with the third environmental change intensity level (
Table 6), the TF-QIMOA algorithm outperforms all other algorithms, except for the Hybrid TF-QIMOA algorithm in one problem. It should also be noted that the TF-QIMOA algorithm shows effectiveness similar to that of the MOEA/D-MDDM algorithm in one problem and to that of the DMOEA, MOACO-DCE, and MOEA/D-MDDM algorithms in another.
Analysis of the experimental results on constrained dynamic multiobjective optimization problems demonstrates the advantages of the TF-QIMOA, Hybrid TF-QIMOA, and GDE3 algorithms, which manifest depending on the characteristics of the optimization problem and the intensity of environmental changes. Despite GDE3’s superiority in average execution time in 40% of all experiments and in solution distribution uniformity in 33% of all experiments, the algorithm performs unsatisfactorily on the NR, MDR, and radial coverage metrics regardless of the intensity of environmental changes. This indicates its inability to fully explore the search space, a lack of progress in solution dominance across generations, and poor capability in identifying unique solutions. The TF-QIMOA algorithm leads on the NR metric (73% of all experiments), the MDR metric (60% of all experiments), and the radial coverage metric (70% of all experiments), highlighting its superior ability to detect distinct non-dominated solutions and to cover a larger portion of the objective space under high, moderate, and low environmental change intensities. It is also worth noting that the Hybrid TF-QIMOA and TF-QIMOA algorithms achieve approximately the same values for the DM, NR, MDR, and radial coverage metrics in 67% of all experiments. However, Hybrid TF-QIMOA outperforms TF-QIMOA in execution speed in 73% of all experiments.
The non-parametric Wilcoxon statistical test yields results that further substantiate these findings by offering a detailed breakdown of algorithmic performance across varying environmental change intensities. Specifically, the TF-QIMOA algorithm consistently exhibits strong performance, particularly excelling in scenarios with moderate and high environmental change intensities, where it leads on key metrics such as NR, MDR, and the radial coverage metric. These outcomes align well with the broader analysis indicating the algorithm’s robustness in detecting non-dominated solutions and maintaining solution diversity under varying conditions.
However, the statistical tests reveal differences in algorithmic behavior at the intermediate environmental change intensity. Here, while the TF-QIMOA algorithm maintains its competitive edge overall, it is occasionally surpassed by rivals such as the Hybrid TF-QIMOA and MOEA/D-MDDM algorithms in individual problems. This observation underscores the importance of adaptability, as even the most effective algorithms may face limitations in specific problem contexts or under particular environmental dynamics. Moreover, the statistical results highlight the complementary strengths of the Hybrid TF-QIMOA algorithm, which surpasses TF-QIMOA in execution speed in a majority of cases. This suggests that while TF-QIMOA may offer superior solution quality, the Hybrid variant provides a trade-off by prioritizing computational efficiency.