Article

Exploring Quantum Neural Networks for Demand Forecasting

by Gleydson Fernandes de Jesus 1,2,*, Maria Heloísa Fraga da Silva 1,2,3,*, Otto Menegasso Pires 1,2, Lucas Cruz da Silva 4, Clebson dos Santos Cruz 3 and Valéria Loureiro da Silva 1

1 QuIIN–Quantum Industrial Innovation, EMBRAPII CIMATEC Competence Center in Quantum Technologies, SENAI CIMATEC, Av. Orlando Gomes, Salvador 41650-010, Bahia, Brazil
2 Latin America Quantum Computing Center, SENAI CIMATEC, Salvador 41650-010, Bahia, Brazil
3 Grupo de Informação Quântica e Física Estatística, Centro de Ciências Exatas e das Tecnologias, Universidade Federal do Oeste da Bahia, Barreiras 47810-059, Bahia, Brazil
4 Robotics Department, SENAI CIMATEC, Salvador 41650-010, Bahia, Brazil
* Authors to whom correspondence should be addressed.
Entropy 2025, 27(5), 490; https://doi.org/10.3390/e27050490
Submission received: 28 December 2024 / Revised: 15 April 2025 / Accepted: 29 April 2025 / Published: 1 May 2025
(This article belongs to the Special Issue Classical and Quantum Networks: Theory, Modeling and Optimization)

Abstract

Demand forecasting for assets and services arises in many markets and provides a competitive advantage when the predictive models employed are highly accurate. However, training machine learning models incurs high computational costs, which may limit the prediction models that can be trained with the available computational capacity. In this context, this paper presents an approach for training demand prediction models using quantum neural networks. For this purpose, a quantum neural network was used to forecast demand for vehicle financing. A classical recurrent neural network was trained for comparison; the results show a similar predictive capacity between the classical and quantum models, with the quantum model using fewer training parameters and converging in fewer steps. Utilizing quantum computing techniques offers a promising solution for overcoming the limitations of traditional machine learning approaches in training predictive models for complex market dynamics.

1. Introduction

A common problem experienced by companies is financial market uncertainty [1,2], which makes accurate forecasting and budgeting challenging and poses a risk to investments and financial stability [1,2,3,4]. Companies must adapt quickly to these changes in order to remain competitive and mitigate any negative impacts [5]. In this context, demand forecasting is a predictive analysis strategy used to overcome this challenge, relying on traditional computational methods or more advanced technologies, including machine learning [6,7,8].
In this scenario, the demand estimation process helps companies internally plan to meet market needs [8,9]. It involves forecasting the number of services or products a company will sell in a future period [10]. The duration of this period can be customized and may vary based on the company’s size and objectives [11].
In order to help managers make more assertive decisions about team planning and demand management, forecasting takes into account internal and external factors that meet customer needs [12]. The benefits encompass enhancements in efficiency, operational performance, and the supply chain as it predicts the number of goods to be sold and, subsequently, the amount that has to be manufactured [13], thereby preventing the occurrence of insufficient or excessive production. Quantitatively, if a bank with an annual vehicle financing volume of USD 10 billion, for example, uses a predictive model that results in a 10% loss in potential revenue, an improvement in accuracy that reduces this rate to 5% could represent an additional gain of USD 500 million annually.
Furthermore, demand forecasting can be influenced by either a qualitative or quantitative methodology [14]. The first case is a superficial subjective analysis of customer behavior and market trends. In the second case, statistical data are compared and analyzed from both the sales history and customer base to provide a more in-depth picture of the future [6]. Accordingly, demand forecasting was traditionally based on statistical methods and expert opinion, which often made it difficult to capture complex patterns and dynamic market trends [15]. Given this scenario, the implementation of machine learning algorithms in demand forecasting has resulted in notable improvements in forecast accuracy [15].
In this context, classical machine learning (ML) is a widely used computational tool for solving the problem of demand forecasting [16]. It helps identify patterns in large volumes of historical data and make accurate forecasts. However, as data volumes increase and models become more complex, the processing limitations of classical models become more apparent due to the difficulty of capturing multiple characteristics of high-dimensional data [17]. Conversely, quantum computing has emerged as a promising solution. The ability of quantum computers to process information in parallel offers a significant advantage, allowing for the efficient and quick analysis of massive datasets and the optimization of machine learning models [18,19].
Thus, quantum machine learning (QML) emerges as a promising alternative that can accelerate information processing and provide notable improvements in the machine learning research area [19,20]. Recent advancements in this area suggest that the integration of quantum computing with machine learning is poised to lead to groundbreaking developments in technology and data analysis [21,22]. This approach offers a way to address the difficulties presented by conventional machine learning techniques [22,23], such as increased learning duration caused by the expanding amount of data [24]. Hence, quantum computing and QML have recently experienced increased utilization across various domains, including finance [25].
In this regard, this work presents an application for QML to predict vehicle financing demand using a quantum neural network. For this purpose, we utilized a dataset obtained from the Brazilian bank BV, containing financing data and other relevant features collected from 2019 to 2023. These data were pre-processed, and smaller sets were extracted using feature reduction techniques [26,27]. The quantum neural network was trained on these pre-processed data to accurately predict vehicle financing demand, showcasing the potential of quantum computing in enhancing predictive analytics. The results obtained from this study demonstrate the promising capabilities of QML models in solving complex real-world problems such as financial forecasting. The integration of quantum computing in predictive analytics can revolutionize the way financial institutions make decisions and manage risks. By leveraging the power of QML, banks can gain a competitive edge in the market by making more accurate and timely predictions. Therefore, this research highlights the importance of leveraging quantum computing in the financial sector to improve decision-making processes, which represents a significant advancement in the field of predictive analytics.

2. Quantum Neural Networks

Quantum neural networks (QNNs) represent a new approach to machine learning, combining classical data processing with the power of quantum computing [19,20,28,29]. Despite their classical foundations, QNNs are considered pure quantum models because their execution depends on classical computing only for circuit preparation and statistical analysis [30]. QNNs fall under Variational Quantum Algorithms, employing Parameterized Quantum Circuits (PQCs) known as ansätze (plural of ansatz), which are trained using classical optimization techniques. The behavior of QNNs mirrors that of classical neural networks, consisting of three main stages: data preparation, data processing, and data output [30].
In the data preparation stage, the classical input is encoded into a quantum state using a feature map, a circuit parameterized exclusively by the original data [31]. This coding facilitates the integration of the classical information into the quantum structure of the QNN. In particular, classical data may require pre-processing, such as normalization or scaling, to optimize the coding process [31,32].
Subsequently, in the data processing stage, the QNN operates within the framework of its ansatz. Usually structured as a layered variational circuit, the ansatz consists of multiple layers, each defined by an independent parameter vector. Each layer is made up of variational circuits V_j that depend on these parameters, interspersed with entanglement layers Ent. An example of a quantum ansatz is shown in Figure 1. The ansatz effectively processes the quantum-encoded data, taking advantage of entanglement and parameterized gates for computation.
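To make this structure concrete, the following is a minimal sketch of a layered ansatz in PennyLane (the software development kit used in this work; see Section 5.3). The gate choices here, R_y rotations for the variational blocks V_j and a ring of CNOTs for the entangling blocks Ent, as well as the qubit and layer counts, are illustrative assumptions rather than the exact circuit of Figure 1.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
n_layers = 2
dev = qml.device("default.qubit", wires=n_qubits)

def variational_block(layer_params):
    # V_j: one parameterized rotation per qubit (illustrative choice)
    for wire in range(n_qubits):
        qml.RY(layer_params[wire], wires=wire)

def entangling_block():
    # Ent: a ring of CNOTs correlating neighboring qubits
    for wire in range(n_qubits):
        qml.CNOT(wires=[wire, (wire + 1) % n_qubits])

@qml.qnode(dev)
def qnn(inputs, params):
    # Data preparation: encode one classical value per qubit as a rotation angle
    qml.AngleEmbedding(inputs, wires=range(n_qubits), rotation="Y")
    # Data processing: alternate variational and entangling layers
    for layer_params in params:          # params has shape (n_layers, n_qubits)
        variational_block(layer_params)
        entangling_block()
    # Data output: expectation value of a single qubit
    return qml.expval(qml.PauliZ(0))

params = np.random.uniform(0, np.pi, size=(n_layers, n_qubits))
print(qnn(np.array([0.1, 0.5, -0.3, 0.8]), params))
```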
Finally, in the data output stage, the processed quantum state is converted into a classical output via a final layer [30]. This operation is adapted to the specific problem being addressed. For example, in a binary classification, the expected value of a single qubit selected in the measurement can be used as the output [31]. Overall, QNNs offer a promising path for quantum-assisted machine learning, uniting classical and quantum paradigms to address complex computational tasks [19,20,31,32].
In this article, we use a QNN, whose architecture is described in Section 3.2, to perform the task of forecasting the demand for used vehicle financing in Brazil.

3. Quantum Data Analysis and Model Implementation

3.1. Data Scaling and Selection Techniques

The case analyzed in this paper is the forecasting of demand for used vehicle financing in Brazil from May 2022 to April 2023. The data used for training covered the period from January 2019 to April 2022 and were provided by the Brazilian bank BV, so the application is of practical interest in the financial sector. In all, 25 features were provided for the training. However, 6 features were discarded because they did not contain data for 2019. The remaining 19 features were subjected to a feature reduction process using Principal Component Analysis (PCA), so that the features with the greatest variance were selected. PCA is a statistical technique that reduces data dimensionality while preserving variance. It identifies principal components, linear combinations of the initial features, ranked according to the variance they explain, ensuring that the most significant new features are identified [26].
The smallest sets used had 4 and 8 features, which represented 68.69% and 92.03% of the total variance of the dataset, respectively. In addition, the complete dataset represents 100% of the total variance of the distribution. The cumulative variances are shown in Figure 2.
After reducing the number of features, the data were standardized according to the following expression [32]:
\hat{x} = \frac{x - \mu}{\sigma},
where x is the original data, μ is the mean of the values, σ is the standard deviation, and x̂ is the standardized data. This standardization assumes that the distribution of the data is approximately normal. Standardizing the data ensures that all variables are on the same scale, which is important for many machine learning algorithms because it removes differences in scale between features and avoids biasing the model toward features with larger magnitudes. It also makes it easier to compare and interpret the coefficients of different features in the model.
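As an illustration, this standardization can be reproduced with a few lines of Python; the sketch below is generic (equivalent in spirit to scikit-learn's StandardScaler), with array shapes chosen arbitrarily since the BV dataset itself is not public.

```python
import numpy as np

def standardize(X, mu=None, sigma=None):
    """Apply x_hat = (x - mu) / sigma column-wise.

    The statistics should be computed on the training split only and reused
    for the test split, so that no test information leaks into training.
    """
    if mu is None:
        mu = X.mean(axis=0)
    if sigma is None:
        sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

# Illustrative shapes: 40 monthly training samples and 12 test samples, 8 features
X_train = np.random.rand(40, 8)
X_test = np.random.rand(12, 8)

X_train_std, mu, sigma = standardize(X_train)
X_test_std, _, _ = standardize(X_test, mu, sigma)
```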

3.2. Quantum Neural Network Architecture

In a quantum neural network architecture, qubits are used to represent data and parameters in the model [33]. By leveraging quantum superposition and entanglement, quantum neural networks have the potential to outperform classical neural networks in certain tasks by processing information in a more efficient way [30,34]. This architecture holds promise for solving complex problems in fields such as optimization, machine learning, and cryptography. The quantum neural network model used to process these data is presented in Figure 3. In this model, we used Angle Embedding with R_Y rotation gates as our feature map, applied after initializing the circuits in uniform superposition through Hadamard gates. Because the number of features in our datasets is small enough to remain within the range of a classical simulator, Angle Embedding has the advantage of a low cost compared with other embeddings, requiring only a single layer of single-qubit operations. Two variational layers were considered and are represented in Figure 4.
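Since the simulations combine PennyLane with TensorFlow (Section 5.3), one plausible way to assemble such a model is to wrap the circuit as a Keras layer. The sketch below is an assumption about the implementation, not the exact code of this work: it uses StronglyEntanglingLayers as a stand-in for the variational layers of Figure 4, and the optimizer, learning rate, and output rescaling are illustrative choices.

```python
import pennylane as qml
import tensorflow as tf

n_qubits = 4   # one qubit per input feature (4-feature dataset)
n_layers = 2   # two variational layers

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    # Uniform superposition through Hadamard gates
    for wire in range(n_qubits):
        qml.Hadamard(wires=wire)
    # Angle Embedding feature map with R_Y rotations
    qml.AngleEmbedding(inputs, wires=range(n_qubits), rotation="Y")
    # Stand-in for the variational layers of Figure 4
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

weight_shapes = {"weights": (n_layers, n_qubits, 3)}
quantum_layer = qml.qnn.KerasLayer(circuit, weight_shapes, output_dim=1)

model = tf.keras.Sequential([
    quantum_layer,
    tf.keras.layers.Dense(1),   # rescale the [-1, 1] expectation value to the target range
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss="mae")
```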
In order to take into account the real monetary losses due to inaccurate results, the models were analyzed based on the mean absolute error obtained in each experiment. In addition, the accuracy of the results obtained through the heuristics used was analyzed by calculating the standard deviation obtained during the 12 months of testing. Each experiment was run 10 times.
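For clarity, the evaluation described above amounts to computing, for each model, the monthly mean absolute error and its dispersion over repeated runs. A small sketch of this computation is shown below with synthetic numbers, since the actual predictions come from the experiments reported in Section 5.

```python
import numpy as np

# Hypothetical predictions: 10 independent runs over 12 test months
rng = np.random.default_rng(0)
y_true = np.full((10, 12), 10_000.0)
y_pred = y_true + rng.normal(0.0, 600.0, size=(10, 12))

abs_err = np.abs(y_pred - y_true)
monthly_mae = abs_err.mean(axis=0)   # mean absolute error per test month
annual_mae = monthly_mae.mean()      # 12-month mean of the monthly MAE
monthly_std = y_pred.std(axis=0)     # spread of the 10 runs in each month

print(annual_mae, monthly_std.mean())
```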

4. Dataset and Preprocessing

In addition to the variable to be predicted, the dataset made available for the research contained 25 economic features relevant to the proposed model, with 52 samples collected monthly from January 2019 to April 2023. The model’s prediction variable is the daily average of used vehicle financing each month and was also part of the dataset made available.
During the period covered in the database, the world faced the COVID-19 pandemic, and governments around the world shut down their countries’ economies to promote social isolation. For this reason, we can consider this period to be anomalous.
Training the model in more predictable environments could help with the training process, potentially leading to better results. However, removing the COVID data would mean reducing the already scarce number of samples available for training, so they were kept in their original form. In addition, since it is not the pandemic itself but its impact on the economic indicators (contained in the database used) that constitutes the most relevant information for the predictive model in question, no new features describing the pandemic scenario were added to the database. It should also be considered that the proposed model must be robust to any factors that impact the financial market; since such occurrences cannot be predicted directly, it is necessary to measure them indirectly through their influence on economic indicators.
When analyzing the data made available for the research, it was noticed that some features did not contain data for 2019. As the number of instances for training was already low, it was necessary to remove these features: the alternatives would have been to exclude the data for the entire year 2019 or to infer the missing data, which could bias the model and was therefore not done. After excluding the 6 features that did not contain data for 2019, a feature reduction was carried out using the PCA method [26,27].
In addition, the PCA method reduces the number of features in a dataset by applying a transformation to the coordinate axes. This transformation generates new axes that point in the directions of greatest variance in the dataset. These directions of greatest variance are the principal components of the models; they are expected to carry the most information for the training process and can then be used as model inputs. The PCA method was applied through the scikit-learn [35] machine learning library.
Based on the data available and after eliminating the data that did not have values for 2019, 3 different sets of data were generated, with 4, 8, and 19 features, which represent 68.69%, 92.03%, and 100% of the system's total variance, respectively. These sets were generated to assess the impact of adding new features to the models.
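The feature-reduction step can be sketched with scikit-learn as follows; the random matrix stands in for the 52 × 19 feature matrix, which cannot be reproduced here, so the printed variance fractions will not match the values reported above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in for the 52 monthly samples x 19 economic features
rng = np.random.default_rng(0)
X = rng.normal(size=(52, 19))
X_scaled = StandardScaler().fit_transform(X)

# Cumulative explained variance of the ranked principal components
pca_full = PCA().fit(X_scaled)
print(np.cumsum(pca_full.explained_variance_ratio_))

# Keep the smallest number of components explaining at least 92% of the variance
pca = PCA(n_components=0.92).fit(X_scaled)
X_reduced = pca.transform(X_scaled)
print(X_reduced.shape)
```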

5. Results

The results obtained from the quantum experiments are presented in Section 5.1, while those from the classical experiments are detailed in Section 5.2. The number of variational layers selected for the quantum experiments includes configurations of 1, 3, and 5 layers, chosen to assess their impact on performance.
The measurement results are presented in terms of the distributions observed. Training and test errors are illustrated graphically, focusing on the average daily financing obtained. Additionally, the mean monthly absolute errors are provided in tables, offering a comprehensive view of the variations and trends.

5.1. Quantum Experiments

5.1.1. Four Features

In the first considered case, the dataset was reduced from 19 initial features to 4 features. Similar to the other quantum experiments, we considered the two quantum networks presented in Section 3, as well as a classical RNN.
Figure 5 shows the results of the quantum networks obtained in the two experiments using the 4-feature dataset and varying the number of variational layers. The actual values to which the predictions should approximate are shown in the black curves. The simulation environment is discussed in Section 5.3, and the convergence of the models is discussed in Section 5.5. The standard deviation for each month is presented in Table A1, in Appendix B.1, and the monthly mean absolute error for the two experiments is presented in Table A5, in Appendix C.1. The cumulative variance of the data contained in this database concerning the original database with 19 features is 68.69%.
Violin graphs were used to present the results of the predictions. These graphs show the density distributions through their contours. In this way, wider points in the figures represent a greater density of data, while sparser and more distant points represent outliers. In addition, these graphs can reveal multimodal trends in the distributions when there is more than one widening point. The black boxplot in the center of the figures shows the median of the distributions through a white line in the boxes, and the first and third quartiles are represented respectively through the lower and upper edges of the boxplot.
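A plot of this kind can be produced with Matplotlib's violinplot and boxplot primitives; the sketch below uses synthetic monthly prediction samples, since the figures in this paper are built from the experimental runs.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical predictions: 10 runs for each of the 12 test months
rng = np.random.default_rng(0)
monthly_predictions = [rng.normal(10_000, 500, size=10) for _ in range(12)]

fig, ax = plt.subplots()
ax.violinplot(monthly_predictions)            # density contours per month
ax.boxplot(monthly_predictions, widths=0.1)   # median and quartiles overlay
ax.set_xlabel("Test month")
ax.set_ylabel("Predicted daily average of financings")
plt.show()
```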
Considering realistic scenarios where errors are inevitable, it may be preferable to err upwards or downwards, depending on the market and the agents involved. Nonetheless, it is important to consider that errors above the target may be a warning of the need for greater production of a certain product or availability of services, while errors below the target may represent missed opportunities to sell products or services with a higher demand than predicted by the machine learning models.
Among the quantum models consisting of 4 features, experiment 1, involving a single variational layer, generally showed the best accuracy, with a minimum mean absolute error of 299.90 and a 12-month mean absolute error of 682.52 ± 284.14.
In the second experiment, the best result was the one in which 5 variational layers were considered, where the lowest mean absolute error obtained was 411.97, and the monthly mean was 785.98 ± 192.21. In addition, this model shows considerably less variation than the results obtained with just 1 variational layer, as well as a considerably lower mean.
Furthermore, in both cases, the means of the predictions were above the actual values, which could imply that there is a greater supply of used car finance than there is actual demand if these indicators are the only ones considered.
The results obtained from experiment 2 with 1 layer consistently exceeded the target at all times, deviating from the target especially in the final months. This outcome indicates a lower performance of this model in comparison with the other experiments with the same number of features.

5.1.2. Eight Features

In the second considered case, the number of features in the database was reduced to 8. This value was chosen as an intermediate value between the 4 features used initially and the final 19 features, given that the cumulative variance for 8 features was 92.03%.
The distributions obtained are shown in Figure 6. The standard deviation for each month is shown in Table A2, and the monthly mean absolute error for the two experiments is shown in Table A6. The cumulative variance of the data contained in this database relative to the original database with 19 features is 92.03%.
In the quantum models containing 8 features, experiment 1 with a single variational layer generally showed the best accuracy, with a minimum mean absolute error of 346.10 and a 12-month mean absolute error of 709.76 ± 297.93.
In the second experiment, the best result was again the one in which 5 variational layers were considered, where the lowest mean absolute error obtained was 441.65, and the monthly mean was 980.79 ± 239.93. As occurred in the experiment with 4 features, there is no statistical difference between this result and the result obtained with 3 variational layers. However, the results of this model showed considerably less variation than the results obtained with just 1 variational layer and also a considerably lower mean.
In this case, the forecast means were more distributed compared with the target in the best result of experiment 1, but the results of the second experiment echoed the trend of exceeding the actual values, generating a production signal above actual demand.
As observed in the experiment with 4 features, in the experiment with 8 features, the results obtained from experiment 2 with 1 layer remained above the target at all times, deviating from the target in the final months. This indicates a lower performance of this model in comparison with the others.

5.1.3. Nineteen Features

In the third considered case, the dataset was tested using all the features for which data were available for 2019. Features that did not have data for 2019 were discarded. The alternative to discarding these features would be to perform inference for 2019 data. However, the amount of data that would be inferred would represent approximately 1/4 of the dataset, a portion that would compromise the model’s performance.
In order to train the model with this dataset, no transformation other than data standardization was carried out. However, from the 8th month of the test set onwards, it was identified that one of the features had increased significantly in relation to the others. For this reason, it was decided to maintain these data to observe the effects that this increase in one of the features would have on the results.
The distributions obtained are shown in the subfigures provided in Figure 7. The standard deviation for each month is shown in Table A3, and the monthly and annual mean absolute errors for the two experiments are shown in Table A7. Since this database contains all the features from the original dataset, the cumulative variance of the data contained in this database is the total variance of the original set, i.e., 100%.
In experiment 1, which was performed with 19 features, there was no significant difference between the results obtained with 1, 3, and 5 variational layers. For all three cases, the minimum absolute mean errors obtained were 522.95, 638.46, and 533.07, and the 12-month means were 1141.58 ± 465.98, 1054.52 ± 298.52, and 1051.39 ± 311.39, respectively. In experiment 2, the smallest mean absolute monthly errors were 758.83, 552.07, and 365.82, respectively, and the monthly means were 1630.98 ± 625.66, 995.71 ± 374.16, and 1074.90 ± 450.09. In this case, however, the first result presents a larger error linked to a larger standard deviation.
Once again, the results obtained from experiment 2 with 1 layer performed worse than the others, indicating that the model exhibited lower performance.

5.2. Classical Experiments

The classical experiments were performed as a way of comparing the results obtained with those of traditionally used classical methods. For this purpose, a classical recurrent neural network with 128 and 1024 neurons in the recurrent layer was considered. RNNs were selected for benchmarking because they are a well-established method for time series forecasting, provide a robust benchmark for comparison, and help evaluate the potential benefits of quantum neural networks in demand prediction, as they are capable of using temporal correlation in the data to make predictions. Figure 8 shows the results obtained using this model, and convergence graphs are shown in Section 5.5.
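The exact recurrent architecture is not spelled out beyond the size of the recurrent layer, so the following Keras sketch should be read as one plausible configuration (a simple recurrent cell, one time step per monthly sample, MAE loss) rather than the benchmark actually used.

```python
import tensorflow as tf

n_features = 4   # 4-, 8-, or 19-feature dataset
n_units = 128    # 128 or 1024 neurons in the recurrent layer

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1, n_features)),   # one time step per monthly sample
    tf.keras.layers.SimpleRNN(n_units),
    tf.keras.layers.Dense(1),                       # daily average of financings
])
model.compile(optimizer="adam", loss="mae")
model.summary()
```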
In experiment 1, involving 4 and 8 features, the results were similar, with 12-month means of the mean absolute errors of 774.93 ± 199.39 and 997.12 ± 331.18, respectively. However, when all 19 initial features were taken into account, these results were skewed by the variable that had significantly higher values in the last 5 months, so the error increased significantly, and the 12-month mean was 90,480.42 ± 111,210.54. Given that this increase in error and error variance only occurs in the last few months, the results for 19 features are presented in two graphs, as the scale of the results changed.
Disregarding the last 5 months, the model showed a mean absolute error of 344.19 with a mean standard deviation of 197.55, which is significantly better than all the other models (classical or quantum) presented.
In experiment 2, involving 4 and 8 features, the results also behaved similarly, with 12-month means of the mean absolute errors of 646.12 ± 293.77 and 683.30 ± 332.37. As in the first experiment, when all 19 initial features were considered, the results were biased by the variable that had significantly higher values in the last 5 months, so the error increased significantly, and the 12-month mean in this experiment was 79,695.14 ± 61,675.02. The results of this experiment with 19 features were also presented in two graphs, since the scale of the results changed.
Table 1 summarizes the best classical and quantum annual mean MAE results, based on the results above and the tables in the Appendices.

5.3. Simulation Environment

All the simulations were performed in the PennyLane quantum computing software development kit [36], developed by the quantum computing company Xanadu [37], also using the TensorFlow machine learning library [38]. The simulations were carried out in an HPC environment on Intel Xeon Platinum 8260L processors. The simulations involving 19 features were performed on 17 cores, while the simulations involving 4 and 8 features were performed on a single core.

5.4. Training Time

The execution times for each sample of the quantum model are shown in Table 2, and for the classical model, in Table 3. The total computing time of the runs is given by the values in the table multiplied by ten since ten samples were extracted for each model.
Given that the demand forecasting problem was solved through a simulation and not on a quantum computer, it is not possible to correlate the processing times obtained with those of a quantum computer. Simulating quantum circuits is expensive, requiring an amount of resources that grows exponentially with the number of qubits. On a quantum computer this demand would scale linearly, since such a device naturally implements the quantum properties that classical algorithms must simulate. Instead, the rapid convergence of the models can be interpreted as an indication of the shorter execution times required by the quantum model.
Processing time is often pointed out as an advantage of quantum computing, since some quantum algorithms have an advantage over the best classical algorithms that perform the same task. For instance, Shor's algorithm and Grover's algorithm are able to perform their tasks in exponentially and quadratically less time than a classical computer, respectively. However, when analyzing the computational advantages, other metrics must be taken into account, such as the accuracy of the results and the savings in computational and energy resources.

5.5. Convergence

The research on the convergence of quantum models showed that the results converge before the first 30 training epochs, so the 10 experiments used for the statistical analysis of each model were carried out using only 30 epochs. This decision was based on preliminary tests considering training with 1000 epochs, which showed rapid convergence of the models, as shown in Figure 9 for 4 features, with marginal or zero improvements in performance from that point onwards. Therefore, this approach optimizes the use of computational resources and avoids overtraining. In addition, although the models were simulated in a classical environment, quantum resources are currently scarce, so the predictive quality of the models linked to fast and stable convergence should be considered an advantage of these models. The convergence graphs of the models are shown in Figure 10, Figure 11 and Figure 12.

5.5.1. Quantum Models

In this section, we present the performance of the quantum models trained with 4, 8, and 19 input features. The loss curves are shown for different configurations of quantum circuits, varying the number of layers and the experiment type. Figure 10 shows the results for models with 4 features, Figure 11 for models with 8 features, and Figure 12 for models with 19 features. Each figure illustrates the evolution of the training loss over the epochs, with comparisons between test and validation losses. The results provide insights into how circuit depth and the number of input features affect convergence.

5.5.2. Classical Models

The validation loss of the classical model with 19 features, shown in Figure 13f, indicates that this model performs better than the other models, whether classical or quantum. However, the model’s performance with the test data, presented in Figure 8e–h, shows results that are not consistent with these findings. This discrepancy is due to the behavior of one of the features described earlier, where there was a significant and sudden increase in the data. The results that performed better with the test data, despite having a higher loss than that shown in Figure 13f, assigned a lower weight to the feature in question.

6. Conclusions

Here, a quantum neural network was utilized for the first time to solve the problem of predicting short-term demand for used vehicle financing. The tests were carried out on datasets with 4, 8, and 19 features. The results were compared with those obtained using a classical recurrent neural network and show that the models achieve similar accuracy in their respective best cases, with the quantum model using fewer features and parameters and converging in fewer epochs than the classical model. In addition, the quantum model showed less bias towards problematic features in the scenarios with the largest number of features considered. Thus, the results show evidence that quantum models can be excellent candidates for a future implementation of this task on large-scale quantum computers. These results can possibly be extended to other predictions of interest to the financial sector, creating a new way of forecasting in the financial industry. They could be further explored in subsequent stages by testing other quantum models, including quantum analogs of classical recurrent neural networks, and comparing the results with more robust variations of classical recurrent neural networks.
Code Availability: The code used in the experiments is available through the following link: https://github.com/morgoth00/quantum-demand-forecasting, accessed on 25 February 2025.

Author Contributions

Conceptualization, G.F.d.J.; Methodology, V.L.d.S.; Software, G.F.d.J.; Validation, G.F.d.J.; Investigation, G.F.d.J.; Data curation, G.F.d.J.; Writing—original draft, G.F.d.J., M.H.F.d.S. and O.M.P.; Writing—review & editing, G.F.d.J., M.H.F.d.S. and O.M.P.; Supervision, L.C.d.S., C.d.S.C. and V.L.d.S.; Project administration, G.F.d.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially supported by QuIIN—EMBRAPII CIMATEC Competence Center in Quantum Technologies, with financial resources from the PPI IoT/Manufatura 4.0 of the MCTI grant number 053/2023, signed with EMBRAPII. C.C., G.F.d.J., L.Q.G. and M.H.F.d.S. thank the Bahia State Research Support Foundation (FAPESB) for financial support (grant numbers APP0041/2023 and PPP0006/2024).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The authors do not have permission to share data.

Acknowledgments

The authors would like to thank the Brazilian bank BV for providing the data, technical and financial support. They also thank the SENAI CIMATEC Supercomputing Center for Industrial Innovation for the infrastructure access, technical and financial support.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. The Algorithm

Appendix A.1. Encoding

The first step of the algorithm involves encoding classical data into quantum bits. Initially, all qubits are prepared in a uniform superposition state using the Hadamard gate. This gate is a fundamental tool in quantum computing and is represented by the matrix shown in Equation (A1). The Hadamard gate ensures that each qubit has an equal probability of being measured in either the |0⟩ or |1⟩ state, thus enabling subsequent operations to be performed on superposed states. Figure A1 shows the visual representation of the action of this quantum logic gate on a qubit.
Figure A1. Bloch sphere representation of the H gate acting on a single qubit. (a) shows the initial qubit state, while (b) shows the state after applying this gate.
H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. \quad (A1)
However, at this stage, classical data have not yet been fully transformed into quantum states. The encoding process is achieved by applying an R_y rotation gate to each qubit. The R_y gate introduces a parameterized rotation around the y-axis of the Bloch sphere, effectively encoding numerical classical data into quantum amplitudes. This encoding step is pivotal, as it translates classical information into the quantum domain, enabling computations that exploit quantum mechanics. Equation (A2) provides the matrix representation of the R_y gate, illustrating its dependence on the rotation angle parameter. Figure A2 shows a visual representation of the action of this quantum logic gate on a qubit.
R_y(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}. \quad (A2)
Figure A2. Bloch sphere representation of the R_y(θ) gate acting on a single qubit. (a) shows the initial qubit state, while (b) shows the state after applying this gate.
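The effect of these two gates on a single qubit can be checked numerically; the short NumPy sketch below reproduces Equations (A1) and (A2) and applies them to the |0⟩ state.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

ket0 = np.array([1.0, 0.0])

# Hadamard maps |0> to the uniform superposition (|0> + |1>)/sqrt(2)
print(H @ ket0)        # [0.707..., 0.707...]

# Angle embedding of a (standardized) feature value x as a rotation angle
x = 0.8
print(ry(x) @ ket0)    # amplitudes [cos(x/2), sin(x/2)]
```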

Appendix A.2. Ansatz

The optimization process of the algorithm is implemented through an ansatz, which is a variational quantum circuit designed for specific problem-solving tasks. The ansatz is composed of various quantum gates, including the R_y gate (introduced in Equation (A2)), as well as R_x, R_z, and CNOT gates. The R_x and R_z gates perform rotations around the x and z axes of the Bloch sphere, respectively, allowing for adjustments to the phase and amplitude of the quantum states. Figure A3 and Figure A4 show the visual representation of the individual action of these gates on a qubit. The CNOT gate, a two-qubit entangling gate, introduces correlations between qubits, which are essential for harnessing the power of entanglement in quantum algorithms. The matrix representations of these gates, provided below, further detail their contributions to the ansatz structure.
R_x(\theta) = \begin{pmatrix} \cos(\theta/2) & -i\sin(\theta/2) \\ -i\sin(\theta/2) & \cos(\theta/2) \end{pmatrix},
R_z(\theta) = \begin{pmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{pmatrix},
\mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.
Figure A3. Bloch sphere representation of the R_x(θ) gate acting on a single qubit. (a) shows the initial qubit state, while (b) shows the state after applying this gate.
Figure A4. Bloch sphere representation of the R_z(θ) gate acting on a single qubit. (a) shows the initial qubit state, while (b) shows the state after applying this gate.

Appendix A.3. Measurements

Measurements at the end of the quantum circuit are carried out by projecting the states of the qubits onto the Z basis, effectively collapsing the superposition states into classical bits. This process is probabilistic, as the outcome depends on the quantum state amplitudes defined during the encoding and optimization steps. To ensure reliable results, a statistically significant number of measurements must be performed, with outcomes aggregated to determine the probabilities of each result. This measurement step is critical for extracting meaningful information from quantum computations.
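Concretely, when a single qubit is measured in the Z basis over many shots, the expectation value used as the network output is estimated from the observed outcome frequencies. The following sketch with simulated shot counts illustrates the aggregation step; the shot number and probabilities are arbitrary.

```python
import numpy as np

def expval_z(samples):
    """Estimate <Z> from Z-basis outcomes (0 or 1): outcome 0 counts as +1, outcome 1 as -1."""
    samples = np.asarray(samples)
    return np.mean(1 - 2 * samples)

# Hypothetical run with 1000 shots on a state with P(outcome 1) = 0.2
rng = np.random.default_rng(42)
shots = rng.binomial(1, p=0.2, size=1000)
print(expval_z(shots))   # close to 0.8 - 0.2 = 0.6; the estimate tightens as shots increase
```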

Appendix A.4. Optimization

The optimization step is performed classically. In this step, the angles of the gates in the ansatz are adjusted by a classical optimizer. This optimizer iteratively refines the gate parameters to minimize a cost function that is typically related to the target problem being solved. Once the optimizer determines new optimal angles, these updated angles are applied to a new quantum circuit. The circuit retains the same structure as the previous one but incorporates the new angles into the ansatz. This iterative process continues until convergence is achieved or a predefined criterion is met, ensuring an efficient approach to solving the problem at hand.
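The loop below sketches this hybrid iteration with PennyLane's gradient-descent optimizer on a toy two-qubit circuit; the cost function, step size, and epoch budget are illustrative, not the settings of the experiments in Section 5.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params, x):
    qml.RY(x, wires=0)           # encoding of one feature value
    qml.RY(params[0], wires=0)   # trainable ansatz angles
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

def cost(params, x, target):
    # Squared error between the circuit output and the target value
    return (circuit(params, x) - target) ** 2

opt = qml.GradientDescentOptimizer(stepsize=0.1)
params = np.array([0.1, 0.2], requires_grad=True)

# The optimizer proposes new angles, which are fed into a fresh circuit of the same structure
for epoch in range(30):
    params = opt.step(lambda p: cost(p, 0.5, 0.3), params)

print(cost(params, 0.5, 0.3))
```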

Appendix B. Standard Deviation

The monthly standard deviation obtained in each experiment is presented in this section. Appendix B.1 shows the deviations obtained in the quantum experiments, while Appendix B.2 shows the deviations obtained in the classical experiments.

Appendix B.1. Quantum Experiments

Table A1, Table A2 and Table A3 show the standard deviation obtained in the quantum experiments carried out with 4, 8, and 19 features, respectively. The columns of the tables represent each experiment and the number of layers used, while the rows represent the deviations in each month. The last two rows represent the mean and median deviations over the 12-month period.
Table A1. Monthly standard deviation for quantum experiments with 4 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last two rows show the 12-month mean and median of the standard deviation.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 175.8777 | 449.1271 | 579.0006 | 292.4347 | 781.3115 | 945.6261
Month 2 | 160.8550 | 454.754 | 541.9076 | 311.3989 | 697.3065 | 928.2651
Month 3 | 243.3442 | 517.2781 | 590.0444 | 324.5650 | 812.6765 | 1001.6944
Month 4 | 348.6137 | 574.0108 | 598.8600 | 410.6001 | 871.5712 | 1111.7944
Month 5 | 384.9921 | 710.9180 | 472.9006 | 355.1420 | 780.3099 | 795.7520
Month 6 | 443.4344 | 770.3615 | 422.4470 | 326.7457 | 730.8906 | 498.5881
Month 7 | 345.1314 | 893.0815 | 746.0978 | 629.6234 | 858.7378 | 506.8464
Month 8 | 815.3586 | 990.5417 | 737.8833 | 707.0781 | 1085.2201 | 506.1400
Month 9 | 543.9914 | 982.4916 | 804.3594 | 871.3573 | 1055.3599 | 458.0408
Month 10 | 643.1245 | 810.9172 | 597.7432 | 578.6797 | 931.9728 | 502.1143
Month 11 | 738.7717 | 658.1062 | 395.4243 | 347.0588 | 1020.1457 | 794.2041
Month 12 | 905.4925 | 695.5312 | 451.0824 | 302.2530 | 1077.0132 | 679.5169
Mean | 479.0823 | 708.8950 | 578.1459 | 454.7447 | 891.8763 | 727.3819
Median | 414.2133 | 703.2246 | 584.5225 | 351.1004 | 865.1545 | 736.8605
Table A2. Monthly standard deviation for quantum experiments with 8 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last two rows show the 12-month mean and median of the standard deviation.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 160.8833 | 252.3982 | 326.4278 | 121.1870 | 540.7551 | 466.1218
Month 2 | 389.2180 | 321.1851 | 321.9523 | 115.6250 | 544.9329 | 578.9061
Month 3 | 199.1200 | 407.9355 | 427.0799 | 116.9056 | 628.8543 | 312.7844
Month 4 | 317.9407 | 397.5779 | 462.1943 | 276.6656 | 595.7785 | 338.2273
Month 5 | 406.4636 | 597.9564 | 480.5169 | 345.8178 | 437.5681 | 255.3891
Month 6 | 710.3341 | 482.3563 | 550.8562 | 235.6304 | 739.2531 | 322.0360
Month 7 | 557.0957 | 722.5253 | 642.4665 | 575.8245 | 602.2518 | 353.3172
Month 8 | 1059.4827 | 621.9092 | 885.9215 | 693.9122 | 616.6715 | 352.7128
Month 9 | 758.2744 | 793.3121 | 874.5666 | 801.1654 | 976.0197 | 579.2410
Month 10 | 841.0811 | 915.8552 | 820.2909 | 697.0682 | 655.6456 | 514.6416
Month 11 | 793.1211 | 887.0502 | 612.1734 | 269.6858 | 615.5451 | 507.4711
Month 12 | 1245.5292 | 529.3304 | 646.7094 | 483.5458 | 799.0230 | 362.0224
Mean | 619.8787 | 577.4493 | 587.5963 | 394.4194 | 646.0249 | 411.9059
Median | 633.7149 | 563.6434 | 581.5148 | 311.2417 | 616.1083 | 357.6698
Table A3. Monthly standard deviation for quantum experiments with 19 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last two rows show the 12-month mean and median of the standard deviation.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 516.7164 | 494.9935 | 578.7277 | 355.9007 | 427.1650 | 521.0726
Month 2 | 572.1335 | 685.2427 | 280.9679 | 145.9410 | 421.7772 | 693.0295
Month 3 | 586.1252 | 738.5064 | 532.6228 | 675.5994 | 658.4324 | 688.9236
Month 4 | 598.8343 | 669.0032 | 586.9466 | 590.2582 | 758.9672 | 748.5518
Month 5 | 639.3816 | 539.5154 | 444.1018 | 266.6220 | 437.0098 | 685.1440
Month 6 | 1091.7456 | 705.1529 | 436.6959 | 519.9328 | 534.6220 | 579.4321
Month 7 | 999.3549 | 665.8485 | 439.2488 | 413.0658 | 689.7512 | 570.5552
Month 8 | 647.7254 | 797.2639 | 559.3305 | 932.4816 | 570.7285 | 670.7190
Month 9 | 967.2389 | 707.2880 | 587.0741 | 1128.5844 | 684.2530 | 737.8799
Month 10 | 1389.6125 | 641.6429 | 565.2284 | 1301.0933 | 705.1833 | 659.6317
Month 11 | 953.2446 | 811.4675 | 466.9461 | 920.2317 | 748.9888 | 479.6019
Month 12 | 995.1870 | 863.9411 | 424.4394 | 959.6248 | 798.4880 | 565.5388
Mean | 829.7750 | 693.3222 | 491.8608 | 648.1113 | 619.6139 | 633.3400
Median | 800.4850 | 695.1979 | 499.7845 | 632.9288 | 671.3427 | 665.1754

Appendix B.2. Classical Experiments

Table A4 shows the standard deviation obtained in the classical experiments carried out with 4, 8, and 19 features. The columns of the table represent each experiment and the number of features used, while the rows represent the deviations in each month. The last two rows represent the average and median deviations over the 12-month period.
Table A4. Monthly standard deviation for classical experiments with 4, 8, and 19 features. The columns represent each experiment with 4, 8, and 19 features, and the rows represent the months. The last two rows show the 12-month mean and median of the standard deviation.
Month | Classical Experiment 1, 4 Features | Classical Experiment 1, 8 Features | Classical Experiment 1, 19 Features | Classical Experiment 2, 4 Features | Classical Experiment 2, 8 Features | Classical Experiment 2, 19 Features
Month 1 | 308.17 | 515.30 | 983.76 | 99.29 | 189.26 | 263.49
Month 2 | 354.86 | 592.63 | 928.49 | 258.99 | 240.52 | 368.53
Month 3 | 528.00 | 597.08 | 684.88 | 221.78 | 155.91 | 158.13
Month 4 | 495.66 | 797.64 | 817.02 | 321.39 | 252.95 | 190.64
Month 5 | 472.31 | 703.64 | 676.05 | 333.65 | 171.95 | 136.29
Month 6 | 552.07 | 611.75 | 748.18 | 287.49 | 197.82 | 353.89
Month 7 | 484.05 | 613.97 | 494.19 | 346.03 | 2230.98 | 327.44
Month 8 | 521.11 | 929.49 | 218,484.99 | 492.37 | 213.54 | 66,181.53
Month 9 | 766.99 | 729.89 | 217,021.29 | 573.75 | 204.83 | 87,269.59
Month 10 | 809.57 | 621.17 | 224,172.77 | 623.91 | 312.91 | 94,615.78
Month 11 | 797.50 | 760.97 | 231,876.29 | 322.52 | 218.21 | 63,811.92
Month 12 | 697.86 | 751.18 | 260,196.92 | 385.86 | 154.92 | 55,942.54
Mean | 565.68 | 685.39 | 96,423.73 | 355.59 | 211,598 | 30,801.65
Median | 524.55 | 662.40 | 956.13 | 328.09 | 209.19 | 361.21

Appendix C. Mean Absolute Error

The mean absolute error obtained in each experiment is presented in this section. Appendix C.1 presents the errors obtained in the quantum experiments, while Appendix C.2 presents the errors obtained in the classical experiments.

Appendix C.1. Quantum Experiments

Table A5, Table A6 and Table A7 show the mean absolute error obtained in the quantum experiments carried out with 4, 8, and 19 features, respectively. The columns of the tables represent each experiment and the number of layers used, while the rows represent the errors in each month. The last row represents the average of the mean absolute errors over the 12-month period.
Table A5. Monthly mean absolute error for quantum experiments with 4 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last row shows the 12-month mean of the mean absolute error.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 650.18 | 401.92 | 530.33 | 792.33 | 1463.55 | 871.18
Month 2 | 1175.84 | 536.48 | 646.12 | 805.95 | 1469.63 | 951.18
Month 3 | 635.44 | 485.10 | 496.18 | 1117.42 | 1479.38 | 944.70
Month 4 | 299.90 | 453.46 | 513.00 | 804.10 | 1120.72 | 929.66
Month 5 | 334.44 | 707.43 | 858.72 | 1560.78 | 738.46 | 673.11
Month 6 | 493.43 | 658.21 | 842.96 | 2249.48 | 698.33 | 411.97
Month 7 | 733.64 | 886.78 | 717.68 | 2770.17 | 912.10 | 626.01
Month 8 | 1144.93 | 1027.13 | 827.18 | 2888.23 | 1013.24 | 867.35
Month 9 | 566.27 | 1390.83 | 1297.83 | 3154.11 | 885.93 | 522.42
Month 10 | 577.18 | 898.56 | 698.99 | 2561.83 | 1130.34 | 742.62
Month 11 | 596.82 | 1036.78 | 1036.90 | 1979.68 | 821.36 | 849.04
Month 12 | 982.13 | 792.25 | 641.45 | 2048.10 | 1066.48 | 1042.62
Mean | 682.52 ± 284.14 | 772.91 ± 292.49 | 758.94 ± 234.12 | 1894.35 ± 865.87 | 1066.63 ± 279.64 | 785.98 ± 192.21
Table A6. Monthly mean absolute error for quantum experiments with 8 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last row shows the 12-month mean of the mean absolute error.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 512.12 | 732.27 | 401.66 | 1097.63 | 1217.56 | 951.43
Month 2 | 616.63 | 794.92 | 530.46 | 1091.54 | 1366.40 | 1290.20
Month 3 | 410.03 | 795.86 | 393.68 | 1316.14 | 1277.38 | 1179.36
Month 4 | 346.10 | 439.18 | 431.12 | 724.89 | 835.73 | 683.89
Month 5 | 398.91 | 585.06 | 832.43 | 1327.13 | 647.49 | 441.65
Month 6 | 850.79 | 427.59 | 632.40 | 2452.52 | 1028.89 | 1035.87
Month 7 | 760.58 | 767.84 | 921.27 | 2418.72 | 1075.02 | 993.72
Month 8 | 1008.91 | 782.52 | 1008.66 | 2852.20 | 938.75 | 1180.06
Month 9 | 705.20 | 1388.40 | 1597.51 | 2948.62 | 884.34 | 941.42
Month 10 | 692.89 | 952.77 | 1075.67 | 1764.08 | 916.54 | 935.58
Month 11 | 806.42 | 1036.14 | 1186.70 | 1923.66 | 684.19 | 899.97
Month 12 | 1408.56 | 596.32 | 881.29 | 1709.09 | 1053.04 | 1236.42
Mean | 709.76 ± 297.93 | 774.90 ± 266.34 | 824.40 ± 366.91 | 1802.19 ± 730.42 | 993.78 ± 222.06 | 980.79 ± 239.93
Table A7. Monthly mean absolute error for quantum experiments with 19 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last row shows the 12-month mean of the mean absolute error.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 1290.76 | 1271.09 | 1184.78 | 2580.36 | 1465.18 | 1669.35
Month 2 | 1138.65 | 1341.46 | 1113.86 | 2454.71 | 1664.07 | 1878.59
Month 3 | 1012.17 | 864.44 | 852.39 | 1860.12 | 1122.22 | 1412.05
Month 4 | 990.79 | 990.92 | 1030.96 | 1660.29 | 801.17 | 1150.94
Month 5 | 522.95 | 651.76 | 533.07 | 1597.31 | 987.06 | 1119.59
Month 6 | 935.99 | 1132.94 | 1024.33 | 1732.11 | 785.72 | 775.02
Month 7 | 835.87 | 1570.97 | 1529.91 | 2496.80 | 1508.08 | 1190.74
Month 8 | 525.35 | 1240.00 | 1177.67 | 758.83 | 967.37 | 1232.49
Month 9 | 1090.46 | 710.54 | 629.81 | 1014.02 | 592.40 | 564.18
Month 10 | 2013.58 | 1264.20 | 1509.81 | 1208.45 | 623.42 | 892.39
Month 11 | 1442.46 | 638.46 | 791.39 | 967.58 | 552.07 | 365.82
Month 12 | 1899.93 | 977.43 | 1238.75 | 1241.23 | 879.85 | 647.64
Mean | 1141.58 ± 465.98 | 1054.52 ± 298.52 | 1051.39 ± 311.39 | 1630.98 ± 625.66 | 995.71 ± 374.16 | 1074.90 ± 450.09

Appendix C.2. Classical Experiments

Table A8 shows the mean absolute error obtained in the classical experiments carried out with 4, 8, and 19 features. The columns of the table represent each experiment and the number of features used, while the rows represent the error obtained in each month. The last row reports the 12-month mean and standard deviation.
Table A8. Monthly mean absolute error for classical experiments with 4, 8, and 19 features. The columns represent each experiment with 4, 8, and 19 features, and the rows represent the months. The last row shows the 12-month mean of the mean absolute error.
Month | Classical Experiment 1, 4 Features | Classical Experiment 1, 8 Features | Classical Experiment 1, 19 Features | Classical Experiment 2, 4 Features | Classical Experiment 2, 8 Features | Classical Experiment 2, 19 Features
Month 1 | 1080.65 | 1214.68 | 1083.70 | 852.70 | 806.36 | 331.93
Month 2 | 772.14 | 1195.05 | 1083.74 | 549.44 | 886.17 | 497.79
Month 3 | 934.07 | 1513.97 | 1422.53 | 821.23 | 1255.03 | 700.34
Month 4 | 658.06 | 1244.07 | 806.78 | 332.87 | 781.47 | 229.77
Month 5 | 549.08 | 1104.81 | 796.94 | 272.64 | 584.70 | 97.59
Month 6 | 725.33 | 1006.55 | 894.82 | 380.18 | 1035.19 | 289.33
Month 7 | 909.29 | 1425.20 | 535.88 | 539.62 | 1021.39 | 262.57
Month 8 | 477.09 | 663.75 | 191,450.99 | 286.32 | 357.82 | 83,613.94
Month 9 | 967.82 | 663.77 | 207,996.69 | 1005.15 | 199.17 | 178,754.93
Month 10 | 802.83 | 645.03 | 232,189.14 | 850.08 | 302.93 | 129,657.59
Month 11 | 933.40 | 673.35 | 211,070.68 | 1136.52 | 557.77 | 175,224.64
Month 12 | 489.41 | 615.22 | 236,433.09 | 726.71 | 411.61 | 170,439.79
Mean | 774.93 ± 199.39 | 997.12 ± 331.18 | 90,480.42 ± 111,210.54 | 646.12 ± 293.77 | 683.30 ± 332.37 | 79,695.14 ± 61,675.02

Appendix D. Mean Absolute Percentage Error

The mean absolute percentage error obtained in each experiment is presented in this section. Appendix D.1 presents the errors obtained in the quantum experiments, while Appendix D.2 presents the errors obtained in the classical experiments.

Appendix D.1. Quantum Experiments

Table A9. Monthly mean absolute percentage error for quantum experiments with 4 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last row shows the 12-month mean of the mean absolute percentage error.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 5.78 | 3.57 | 4.71 | 7.05 | 13.02 | 7.75
Month 2 | 10.58 | 4.83 | 5.82 | 7.25 | 13.23 | 8.56
Month 3 | 5.68 | 4.34 | 4.43 | 9.99 | 13.22 | 8.44
Month 4 | 2.57 | 3.88 | 4.39 | 6.88 | 9.59 | 7.96
Month 5 | 2.77 | 5.87 | 7.12 | 12.95 | 6.13 | 5.58
Month 6 | 4.13 | 5.51 | 7.06 | 18.83 | 5.85 | 3.45
Month 7 | 6.45 | 7.79 | 6.30 | 24.34 | 8.01 | 5.50
Month 8 | 10.04 | 9.01 | 7.25 | 25.33 | 8.89 | 7.61
Month 9 | 4.79 | 11.77 | 10.98 | 26.69 | 7.50 | 4.42
Month 10 | 5.02 | 7.82 | 6.08 | 22.30 | 9.84 | 6.46
Month 11 | 4.99 | 8.66 | 8.66 | 16.54 | 6.86 | 7.09
Month 12 | 8.49 | 6.85 | 5.55 | 17.71 | 9.22 | 9.01
Mean | 5.94 ± 2.22 | 6.65 ± 2.34 | 6.53 ± 1.88 | 16.32 ± 6.93 | 9.28 ± 2.22 | 6.82 ± 1.53
Table A10. Monthly mean absolute percentage error for quantum experiments with 8 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last row shows the 12-month mean of the mean absolute percentage error.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 4.56 | 6.51 | 3.57 | 9.76 | 10.83 | 8.46
Month 2 | 5.55 | 7.16 | 4.77 | 9.82 | 12.30 | 11.61
Month 3 | 3.66 | 7.11 | 3.52 | 11.76 | 11.42 | 10.54
Month 4 | 2.96 | 3.76 | 3.69 | 6.21 | 7.15 | 5.85
Month 5 | 3.31 | 4.85 | 6.91 | 11.01 | 5.37 | 3.66
Month 6 | 7.12 | 3.58 | 5.29 | 20.53 | 8.61 | 8.67
Month 7 | 6.68 | 6.75 | 8.09 | 21.25 | 9.44 | 8.73
Month 8 | 8.85 | 6.86 | 8.84 | 25.01 | 8.23 | 10.35
Month 9 | 5.97 | 11.75 | 13.52 | 24.95 | 7.48 | 7.97
Month 10 | 6.03 | 8.29 | 9.36 | 15.35 | 7.98 | 8.14
Month 11 | 6.74 | 8.66 | 9.91 | 16.07 | 5.72 | 7.52
Month 12 | 12.18 | 5.16 | 7.62 | 14.78 | 9.11 | 10.69
Mean | 6.13 ± 2.38 | 6.70 ± 2.06 | 7.09 ± 2.81 | 15.54 ± 5.78 | 8.64 ± 1.87 | 8.52 ± 2.13
Table A11. Monthly mean absolute percentage error for quantum experiments with 19 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last row shows the 12-month mean of the mean absolute percentage error.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 11.48 | 11.31 | 10.54 | 22.95 | 13.03 | 14.85
Month 2 | 10.25 | 12.07 | 10.03 | 22.10 | 14.98 | 16.91
Month 3 | 9.05 | 7.73 | 7.62 | 16.63 | 10.03 | 12.62
Month 4 | 8.48 | 8.48 | 8.82 | 14.21 | 6.86 | 9.85
Month 5 | 4.34 | 5.41 | 4.42 | 13.25 | 8.19 | 9.29
Month 6 | 7.84 | 9.48 | 8.57 | 14.50 | 6.58 | 6.49
Month 7 | 7.34 | 13.80 | 13.44 | 21.93 | 13.25 | 10.46
Month 8 | 4.60 | 10.87 | 10.33 | 6.66 | 8.48 | 10.81
Month 9 | 9.23 | 6.01 | 5.33 | 8.58 | 5.01 | 4.77
Month 10 | 17.53 | 11.00 | 13.14 | 10.52 | 5.43 | 7.77
Month 11 | 12.05 | 5.33 | 6.61 | 8.08 | 4.61 | 3.06
Month 12 | 16.43 | 8.45 | 10.71 | 10.73 | 7.61 | 5.60
Mean | 9.88 ± 3.71 | 9.16 ± 2.53 | 9.13 ± 2.74 | 14.18 ± 4.74 | 8.67 ± 3.27 | 9.37 ± 3.86

Appendix D.2. Classical Experiments

Table A12. Monthly mean absolute percentage error for classical experiments with 4, 8, and 19 features. The columns represent each experiment with 4, 8, and 19 features, and the rows represent the months. The last row shows the 12-month mean of the mean absolute percentage error.
Month | Classical Experiment 1, 4 Features | Classical Experiment 1, 8 Features | Classical Experiment 1, 19 Features | Classical Experiment 2, 4 Features | Classical Experiment 2, 8 Features | Classical Experiment 2, 19 Features
Month 1 | 9.61 | 10.80 | 9.64 | 7.58 | 7.17 | 2.95
Month 2 | 6.95 | 10.76 | 9.76 | 4.95 | 7.98 | 4.48
Month 3 | 8.35 | 13.53 | 12.71 | 7.34 | 11.22 | 6.26
Month 4 | 5.63 | 10.65 | 6.91 | 2.85 | 6.69 | 1.97
Month 5 | 4.56 | 9.17 | 6.61 | 2.26 | 4.85 | 0.81
Month 6 | 6.07 | 8.43 | 7.49 | 3.18 | 8.67 | 2.42
Month 7 | 7.99 | 12.52 | 4.71 | 4.74 | 8.97 | 2.30
Month 8 | 4.18 | 5.82 | 1679.05 | 2.51 | 3.14 | 733.31
Month 9 | 8.19 | 5.62 | 1759.88 | 8.50 | 1.68 | 1512.47
Month 10 | 6.99 | 5.61 | 2021.14 | 7.40 | 2.64 | 1128.63
Month 11 | 7.80 | 5.62 | 1763.24 | 9.50 | 4.66 | 1463.79
Month 12 | 4.23 | 5.32 | 2044.47 | 6.28 | 3.56 | 1473.82
Mean | 6.71 ± 1.61 | 8.65 ± 2.74 | 777.13 ± 841.90 | 5.59 ± 2.08 | 5.93 ± 2.56 | 527.77 ± 599.01

Appendix E. Mean Squared Error

Appendix E.1. Quantum Experiments

Table A13. Monthly mean squared error for quantum experiments with 4 features. The columns represent each experiment with 1, 3, and 5 variational layers, and the rows represent the months. The last row shows the 12-month mean of the mean squared error.
Month | Experiment 1, 1 Layer | Experiment 1, 3 Layers | Experiment 1, 5 Layers | Experiment 2, 1 Layer | Experiment 2, 3 Layers | Experiment 2, 5 Layers
Month 1 | 4.51 × 10^5 | 2.39 × 10^5 | 3.69 × 10^5 | 7.05 × 10^5 | 2.45 × 10^6 | 1.21 × 10^6
Month 2 | 1.41 × 10^6 | 4.37 × 10^5 | 5.26 × 10^5 | 7.37 × 10^5 | 2.53 × 10^6 | 1.32 × 10^6
Month 3 | 4.57 × 10^5 | 2.78 × 10^5 | 3.42 × 10^5 | 1.34 × 10^6 | 2.54 × 10^6 | 1.39 × 10^6
Month 4 | 1.53 × 10^5 | 4.73 × 10^5 | 5.86 × 10^5 | 7.98 × 10^5 | 1.53 × 10^6 | 1.26 × 10^6
Month 5 | 1.35 × 10^5 | 9.53 × 10^5 | 9.39 × 10^5 | 2.55 × 10^6 | 7.09 × 10^5 | 5.80 × 10^5
Month 6 | 2.99 × 10^5 | 9.33 × 10^5 | 8.71 × 10^5 | 5.16 × 10^6 | 6.20 × 10^5 | 2.24 × 10^5
Month 7 | 6.68 × 10^5 | 1.19 × 10^6 | 8.19 × 10^5 | 7.75 × 10^6 | 1.12 × 10^6 | 5.96 × 10^5
Month 8 | 1.74 × 10^6 | 1.61 × 10^6 | 9.34 × 10^5 | 8.47 × 10^6 | 1.39 × 10^6 | 9.68 × 10^5
Month 9 | 4.16 × 10^5 | 2.68 × 10^6 | 2.01 × 10^6 | 1.00 × 10^7 | 1.07 × 10^6 | 4.14 × 10^5
Month 10 | 4.72 × 10^5 | 1.06 × 10^6 | 6.28 × 10^5 | 6.62 × 10^6 | 1.57 × 10^6 | 7.37 × 10^5
Month 11 | 5.18 × 10^5 | 1.46 × 10^6 | 1.22 × 10^6 | 4.03 × 10^6 | 9.90 × 10^5 | 1.01 × 10^6
Month 12 | 1.35 × 10^6 | 9.83 × 10^5 | 5.95 × 10^5 | 4.28 × 10^6 | 1.52 × 10^6 | 1.37 × 10^6
Mean | 6.72 × 10^5 ± 4.00 × 10^5 | 1.02 × 10^6 ± 6.88 × 10^5 | 8.20 × 10^5 ± 4.98 × 10^5 | 4.37 × 10^6 ± 2.36 × 10^6 | 1.50 × 10^6 ± 6.52 × 10^5 | 9.23 × 10^5 ± 4.34 × 10^5
Table A14. Monthly mean squared error for quantum experiments with 8 features. The columns represent each experiment with a number of 1, 3, and 5 variational layers, and the lines represent the months. The last line shows the 12-month mean of the mean squared error.
| Month | Exp. 1, 1 Layer | Exp. 1, 3 Layers | Exp. 1, 5 Layers | Exp. 2, 1 Layer | Exp. 2, 3 Layers | Exp. 2, 5 Layers |
|---|---|---|---|---|---|---|
| Month 1 | 2.86 × 10^5 | 5.94 × 10^5 | 2.35 × 10^5 | 1.22 × 10^6 | 1.75 × 10^6 | 1.10 × 10^6 |
| Month 2 | 4.47 × 10^5 | 7.25 × 10^5 | 3.75 × 10^5 | 1.20 × 10^6 | 2.13 × 10^6 | 1.97 × 10^6 |
| Month 3 | 2.04 × 10^5 | 7.83 × 10^5 | 2.57 × 10^5 | 1.74 × 10^6 | 1.99 × 10^6 | 1.48 × 10^6 |
| Month 4 | 1.70 × 10^5 | 2.67 × 10^5 | 2.50 × 10^5 | 5.94 × 10^5 | 8.84 × 10^5 | 5.71 × 10^5 |
| Month 5 | 2.42 × 10^5 | 4.18 × 10^5 | 8.87 × 10^5 | 1.87 × 10^6 | 5.92 × 10^5 | 2.54 × 10^5 |
| Month 6 | 1.06 × 10^6 | 3.28 × 10^5 | 5.46 × 10^5 | 6.06 × 10^6 | 1.31 × 10^6 | 1.17 × 10^6 |
| Month 7 | 8.11 × 10^5 | 6.86 × 10^5 | 9.48 × 10^5 | 5.97 × 10^6 | 1.45 × 10^6 | 1.10 × 10^6 |
| Month 8 | 1.61 × 10^6 | 7.27 × 10^5 | 1.38 × 10^6 | 8.32 × 10^6 | 1.21 × 10^6 | 1.52 × 10^6 |
| Month 9 | 7.26 × 10^5 | 2.41 × 10^6 | 2.99 × 10^6 | 8.78 × 10^6 | 1.03 × 10^6 | 1.11 × 10^6 |
| Month 10 | 6.80 × 10^5 | 1.37 × 10^6 | 1.57 × 10^6 | 3.49 × 10^6 | 1.15 × 10^6 | 1.11 × 10^6 |
| Month 11 | 9.36 × 10^5 | 1.46 × 10^6 | 1.75 × 10^6 | 3.77 × 10^6 | 6.26 × 10^5 | 1.04 × 10^6 |
| Month 12 | 3.08 × 10^6 | 4.89 × 10^5 | 1.15 × 10^6 | 3.13 × 10^6 | 1.49 × 10^6 | 1.65 × 10^6 |
| Mean | 8.54 × 10^5 ± 7.39 × 10^5 | 8.55 × 10^5 ± 5.70 × 10^5 | 1.03 × 10^6 ± 7.15 × 10^5 | 3.85 × 10^6 ± 2.07 × 10^6 | 1.30 × 10^6 ± 5.56 × 10^5 | 1.17 × 10^6 ± 4.58 × 10^5 |
Table A15. Monthly mean squared error for quantum experiments with 19 features. The columns represent each experiment with a number of 1, 3, and 5 variational layers, and the lines represent the months. The last line shows the 12-month mean of the mean squared error.
| Month | Exp. 1, 1 Layer | Exp. 1, 3 Layers | Exp. 1, 5 Layers | Exp. 2, 1 Layer | Exp. 2, 3 Layers | Exp. 2, 5 Layers |
|---|---|---|---|---|---|---|
| Month 1 | 1.91 × 10^6 | 1.84 × 10^6 | 1.71 × 10^6 | 6.77 × 10^6 | 2.31 × 10^6 | 3.03 × 10^6 |
| Month 2 | 1.59 × 10^6 | 2.22 × 10^6 | 1.31 × 10^6 | 6.04 × 10^6 | 2.93 × 10^6 | 3.96 × 10^6 |
| Month 3 | 1.33 × 10^6 | 1.20 × 10^6 | 9.82 × 10^5 | 3.87 × 10^6 | 1.65 × 10^6 | 2.42 × 10^6 |
| Month 4 | 1.30 × 10^6 | 1.38 × 10^6 | 1.28 × 10^6 | 3.07 × 10^6 | 1.08 × 10^6 | 1.75 × 10^6 |
| Month 5 | 4.01 × 10^5 | 6.40 × 10^5 | 4.26 × 10^5 | 2.62 × 10^6 | 1.15 × 10^6 | 1.68 × 10^6 |
| Month 6 | 1.39 × 10^6 | 1.73 × 10^6 | 1.22 × 10^6 | 3.24 × 10^6 | 8.46 × 10^5 | 9.03 × 10^5 |
| Month 7 | 9.67 × 10^5 | 2.93 × 10^6 | 2.48 × 10^6 | 6.36 × 10^6 | 2.65 × 10^6 | 1.70 × 10^6 |
| Month 8 | 3.77 × 10^5 | 2.02 × 10^6 | 1.61 × 10^6 | 8.48 × 10^5 | 1.14 × 10^6 | 1.91 × 10^6 |
| Month 9 | 1.72 × 10^6 | 7.92 × 10^5 | 5.83 × 10^5 | 1.37 × 10^6 | 4.76 × 10^5 | 6.51 × 10^5 |
| Month 10 | 5.15 × 10^6 | 1.95 × 10^6 | 2.54 × 10^6 | 1.75 × 10^6 | 5.42 × 10^5 | 1.15 × 10^6 |
| Month 11 | 2.90 × 10^6 | 7.33 × 10^5 | 8.23 × 10^5 | 1.19 × 10^6 | 5.05 × 10^5 | 2.53 × 10^5 |
| Month 12 | 4.50 × 10^6 | 1.57 × 10^6 | 1.70 × 10^6 | 1.89 × 10^6 | 8.39 × 10^5 | 6.90 × 10^5 |
| Mean | 1.96 × 10^6 ± 1.29 × 10^6 | 1.58 × 10^6 ± 6.39 × 10^5 | 1.39 × 10^6 ± 6.87 × 10^5 | 3.25 × 10^6 ± 1.91 × 10^6 | 1.34 × 10^6 ± 7.34 × 10^5 | 1.67 × 10^6 ± 1.04 × 10^6 |

Appendix E.2. Classical Experiments

Table A16. Monthly mean squared error for classical experiments with 4, 8, and 19 features. The columns represent each experiment with a number of 4, 8, and 19 features, and the lines represent the months. The last line shows the 12-month mean of the mean squared error.
| Month | Exp. 1, 4 Features | Exp. 1, 8 Features | Exp. 1, 19 Features | Exp. 2, 4 Features | Exp. 2, 8 Features | Exp. 2, 19 Features |
|---|---|---|---|---|---|---|
| Month 1 | 1.25 × 10^6 | 1.71 × 10^6 | 1.90 × 10^6 | 7.36 × 10^5 | 6.82 × 10^5 | 1.45 × 10^5 |
| Month 2 | 7.10 × 10^5 | 1.74 × 10^6 | 1.95 × 10^6 | 3.62 × 10^5 | 8.37 × 10^5 | 3.63 × 10^5 |
| Month 3 | 1.12 × 10^6 | 2.61 × 10^6 | 2.45 × 10^6 | 7.19 × 10^5 | 1.60 × 10^6 | 5.13 × 10^5 |
| Month 4 | 5.92 × 10^5 | 2.06 × 10^6 | 1.11 × 10^6 | 1.70 × 10^5 | 6.68 × 10^5 | 8.55 × 10^4 |
| Month 5 | 4.35 × 10^5 | 1.67 × 10^6 | 8.97 × 10^5 | 1.03 × 10^5 | 3.68 × 10^5 | 1.77 × 10^4 |
| Month 6 | 7.87 × 10^5 | 1.35 × 10^6 | 1.18 × 10^6 | 1.85 × 10^5 | 1.11 × 10^6 | 1.72 × 10^5 |
| Month 7 | 1.06 × 10^6 | 2.37 × 10^6 | 4.64 × 10^5 | 3.56 × 10^5 | 1.08 × 10^6 | 1.02 × 10^5 |
| Month 8 | 3.90 × 10^5 | 6.08 × 10^5 | 5.00 × 10^10 | 1.53 × 10^5 | 1.70 × 10^5 | 8.06 × 10^9 |
| Month 9 | 1.15 × 10^6 | 6.14 × 10^5 | 6.98 × 10^10 | 1.10 × 10^6 | 5.65 × 10^4 | 3.63 × 10^10 |
| Month 10 | 9.54 × 10^5 | 6.45 × 10^5 | 9.01 × 10^10 | 8.30 × 10^5 | 1.08 × 10^5 | 2.40 × 10^10 |
| Month 11 | 1.23 × 10^6 | 7.69 × 10^5 | 8.20 × 10^10 | 1.39 × 10^6 | 3.54 × 10^5 | 3.44 × 10^10 |
| Month 12 | 4.39 × 10^5 | 5.14 × 10^5 | 8.58 × 10^10 | 6.62 × 10^5 | 1.91 × 10^5 | 3.19 × 10^10 |
| Mean | 8.44 × 10^5 ± 3.12 × 10^5 | 1.39 × 10^6 ± 7.14 × 10^5 | 3.15 × 10^10 ± 3.84 × 10^10 | 5.64 × 10^5 ± 3.95 × 10^5 | 6.02 × 10^5 ± 4.62 × 10^5 | 1.12 × 10^10 ± 1.49 × 10^10 |

References

  1. Alessandri, P.; Mumtaz, H. Financial regimes and uncertainty shocks. J. Monet. Econ. 2019, 101, 31–46. [Google Scholar] [CrossRef]
  2. Ghosal, V.; Ye, Y. The impact of uncertainty on the number of businesses. J. Econ. Bus. 2019, 105, 105840. [Google Scholar] [CrossRef]
  3. Stockhammer, E.; Grafl, L. Financial uncertainty and business investment. Rev. Political Econ. 2010, 22, 551–568. [Google Scholar] [CrossRef]
  4. Kumar, G.; Jain, S.; Singh, U. Stock market forecasting using computational intelligence: A survey. Arch. Comput. Methods Eng. 2021, 28, 1069–1101. [Google Scholar] [CrossRef]
  5. Guo, Y. Research on the management innovation of smes from the perspective of strategic management. Front. Bus. Econ. Manag. 2023, 11, 154–160. [Google Scholar] [CrossRef]
  6. Spiliotis, E.; Makridakis, S.; Semenoglou, A.-A.; Assimakopoulos, V. Comparison of statistical and machine learning methods for daily sku demand forecasting. Oper. Res. 2020, 22, 3037–3061. [Google Scholar] [CrossRef]
  7. Abbasimehr, H.; Shabani, M.; Yousefi, M. An optimized model using lstm network for demand forecasting. Comput. Ind. Eng. 2020, 143, 106435. [Google Scholar] [CrossRef]
  8. Aktepe, A.; Yanık, E.; Ersöz, S. Demand forecasting application with regression and artificial intelligence methods in a construction machinery company. J. Intell. Manuf. 2021, 32, 1587–1604. [Google Scholar] [CrossRef]
  9. Subramanian, S.; Harsha, P. Demand modeling in the presence of unobserved lost sales. Manag. Sci. 2020, 67, 3803–3833. [Google Scholar] [CrossRef]
  10. Ferreira, K.; Lee, B.; Simchi-Levi, D. Analytics for an online retailer: Demand forecasting and price optimization. Manuf. Serv. Oper. Manag. 2016, 18, 69–88. [Google Scholar] [CrossRef]
  11. Dorrington, J.; Finney, I.; Palmer, T.; Weisheimer, A. Beyond skill scores: Exploring sub-seasonal forecast value through a case-study of french month-ahead energy prediction. Q. J. R. Meteorol. Soc. 2020, 146, 3623–3637. [Google Scholar] [CrossRef]
  12. Abolghasemi, M.; Beh, E.; Tarr, G.; Gerlach, R. Demand forecasting in supply chain: The impact of demand volatility in the presence of promotion. Comput. Ind. Eng. 2020, 142, 106380. [Google Scholar] [CrossRef]
  13. Aamer, A.; Yani, L.E.; Priyatna, I.A. Data analytics in the supply chain management: Review of machine learning applications in demand forecasting. Oper. Supply Chain Manag. An Int. J. 2020, 14. [Google Scholar] [CrossRef]
  14. Kerkkänen, A. Improving Demand Forecasting Practices in the Industrial Context; Lappeenranta University of Technology: Lappeenranta, Finland, 2010. [Google Scholar]
  15. Jeyaraman, J.; Krishnamoorthy, G.; Konidena, B.K.; Sistla, S.M.K. Machine learning for demand forecasting in manufacturing. Int. J. Multidiscip. Res. 2024, 6, 1–11. [Google Scholar]
  16. Feizabadi, J. Machine learning demand forecasting and supply chain performance. Int. J. Logistics Res. Appl. 2020, 25, 119–142. [Google Scholar] [CrossRef]
  17. Oner, M.; Üstündağ, A. Combining predictive base models using deep ensemble learning. J. Intell. Fuzzy Syst. 2020, 39, 6657–6668. [Google Scholar] [CrossRef]
  18. Kendon, V. Quantum computing using continuous-time evolution. Interface Focus 2020, 10, 20190143. [Google Scholar] [CrossRef]
  19. Cerezo, M.; Verdon, G.; Huang, H.-Y.; Cincio, L.; Coles, P.J. Challenges and opportunities in quantum machine learning. Nat. Comput. Sci. 2022, 2, 567–576. [Google Scholar] [CrossRef]
  20. Houssein, E.H.; Abohashima, Z.; Elhoseny, M.; Mohamed, W.M. Machine learning in the quantum realm: The state-of-the-art, challenges, and future vision. Expert Syst. Appl. 2022, 194, 116512. [Google Scholar] [CrossRef]
  21. Melnikov, A.; Kordzanganeh, M.; Alodjants, A.; Lee, R. Quantum machine learning: From physics to software engineering. Adv. Phys. X 2023, 8, 2165452. [Google Scholar] [CrossRef]
  22. Ciliberto, C.; Herbster, M.; Ialongo, A.D.; Pontil, M.; Rocchetto, A.; Severini, S.; Wossnig, L. Quantum machine learning: A classical perspective. Proc. Math. Phys. Eng. Sci. 2017, 474, 20170551. [Google Scholar] [CrossRef] [PubMed]
  23. Schuld, M.; Sweke, R.; Meyer, J.J. Effect of data encoding on the expressive power of variational quantum-machine-learning models. Phys. Rev. A 2021, 103, 032430. [Google Scholar] [CrossRef]
  24. Caro, M.C.; Huang, H.-Y.; Cerezo, M.; Sharma, K.; Sornborger, A.; Cincio, L.; Coles, P.J. Generalization in quantum machine learning from few training data. Nat. Commun. 2022, 13, 4919. [Google Scholar] [CrossRef] [PubMed]
  25. Cherrat, E.A.; Raj, S.; Kerenidis, I.; Shekhar, A.; Wood, B.; Dee, J.; Chakrabarti, S.; Chen, R.; Herman, D.; Hu, S.; et al. Quantum Deep Hedging. Quantum 2023, 7, 1191. [Google Scholar] [CrossRef]
  26. Shafizadeh-Moghadam, H. Fully component selection: An efficient combination of feature selection and principal component analysis to increase model performance. Expert Syst. Appl. 2021, 186, 115678. [Google Scholar] [CrossRef]
  27. Ma, J.; Yuan, Y. Dimension reduction of image deep feature using PCA. J. Vis. Commun. Image Represent. 2019, 63, 102578. [Google Scholar] [CrossRef]
  28. Avramouli, M.; Savvas, I.; Garani, G.; Vasilaki, A. Quantum machine learning: Current state and challenges. In Proceedings of the 25th Pan-Hellenic Conference on Informatics, PCI ’21, Volos, Greece, 26–28 November 2021; Association for Computing Machinery: New York, NY, USA, 2022; pp. 397–402. [Google Scholar]
  29. Lins, I.D.; Mendes Araújo, L.M.; Souto Maior, C.B.; da Silva Ramos, P.M.; José das Chagas Moura, M.; Ferreira-Martins, A.J.; Chaves, R.; Canabarro, A. Quantum machine learning for drowsiness detection with EEG signals. Process Saf. Environ. Prot. 2024, 186, 1197–1213. [Google Scholar]
  30. Combarro, E.; Gonzalez-Castillo, S. A Practical Guide to Quantum Machine Learning and Quantum Optimization: Hands-on Approach to Modern Quantum Algorithms, 1st ed.; Packt Publishing: Birmingham, UK, 2023. [Google Scholar]
  31. Schuld, M.; Petruccione, F. Machine Learning with Quantum Computers; Springer: Berlin, Germany, 2021. [Google Scholar]
  32. Ogur, B.; Yılmaz, I. The effect of superposition and entanglement on hybrid quantum machine learning for weather forecasting. Quantum Inf. Comput. 2023, 23, 181–194. [Google Scholar] [CrossRef]
  33. Cong, I.; Choi, S.; Lukin, M. Quantum convolutional neural networks. Nat. Phys. 2018, 15, 1273–1278. [Google Scholar] [CrossRef]
  34. Narayanan, A.; Menneer, T. Quantum artificial neural network architectures and components. Inf. Sci. 2000, 128, 231–255. [Google Scholar] [CrossRef]
  35. Sklearn. Available online: https://scikit-learn.org/stable/index.html (accessed on 24 November 2023).
  36. Pennylane. Available online: https://pennylane.ai/ (accessed on 31 July 2024).
  37. Xanadu. Available online: https://www.xanadu.ai/ (accessed on 31 July 2024).
  38. Tensorflow. Available online: https://www.tensorflow.org (accessed on 31 July 2024).
Figure 1. A two-layered ansatz applied to four qubits. Each layer is defined by a variational circuit V j dependent on some parameters θ j . The circuits E n t are used to entangle the qubits, and the state | ψ n denotes the output of the feature map.
Figure 2. Cumulative variance of the data. The x-axis represents the component index, while the y-axis represents the variance. The sets used in this article were marked in black (4 features), green (8 features), and red (19 features, the complete dataset). The bars represent the individual variance of each component, while the blue line represents the cumulative variance.
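Since the feature subsets marked in Figure 2 are read off the cumulative explained variance, a curve of this kind can be produced with scikit-learn's PCA [35]. The snippet below is a minimal sketch under the assumption of a standardized 19-column feature matrix; the random matrix X only stands in for the real dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: n samples x 19 features (stand-in for the real data).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 19))

X_std = StandardScaler().fit_transform(X)   # PCA is sensitive to feature scale
pca = PCA(n_components=19).fit(X_std)

cumulative = np.cumsum(pca.explained_variance_ratio_)
for k in (4, 8, 19):                        # feature-set sizes used in the experiments
    print(f"{k:2d} components -> {100 * cumulative[k - 1]:.1f}% of the variance")
```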
Figure 3. Variational quantum circuit. The Hadamard gate layer prepares the qubits in uniform superposition; Ry gates (red) encode the data in qubits; and the variational layer or ansatz (blue) entangles the qubits and applies parameterized rotations, where θ i , ϕ i , and ω i represent, respectively, the rotation angles in the x, y, and z axes in each qubit i, and are the trainable parameters of the model. The measurement layer (green) collapses the qubits, generating the outputs [32].
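A minimal PennyLane [36] sketch of the circuit described in Figure 3 is given below, assuming four qubits, angle (Ry) encoding of four standardized features, and a single variational layer. The CNOT entangling pattern and the parameter initialization are illustrative assumptions, not the exact configuration trained in this work.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(features, weights):
    # Hadamard layer: prepare the qubits in uniform superposition
    for w in range(n_qubits):
        qml.Hadamard(wires=w)
    # Encoding layer: Ry rotations load one (scaled) feature per qubit
    for w in range(n_qubits):
        qml.RY(features[w], wires=w)
    # Variational layer(s): entanglement followed by trainable x-, y-, z-rotations
    for layer in weights:                  # weights shape: (n_layers, n_qubits, 3)
        for w in range(n_qubits - 1):      # illustrative nearest-neighbour entanglement
            qml.CNOT(wires=[w, w + 1])
        for w in range(n_qubits):          # theta_i, phi_i, omega_i of Figure 3
            qml.RX(layer[w, 0], wires=w)
            qml.RY(layer[w, 1], wires=w)
            qml.RZ(layer[w, 2], wires=w)
    # Measurement layer: expectation value read out from the first qubit
    return qml.expval(qml.PauliZ(0))

weights = 0.01 * np.random.random(size=(1, n_qubits, 3), requires_grad=True)
features = np.array([0.1, 0.5, -0.3, 0.8])
print(circuit(features, weights))
```

The returned expectation value plays the role of the (standardized) model output, which is then rescaled to the demand range.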
Figure 4. Entanglement layers used in a variational circuit (Figure 3). In (a), here named “entanglement layer 1”, the qubits are entangled in pairs, and these pairs are subsequently tied together. In (b), here named “entanglement layer 2”, the qubits are entangled in a cascade. Adapted from [32].
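The two entangling patterns of Figure 4 could be written, for four qubits, roughly as the PennyLane sub-circuits below; the gate choice (CNOTs) and wire ordering are assumptions made for illustration. Either function can replace the nearest-neighbour loop in the sketch above.

```python
import pennylane as qml

def entanglement_layer_1(wires):
    """Figure 4a: entangle the qubits in pairs, then tie the pairs together."""
    qml.CNOT(wires=[wires[0], wires[1]])
    qml.CNOT(wires=[wires[2], wires[3]])
    qml.CNOT(wires=[wires[1], wires[2]])  # connects the two pairs

def entanglement_layer_2(wires):
    """Figure 4b: entangle the qubits in a cascade down the register."""
    for a, b in zip(wires[:-1], wires[1:]):
        qml.CNOT(wires=[a, b])
```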
Figure 5. Predictions for quantum models with 4 features. (a) shows quantum experiment 1 with 1 layer, (b) quantum experiment 2 with 1 layer, (c) quantum experiment 1 with 3 layers, (d) quantum experiment 2 with 3 layers, (e) quantum experiment 1 with 5 layers, and (f) quantum experiment 2 with 5 layers. The x-axis shows the model’s training months, while the y-axis represents average daily financing. The distributions obtained from 10 experiments are shown in the colored violin graphs, while the actual values are shown in the black line.
Figure 6. Predictions for quantum models with 8 features. (a) shows quantum experiment 1 with 1 layer, (b) quantum experiment 2 with 1 layer, (c) quantum experiment 1 with 3 layers, (d) quantum experiment 2 with 3 layers, (e) quantum experiment 1 with 5 layers, and (f) quantum experiment 2 with 5 layers. The x-axis shows the model’s training months, while the y-axis represents average daily financing. The distributions obtained from 10 experiments are shown in the colored violin graphs, while the actual values are shown in the black line.
Figure 7. Predictions for quantum models with 19 features. (a) shows quantum experiment 1 with 1 layer, (b) quantum experiment 2 with 1 layer, (c) quantum experiment 1 with 3 layers, (d) quantum experiment 2 with 3 layers, (e) quantum experiment 1 with 5 layers, and (f) quantum experiment 2 with 5 layers. The x-axis shows the model’s training months, while the y-axis represents average daily financing. The distributions obtained from 10 experiments are shown in the colored violin graphs, while the actual values are shown in the black line.
Figure 8. Predictions for classical models. (a) shows classical experiment 1 with 4 features, (b) classical experiment 2 with 4 features, (c) classical experiment 1 with 8 features, (d) classical experiment 2 with 8 features, (e) the first 7 months of classical experiment 1 with 19 features, (f) the first 7 months of classical experiment 2 with 19 features, (g) the last 5 months of classical experiment 1 with 19 features, and (h) the last 5 months of classical experiment 2 with 19 features. The x-axis shows the model’s training months, while the y-axis represents average daily financing. The distributions obtained from 10 experiments are shown in the colored violin graphs, while the actual values are shown in the black line.
Figure 9. Convergence of the quantum model with the set of 4 features and 1 layer with 1000 epochs. The x-axis shows the training epochs, while the y-axis shows the mean absolute error (standardized values). The black curve shows the test loss, while the magenta curve shows the validation loss.
Figure 10. Loss for quantum models with 4 features. (a) shows the loss for quantum experiment 1 with 1 layer, (b) for quantum experiment 2 with 1 layer, (c) for quantum experiment 1 with 3 layers, (d) for quantum experiment 2 with 3 layers, (e) for quantum experiment 1 with 5 layers, and (f) for quantum experiment 2 with 5 layers. The x-axis shows the training epochs, while the y-axis shows the mean absolute error (standardized values). The black curve shows the test loss, while the magenta curve shows the validation loss.
Figure 11. Loss for quantum models with 8 features. (a) shows the loss for quantum experiment 1 with 1 layer, (b) for quantum experiment 2 with 1 layer, (c) for quantum experiment 1 with 3 layers, (d) for quantum experiment 2 with 3 layers, (e) for quantum experiment 1 with 5 layers, and (f) for quantum experiment 2 with 5 layers. The x-axis shows the training epochs, while the y-axis shows the mean absolute error (standardized values). The black curve shows the test loss, while the magenta curve shows the validation loss.
Figure 12. Loss for quantum models with 19 features. (a) shows the loss for quantum experiment 1 with 1 layer, (b) for quantum experiment 2 with 1 layer, (c) for quantum experiment 1 with 3 layers, (d) for quantum experiment 2 with 3 layers, (e) for quantum experiment 1 with 5 layers, and (f) for quantum experiment 2 with 5 layers. The x-axis shows the training epochs, while the y-axis shows the mean absolute error (standardized values). The black curve shows the test loss, while the magenta curve shows the validation loss.
Figure 13. Loss for classical models. (a) shows the loss for classical experiment 1 with 4 features, (b) for classical experiment 2 with 4 features, (c) for classical experiment 1 with 8 features, (d) for classical experiment 2 with 8 features, (e) for classical experiment 1 with 19 features, and (f) for classical experiment 2 with 19 features. The x-axis shows the training epochs, while the y-axis shows the mean absolute error (standardized values). The black curve shows the test loss, while the magenta curve shows the validation loss.
Table 1. Best results of classical and quantum annual mean MAE.
| Model | Experiment | MAE |
|---|---|---|
| QNN | Experiment 1 | 682.52 ± 284.14 |
| QNN | Experiment 2 | 785.98 ± 192.21 |
| RNN | Experiment 1 | 774.93 ± 199.39 |
| RNN | Experiment 2 | 646.12 ± 293.77 |
Table 2. Quantum model processing times.
| Features | Experiment | 1 Layer | 3 Layers | 5 Layers |
|---|---|---|---|---|
| 4 features | Experiment 1 | 1 min 30 s | 3 min | 4 min |
| 4 features | Experiment 2 | 1 min 30 s | 4 min | 5 min |
| 8 features | Experiment 1 | 3 min | 6 min | 10 min |
| 8 features | Experiment 2 | 3 min | 6.5 min | 10 min |
| 19 features | Experiment 1 | 1 h | 1 h 30 min | 3 h 30 min |
| 19 features | Experiment 2 | 1 h | 2 h 20 min | 3 h 30 min |
Table 3. Classical model processing times.
| Features | Experiment 1 | Experiment 2 |
|---|---|---|
| 4 features | 13 min | 32 min 30 s |
| 8 features | 14 min 30 s | 36 min |
| 19 features | 14 min 30 s | 36 min |
