Proceeding Paper

Optimizing Short-Term Electrical Demand Forecasting with Deep Learning and External Influences †

by
Leonardo Santos Amaral
1,*,
Gustavo Medeiros de Araújo
2 and
Ricardo Moraes
2
1
Department of Computer Science, Montes Claros State University, Professor Rui Braga Avenue, s/n, Vila Mauricéia, Montes Claros 39401-089, Brazil
2
Department of Information Science, Federal University of Santa Catarina, Trindade, s/n, Florianópolis 88035-972, Brazil
*
Author to whom correspondence should be addressed.
Presented at the 11th International Conference on Time Series and Forecasting, Gran Canaria, Spain, 16–18 July 2025.
Eng. Proc. 2025, 101(1), 16; https://doi.org/10.3390/engproc2025101016
Published: 12 August 2025

Abstract

Short-term electrical demand forecasting is crucial for the efficient operation of modern power grids. Traditional methods often fall short because they neglect system nonlinearities and the external factors that influence electricity consumption. In this study, we propose an enhanced deep learning-based forecasting model that integrates external factors, such as meteorological data and economic indicators, to improve prediction accuracy. Using an ISO NE (Independent System Operator New England) dataset from 2017 to 2019, we analyze 23 independent variables to assess their impact on model performance. Our findings demonstrate that careful variable selection reduces dimensionality while maintaining forecasting accuracy, enabling the effective application of deep learning models. The composite CNN + LSTM model achieved the lowest prediction error of 0.15%, outperforming the standalone CNN (0.8%) and LSTM (1.44%) approaches. The combination of CNN’s feature extraction capabilities with LSTM’s strength in handling time series data was instrumental in achieving superior performance. Our results highlight the importance of incorporating external influences in electricity demand forecasting and suggest future directions for developing more precise and efficient models.

1. Introduction

The modernization of electrical power systems, driven by the integration of clean and sustainable energy sources, has significantly altered grid management. Unlike traditional systems, where power flows unidirectionally from centralized plants to consumers, modern smart grids enable bidirectional energy exchange, requiring adaptive strategies for stability and efficiency [1,2].
Electricity demand forecasting plays a fundamental role in smart grids by ensuring grid stability, preventing overloads, and facilitating renewable energy integration [3]. However, forecasting has become increasingly complex due to the variability of renewable sources, external factors such as weather and socioeconomic trends, and the necessity for real-time decision-making [4,5]. Traditional time series models often fail to capture these nonlinear dependencies, emphasizing the need for advanced machine learning approaches [6].
In [6], two demand categories are identified: individual consumer demand and aggregate demand, which depends on the hierarchical structure of the grid [7]. Effective forecasting supports unit commitment and load dispatch, where decisions range from long-term resource planning to real-time energy distribution [8,9]. Consequently, forecasting models must accommodate multifactorial influences, ensuring precise short-term predictions [10,11].
Deep learning, particularly recurrent neural networks (RNNs) and convolutional neural networks (CNNs) [11], has demonstrated remarkable potential in handling multivariate, high-dimensional data [12]. These models can process vast historical datasets, incorporate weather and economic indicators, and adapt to changing consumption patterns.
This paper presents a machine learning-based methodology for short-term electricity demand forecasting, providing a structured approach to model construction. A case study using the ISO NE dataset is conducted to analyze the impact of different input variables, model architectures, and forecasting strategies.
The key contributions of this study are as follows:
  • Enhanced forecasting accuracy through the integration of multiple external factors.
  • A structured deep learning framework for energy demand prediction.
  • Empirical validation demonstrating the effectiveness of combining CNN and long short-term memory (LSTM) models.
The remainder of this paper is organized as follows: Section 2 presents the developed modeling methodology, Section 3 discusses experimental results, and Section 4 concludes with key findings.

2. Modeling Methodology

Electricity prediction models can be divided into two categories: univariate, where consumption is treated as time series data with implicit exogenous factors, and multivariate, where all influencing factors are explicitly included. The models can also be categorized by the forecasting horizon (one-step or multi-step) [13]. Building effective neural network models for electricity forecasting remains a challenge, particularly in parameter determination and choosing the right architecture [14].
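To make the univariate/multivariate and one-step/multi-step distinctions concrete, the sketch below shows one common way to frame a multivariate series as supervised samples; the function name, window length, and target position are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_windows(series, n_lags, horizon):
    """Frame a (T, n_features) multivariate series as supervised samples.

    X holds the past n_lags rows of every variable; y holds the next
    `horizon` values of the target, assumed here to be column 0.
    """
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])       # past window, all variables
        y.append(series[t:t + horizon, 0])   # future target values
    return np.array(X), np.array(y)

# One-step-ahead framing: horizon=1; multi-step (e.g., 12 steps): horizon=12.
```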

2.1. Pipeline for Building Electricity Forecasting Models

Creating a neural network for forecasting involves selecting the appropriate model architecture and tuning its parameters. For instance, time series methods such as autoregressive with exogenous inputs (ARX) models can incorporate both endogenous factors (e.g., demand history) and exogenous factors (e.g., weather), but require temporal alignment of the data [15]. Model creation involves many decisions that are not straightforward; the goal here is to simplify and guide model-building for load prediction.
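As a minimal illustration of this class of model (not the paper's implementation), an ARX(p) regression can be assembled from lagged endogenous and exogenous series with scikit-learn; `demand` and `temperature` are assumed to be one-dimensional NumPy arrays sampled on the same time grid.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def arx_design_matrix(demand, temperature, p=3):
    """Stack p lags of the endogenous and exogenous series, time-aligned."""
    rows = [np.r_[demand[t - p:t], temperature[t - p:t]]
            for t in range(p, len(demand))]
    return np.array(rows), demand[p:]

# X, y = arx_design_matrix(demand, temperature)
# model = LinearRegression().fit(X, y)  # linear ARX(p), one exogenous input
```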

2.2. Key Aspects of Forecasting Methodology

Effective models must capture the input variables’ behavior and their impact on the output. Key decisions include the following:
  • Number of Input Nodes: It should correspond to the relevant variables influencing demand [14].
  • Hidden Layers: Networks with hidden layers are more capable of learning complex relationships; typically, one or two layers suffice [16].
  • Neurons in Hidden Layers: An excessive number can lead to overfitting, with guidelines for determining the appropriate number [17].
  • Hyperparameters: Model training is influenced by hyperparameters such as activation functions, learning rates, and batch sizes. A GridSearchCV method is used to optimize these [18] (see the sketch after this list).
  • Training Algorithm: The backpropagation method minimizes error by adjusting network weights to improve predictions [14].
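As a concrete illustration of the hyperparameter search, the sketch below uses a manual equivalent of GridSearchCV (scikit-learn's ParameterGrid plus an explicit loop, which avoids the extra adapter needed to wrap Keras models for GridSearchCV itself). The grid values and the X_train/X_val arrays are assumptions for illustration, not the exact search space used in this study.

```python
import keras
from sklearn.model_selection import ParameterGrid

def build_model(n_inputs, units, activation, learning_rate):
    model = keras.Sequential([
        keras.Input(shape=(n_inputs,)),
        keras.layers.Dense(units, activation=activation),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate),
                  loss="mean_absolute_percentage_error")
    return model

grid = ParameterGrid({"units": [6, 12], "activation": ["relu", "tanh"],
                      "learning_rate": [1e-2, 1e-3], "batch_size": [32, 64]})
best, best_mape = None, float("inf")
for p in grid:  # exhaustive enumeration, as GridSearchCV would perform
    model = build_model(X_train.shape[1], p["units"], p["activation"],
                        p["learning_rate"])
    model.fit(X_train, y_train, epochs=50, batch_size=p["batch_size"], verbose=0)
    mape = model.evaluate(X_val, y_val, verbose=0)  # loss is MAPE here
    if mape < best_mape:
        best, best_mape = p, mape
```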
Data pre-processing is crucial, involving cleaning, normalization, and variable selection to enhance model performance [19,20]. Additionally, seasonality, calendar patterns, and thermal discomfort indexes are incorporated to refine model accuracy.
To build accurate forecasting models, the pipeline encompasses the main modeling steps, including the collection and integration of endogenous and exogenous variables, followed by an assessment of their relevance to the model. Data preparation involves adjusting temporal resolution, cleaning, and normalization. To enrich the model, additional features such as calendar variables, seasonal indicators, and the thermal discomfort index are generated from the raw data. The pipeline then moves to data splitting, model structure definition, internal parameter selection, division into training and test sets, and performance evaluation of the results.
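To make the feature-generation step concrete, the sketch below appends calendar, seasonal, and thermal-discomfort features to a pandas DataFrame. It assumes a DatetimeIndex and columns 'temp' (°C) and 'rh' (%); the discomfort index shown is Thom's formulation, one common choice rather than necessarily the exact index used in this study.

```python
import pandas as pd

def add_calendar_and_comfort_features(df):
    out = df.copy()
    out["hour"] = out.index.hour           # daily pattern
    out["weekday"] = out.index.dayofweek   # calendar pattern
    out["month"] = out.index.month         # seasonality
    out["is_weekend"] = (out["weekday"] >= 5).astype(int)
    # Thom's discomfort index: DI = T - 0.55 * (1 - 0.01 * RH) * (T - 14.5)
    out["discomfort"] = (out["temp"]
                         - 0.55 * (1 - 0.01 * out["rh"]) * (out["temp"] - 14.5))
    return out
```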
The pre-processing stage is critical to maintaining model performance, as noted by [21]. During this phase, real-time data integration helps ensure that the model remains accurate and resilient. While some variables are directly extracted from the input database or climate data sources, others are created through encoding methods to represent key phenomena related to the forecasting problem. It is important to note that the automation of this stage, particularly the integration of real-time data, falls outside the scope of the present study.
Finally, several metrics can be employed to measure the accuracy of the model (shown as ‘Evaluate error’). In this study, we use the mean absolute percentage error (MAPE) as a metric, as this is a widely accepted measure of prediction accuracy for time series analysis. MAPE is calculated by taking the average of the absolute percentage errors between the predicted and actual values. This metric is particularly useful because it expresses the error as a percentage, thus making it easier to interpret the model’s performance across different scales and datasets. It is defined as follows:
\[
\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{A_i - F_i}{F_i} \right| \times 100
\]
where $n$ represents the number of observed instances, $A_i$ is the actual consumption value, and $F_i$ is the predicted value at each point. The total error is obtained by summing the absolute percentage deviations over all instances and dividing by the number of evaluated points, $n$. Our conclusions are based on MAPE values, where a lower MAPE value indicates a higher accuracy for the model [22].
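In code, the metric is a one-liner; the sketch below mirrors the definition above (note that the denominator follows the paper's formula).

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent, per the formula above."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / forecast)) * 100
```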

3. Implementation and Results

To clarify how load forecasting models are constructed, a structured pipeline is introduced here. This pipeline encompasses the stages essential for developing multivariate and multi-step models using deep learning techniques. Each stage is carefully motivated, and its necessity and specific contribution are explained.
In the following, we present a brief evaluation of the fundamental components that are essential in composing accurate load forecasting models. To demonstrate the applicability of our pipeline and to evaluate its efficacy, a comprehensive array of experiments was conducted, including various input conditions (numbers of variables) and output conditions (time horizons), as shown in the tables below. Notably, they encompass different configurations of input variables, prediction algorithms, and model architectures (shallow or deep).

3.1. Dataset

The dataset used in this study can be obtained from the ISO NE (Independent System Operator New England) website at http://www.iso-ne.com. It contains the total electrical loads for several cities in New England from January 2017 to December 2019, and comprises 23 independent variables such as weather information, economic indicators, and market data.
Graphical representations of the annual, weekly, and daily variations in total electricity consumption for this dataset are given in [23]. The chart of the annual variation contains peaks during the summer months and another, albeit less intense, peak in the winter months. The weekly variation chart shows higher consumption levels on Mondays, while the daily variation chart has consumption peaks during the day, particularly between 6 and 7 p.m.
While the ISO NE dataset offers valuable insights for evaluating electricity consumption models, the generalizability of the study’s findings depends on the similarity between the source region and the target region in any future application. Given the dynamic nature of the factors involved, which fluctuate with both seasonal and temporal changes, it is critical to update the model with current data tailored to the specific region of interest. This can be facilitated through real-time time series data processing, as shown in the first step of the flowchart (Figure 1). This foundational step in the proposed predictive model ensures accurate capture of regional and temporal variations.

3.2. Case Study

To achieve the objective of this paper, we present some results obtained through a series of test simulations conducted using the proposed methodology. Further details of the simulation workflow are shown in Figure 1.
The simulation results are summarized in Table 1, Table 2 and Table 3 and cover various combinations of input variables, algorithms and prediction horizons.
Our scheme was implemented in the Python programming language (version 3.10) with the scikit-learn (version 1.7.0) and Keras (version 3) machine learning libraries [18].
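For orientation, a minimal Keras sketch of the kind of composite CNN + LSTM network evaluated here is shown below; the filter count, layer sizes, and window length are illustrative assumptions, not the exact configuration behind the reported results.

```python
import keras

def build_cnn_lstm(n_lags, n_features, horizon):
    # Conv1D extracts local patterns; the LSTM models temporal dependence.
    return keras.Sequential([
        keras.Input(shape=(n_lags, n_features)),
        keras.layers.Conv1D(filters=32, kernel_size=3, activation="relu"),
        keras.layers.MaxPooling1D(pool_size=2),
        keras.layers.LSTM(64),
        keras.layers.Dense(horizon),
    ])

model = build_cnn_lstm(n_lags=24, n_features=6, horizon=1)
model.compile(optimizer="adam", loss="mean_absolute_percentage_error")
```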
To highlight the importance of Step 1 of the pipeline, details of the processing are given in Table 1, with a focus on the variation in the input variables and a comparison of processing times. It is important to note that the processing detailed in Table 1 includes seasonality and calendar representations. We also note that the variable selection algorithms employed here originate either from the Waikato environment for knowledge analysis (WEKA tool [24]) or were custom-built in Python (shown in the ‘Source’ column). The experimental results were compared based on the precision (MAPE) and processing time (shown in the ‘Magnitudes’ column).
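As one example of the custom Python selectors, the sketch below ranks candidate inputs by their mutual information with the target using scikit-learn; the choice of k follows the five-to-six-variable subsets reported in Table 1, and the feature names are placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_by_mutual_info(X, y, feature_names, k=6):
    """Keep the k inputs sharing the most mutual information with y."""
    scores = mutual_info_regression(X, y, random_state=0)
    top = np.argsort(scores)[::-1][:k]
    return [feature_names[i] for i in top]
```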
Based on these simulation steps, the following observations can be made:
  • Among the variable selection algorithms tested here, the most notable were correlation-based feature subset evaluation (CFS subset evaluation), classifier attribute evaluation, and relief. These algorithms selected the five or six variables most strongly correlated with the target variable, yielding predictions with error rates similar to or lower than those obtained when all variables in the database were considered (as indicated in the last row of Table 1). This underscores the relevance of the selected variables.
  • Regarding the performance of models with different prediction algorithms, the LSTM stands out as having the highest computational cost: its processing time was roughly 10 times that of the DNN, 5 times that of the CNN, and 2.5 times that of the combined CNN + LSTM model. However, the models based on the LSTM and on the CNN + LSTM combination achieved the highest accuracy.
  • Comparing the performance of shallow and deep models, the latter incurred a higher computational cost; nevertheless, in most scenarios they demonstrated superior accuracy.
To demonstrate the importance of integrating external factors into the model, as described in Step 2 of the pipeline, some simulation tests were carried out, and the results are presented in Table 2 and Table 3.
In these tables, the columns indicate the presence of external factors, with the distinction between the tables based on the prediction horizon. The acronyms used in the columns are defined as follows: Sh: shallow model; D: deep learning model; Sl: system load; W: weather; S: seasonality; C: calendar; Fs: feature selection; Av: all variables; Id: discomfort index; Δ: percentage variation in error. For instance, the column Sl + W presents the results incorporating both the system load and the weather external factor for the different prediction algorithms (shallow or deep learning models).
From the results presented in Table 2 and Table 3, the following observations can be made:
  • The distinguishing factor between the tables is the prediction horizon; it is evident that increasing the prediction horizon reduces accuracy.
  • Processing involving only the target variable (Sl), with or without external factors, produced the largest errors. This implies that deep learning algorithms face limitations in their generalization capacity when operating with a very limited number of input variables, resulting in elevated error rates.
  • Comparing the predictions based on only the target variable (Sl), the selected variables (Fs), and all variables in the dataset (Av), the errors in the Fs and Av columns are very similar. This suggests that the reduction from 23 input variables to 6 or 7 in the variable selection step did not increase the prediction error.
A detailed simulation was conducted to assess the impact of model architecture on performance by varying the number of intermediate layers and neurons. Deep learning models were used, with variable selection performed in WEKA [24], considering external factors like seasonality and the calendar. The results suggest that using two or three intermediate layers with six neurons provides accurate predictions with lower computational cost and error rates below 1%. Complete results are not presented due to space limitations.
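A sketch of that architecture sweep is given below, varying the number of hidden layers and neurons and recording validation MAPE; the specific grids and the training arrays are assumptions for illustration, not the full experiment.

```python
import keras

results = {}
for n_layers in (1, 2, 3):
    for units in (6, 12, 24):
        layers = [keras.Input(shape=(X_train.shape[1],))]
        layers += [keras.layers.Dense(units, activation="relu")
                   for _ in range(n_layers)]
        layers += [keras.layers.Dense(1)]
        model = keras.Sequential(layers)
        model.compile(optimizer="adam", loss="mean_absolute_percentage_error")
        model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)
        results[(n_layers, units)] = model.evaluate(X_val, y_val, verbose=0)
# min(results, key=results.get) -> (layers, neurons) with the lowest MAPE
```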

4. Conclusions and Future Research

This study highlights the importance of input variable selection, which significantly influences model architecture, processing time, and accuracy. By considering the correlation between variables and the target, dimensionality reduction was achieved without compromising performance. The integration of external factors, such as seasonality, calendar variations, and economic indicators, proved crucial in enhancing forecast accuracy beyond traditional climate-based approaches.
Defining the model architecture carefully was also essential. Simulations demonstrated that deep learning models with up to three intermediate layers and an optimized number of neurons performed best. CNN-based models, particularly the CNN + LSTM combination, consistently delivered the lowest error rates due to CNN’s feature extraction capabilities and LSTM’s strength in handling time series data. Additionally, variable selection reduced computational cost and minimized overfitting, further improving model performance.
While the proposed methodology was validated with a specific dataset, adapting it to different regions may require adjustments due to unique electricity demand patterns. Future research should assess the individual impact of external factors and test models on individual consumer datasets. Moreover, integrating real-time data collection and reinforcement learning could enhance prediction accuracy, capturing dynamic factors like climate change and improving operational efficiency.

Author Contributions

Conceptualization of the article and methodology were carried out by L.S.A., G.M.d.A. and R.M.; formal analysis, investigation, and writing (original draft preparation) by L.S.A., G.M.d.A. and R.M.; software and validation by L.S.A. and G.M.d.A.; writing—review and editing by L.S.A. and R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CNPq/Brazil (Project 420365/2023-0) and CAPES/Brazil.

Data Availability Statement

Data available within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Merce, R.A.; Grover-Silva, E.; Le Conte, J. Load and Demand Side Flexibility Forecasting. In Proceedings of the International Conference on Smart Grids, Green Communications and IT Energy-aware Technologies, Lisbon, Portugal, 27 September–1 October 2020; pp. 1–6. [Google Scholar]
  2. Khan, K.A.; Quamar, M.M.; Al-Qahtani, F.H.; Asif, M.; Alqahtani, M.; Khalid, M. Smart Grid Infrastructure and Renewable Energy Deployment: A Conceptual Review of Saudi Arabia. Energy Strateg. Rev. 2023, 50, 101247. [Google Scholar] [CrossRef]
  3. Mystakidis, A.; Koukaras, P.; Tsalikidis, N.; Ioannidis, D.; Tjortjis, C. Energy Forecasting: A Comprehensive Review of Techniques and Technologies. Energies 2024, 17, 1662. [Google Scholar] [CrossRef]
  4. Wang, X.; Yao, Z.; Papaefthymiou, M. A Real-Time Electrical Load Forecasting and Unsupervised Anomaly Detection Framework. Appl. Energy 2023, 330, 120279. [Google Scholar] [CrossRef]
  5. Khatoon, S.; Ibraheem; Singh, A.K.; Priti. Effects of Various Factors on Electric Load Forecasting: An Overview. In Proceedings of the 2014 6th IEEE Power India International Conference (PIICON), Delhi, India, 5–7 December 2014; pp. 1–5. [Google Scholar]
  6. Peñaloza, A.K.A.; Balbinot, A.; Leborgne, R.C. Review of Deep Learning Application for Short-Term Household Load Forecasting. In Proceedings of the 2020 IEEE PES Transmission & Distribution Conference and Exhibition-Latin America (T&D LA), Montevideo, Uruguay, 28 September–2 October 2020; pp. 1–6. [Google Scholar]
  7. Ren, D.; Liu, W.; Li, J.; Huang, R. Research on Aggregate Load Prediction Methods. In Proceedings of the 2023 IEEE 7th Conference on Energy Internet and Energy System Integration (EI2), Hangzhou, China, 15–18 December 2023; pp. 3564–3567. [Google Scholar]
  8. Aybar-Mejía, M.; Villanueva, J.; Mariano-Hernández, D.; Santos, F.; Molina-García, A. A Review of Low-Voltage Renewable Microgrids: Generation Forecasting and Demand-Side Management Strategies. Electronics 2021, 10, 2093. [Google Scholar] [CrossRef]
  9. Bunn, D.W. Short-Term Forecasting: A Review of Procedures in the Electricity Supply Industry. J. Oper. Res. Soc. 1982, 33, 533–545. [Google Scholar] [CrossRef]
  10. Min, H.; Hong, S.; Song, J.; Son, B.; Noh, B.; Moon, J. SolarFlux Predictor: A Novel Deep Learning Approach for Photovoltaic Power Forecasting in South Korea. Electronics 2024, 13, 2071. [Google Scholar] [CrossRef]
  11. Manzolini, G.; Fusco, A.; Gioffrè, D.; Matrone, S.; Ramaschi, R.; Saleptsis, M.; Simonetti, R.; Sobic, F.; Wood, M.J.; Ogliari, E.; et al. Impact of PV and EV Forecasting in the Operation of a Microgrid. Forecasting 2024, 6, 591–615. [Google Scholar] [CrossRef]
  12. Sengupta, S.; Basak, S.; Saikia, P.; Paul, S.; Tsalavoutis, V.; Atiah, F.; Ravi, V.; Peters, A. A Review of Deep Learning with Special Emphasis on Architectures, Applications and Recent Trends. Knowl.-Based Syst. 2020, 194, 105596. [Google Scholar] [CrossRef]
  13. Curry, B.; Morgan, P.H. Model Selection in Neural Networks: Some Difficulties. Eur. J. Oper. Res. 2006, 170, 567–577. [Google Scholar] [CrossRef]
  14. Zhang, G.; Eddy Patuwo, B.; Hu, M.Y. Forecasting with Artificial Neural Networks: The State of the Art. Int. J. Forecast. 1998, 14, 35–62. [Google Scholar] [CrossRef]
  15. Petropoulos, F.; Apiletti, D.; Assimakopoulos, V.; Babai, M.Z.; Barrow, D.K.; Ben Taieb, S.; Bergmeir, C.; Bessa, R.J.; Bijak, J.; Boylan, J.E.; et al. Forecasting: Theory and Practice. Int. J. Forecast. 2022, 38, 705–871. [Google Scholar] [CrossRef]
  16. Lippmann, R. An Introduction to Computing with Neural Nets. IEEE ASSP Mag. 1987, 4, 4–22. [Google Scholar] [CrossRef]
  17. Sheela, K.G.; Deepa, S.N. Review on Methods to Fix Number of Hidden Neurons in Neural Networks. Math. Probl. Eng. 2013, 2013, 425740. [Google Scholar] [CrossRef]
  18. Chollet, F. Keras: The Python Deep Learning API. 2015. Available online: https://keras.io/ (accessed on 23 March 2021).
  19. Ribeiro, A.M.N.C.; do Carmo, P.R.X.; Endo, P.T.; Rosati, P.; Lynn, T. Short- and Very Short-Term Firm-Level Load Forecasting for Warehouses: A Comparison of Machine Learning and Deep Learning Models. Energies 2022, 15, 750. [Google Scholar] [CrossRef]
  20. Gnanambal, S.; Thangaraj, M.; Meenatchi, V.; Gayathri, V. Classification Algorithms with Attribute Selection: An Evaluation Study Using WEKA. Int. J. Adv. Netw. Appl. 2018, 9, 3640–3644. [Google Scholar]
  21. Cogollo, M.R.; Velasquez, J.D. Methodological Advances in Artificial Neural Networks for Time Series Forecasting. IEEE Lat. Am. Trans. 2014, 12, 764–771. [Google Scholar] [CrossRef]
  22. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Internal Representations by Error Propagation; Defense Technical Information Center: Fort Belvoir, VA, USA, 1985. [Google Scholar]
  23. Amaral, L.S.; de Araújo, G.M. An Expanded Study of the Application of Deep Learning Models in Energy Consumption. In Data and Information in Online Environments: Third EAI International Conference, DIONE 2022, Virtual, 28–29 July 2022; Proceedings; Springer Nature: Berlin/Heidelberg, Germany, 2022; Volume 452, p. 150. [Google Scholar]
  24. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA Data Mining Software: An Update. SIGKDD Explor. Newsl. 2009, 11, 10–18. [Google Scholar] [CrossRef]
Figure 1. Simulation detail flowchart.
Table 1. Processes with variation in input variables (horizon: one step ahead).

| Technique | Source | Selected Variables | Magnitude | DNN (Shallow) | DNN (Deep) | CNN (Shallow) | CNN (Deep) | LSTM (Shallow) | LSTM (Deep) | CNN + LSTM (Shallow) | CNN + LSTM (Deep) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CFS subset evaluation | WEKA | RT demand, DA CC, DA MLC, RT MLC, MIN_5MIN_RSP, MAX_5MIN_RSP | MAPE | 0.22 | 0.18 | 1.80 | 0.74 | 0.37 | 0.24 | 0.21 | 0.18 |
| | | | t(s) | 152 | 185 | 276 | 340 | 887 | 1798 | 739 | 848 |
| Classifier attribute evaluation | WEKA | MAX_5MIN_RSP, DA EC, DA CC, RT demand, DA LMP | MAPE | 0.27 | 0.23 | 1.60 | 0.48 | 0.21 | 0.16 | 0.26 | 0.24 |
| | | | t(s) | 153 | 187 | 342 | 384 | 994 | 1813 | 761 | 790 |
| Principal components | WEKA | RT LMP, RT EC, DA LMP, DA EC, RT MLC | MAPE | 7.80 | 6.84 | 9.67 | 6.50 | 10.95 | 9.79 | 8.06 | 7.57 |
| | | | t(s) | 87 | 130 | 150 | 219 | 594 | 912 | 996 | 386 |
| Relief | WEKA | RT demand, DA demand, DA EC, DA LMC, reg. service price | MAPE | 0.52 | 0.33 | 3.82 | 0.22 | 0.23 | 0.16 | 0.16 | 0.17 |
| | | | t(s) | 112.8 | 123 | 274 | 339 | 906 | 1792 | 692 | 823 |
| Mutual information | Python | DA MLC, DA LMP, MIN_5MIN_RSP, DA EC, dew point | MAPE | 5.90 | 5.76 | 8.15 | 5.03 | 10.21 | 7.82 | 7.23 | 6.11 |
| | | | t(s) | 155 | 186 | 270 | 339 | 581 | 1827 | 725 | 792 |
| – | – | All | MAPE | 0.86 | 0.38 | 2.27 | 1.05 | 0.29 | 0.20 | 0.25 | 0.20 |
| | | | t(s) | 118 | 123 | 158 | 153 | 591 | 920 | 695 | 879 |

The best values are highlighted in bold.
Table 2. MAPE results for processes with different sets of input variables (horizon: one step ahead).

| Technique | Model | Sl (1) | Sl + W (2) | Sl + S + C + Id (3) | Δ (%) (4) | Fs (5) | Fs + S + C + Id (6) | Δ (%) (7) | Av (8) | Av + S + C + Id (9) | Δ (%) (10) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DNN | Sh | 11.30 | 6.76 | 5.38 | 52.40 | 0.29 | 0.19 | 34.50 | 0.19 | 0.18 | 5.30 |
| DNN | D | 10.85 | 5.98 | 8.57 | 44.90 | 0.37 | 0.17 | 54.10 | 0.22 | 0.17 | 22.70 |
| CNN | Sh | 12.13 | 8.46 | 8.00 | 34 | 2.85 | 2.23 | 21.8 | 3.40 | 2.26 | 33.50 |
| CNN | D | 11.04 | 4.95 | 4.29 | 61.10 | 0.18 | 0.18 | 0.00 | 0.26 | 0.16 | 38.50 |
| LSTM | Sh | 12.37 | 8.31 | 7.86 | 36.5 | 0.65 | 0.25 | 61.50 | 0.47 | 0.17 | 63.80 |
| LSTM | D | 11.74 | 7.59 | 9.00 | 23.30 | 14.10 | 7.07 | 49.80 | 14.96 | 10.45 | 30.10 |
| CNN + LSTM | Sh | 10.38 | 6.56 | 5.67 | 45.40 | 0.31 | 0.21 | 32.2 | 0.42 | 0.19 | 54.80 |
| CNN + LSTM | D | 11.11 | 5.09 | 7.60 | 31.60 | 0.25 | 0.15 | 40.00 | 0.25 | 0.24 | 4.00 |

The best values are highlighted in bold.
Table 3. MAPE results for processes with different sets of input variables (horizon: 12 steps ahead).

| Technique | Model | Sl (1) | Sl + W (2) | Sl + S + C + Id (3) | Δ (%) (4) | Fs (5) | Fs + S + C + Id (6) | Δ (%) (7) | Av (8) | Av + S + C + Id (9) | Δ (%) (10) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DNN | Sh | 33.20 | 23.44 | 27.98 | 15.70 | 36.58 | 29.77 | 18.60 | 29.80 | 29.76 | 0.13 |
| DNN | D | 48.10 | 27.63 | 37.01 | 23.10 | 45.03 | 32.4 | 28.10 | 44.0 | 40.58 | 7.77 |
| CNN | Sh | 12.01 | 15.37 | 4.92 | 59.00 | 4.91 | 4.83 | 1.60 | 4.92 | 4.68 | 4.90 |
| CNN | D | 10.60 | 9.19 | 3.77 | 64.40 | 4.79 | 3.96 | 17.30 | 4.39 | 3.63 | 17.70 |
| LSTM | Sh | 13.00 | 14.36 | 9.82 | 24.50 | 4.39 | 4.30 | 2.00 | 4.29 | 3.66 | 14.70 |
| LSTM | D | 15.58 | 15.65 | 15.27 | 2.00 | 15.81 | 1.34 | 4.46 | 15.4 | 15.15 | 1.62 |
| CNN + LSTM | Sh | 15.00 | 8.04 | 4.58 | 69.50 | 4.42 | 3.93 | 11.00 | 4.12 | 4.09 | 0.70 |
| CNN + LSTM | D | 11.24 | 8.65 | 10.70 | 4.80 | 15.94 | 10.82 | 32.10 | 15.6 | 15.36 | 1.50 |

The best values are highlighted in bold.
