Article

Advanced Multivariate Deep Learning Methodology for Forecasting Wind Speed and Solar Irradiation

1
Control & Instrumentation Engineering Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
2
Interdisciplinary Research Center for Sustainable Energy Systems, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
3
Electrical Engineering Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
4
Department of Information and Computer Science, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
5
Interdisciplinary Research Center for Intelligent Secure Systems, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
6
School of Energy and Engineering, Murdoch University, Perth, WA 6150, Australia
*
Authors to whom correspondence should be addressed.
Smart Cities 2026, 9(4), 59; https://doi.org/10.3390/smartcities9040059
Submission received: 20 January 2026 / Revised: 18 March 2026 / Accepted: 19 March 2026 / Published: 27 March 2026
(This article belongs to the Special Issue Energy Strategies of Smart Cities, 2nd Edition)

Highlights

What are the main findings?
  • Proposed architecture: A multivariate deep learning architecture is proposed to predict wind speed and global horizontal irradiance using only timestamp-based features, enabling reliable renewable energy prediction without sensing infrastructure.
  • Time-based inputs: Time-based inputs achieve prediction accuracy similar to that of meteorological variables across several deep learning models, including LSTM, BiLSTM, GRU, and CNN, making the approach scalable to data-limited urban settings.
What are the implications of the main findings?
  • The proposed solution is a cost-effective and easy-to-deploy forecasting approach for smart cities, requiring less dense meteorological sensor networks without affecting forecast performance.
  • Accurate time-based renewable energy prediction supports smart city energy planning, including distributed renewable integration, data-driven urban energy planning, resilient grid implementation, and the strategic development of sustainable and low-carbon urban systems.

Abstract

The transition to smart cities is accelerating distributed wind and solar deployment. However, their intermittency challenges grid operation, thereby making accurate machine-learning-based prediction of wind speed and global horizontal irradiance (GHI) crucial. This study presents a cost-effective approach that enhances prediction accuracy by extracting additional features from timestamp records for deep learning models used to forecast GHI and wind speed. Unlike conventional methods that require onsite meteorological measurements, the proposed approach uses only date and time information as inputs to multivariate deep neural networks, including recurrent neural networks, gated recurrent units, long short-term memory (LSTM), bidirectional LSTM, and convolutional neural networks. For wind speed prediction, the proposed configuration achieves R2 up to 0.9987, with RMSE as low as 0.067 m/s for 3 d ahead forecasting, outperforming univariate baselines and matching multivariate models driven by meteorological inputs. For GHI forecasting, the time-based configuration attains R2 values above 0.9994 in 12 h ahead predictions, with the RMSE reduced to approximately 4.47 W/m2, representing a substantial improvement over univariate models. The proposed framework maintains strong performance, particularly under clear and sunny conditions. These results demonstrate that timestamp-engineered features can deliver forecasting accuracy comparable to conventional multivariate meteorological models while significantly reducing infrastructure requirements, making the approach well-suited for scalable smart city energy management.

1. Introduction

One of the key aspects of a smart city is to reduce CO2 emissions by adopting different strategies that can also improve the quality of life of citizens [1]. Cities are becoming increasingly dependent on wind and solar energy due to rapid urbanization, climate constraints, and rising electrification. However, the intrinsic intermittency of these resources presents serious challenges for smart city energy systems, especially in microgrid management, real-time grid operation, and the integration of distributed renewable power. Scalable forecasting solutions that operate under limited sensing infrastructure are therefore critical for smart city energy strategies [2]. Accurate short-term and intra-hour forecasts of wind speed and global horizontal irradiance (GHI) are essential for reliable and cost-effective urban energy management. The search for alternative energy sources to replace fossil fuels is driven by their depletion, finite nature, and the environmental hazards associated with greenhouse gas emissions, necessitating the development of sustainable green alternatives [3]. Consequently, energy generation from renewable resources has steadily increased over the past decade. Wind and solar power have experienced rapid growth and are expected to dominate the future energy mix. In 2024, global solar energy generation had increased by nearly 2000 times, from 1.06 TWh to 2111.73 TWh, while wind energy generation had increased by approximately 80 times, from 31.41 TWh to 2511.03 TWh, relative to levels in 2000 [4]. Figure 1 shows annual electricity generation from hydropower, wind, solar, and other renewable sources. The International Energy Agency estimates that solar and wind power will comprise two-thirds of the renewable energy mix by 2030, in line with achieving the global net-zero scenario [5].
The stochastic nature of wind and solar power creates significant challenges for grid integration [6,7]. Power suppliers cannot effectively utilize these resources to maximize economic gains owing to the unpredictability of their output [8]. The intermittent generation of solar and wind power also constrains the development and optimization of technologies such as photovoltaic systems and wind turbines [9].
Furthermore, existing grid systems worldwide cannot accommodate the large-scale integration of variable renewables without costly modifications [10,11]. To address these challenges, various mitigation strategies have been documented. These include improving predictions of wind speed and sunshine hours to support the planning and operation of renewable energy systems [6,7,12].
Recent developments in renewable energy forecasting increasingly integrate deep learning with advanced optimization techniques. For example, PSO-trained CNN-LSTM networks improve short-term wind forecasting by capturing both local spatial features and temporal variations [13]. Transformer-based architectures and attention mechanisms further enhance performance by modeling long-range dependencies and nonlinear patterns in wind and solar time series [14]. The integration of multiscale decomposition techniques with deep neural networks enables the capture of both short-term fluctuations and long-term trends in photovoltaic and wind power generation [15,16]. In addition, graph-based learning algorithms account for spatial correlations among wind farms [17], while physics-informed hybrid models improve generalization in solar irradiance forecasting [18]. Despite these advances, most existing methods rely on large volumes of dense meteorological, SCADA, or external sensing data, which can be limiting in smart city contexts where scalability is not necessarily defined by extensive instrumentation.
Renewable energy forecasting can be categorized into four types: (a) long-term, (b) day-ahead, (c) intra-day, and (d) intra-hour forecasting, each serving a distinct role in energy transmission and management [19,20]. Intra-hour predictions are essential for real-time battery storage control [21]. Intra-day predictions are used in decision-making with respect to unit commitment [22], while day-ahead predictions are used in the planning of maintenance and operation of power units [23]. Strategic planning requires long-term forecasts. Table 1 categorizes wind speed and GHI forecasting according to the prediction horizon, as the forecast horizon determines operational objectives and model specifications. The duration parameter refers to the interval between the issuance of the forecast and the target window. Long-term predictions, ranging from months to years, support strategic planning and capacity expansion. Day-ahead forecasts, covering one day to one week, are used for maintenance scheduling and unit commitment. Hourly and intra-day forecasts enable short-term dispatch and renewable integration. Intra-hour predictions, spanning minutes to one hour, are essential for real-time services such as battery control, microgrid balancing, and frequency regulation. Each horizon is evaluated at a constant temporal resolution of 15 min to ensure comparability across models. Forecast uncertainty increases with the length of the prediction horizon due to error accumulation, whereas shorter horizons require greater sensitivity to temporal variations because of rapid changes in wind speed and solar irradiance.
This multi-horizon framework aligns model evaluation with practical smart grid requirements and demonstrates the applicability of the proposed approach to both operational and strategic urban energy management.
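As a concrete illustration of the fixed 15 min resolution described above, each forecast horizon can be converted to a count of discrete prediction steps. The helper below is a sketch; the function name is illustrative and not taken from the paper.

```python
# Hedged sketch: converting forecast horizons to discrete step counts at the
# paper's constant 15-minute resolution (helper name is illustrative).
def horizon_to_steps(hours: float, resolution_min: int = 15) -> int:
    """Number of forecast steps covering `hours` at the given resolution."""
    return int(hours * 60 / resolution_min)

# The three horizons evaluated in this study:
steps_12h = horizon_to_steps(12)   # 48 steps
steps_1d = horizon_to_steps(24)    # 96 steps
steps_3d = horizon_to_steps(72)    # 288 steps
```

This conversion makes the later 12 h, 1 d, and 3 d results directly comparable, since every horizon is expressed in the same 15 min step units.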
Traditional wind speed and solar irradiation forecasting technologies were based on simplistic models with limited data requirements. However, with the development of deep learning (DL), it has become possible to leverage large datasets, including meteorological information, geographical characteristics, and historical weather records. Multivariable DL approaches explicitly consider the relationship between variables that influence wind and solar energy production [24]. One of the major advantages of this method is its ability to capture nonlinear associations and complex data patterns.
The model detects less pronounced relationships among meteorological parameters, geographical terrain, and temporal variations to generate more accurate and reliable predictions through deep neural networks. Also, the multivariate form of the model allows complete forecasting by accounting for dynamic interactions between the speed of the wind and the solar irradiation [25]. The implications of this study for optimizing renewable energy systems extend to energy management, grid resilience, and environmental sustainability. By leveraging the predictive capabilities of multivariate DL, stakeholders can make informed decisions, reduce uncertainties, and maximize the use of wind and solar energy in a changing energy environment [26].
Although numerous studies have improved renewable energy forecasting through complex machine learning models and hybrid frameworks, most prior work has focused on increasing model complexity or incorporating additional meteorological variables. Limited attention has been given to the systematic evaluation of input feature representation itself. Timestamp data are typically used in raw form or treated as auxiliary inputs, and their structured cyclical encoding has not been rigorously benchmarked against meteorological multivariate configurations under controlled experimental conditions. Accordingly, rather than proposing new DL architectures, the primary research gap addressed in this study is a controlled investigation of whether engineered cyclical temporal features can serve as a viable and scalable substitute for sensor-dependent multivariate inputs across multiple forecasting horizons and weather regimes. This work clarifies how feature design influences forecasting performance under identical training protocols, enabling an assessment of whether performance gains arise from architectural complexity or from improved temporal representation.

2. Literature Review

This section discusses key methodologies for time series forecasting of wind speed and Global Horizontal Irradiance (GHI). Techniques are categorized as univariate, relying solely on past values of the target variable, or multivariate, which incorporate external factors. Both approaches are widely used in the literature. Univariate models are cost-effective and require no additional measurement instruments, whereas multivariate models typically provide higher accuracy. The choice between these techniques depends on the specific objectives and constraints of the application.

2.1. Wind Speed

Wind power prediction relies on accurate forecasting of wind speed, as wind power is proportional to the cube of wind speed. Even minor forecasting errors can result in significant overestimation or underestimation of wind power output. To address this issue, researchers have developed various techniques to minimize prediction errors in wind speed forecasting.
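The cubic relationship noted above can be made concrete with a short, illustrative calculation (not from the paper): because power scales with the cube of wind speed, a modest relative speed error is strongly amplified in the power estimate.

```python
# Illustrative sketch: since wind power P is proportional to v^3, a relative
# wind-speed error of e inflates the power estimate by (1 + e)^3 - 1.
def relative_power_error(speed_error_fraction: float) -> float:
    """Relative power error implied by a relative wind-speed error, P ∝ v^3."""
    return (1.0 + speed_error_fraction) ** 3 - 1.0

# A 10% wind-speed overestimate implies roughly a 33% power overestimate.
err = relative_power_error(0.10)
```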
These techniques can be categorized into four broad types based on the algorithms used:
  • Physical methods, where meteorological information with topographical features is applied to forecast wind velocity by means of dynamic equations like the primitive equations;
  • Statistical approaches, which analyze historical wind speed data as a time series to construct mathematical models;
  • Computational intelligence approaches, which employ techniques such as artificial neural networks to model historical wind speed data and capture the nonlinear characteristics of the dataset;
  • Hybrid approaches, which integrate two or more methods to overcome the limitations of individual techniques.
Table 2 summarizes the advantages and limitations of several statistical and machine learning models, including ARIMA, Kalman filters, support vector machines, and random forests, for onshore and offshore wind speed forecasting. The findings indicate that a hybrid approach combining Kalman filtering, wavelet transformation, and random forest models achieves superior performance in short-term wind speed prediction for both applications. However, recent studies suggest that DL models can further enhance forecasting accuracy beyond the capabilities of statistical and conventional machine learning methods.

2.2. GHI

Many studies identify GHI as the principal variable in solar power forecasting. Several techniques for converting GHI into solar power output, such as Ostwald’s method, are recognized in the literature for their reliability. In [38], hourly temperature and GHI data from Jubail City were used to develop a machine learning forecasting model. The predicted GHI values were converted into solar power using Ostwald’s technique, confirming the effectiveness of translating irradiance forecasts into practical power estimates and emphasizing the importance of accurate GHI prediction. Of the evaluated algorithms, multiple linear regression (MLR), k-nearest neighbors (kNN), and decision trees (DT), the kNN model demonstrated superior performance, although all three methods achieved acceptable predictive accuracy.
In [39], a long short-term memory (LSTM) model was employed to determine the optimal training dataset length for GHI prediction. The authors evaluated a univariate LSTM model using training datasets spanning five, four, three, two, and one year. The findings indicate that two years of historical GHI data provide high forecasting accuracy. A very short-term GHI forecasting framework based on the LSTM algorithm was proposed in [40], highlighting the importance of input variable selection. The results showed that a multivariate LSTM configuration incorporating temperature, GHI, and wind direction outperformed the univariate model but required longer training time. Finally, ref. [41] proposed a hybrid DL model combining an LSTM layer for temporal feature extraction and a convolutional neural network (CNN) layer for spatial feature extraction. The hybrid LSTM-CNN model outperformed standalone approaches, and the sequence of layers was critical, as the CNN-LSTM architecture performed worse than the LSTM-CNN configuration. This model demonstrated strong performance across different locations and weather conditions compared with other machine learning approaches.
Table 3 shows that recent studies on wind speed and GHI forecasting primarily focus on advanced architectures, hybrid learning strategies, and multi-source data fusion. Although these approaches achieve high accuracy, they generally rely on meteorological or SCADA-based inputs, which limits scalability in sensor-constrained smart city environments. Furthermore, timestamp information is typically used in raw form rather than being systematically transformed and validated as a primary predictive feature. This highlights a clear research gap: the need for an accurate, scalable, and infrastructure-light forecasting framework that operates independently of extensive sensing systems.

2.3. Research Gaps and Contributions

Although DL has substantially improved wind speed and GHI forecasting, several limitations remain, particularly in the context of smart city energy systems:
  • Most multivariate forecasting studies rely heavily on meteorological sensor data, which limits scalability in urban environments where dense sensing infrastructure is costly or unavailable.
  • Timestamp information is typically included in raw form, such as year, month, and hour, but is rarely transformed into structured cyclical features and evaluated as a standalone predictive alternative.
  • Few studies directly compare time-based inputs with meteorological variables under identical model architectures, training conditions, and forecasting horizons.
  • Intra-hour and short-horizon forecasting, which are critical for microgrid control, battery management, and distributed renewable coordination, receive less systematic attention than day-ahead forecasting.
  • Limited research evaluates renewable forecasting models across multiple weather classes and seasonal variations to assess robustness under realistic operating conditions.
These gaps underscore the need for a scalable, infrastructure-light forecasting framework suitable for smart cities and data-constrained environments. This study addresses these limitations through the following contributions:
  • Introduces a structured timestamp-based feature engineering approach that converts raw date and time information into cyclical sine and cosine signals representing daily and annual patterns.
  • Demonstrates that time-based features alone can achieve forecasting accuracy comparable to, and in some horizons better than, models using meteorological variables.
  • Develops and evaluates five DL architectures, RNN, GRU, LSTM, BiLSTM, and CNN, using consistent hyperparameter optimization, loss functions, and training procedures to ensure fair comparison.
  • Systematically compares three input configurations: (a) univariate models, (b) multivariate models with meteorological variables, and (c) multivariate models with engineered time-based signals.
  • Evaluates performance across multiple forecasting horizons, including 12 h, 1 d, and 3 d ahead predictions at 15 min resolution.
  • Assesses robustness under different weather categories: cloudy, partly cloudy, sunny, and clear.
  • Demonstrates that the proposed time-based multivariate framework enables accurate intra-hour renewable forecasting without requiring additional sensing infrastructure, thereby supporting scalable smart city energy management.
The remainder of this paper is structured as follows. Section 3 outlines the datasets, methodology, and evaluation metrics employed in this study, including input selection and hyperparameter settings. Section 4 presents a detailed analysis and comparison of the results. Finally, Section 5 concludes with a summary of the key findings and directions for future research.

3. Dataset and Methodology

Renewable energy generation exhibits strong periodic behavior driven by deterministic astronomical cycles, including Earth’s rotation, which produces the diurnal cycle, and its revolution around the sun, which produces the annual cycle. Representing temporal variables such as hour, day, or month as linear integers introduces artificial discontinuities. For example, 23:00 and 00:00 appear numerically distant despite being temporally adjacent. To preserve periodicity and support smooth function approximation, this study applies sine- and cosine-based cyclical encoding to temporal components. Each periodic variable is mapped onto the unit circle, enabling the model to learn continuous phase relationships rather than discrete boundaries. This transformation ensures that temporally adjacent points remain geometrically close in feature space, improves the stability of gradient-based learning, and enhances the capture of seasonal and diurnal patterns in wind speed and solar irradiance. By explicitly embedding temporal periodicity into the input representation, the model can focus on learning residual variability beyond the predictable cyclical structure.
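The cyclical encoding described above can be sketched in a few lines; the helper name is illustrative, while the day sin/day cos outputs mirror the paper's engineered features.

```python
import numpy as np

# Minimal sketch of sine/cosine cyclical encoding: a periodic variable is
# mapped onto the unit circle so adjacent times stay geometrically close.
def cyclical_encode(value, period):
    """Return (sin, cos) coordinates of a periodic variable on the unit circle."""
    angle = 2 * np.pi * np.asarray(value, dtype=float) / period
    return np.sin(angle), np.cos(angle)

hours = np.array([0.0, 23.0])                  # 00:00 and 23:00
day_sin, day_cos = cyclical_encode(hours, period=24.0)

# Euclidean distance in (sin, cos) space: the two timestamps are close,
# unlike their raw integer distance of 23 hours.
dist = float(np.hypot(day_sin[1] - day_sin[0], day_cos[1] - day_cos[0]))
```

Because each encoded point lies on the unit circle, 23:00 and 00:00 end up a short chord apart rather than 23 integer units apart, which is exactly the continuity the models exploit.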

3.1. Data Description

The proposed DL models were trained and tested using an open-source dataset from Kaggle [49]. The dataset contains various meteorological and solar radiation variables related to Izmir, Turkey, comprising eighteen complete and consistent features. It includes 105,120 records sampled at 15 min intervals from 2017 to 2019, to capture maximum variability in wind speed and GHI. The model was trained using data from 2017 to cover the full range of wind speed and GHI variations. Evaluation was conducted on 2018 data, while selected days from the 2019 dataset were used to assess model performance under different operating conditions and weather patterns.
Figure 2 shows that the annual GHI profile exhibits a clear seasonal envelope driven by solar geometry, with peak irradiance in summer and lower values in winter. The daily profile displays a smooth diurnal cycle dominated by deterministic astronomical effects. In contrast, Figure 3 indicates that wind speed exhibits high stochastic variability throughout the year, with weak seasonal structure and irregular intra-day fluctuations. These observations confirm that GHI is primarily governed by strong periodic components, whereas wind speed contains significant nonlinear and turbulent dynamics. The multi-scale temporal behavior observed in both variables supports the use of cyclical temporal encoding and multi-horizon DL forecasting.

3.2. Machine Learning Models

This section introduces the deep neural networks and evaluation metrics used for performance comparison. Each network employed in this study is discussed in terms of its structure, input–output relationships, advantages, and limitations.

3.2.1. DL Machines

DL is a subfield of machine learning that uses artificial neural networks designed to emulate the behavior of biological neurons. DL methods can learn effectively from large datasets with minimal preprocessing and are capable of handling diverse data representations. A typical DL model consists of input, hidden, and output layers. The number of neurons in the hidden layers, as well as the activation functions, weights, and biases, vary depending on the task. DL is characterized by architectures with more than two hidden layers. Figure 4 illustrates a fully connected deep neural network with n hidden layers, where each circle represents a neuron, and each arrow denotes a weighted connection. A brief description of commonly used algorithms is provided below.

3.2.2. Recurrent Neural Networks (RNNs)

The RNN is a powerful neural network configuration used in pattern detection for sequential data. RNNs are applied in speech recognition, video tagging, and time series forecasting [50]. Unlike feedforward neural networks, RNNs have a feedback signal, making the output dependent on current and previous computations, thereby emulating memory. Figure 5 illustrates the difference between feedforward and recurrent neural networks. The output of the recurrent model O_t can be calculated using the following equations:
O_t = F_o(W_HO H_t + B_o)
H_t = F_H(W_HH H_{t-1} + W_XH X_t + B_h)
where F_o is the activation function of the output layer, H_t is the hidden state at time t, W_HO is the weight matrix connecting the hidden layer to the output layer, B_o is the output layer bias, F_H is the activation function of the hidden layer, W_HH is the recurrent weight matrix connecting the previous hidden state to the current hidden state, W_XH is the weight matrix connecting the input layer to the hidden layer, H_{t-1} is the previous hidden state, X_t is the input vector at time step t, and B_h is the hidden layer bias. During the backpropagation update, RNNs encounter the vanishing gradient problem [51]. Consequently, new RNN architectures, such as GRU and LSTM, have been proposed to address this issue.
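The recurrent update above can be sketched in a few lines of numpy. The weights here are random placeholders, and using tanh for the hidden activation F_H with a linear output F_o is an assumed choice for illustration.

```python
import numpy as np

# Illustrative sketch of one RNN step: H_t = F_H(W_HH H_{t-1} + W_XH X_t + B_h),
# O_t = F_o(W_HO H_t + B_o), with random placeholder weights.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 1
W_XH = rng.normal(size=(n_hid, n_in))    # input-to-hidden weights
W_HH = rng.normal(size=(n_hid, n_hid))   # recurrent hidden-to-hidden weights
W_HO = rng.normal(size=(n_out, n_hid))   # hidden-to-output weights
B_h = np.zeros(n_hid)
B_o = np.zeros(n_out)

def rnn_step(x_t, h_prev):
    """One recurrent step; tanh hidden activation, linear output (assumed)."""
    h_t = np.tanh(W_HH @ h_prev + W_XH @ x_t + B_h)
    o_t = W_HO @ h_t + B_o
    return o_t, h_t

h = np.zeros(n_hid)
for x_t in rng.normal(size=(4, n_in)):   # unroll over a short input sequence
    o, h = rnn_step(x_t, h)
```

The loop makes the memory effect explicit: each output depends on the current input and on all previous inputs through the carried hidden state.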

3.2.3. LSTM Units

In 1997, a recurrent network architecture was proposed to address the vanishing gradient problem [52]. LSTM networks enable recurrent neural networks to learn dependencies over extended time horizons by maintaining stable error flow during training. This capability is achieved through gated memory cells that regulate the storage and propagation of information. Each LSTM cell consists of three main components: a forget gate, an input gate, and an output gate, which collectively control the retention and updating of information within the cell [53]. Figure 6 illustrates the structure of an LSTM cell. The mathematical formulation of the cell can be expressed using the following variables.
F_t = σ(W_f [H_{t-1}, X_t] + B_f)
I_t = σ(W_i [H_{t-1}, X_t] + B_i)
C̃_t = tanh(W_c [H_{t-1}, X_t] + B_c)
C_t = F_t ⊙ C_{t-1} + I_t ⊙ C̃_t
O_t = σ(W_o [H_{t-1}, X_t] + B_o)
H_t = O_t ⊙ tanh(C_t)
The LSTM architecture consists of a forget gate F_t, an input gate I_t, a cell input activation C̃_t, a cell state C_t, and a memory cell output H_t. These components are governed by the above relations. In these equations, B_f, B_i, B_c, and B_o denote the network bias vectors; W_f, W_i, W_c, and W_o represent the corresponding weight matrices; σ is the sigmoid activation function; tanh is the hyperbolic tangent activation function; and ⊙ denotes element-wise multiplication.
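The gate equations above can be written directly in numpy as a sketch; the weights are random placeholders, and [H_{t-1}, X_t] is implemented as the usual vector concatenation.

```python
import numpy as np

# Illustrative sketch of one LSTM cell step following the gate equations above.
rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
W_f, W_i, W_c, W_o = (rng.normal(size=(n_hid, n_hid + n_in)) for _ in range(4))
B_f = B_i = B_c = B_o = np.zeros(n_hid)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([h_prev, x_t])      # [H_{t-1}, X_t]
    f_t = sigmoid(W_f @ z + B_f)           # forget gate
    i_t = sigmoid(W_i @ z + B_i)           # input gate
    c_tilde = np.tanh(W_c @ z + B_c)       # cell input activation
    c_t = f_t * c_prev + i_t * c_tilde     # cell state update
    o_t = sigmoid(W_o @ z + B_o)           # output gate
    h_t = o_t * np.tanh(c_t)               # memory cell output
    return h_t, c_t

h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c)
```

The additive cell-state update c_t = f_t * c_prev + i_t * c_tilde is what lets gradients flow over long horizons, which is the property the paper relies on for multi-day forecasting.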

3.2.4. Gated Recurrent Units (GRUs)

A network architecture related to the LSTM model is the GRU. Both GRU and LSTM architectures capture long-term dependencies in sequential data, making them suitable for time-series forecasting tasks that involve seasonality and noise [54]. Unlike vanilla recurrent neural networks, which rely on a single activation function, GRUs use gating mechanisms to regulate information flow and learn long-term temporal relationships. This structure allows GRUs to achieve more accurate predictions than standard RNNs in many time-series forecasting applications. GRU-based models have been successfully applied to both univariate and multivariate time-series forecasting problems [55]. The standard GRU cell consists of two gates, the update gate and the reset gate, as illustrated in Figure 7.
The update gate controls how much information from the previous hidden state is carried into the current hidden state, while the reset gate determines how much of the previous hidden state is used when forming the candidate state.
Z_t = σ(W_z [H_{t-1}, X_t])
R_t = σ(W_r [H_{t-1}, X_t])
Ĥ_t = tanh(W_h [R_t ⊙ H_{t-1}, X_t])
H_t = (1 - Z_t) ⊙ H_{t-1} + Z_t ⊙ Ĥ_t
In these equations, Z_t denotes the update gate vector that controls the balance between the previous hidden state and the candidate activation, R_t represents the reset gate vector that regulates the contribution of the previous hidden state, and Ĥ_t denotes the candidate hidden state vector.
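As with the LSTM, the GRU equations translate almost line for line into numpy; the weights below are random placeholders used purely for illustration.

```python
import numpy as np

# Illustrative sketch of one GRU cell step following the equations above.
rng = np.random.default_rng(2)
n_in, n_hid = 3, 4
W_z = rng.normal(size=(n_hid, n_hid + n_in))   # update gate weights
W_r = rng.normal(size=(n_hid, n_hid + n_in))   # reset gate weights
W_h = rng.normal(size=(n_hid, n_hid + n_in))   # candidate state weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev):
    z = np.concatenate([h_prev, x_t])                          # [H_{t-1}, X_t]
    z_t = sigmoid(W_z @ z)                                     # update gate
    r_t = sigmoid(W_r @ z)                                     # reset gate
    h_hat = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t])) # candidate state
    return (1 - z_t) * h_prev + z_t * h_hat                    # new hidden state

h = np.zeros(n_hid)
h = gru_step(rng.normal(size=n_in), h)
```

Compared with the LSTM sketch, the GRU merges the cell state and hidden state and uses two gates instead of three, which is why it trains faster at similar accuracy in many time-series tasks.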

3.2.5. CNNs

CNNs were originally developed for tasks such as object detection, image recognition, and instance segmentation in computer vision. They have also demonstrated strong potential in time-series prediction, successfully forecasting multivariate datasets in areas such as traffic flow and solar power generation [56]. A typical CNN architecture includes one or more convolutional layers, followed by a subsampling or pooling layer and one or more fully connected layers, similar to standard neural networks. A distinguishing feature of CNNs is the use of convolutional kernels with a specified kernel size; this parameter is not present in the other DL algorithms considered in this study.
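The kernel-size parameter mentioned above can be illustrated with a minimal 1D convolution over a time series; the helper and the smoothing kernel are illustrative choices, not the paper's configuration.

```python
import numpy as np

# Illustrative sketch of "valid" 1D convolution: a kernel of fixed size slides
# over the sequence, producing one local feature per position.
def conv1d_valid(x, kernel):
    x, kernel = np.asarray(x, dtype=float), np.asarray(kernel, dtype=float)
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.25, 0.5, 0.25])     # kernel size 3, a simple smoother
features = conv1d_valid(x, kernel)       # output length = 5 - 3 + 1 = 3
```

Each output element summarizes a local window of the input, which is how a CNN extracts short-term patterns from wind or irradiance sequences before the fully connected layers combine them.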

3.3. Evaluation Metrics

This study employs widely used statistical metrics from the literature to evaluate the proposed models and compare different methods and input configurations. Wind speed prediction models are assessed using three metrics: Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), and the coefficient of determination (R2). MAPE is not used for GHI evaluation due to the presence of many near-zero values. Lower values of MAPE and RMSE indicate higher predictive accuracy, whereas higher R2 values reflect better model performance. The following equations define the evaluation metrics [57]:
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (Predicted_i - Measured_i)^2 )
MAPE = (100/n) Σ_{i=1}^{n} |(Predicted_i - Measured_i) / Measured_i|
R^2 = 1 - Σ_{i=1}^{n} (Measured_i - Predicted_i)^2 / Σ_{i=1}^{n} (Measured_i - Measured_mean)^2
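The three metrics can be implemented directly from their definitions; the sketch below follows the formulas above, with a small worked example.

```python
import numpy as np

# Direct implementations of the evaluation metrics defined above.
def rmse(measured, predicted):
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return float(np.sqrt(np.mean((predicted - measured) ** 2)))

def mape(measured, predicted):
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return float(100.0 * np.mean(np.abs((predicted - measured) / measured)))

def r2(measured, predicted):
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Worked example with toy values (not from the paper's dataset):
m = [2.0, 4.0, 6.0]
p = [2.0, 5.0, 6.0]
```

The division by Measured_i in MAPE is exactly why the metric is dropped for GHI, where many nighttime values are zero or near zero.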

3.4. Input Selection and Ranking

To enrich the dataset with meaningful temporal information, four engineered features, day sin, day cos, year sin, and year cos, were added. These cyclical signals represent daily and annual patterns derived from timestamps. They enable the models to learn repeating behaviors without relying on raw calendar values. The last column was removed due to invalid entries. The original timestamp fields, year, month, day, hour, and minute, were also excluded because the cyclical features capture the same information in a more effective form. A clear-sky index was calculated for each timestamp by dividing the measured GHI by the corresponding clear-sky GHI value [58]. Strong correlations were observed among physically related variables. GHI shows a negative correlation with relative humidity and solar zenith angle, reflecting atmospheric attenuation and the geometric relationship between the sun’s position and surface irradiance. In contrast, GHI exhibits a moderate positive correlation with temperature and day-based sinusoid characteristics due to seasonal and diurnal solar cycles. During nighttime, when GHI equals zero, the clear-sky index was set to 1 to avoid undefined values and maintain consistency.
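The clear-sky index computation described above, including the nighttime rule, can be sketched as follows; the helper name and toy values are illustrative.

```python
import numpy as np

# Sketch of the clear-sky index: measured GHI divided by clear-sky GHI,
# with nighttime points (clear-sky GHI = 0) fixed to 1 to avoid division by zero.
def clear_sky_index(ghi, ghi_clear):
    ghi, ghi_clear = np.asarray(ghi, dtype=float), np.asarray(ghi_clear, dtype=float)
    csi = np.ones_like(ghi)          # default value 1 covers nighttime records
    day = ghi_clear > 0
    csi[day] = ghi[day] / ghi_clear[day]
    return csi

ghi = np.array([0.0, 150.0, 600.0])        # nighttime, cloudy, clear conditions
ghi_clear = np.array([0.0, 500.0, 600.0])  # corresponding clear-sky GHI
csi = clear_sky_index(ghi, ghi_clear)
```

Values near 1 indicate clear-sky conditions, while low values flag heavy attenuation, which is what makes the index useful for the weather-regime analysis later in the paper.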
The Python Keras library Version 3.0 was used to implement the deep neural network architectures with the optimized configurations. All experiments were conducted on a machine equipped with an Intel Core i7-10510U processor operating at 1.8–2.3 GHz, 16 GB of RAM, and Windows 11 operating system. Table 4 presents the optimal hyperparameter settings identified through grid search. MSE was used as the loss function during training. The learning rate of the Adam optimizer [59] was set to 0.0001 to balance convergence stability and training speed, as higher values produced oscillatory validation loss during preliminary experiments. The batch size was evaluated across the values 6, 9, 19, 28, 35, and 39, and a batch size of 32 was adopted for all models. Each model was trained for 100 epochs to ensure a fair comparison across different input configurations. This training duration was kept constant across all experiments, allowing performance differences to be attributed solely to variations in the input data.
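The grid search mentioned above can be sketched with a plain exhaustive loop; the candidate values and the evaluate() stub below are illustrative placeholders, not the paper's actual search space or training routine.

```python
from itertools import product

# Hedged sketch of hyperparameter grid search: every combination of candidate
# values is scored and the best configuration is kept.
search_space = {
    "learning_rate": [1e-3, 1e-4],
    "batch_size": [16, 32, 64],
    "units": [32, 64],
}

def evaluate(config):
    """Placeholder for training a model and returning its validation loss.
    A toy analytic score stands in for actual Keras training here."""
    return (config["learning_rate"] * 10
            + 1.0 / config["batch_size"]
            + 1.0 / config["units"])

best = min(
    (dict(zip(search_space, combo)) for combo in product(*search_space.values())),
    key=evaluate,
)
```

In the real workflow, evaluate() would train each candidate model under the fixed 100-epoch protocol and return its validation MSE, so that any performance difference is attributable to the configuration alone.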
Figure 8 illustrates the correlation between the engineered time-based features (day sin, day cos, year sin, and year cos) and GHI. The diagonal values are all 1, indicating perfect self-correlation, while the off-diagonal values represent correlation strength. GHI shows a moderate positive correlation with day sin at 0.35, suggesting that solar radiation patterns partially follow a daily sinusoidal trend. It is negatively correlated with day cos at −0.67, reflecting the cosine-shaped variation in GHI throughout the day. These relationships indicate that daily cyclical characteristics help predict solar patterns, whereas year sin and year cos exhibit low correlation with both GHI and the daily components. This is expected, as annual seasonal effects occur over longer periods and do not strongly influence short-term GHI fluctuations. Their low correlations suggest that these engineered features add unique information to the dataset without redundancy.

3.5. Hyperparameter Selection for the Machine Learning Models

Three experimental tests were conducted to identify a cost-effective selection of input features for the DL models. The first test employed a univariate model using only the endogenous variables, wind speed and GHI. The second test applied a multivariate model incorporating key meteorological variables for forecasting wind speed and GHI. The third test evaluated the inclusion of engineered time-based signals as exogenous inputs compared with exogenous meteorological variables. All scenarios were assessed separately using the same set of DL algorithms to ensure a fair comparison. Because no systematic procedure exists for determining the optimal architecture of a given DL model, appropriate setups were identified through a series of experiments, and the DL model hyperparameters were optimized through a grid search over candidate values.
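A grid search of the kind described can be sketched generically as below. The candidate grids, `build_fn`, and `evaluate_fn` are illustrative stand-ins for the paper's model constructors and validation-loss evaluation; the exhaustive-enumeration logic is the standard technique.

```python
import itertools

def grid_search(build_fn, evaluate_fn, grid):
    """Train/evaluate one model per hyperparameter combination and
    return the configuration with the lowest validation loss."""
    best_cfg, best_loss = None, float("inf")
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        loss = evaluate_fn(build_fn(**cfg))   # e.g., validation MSE
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Hypothetical usage with toy candidate values:
# cfg, loss = grid_search(build_lstm_fn, val_mse,
#                         {"units": [32, 64], "lr": [1e-3, 1e-4]})
```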
To improve clarity and reproducibility, Figure 9 presents the complete workflow of the proposed framework. This includes data preprocessing, cyclical temporal feature generation, supervised sequence construction, chronological dataset splitting, input configuration definition, hyperparameter optimization, model training, multi-horizon testing, and robustness evaluation across different weather regimes.

4. Results and Discussion

This section presents and discusses the findings obtained from all algorithms under the different experimental setups. The models were trained and validated using data from the first two years, with parameters optimized accordingly, and then tested on the final year of data, 2019. Testing was conducted across multiple days and diverse weather conditions to evaluate model performance under realistic operating scenarios. In addition, the models were assessed over several forecasting horizons, including 1 h, 6 h, 1 d, and 3 d ahead predictions, all at 15 min intervals.
For clarity and comparability with related studies, this section is divided into two subsections addressing wind speed and global horizontal irradiance, respectively. The dataset was divided chronologically into three parts, each spanning one full year: the first year was used for training, the second year for validation and hyperparameter selection, and the third year for testing and performance comparison.
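The chronological year-based split can be expressed compactly as below; the column name is illustrative, while the 2017–2019 year boundaries follow the protocol stated in the text.

```python
import pandas as pd

def chronological_split(df, ts_col="timestamp"):
    """Year-based chronological split: 2017 for training, 2018 for
    validation/hyperparameter selection, 2019 for testing."""
    year = pd.to_datetime(df[ts_col]).dt.year
    return df[year == 2017], df[year == 2018], df[year == 2019]
```

A chronological split (rather than a random one) prevents information from future timestamps leaking into training, which matters for time-series evaluation.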
To further assess the robustness of the proposed models, weather conditions were grouped into four categories according to the clear-sky index: cloudy, partially cloudy, sunny, and clear [59]. Timestamps corresponding to each category were identified and provided as inputs to the trained models, and forecasting performance was then analyzed separately for each weather class to verify the reliability and generalizability of the proposed method under different weather conditions.
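The classification step can be sketched as a simple thresholding of the clear-sky index. The threshold values below are purely illustrative; the paper adopts the categorization scheme of its cited reference rather than these specific cut points.

```python
def classify_weather(kc, thresholds=(0.3, 0.6, 0.9)):
    """Map a clear-sky index value to one of the four weather classes.
    Thresholds are illustrative placeholders, not the paper's values."""
    t1, t2, t3 = thresholds
    if kc < t1:
        return "cloudy"
    if kc < t2:
        return "partially cloudy"
    if kc < t3:
        return "sunny"
    return "clear"
```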

4.1. Wind Speed Forecasting

Each model was trained for 100 epochs. For the RNN-based models, training took approximately 6 s per epoch, whereas the CNN required about 3 s per epoch in the univariate setup. When both endogenous and exogenous input variables were used, training times remained similar, indicating that the additional inputs did not significantly increase the computational cost. Overall, training time differed little across models and input configurations, although the CNN demonstrated a clear advantage in computational efficiency.
In all experiments, the training loss decreased rapidly within the first ten epochs and then stabilized at approximately 0.002. The validation loss followed a similar trend and converged to a comparable value with only minor differences. This behavior indicates that the models were neither overfitted nor underfitted, regardless of the input configuration. Figure 10 presents the training and validation loss curves of each DL model as functions of epochs when wind speed and time-based signals are used as inputs. It is also observed that, during the initial epoch, the validation loss is lower than the training loss, after which the training loss becomes slightly lower than the validation loss. This pattern is consistent with findings reported in the literature [48] and reflects a stable and well-fitted training process.
Figure 11 compares the day-ahead wind speed predictions generated by five DL models, RNN, GRU, LSTM, BiLSTM, and CNN, with the actual wind speed over a 24 h period. The horizontal axis represents the time of day, while the vertical axis shows wind speed in meters per second (m/s). The observed wind speed serves as a reference for evaluating the accuracy of the predicted profiles.
After training and validating the models using data from the first two years, the final year was reserved exclusively for testing. The testing procedure was conducted in two phases. The first phase evaluated model performance across different forecasting horizons to identify the strengths and limitations of each DL algorithm and input configuration. This phase focused on the last three days of 2019 and considered key forecasting horizons of 12 h ahead, 1 d ahead, and 3 d ahead, all generated at 15 min intervals. Figure 12 presents a visual comparison of 1 d-ahead wind speed forecasts produced by the various DL models, namely the Simple RNN, GRU, LSTM, BiLSTM, and CNN with different kernel sizes (KS = 1, 2, and 4). Each subplot compares the predicted wind speed profile with the measured values during testing, with the prediction horizon (iterations) on the horizontal axis and wind speed in meters per second (m/s) on the vertical axis. In all models, the predicted wind speed closely tracks the measured data, showing that the DL architectures can reproduce the temporal dynamics and variability of wind speed. The substantial overlap between the predicted and observed curves indicates high predictive accuracy. Minor deviations occur during periods of rapid change, particularly at peak wind speeds and sharp transitions, but these differences remain limited. The strong forecasting performance is further validated by the quantitative performance metrics given in Table 5, Table 6 and Table 7 for three days, one day, and 12 h, respectively. These tables report the error metrics corresponding to each forecasting horizon evaluated in the first testing phase.
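One plausible way to realize such multi-horizon tests with a one-step-ahead model is the standard recursive strategy sketched below, rolling the model forward 48, 96, or 288 steps for the 12 h, 1 d, and 3 d horizons at 15 min resolution. The paper does not spell out its exact multi-step mechanism, so this is an assumption-labeled sketch, not the authors' implementation.

```python
import numpy as np

def recursive_forecast(predict_fn, history, n_steps):
    """Generic recursive multi-step forecast: feed each one-step
    prediction back into the input window. `predict_fn` maps a
    1-D window array to the next scalar value."""
    window = list(history)
    out = []
    for _ in range(n_steps):
        y = predict_fn(np.asarray(window))
        out.append(y)
        window = window[1:] + [y]   # slide the window forward
    return np.asarray(out)

# e.g., a 1 d-ahead forecast at 15 min resolution: n_steps = 96
```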
Figure 13 presents the percentage error of wind speed forecasts generated by the evaluated DL models across the testing iterations. For all models, the errors are centered around zero and remain within a narrow range, indicating unbiased and stable predictions without systematic overestimation or underestimation. The similar error patterns observed across architectures confirm the robustness and consistent forecasting performance of the proposed models.
The results show that RMSE generally decreases when time-based signals are added to wind speed as input features. Similarly, MAPE decreases relative to the univariate configuration, indicating that the multivariate models provide more accurate predictions relative to the observed values. Error metrics remain low across all forecasting horizons, and the corresponding correlation coefficients demonstrate strong agreement between predicted and actual values, confirming the reliability of the proposed models.
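For reference, the evaluation metrics used throughout these comparisons follow their standard definitions, sketched below. The small `eps` guard in MAPE is our addition; as noted later for GHI, MAPE is undefined when the target contains (near-)zero values.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred, eps=1e-8):
    """Mean absolute percentage error; `eps` guards the division."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / (y_true + eps))) * 100)

def r2(y_true, y_pred):
    """Coefficient of determination (R-squared)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)
```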
A comparison of the two multivariate input configurations yields several important observations. Overall, DL models using either time-based signals or meteorological variables consistently outperform the univariate baseline, and the magnitude of improvement is comparable between the two multivariate approaches. Slightly lower MAPE values are typically achieved at shorter forecasting horizons when meteorological variables are used as inputs, whereas at longer horizons time-based signals tend to provide greater performance gains. These findings indicate that time-related features are especially effective for improving long-term wind speed prediction. Across model architectures, predictive performance is similar, with the CNN showing a minor advantage in training efficiency and the LSTM achieving slightly lower error metrics.
The second testing stage evaluates model robustness using twelve representative days, one from each month, to capture seasonal variability throughout the year. Table 8 summarizes the RMSE values for each selected day across the various models and input configurations. The results indicate that, in all months, models incorporating wind speed and time-based signals consistently achieve lower RMSE values than models using wind speed alone. Furthermore, the improvements obtained with time-based signals are more consistent than those achieved with meteorological inputs. This finding demonstrates that augmenting wind speed with time-based exogenous features provides a robust and accurate forecasting framework across different seasons. Both the RNN and CNN models exhibit comparable predictive performance, although the CNN requires less training time. A comparison with other studies is presented in Table 9, and Figure 14 illustrates the day-ahead wind speed forecasts generated by all DL models.
Although Figure 11 illustrates a representative daily wind speed forecast for visualization, model performance was evaluated comprehensively across multiple forecasting horizons and seasonal conditions. Table 5, Table 6 and Table 7 present quantitative metrics (RMSE, MAPE, and R2) for 12 h, 1 d, and 3 d ahead predictions, while Table 8 summarizes monthly RMSE values for twelve representative days. These results confirm that the observed performance improvements are consistent across different time scales and seasonal regimes, rather than being confined to a single daily example.

4.2. GHI Forecasting

Similar to the wind speed experiments, all models were trained for 100 epochs. The training dataset covered an entire year, enabling the models to capture both seasonal and diurnal patterns in the data. Among the evaluated architectures, the CNN demonstrated faster convergence and smoother loss trajectories for both training and validation datasets. For both univariate and multivariate configurations, the CNN required approximately 2 s per epoch, whereas the RNN and BiLSTM required about 4 and 5 s per epoch, respectively. This underscores the computational efficiency of the CNN architecture relative to the other DL models.
After satisfactory convergence, testing was conducted using the 2019 dataset in two stages. In the first stage, GHI was predicted across multiple forecasting horizons using three input configurations: (a) a univariate baseline model, (b) a multivariate model incorporating time-based signals, and (c) a multivariate model using meteorological variables. This design enables direct comparison among the baseline approach, the proposed time-based configuration, and conventional multivariate meteorological models reported in the literature.
The training loss stabilized after a few epochs, converging to approximately 0.017 for univariate models and 0.015 for multivariate models with time-based signals. The validation loss closely followed the training loss, showing similar trends and plateauing slightly above it. This pattern indicates good generalization performance and suggests that the models are well suited for GHI forecasting. Figure 15 presents the training and validation loss curves for all DL models using time-based signal inputs.
The Pearson correlation between GHI and the engineered sine and cosine time-based features (daily and annual components) is shown in Figure 16. The moderate correlations between GHI and the day-based signals indicate significant temporal cycles related to the diurnal and seasonal variations in solar irradiance. In contrast, the annual signals exhibit only weak linear correlation with GHI, suggesting the absence of a substantial direct linear relationship. These features were nevertheless retained during feature selection to enable the DL models to exploit potential nonlinear relationships, thereby complementing the meteorological and irradiance-based inputs for GHI forecasting.
The predicted and measured GHI time series using various DL models, including recurrent and convolutional architectures, are presented in Figure 17. The predictions closely follow the observed GHI patterns, capturing diurnal and seasonal variations, and achieve coefficients of determination greater than 98% with comparable RMSE values. Minor deviations occur primarily during rapid irradiance changes, indicating similar and stable performance across the models.
The performance of multivariate time-signal models was tested across different weather categories to evaluate the robustness of time-based features in predicting GHI. RMSE and the coefficient of determination were reported for each test case. Figure 18 presents the training and validation loss curves of all DL models using a set of five features as multivariate inputs, selected based on Pearson correlation analysis. All models converge rapidly and exhibit stable loss behavior, with training and validation losses remaining close, indicating good generalization. CNN-based models show slightly faster convergence and more favorable loss profiles than recurrent models.
Figure 19 compares predicted and actual GHI during the testing period of selected multivariate inputs. Each model reflects temporal development and seasonal GHI fluctuations, achieving high coefficients of determination (above 98%) and similar RMSE values. Minor deviations occur primarily during rapid changes in irradiance.
Figure 20 presents a zoomed-in view of a typical clear-sky period to assess the accuracy of short-term predictions. The measured GHI profile is closely tracked by all models, with minimal error and R2 values exceeding 99%. These results confirm that a five-feature multivariate configuration provides sufficient information for reliable GHI forecasting across different model architectures. MAPE is not applicable due to the presence of several near-zero values in the dataset [65]. Table 10, Table 11 and Table 12 report the error metrics for the different forecasting horizons.
Table 13 presents the evaluation metrics for GHI forecasting under different weather conditions, namely cloudy, partially cloudy, sunny, and clear. All models achieve lower errors and higher correlation values under sunny and clear conditions, whereas performance declines under cloudy and partially cloudy conditions due to high irradiance variability. Time-based multivariate models demonstrate consistent performance across weather classes, although meteorological inputs provide a slight advantage in cloudy conditions.
Although Figure 21 presents a clear-sky day for visual illustration, model performance was systematically evaluated across multiple weather regimes. Table 13 reports RMSE and R2 values for cloudy, partially cloudy, sunny, and clear conditions. As expected, forecasting errors increase under cloudy and partially cloudy conditions due to greater irradiance variability and rapid atmospheric fluctuations. Nevertheless, the proposed framework maintains stable predictive performance across all regimes, demonstrating robustness beyond favorable clear-sky scenarios. This comprehensive evaluation ensures that the reported results are not biased toward less challenging forecasting conditions.
Figure 21 presents scatter plots of predicted versus observed GHI for all DL models, with data points tightly clustered around the 1:1 line across the full irradiance range. Slight dispersion at higher GHI values reflects increased variability under peak solar intensity, while the overall clustering confirms the effectiveness of the multivariate forecasting framework across different model architectures.
The results indicate that incorporating meteorological variables provides greater benefits than time-based signals under cloudy and partially cloudy conditions, whereas including time signals as exogenous variables enhances GHI forecasting accuracy in sunny and clear weather. A comparison with related studies is presented in Table 14. Nevertheless, performance under cloudy conditions remains limited across all models and may require reconsideration of the algorithm and training data.
The proposed method can support smart city applications by enabling accurate intra-hour renewable forecasting using timestamp information alone. The framework is especially suitable for scalable deployment in cities with heterogeneous infrastructure, which helps achieve equitable, efficient, and smart energy systems to support sustainable urban development. Although the proposed models demonstrate stable performance across multiple forecasting horizons, 12 h, 1 d, and 3 d ahead, and seasonal variations within the Izmir dataset, the evaluation is confined to a single geographic location. The chronological train, validation, and test split across three consecutive years, 2017 to 2019, provides evidence of intra-site temporal robustness and interannual consistency. However, cross-site spatial generalization under different climatic regimes is not explicitly assessed in this study.
Notably, this study focuses on a controlled comparison of DL architectures and input feature configurations. Physical and hybrid forecasting baselines, including persistence models, numerical weather prediction models, clear-sky analytical models, and physics-informed machine learning frameworks, are not included in the present evaluation. The objective is to isolate the contribution of structured temporal feature engineering within data-driven architectures rather than to perform cross-paradigm benchmarking. Incorporating standardized physical baselines would offer additional operational insight and is therefore reserved for future research.

5. Conclusions

This study demonstrated a timestamp-driven multivariate DL framework for wind speed and GHI forecasting across multiple horizons. Unlike conventional approaches that rely on extensive meteorological measurements, the proposed method transforms raw timestamps into cyclical temporal features and systematically evaluates their predictive capability across five DL architectures. Experimental results confirm that incorporating engineered time-based signals consistently improves forecasting accuracy compared with univariate models. For wind speed prediction, the proposed configuration achieved R2 values of up to 0.9987 and reduced RMSE to 0.067 m/s in 3 d-ahead forecasting. In 12 h ahead prediction, both meteorological and time-based multivariate models significantly outperformed the baseline, with R2 values exceeding 0.94. For GHI forecasting, the time-signal multivariate models achieved R2 values above 0.9994, with RMSE near 4.5 W/m2 in short-horizon prediction, clearly outperforming univariate configurations and performing comparably to meteorological-input models. Robustness analysis across cloudy, partially cloudy, sunny, and clear conditions further demonstrated stable model behavior, with the best performance observed under clear-sky conditions. Importantly, the proposed timestamp-based framework attains high predictive accuracy without requiring additional sensing infrastructure, offering a cost-effective and scalable solution for smart cities and distributed renewable energy systems.
Although the proposed framework achieves strong predictive accuracy, it is validated using a single dataset and may exhibit reduced performance under highly variable cloudy conditions. The study also concentrates on deterministic forecasting with conventional DL architectures. Future research will investigate cross-regional validation, hybrid temporal and meteorological feature fusion, attention-based architectures, and probabilistic forecasting approaches to enhance robustness and support practical deployment.

Author Contributions

M.S. conceived the study, designed the methodology, performed the analysis, software implementation, and drafted the original manuscript. A.R.K. contributed to data collection and interpretation, software implementation, validation of results, and drafted the original manuscript. M.H. assisted with data collection, software implementation, and preliminary analysis, and drafted the original manuscript. M.M.R. contributed to the investigation, resources, data analysis, validation of results, and manuscript review and editing. S.A.S. provided conceptual guidance, supervision, data analysis, and manuscript review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by the Interdisciplinary Research Center for Sustainable Energy Systems at King Fahd University of Petroleum & Minerals (KFUPM), Dhahran 31261, Saudi Arabia, under project No. INSE2512 and the Deanship of Research at KFUPM, Dhahran 31261, Saudi Arabia, under project No. EC241001.

Data Availability Statement

Data are openly available in a public repository: Kiziloklu, I.D.; Cifci, U. Solar Radiation and Meteorological Dataset. Kaggle. Available online: https://www.kaggle.com/datasets/ibrahimkiziloklu/solar-radiation-dataset/data (accessed on 23 February 2024).

Acknowledgments

The authors would like to acknowledge the research support received for this work from the King Fahd University of Petroleum & Minerals (KFUPM), Dhahran 31261, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bisegna, F.; Vespasiano, F.; Pompei, L.; Burattini, C.; Belli, E.; Bellucci, A.M.; Di Vittorio, F.; Blaso, L. Towards the Decarbonization of Urban Communities: Evaluation of Smart and Green Strategies to Reduce Gas Carbon Emissions. Smart Cities 2026, 9, 26. [Google Scholar] [CrossRef]
  2. Elmousalami, H.; Alnaser, A.A.; Kin Peng Hui, F. Advancing Smart Zero-Carbon Cities: High-Resolution Wind Energy Forecasting to 36 Hours Ahead. Appl. Sci. 2024, 14, 11918. [Google Scholar] [CrossRef]
  3. Londoño-Pulgarin, D.; Cardona-Montoya, G.; Restrepo, J.C.; Muñoz-Leiva, F. Fossil or bioenergy? Global fuel market trends. Renew. Sustain. Energy Rev. 2021, 143, 110905. [Google Scholar] [CrossRef]
  4. Ritchie, H.; Roser, M.; Rosado, P. Renewable Energy. Our World in Data. 2020. Available online: https://ourworldindata.org/renewable-energy (accessed on 18 February 2026).
  5. International Energy Agency (IEA). Renewable Power Generation by Technology in the Net Zero Scenario, 2010–2030; IEA: Paris, France, 2022; Available online: https://www.iea.org/data-and-statistics/charts/renewable-power-generation-by-technology-in-the-net-zero-scenario-2010-2030 (accessed on 18 February 2026).
  6. Ahmed, S.D.; Al-Ismail, F.S.M.; Shafiullah, M.; Al-Sulaiman, F.A.; El-Amin, I.M. Grid integration challenges of wind energy: A review. IEEE Access 2020, 8, 10857–10878. [Google Scholar] [CrossRef]
  7. Shafiullah, M.; Ahmed, S.D.; Al-Sulaiman, F.A. Grid integration challenges and solution strategies for solar PV systems: A review. IEEE Access 2022, 10, 52233–52257. [Google Scholar] [CrossRef]
  8. Ruhnau, O.; Hennig, P.; Madlener, R. Economic implications of forecasting electricity generation from variable renewable energy sources. Renew. Energy 2020, 161, 1318–1327. [Google Scholar] [CrossRef]
  9. Amir, M.; Khan, S.Z. Assessment of renewable energy: Status, challenges, COVID-19 impacts, opportunities, and sustainable energy solutions in Africa. Energy Built Environ. 2022, 3, 348–362. [Google Scholar] [CrossRef]
  10. Mastoi, M.S.; Zhuang, S.; Haris, M.; Hassan, M.; Ali, A. Large-scale wind power grid integration challenges and their solutions: A detailed review. Environ. Sci. Pollut. Res. 2023, 30, 103424–103462. [Google Scholar] [CrossRef]
  11. Tang, H.; Wang, S.; Li, H. Flexibility categorization, sources, capabilities and technologies for energy-flexible and grid-responsive buildings: State-of-the-art and future perspective. Energy 2021, 219, 119598. [Google Scholar] [CrossRef]
  12. Li, B.; Zhang, J. A review on the integration of probabilistic solar forecasting in power systems. Sol. Energy 2020, 210, 68–86. [Google Scholar] [CrossRef]
  13. Lv, Q.; Zhang, J.; Zhang, J.; Zhang, Z.; Zhou, Q.; Gao, P.; Zhang, H. Short-Term Wind Power Prediction Model Based on PSO-CNN-LSTM. Energies 2025, 18, 3346. [Google Scholar] [CrossRef]
  14. Zhong, Q.; Wang, L.; Huang, C. T-LSTM: A Novel Model for High-Precision Wind Power Prediction by Integrating Transformer and Improved LSTM. Appl. Sci. 2026, 16, 1609. [Google Scholar] [CrossRef]
  15. Cisse, B.B.; Rashed, G.I.; Badjan, A.; Haider, H.; Gony, H.A.I.; Ershad, A.M. Optimized Hybrid Deep Learning Framework for Reliable Multi-Horizon Photovoltaic Power Forecasting in Smart Grids. Electricity 2026, 7, 4. [Google Scholar] [CrossRef]
  16. Alguhi, A.A.; Al-Shaalan, A.M. LSTM-Based Prediction of Solar Irradiance and Wind Speed for Renewable Energy Systems. Energies 2025, 18, 4594. [Google Scholar] [CrossRef]
  17. Li, B.; Ding, B.; Pang, W.; Ni, H. Triple-Flow Dynamic Graph Convolutional Network for Wind Power Forecasting. Symmetry 2025, 17, 2026. [Google Scholar] [CrossRef]
  18. Yaghi, M.A.; Al-Omari, H. Physics-Aware Deep Learning Framework for Solar Irradiance Forecasting Using Fourier-Based Signal Decomposition. Algorithms 2026, 19, 81. [Google Scholar] [CrossRef]
  19. Sweeney, C.; Bessa, R.J.; Browell, J.; Pinson, P. The future of forecasting for renewable energy. WIREs Energy Environ. 2020, 9, e365. [Google Scholar] [CrossRef]
  20. Zheng, J.; Du, J.; Wang, B.; Klemeš, J.J.; Liao, Q.; Liang, Y. A hybrid framework for forecasting power generation of multiple renewable energy sources. Renew. Sustain. Energy Rev. 2023, 172, 113046. [Google Scholar] [CrossRef]
  21. Su, Y.; Zhang, W.; Deng, G.; Wang, Z. An intra-hour photovoltaic power generation prediction method for flexible building energy systems and its application in operation scheduling strategy. Sol. Energy 2024, 284, 113031. [Google Scholar] [CrossRef]
  22. Rodríguez-Benítez, F.J.; Arbizu-Barrena, C.; Huertas-Tato, J.; Aler-Mur, I.; Galván-León, I.; Pozo-Vázquez, D. A short-term solar radiation forecasting system for the Iberian Peninsula. Part 1: Models description and performance assessment. Sol. Energy 2020, 195, 396–412. [Google Scholar] [CrossRef]
  23. Apeh, O.O.; Nwulu, N.I. Machine learning approach for short- and long-term global solar irradiance prediction. J. Energy Environ. Sustain. 2024, 7, 321–342. [Google Scholar] [CrossRef]
  24. Almaghrabi, S.; Rana, M.; Hamilton, M.; Rahaman, M.S. Multivariate solar power time series forecasting using multilevel data fusion and deep neural networks. Inf. Fusion 2024, 104, 102180. [Google Scholar] [CrossRef]
  25. Mfetoum, I.M.; Ngoh, S.K.; Molu, R.J.J.; Nde Kenfack, B.F.; Onguene, R.; Naoussi, S.R.D.; Tamba, J.G.; Bajaj, M.; Berhanu, M. A multilayer perceptron neural network approach for optimizing solar irradiance forecasting in Central Africa with meteorological insights. Sci. Rep. 2024, 14, 3572, Correction in Sci. Rep. 2024, 14, 5334. https://doi.org/10.1038/s41598-024-55883-z. [Google Scholar] [CrossRef] [PubMed]
  26. Ribeiro, R.; Fanzeres, B. Identifying representative days of solar irradiance and wind speed in Brazil using machine learning techniques. Energy AI 2024, 15, 100320. [Google Scholar] [CrossRef]
  27. Monteiro, C.; Bessa, R.; Miranda, V.; Botterud, A.; Wang, J.; Conzelmann, G. Wind Power Forecasting: State-of-the-Art 2009; Argonne National Laboratory: Argonne, IL, USA, 2009. [Google Scholar] [CrossRef]
  28. Patel, Y.; Deb, D. Machine intelligent hybrid methods based on Kalman filter and wavelet transform for short-term wind speed prediction. Wind 2022, 2, 37–50. [Google Scholar] [CrossRef]
  29. Ding, Y.; Ye, X.-W.; Guo, Y. Copula-based JPDF of wind speed, wind direction, wind angle, and temperature with SHM data. Probab. Eng. Mech. 2023, 73, 103483. [Google Scholar] [CrossRef]
  30. He, X.; Wang, J. A short-time wind speed forecasting method based on feature selection and KF-CNN-LSTM hybrid model. J. Supercomput. 2025, 81, 1408. [Google Scholar] [CrossRef]
  31. Liu, X.; Lin, Z.; Feng, Z. Short-term offshore wind speed forecast by seasonal ARIMA—A comparison against GRU and LSTM. Energy 2021, 227, 120492. [Google Scholar] [CrossRef]
  32. Freire, J.P.; Rubio, L.; Velasquez, C.E. Optimizing wind power forecasting with hybrid neural networks: Insights from Brazil. Neural Comput. Appl. 2025, 37, 20409–20436. [Google Scholar] [CrossRef]
  33. Wang, J.; Li, Q.; Zhang, H.; Wang, Y. A deep-learning wind speed interval forecasting architecture based on modified scaling approach with feature ranking and two-output gated recurrent unit. Expert Syst. Appl. 2023, 211, 118419. [Google Scholar] [CrossRef]
  34. Joseph, L.P.; Deo, R.C.; Prasad, R.; Salcedo-Sanz, S.; Raj, N.; Soar, J. Near real-time wind speed forecast model with bidirectional LSTM networks. Renew. Energy 2023, 204, 39–58. [Google Scholar] [CrossRef]
  35. Wang, J.; Li, Z. Wind speed interval prediction based on multidimensional time series of convolutional neural networks. Eng. Appl. Artif. Intell. 2023, 121, 105987. [Google Scholar] [CrossRef]
  36. Liu, X.; Zhang, L.; Wang, J.; Zhou, Y.; Gan, W. A unified multi-step wind speed forecasting framework based on numerical weather prediction grids and wind farm monitoring data. Renew. Energy 2023, 211, 948–963. [Google Scholar] [CrossRef]
  37. Zhang, X. Developing a hybrid probabilistic model for short-term wind speed forecasting. Appl. Intell. 2023, 53, 728–745. [Google Scholar] [CrossRef]
  38. Lv, S.-X.; Wang, L. Multivariate wind speed forecasting based on multi-objective feature selection approach and hybrid deep learning model. Energy 2023, 263, 126100. [Google Scholar] [CrossRef]
  39. Mas’ud, A.A. Comparison of three machine learning models for the prediction of hourly PV output power in Saudi Arabia. Ain Shams Eng. J. 2022, 13, 101648. [Google Scholar] [CrossRef]
  40. Samadianfard, S.; Rousta, Z.; Mohebbiyan, M.; Talebi, H. Innovative strategies for hourly global horizontal irradiance forecasting using a hybrid gated recurrent unit and long short-term memory architecture with spatio-temporal attention mechanism. Earth Sci. Inform. 2025, 18, 448. [Google Scholar] [CrossRef]
  41. Alkhatib, F.Y.; Alsadi, J.; Ramadan, M.; Nasser, R.; Awdallah, A.; Chrysikopoulos, C.V.; Maalouf, M. Comparative analysis of deep learning techniques for global horizontal irradiance forecasting in US cities. Clean Energy 2025, 9, 66–83. [Google Scholar] [CrossRef]
  42. Liu, H.; Zhang, Z. Development and trending of deep learning methods for wind power predictions. Artif. Intell. Rev. 2024, 57, 112. [Google Scholar] [CrossRef]
  43. Azimi, R.; Ghofrani, M.; Ghayekhloo, M. A hybrid wind power forecasting model based on data mining and wavelet analysis. Energy Convers. Manag. 2016, 127, 208–225. [Google Scholar] [CrossRef]
  44. Mastoi, M.S.; Wang, D.; Ma, N.; Hassan, M.; Shafiullah, M.; Bashir, T.; Hassan, A.; Flah, A. AI-driven control and optimization for renewable energy integration in smart grids: Challenges, applications, and future research directions. Energy Strat. Rev. 2026, 64, 102049. [Google Scholar] [CrossRef]
  45. Elmousaid, R.; Drioui, N.; Elgouri, R.; Agueny, H.; Adnani, Y. Ultra-short-term global horizontal irradiance forecasting based on a novel hybrid GRU–TCN model. Results Eng. 2024, 23, 102817. [Google Scholar] [CrossRef]
  46. Kumar, A.; Singh, A.J.; Kumar, S. Forecasting wind speed over multiple horizons: Superiority of deep learning and decomposition-driven hybrid models. Unconv. Resour. 2026, 9, 100296. [Google Scholar] [CrossRef]
  47. Pang, J.; Dong, S. Shape-aware Seq2Seq model for accurate multistep wind speed forecasting. J. Ocean Univ. China 2026, 25, 55–73. [Google Scholar] [CrossRef]
  48. Otuka, C.I.; Cai, D.; Ukwuoma, C.C.; Luo, S.; Yang, Z.; Bamisile, O.; Ukwuoma, C.D.; Otuka, C.U.; Otuka, N.O.; Huang, Q. Experimental review and recent advances in deep learning techniques for solar irradiance forecasting and prediction. Sol. Energy 2026, 304, 114175. [Google Scholar] [CrossRef]
  49. Kumari, P.; Toshniwal, D. Long short-term memory–convolutional neural network based deep hybrid approach for solar irradiance forecasting. Appl. Energy 2021, 295, 117061. [Google Scholar] [CrossRef]
  50. Kiziloklu, I.D.; Cifci, U. Solar Radiation and Meteorological Dataset. Kaggle. 2024. Available online: https://www.kaggle.com/datasets/ibrahimkiziloklu/solar-radiation-dataset/data (accessed on 23 February 2024).
  51. Chandola, D.; Gupta, H.; Tikkiwal, V.A.; Bohra, M.K. Multi-step ahead forecasting of global solar radiation for arid zones using deep learning. Procedia Comput. Sci. 2020, 167, 626–635. [Google Scholar] [CrossRef]
  52. Liu, M.-D.; Ding, L.; Bai, Y.-L. Application of hybrid model based on empirical mode decomposition, novel recurrent neural networks and ARIMA to wind speed prediction. Energy Convers. Manag. 2021, 233, 113917. [Google Scholar] [CrossRef]
  53. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  54. Kulshrestha, A.; Krishnaswamy, V.; Sharma, M. Bayesian BiLSTM approach for tourism demand forecasting. Ann. Tour. Res. 2020, 83, 102925. [Google Scholar] [CrossRef]
  55. Hosseini, M.; Katragadda, S.; Wojtkiewicz, J.; Gottumukkala, R.; Maida, A.; Chambers, T.L. Direct normal irradiance forecasting using multivariate gated recurrent units. Energies 2020, 13, 3914. [Google Scholar] [CrossRef]
  56. Che, Z.; Purushotham, S.; Cho, K.; Sontag, D.; Liu, Y. Recurrent neural networks for multivariate time series with missing values. Sci. Rep. 2018, 8, 6085. [Google Scholar] [CrossRef] [PubMed]
  57. Wang, K.; Li, K.; Zhou, L.; Hu, Y.; Cheng, Z.; Liu, J.; Chen, C. Multiple convolutional neural networks for multivariate time series prediction. Neurocomputing 2019, 360, 107–119. [Google Scholar] [CrossRef]
  58. Shafiullah, M.; Abido, M.A.; Al-Mohammed, A.H. Power System Fault Diagnosis: A Wide Area Measurement Based Intelligent Approach; Elsevier: Amsterdam, The Netherlands, 2022. [Google Scholar] [CrossRef]
  59. Barhmi, K.; Mirbagheri Golroodbari, S.; Knap, W.; Van Sark, W. Real-time solar irradiance forecasting for grid integration using all-sky imagery and multi-stage AI with Kalman filter optimization. Renew. Energy 2026, 259, 125117. [Google Scholar] [CrossRef]
  60. Attouri, K.; Mansouri, M.; Kouadri, A. Adaptive PolyKAN-based autoencoder for fault detection and classification in wind and solar power systems. Ain Shams Eng. J. 2026, 17, 103884. [Google Scholar] [CrossRef]
  61. Collares-Pereira, M.; Rabl, A. The average distribution of solar radiation: Correlations between diffuse and hemispherical and between daily and hourly insolation values. Sol. Energy 1979, 22, 155–164. [Google Scholar] [CrossRef]
  62. Trivedi, N.K.; Gautam, V.; Anand, A.; Aljahdali, H.M.; Villar, S.G.; Anand, D.; Goyal, N.; Kadry, S. Early detection and classification of tomato leaf disease using high-performance deep neural network. Sensors 2021, 21, 7987. [Google Scholar] [CrossRef]
  63. Jaseena, K.U.; Kovoor, B.C. Decomposition-based hybrid wind speed forecasting model using deep bidirectional LSTM networks. Energy Convers. Manag. 2021, 234, 113944. [Google Scholar] [CrossRef]
  64. Neshat, M.; Nezhad, M.M.; Abbasnejad, E.; Mirjalili, S.; Tjernberg, L.B.; Garcia, D.A.; Alexander, B.; Wagner, M. A deep learning-based evolutionary model for short-term wind speed forecasting: A case study of the Lillgrund offshore wind farm. Energy Convers. Manag. 2021, 236, 114002. [Google Scholar] [CrossRef]
  65. Wei, D.; Wang, J.; Niu, X.; Li, Z. Wind speed forecasting system based on gated recurrent units and convolutional spiking neural networks. Appl. Energy 2021, 292, 116842. [Google Scholar] [CrossRef]
  66. Acikgoz, H.; Budak, U.; Korkmaz, D.; Yildiz, C. WSFNet: An efficient wind speed forecasting model using channel attention-based densely connected convolutional neural network. Energy 2021, 233, 121121. [Google Scholar] [CrossRef]
  67. Wang, Y.; Zou, R.; Liu, F.; Zhang, L.; Liu, Q. A review of wind speed and wind power forecasting with deep neural networks. Appl. Energy 2021, 304, 117766. [Google Scholar] [CrossRef]
  68. Hong, T.; Pinson, P.; Wang, Y.; Weron, R.; Yang, D.; Zareipour, H. Energy forecasting: A review and outlook. IEEE Open Access J. Power Energy 2020, 7, 376–388. [Google Scholar] [CrossRef]
  69. Boubaker, S.; Benghanem, M.; Mellit, A.; Lefza, A.; Kahouli, O.; Kolsi, L. Deep neural networks for predicting solar radiation at Hail region, Saudi Arabia. IEEE Access 2021, 9, 36719–36729. [Google Scholar] [CrossRef]
  70. Singla, P.; Duhan, M.; Saroha, S. An ensemble method to forecast 24-h ahead solar irradiance using wavelet decomposition and BiLSTM deep learning network. Earth Sci. Inform. 2022, 15, 291–306. [Google Scholar] [CrossRef]
Figure 1. Global electricity generation from renewable energy sources between 2000 and 2024 [2].
Figure 2. Distribution of Izmir’s GHI over the training period: (a) over one year and (b) over one day.
Figure 3. Distribution of Izmir’s wind speed over two different horizons: (a) one year and (b) one day.
Figure 4. A typical fully connected feedforward deep neural network.
Figure 5. Feedforward vs. recurrent neural networks.
Figure 6. Structure of an LSTM cell.
Figure 7. GRU architecture.
Figure 8. Pearson correlation matrix of meteorological variables for solar and wind forecasting.
Figure 9. Workflow of the proposed timestamp-driven multivariate DL framework for wind speed and GHI forecasting.
Figure 10. Training and validation loss for wind speed models.
Figure 11. Day-ahead wind speed prediction using five DL algorithms.
Figure 12. Comparative view of wind speed forecasting performance for different DL models.
Figure 13. Error (%) profiles of wind speed forecasts produced by the studied DL models.
Figure 14. Scatter plot of actual wind speed versus forecasted wind speed.
Figure 15. Loss function for each DL algorithm over the validation and training datasets.
Figure 16. Pearson correlation matrix between GHI and time-based signals.
Figure 17. Comparison of the predicted and measured GHI time series.
Figure 18. Training and validation loss curves of the DL models with five features.
Figure 19. Comparison between predicted and measured GHI with five features.
Figure 20. Zoomed-in comparison of predicted and measured GHI.
Figure 21. Scatter plot comparison of forecasted and measured GHI values.
Table 1. Practical applications for wind speed and GHI forecasting at different horizons.

| | Long-Term | Day-Ahead | Intra-Day | Intra-Hour |
|---|---|---|---|---|
| Duration | Months to years | A day up to a week | Hours ahead | Minutes to an hour |
| Application | Grid planning | Maintenance scheduling | Unit commitment | Battery storage management |
Table 2. Wind speed prediction methods.

| Method | Examples | Benefits | Drawbacks |
|---|---|---|---|
| Physical | Numerical weather prediction [27] | Can provide long-term wind power predictions that are beneficial for wind power assessment. | Complex mathematical models that require significant computational time, making them unsuitable for short-term wind speed prediction. |
| Statistical | Kalman filters [28], copula theory [29], auto-regressive integrated moving average (ARIMA) [30], and seasonal ARIMA [31] | Robust models capable of predicting wind speed at different time horizons. | Relatively slow and mostly linear; hence, they cannot capture the non-stationary features of wind speed. |
| Intelligent | Artificial neural networks (ANN) [32], GRU [33], BiLSTM [34], and CNN [35] | Can handle non-linearities in the data; hence, they tend to exhibit superior performance. | Require tuning and feature selection for optimal performance. |
| Hybrid | Physical and intelligent [36]; statistical and intelligent [37]; different intelligent algorithms [38] | Combine the advantages of the individual models to achieve globally optimal forecasting performance. | May lead to complex systems that are computationally expensive. |
Table 3. Comparison of DL-based forecasting studies with the proposed method.

| Study | Task | Inputs | Horizon | Strength | Limitation |
|---|---|---|---|---|---|
| Development and trending of DL methods for wind power forecasting [42] | Wind power (review) | SCADA + meteorology | Multi-horizon | Comprehensive DL survey | Assumes data-rich sensing |
| A wind power forecasting model based on data-driven and attention mechanism [43] | Wind power | Multivariate + attention | Short-term | Attention-enhanced accuracy | Meteorology-dependent |
| Comparative analysis of DL techniques for global horizontal irradiance forecasting [44] | GHI | Weather + univariate | Hourly | Structured DL benchmarking | Limited temporal resolution; weather-based |
| Ultra-short-term global horizontal irradiance forecasting based on a GRU-TCN hybrid [45] | GHI | Irradiance + external data | Ultra-short | Hybrid DL design | Not sensor-sparse focused |
| Forecasting wind speed over multiple horizons: superiority of DL methods [46] | Wind speed | Statistical/ML/DL inputs | Short–mid | Broad horizon comparison | No timestamp-only validation |
| Shape-aware Seq2Seq model for accurate multistep wind speed forecasting [47] | Wind speed | Denoised wind + advanced DL | Multi-step | Architecture innovation | Input minimalism not addressed |
| Experimental review and recent advances in DL techniques for solar irradiance forecasting [48] | Solar irradiance (review) | DL + multi-source data | Review | Summarizes the latest DL advances | No validated timestamp-only baseline |
| Proposed method | Wind speed + GHI | Engineered timestamp cyclical features (+baselines) | 12 h / 1 d / 3 d @ 15 min | Sensor-light, scalable framework | — |
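The "engineered timestamp cyclical features" in the proposed row are commonly implemented as sine/cosine encodings of hour-of-day and day-of-year, so that the model sees midnight and 23:45, or 31 December and 1 January, as adjacent points. A minimal sketch of such an encoding (the function and feature names are illustrative assumptions, not the authors' exact pipeline):

```python
import math
from datetime import datetime

def cyclical_time_features(ts: datetime) -> dict:
    """Encode a timestamp as smooth cyclical signals for daily and yearly periods."""
    hour_frac = (ts.hour * 60 + ts.minute) / (24 * 60)   # fraction of the day elapsed
    day_frac = ts.timetuple().tm_yday / 365.25           # approximate fraction of the year
    return {
        "hour_sin": math.sin(2 * math.pi * hour_frac),
        "hour_cos": math.cos(2 * math.pi * hour_frac),
        "doy_sin": math.sin(2 * math.pi * day_frac),
        "doy_cos": math.cos(2 * math.pi * day_frac),
    }

# At solar noon the daily phase is halfway: hour_sin ~ 0, hour_cos = -1
feats = cyclical_time_features(datetime(2024, 6, 21, 12, 0))
```

Because each (sin, cos) pair lies on the unit circle, the encoding never introduces the artificial discontinuity that a raw hour index (23 → 0) would.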
Table 4. Hyperparameters of the DL algorithms (AF: activation function; LF: loss function).

| Model | Layer | Units | AF | LF | Optimizer |
|---|---|---|---|---|---|
| 1 | RNN | 128 | tanh | MSE | Adam, learning rate 0.0001 |
| | RNN | 64 | tanh | | |
| | Dense | 8 | relu | | |
| | Dense | 1 | linear | | |
| 2 | GRU | 100 | tanh | MSE | Adam, learning rate 0.0001 |
| | GRU | 64 | tanh | | |
| | Dense | 8 | relu | | |
| | Dense | 1 | linear | | |
| 3 | LSTM | 128 | tanh | MSE | Adam, learning rate 0.0001 |
| | LSTM | 64 | tanh | | |
| | Dense | 8 | relu | | |
| | Dense | 1 | linear | | |
| 4 | BiLSTM | 128 | tanh | MSE | Adam, learning rate 0.0001 |
| | BiLSTM | 64 | tanh | | |
| | Dense | 8 | relu | | |
| 5 | Conv1D | Filters = 64, kernel size = 2 | – | MSE | Adam, learning rate 0.0001 |
| | Flatten | – | – | | |
| | Dense | 8 | relu | | |
| | Dense | 1 | linear | | |
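The layer widths in Table 4 fix each network's trainable-parameter budget. As a sketch, the standard LSTM parameter count (4 gates, each with input weights, recurrent weights, and biases) applied to Model 3 (LSTM 128 → LSTM 64 → Dense 8 → Dense 1); the choice of 5 input features is an assumption for illustration, not a figure from Table 4:

```python
def lstm_params(n_in: int, units: int) -> int:
    """Trainable parameters of one LSTM layer: 4 gates x (input + recurrent + bias)."""
    return 4 * (units * n_in + units * units + units)

def dense_params(n_in: int, units: int) -> int:
    """Fully connected layer: weight matrix plus bias vector."""
    return units * n_in + units

def model3_param_count(n_features: int) -> int:
    """Parameter count for Table 4's Model 3: LSTM(128) -> LSTM(64) -> Dense(8) -> Dense(1)."""
    total = lstm_params(n_features, 128)   # first LSTM sees the raw feature vector
    total += lstm_params(128, 64)          # second LSTM sees the first layer's output
    total += dense_params(64, 8)
    total += dense_params(8, 1)
    return total

print(model3_param_count(5))  # 118545 trainable parameters for 5 input features
```

This kind of count is a quick sanity check that the timestamp-only models stay lightweight enough for data-limited urban deployments.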
Table 5. Evaluation metrics for 3 d-ahead wind speed prediction.

| Input | Model | MAPE | MSE | RMSE | R² |
|---|---|---|---|---|---|
| Wind speed | RNN | 1.28 | 0.00500 | 0.071 | 0.9986 |
| | GRU | 1.32 | 0.00513 | 0.072 | 0.9985 |
| | LSTM | 1.28 | 0.00515 | 0.072 | 0.9985 |
| | BiLSTM | 1.32 | 0.00477 | 0.069 | 0.9986 |
| | CNN | 1.29 | 0.00476 | 0.069 | 0.9986 |
| Wind speed and time signals | RNN | 1.27 | 0.00467 | 0.068 | 0.9987 |
| | GRU | 1.25 | 0.00469 | 0.068 | 0.9987 |
| | LSTM | 1.25 | 0.00505 | 0.071 | 0.9986 |
| | BiLSTM | 1.26 | 0.00448 | 0.067 | 0.9987 |
| | CNN | 1.31 | 0.00481 | 0.069 | 0.9986 |
| Wind speed and meteorological | RNN | 1.30 | 0.00505 | 0.071 | 0.9986 |
| | GRU | 1.24 | 0.00442 | 0.067 | 0.9987 |
| | LSTM | 1.33 | 0.00553 | 0.074 | 0.9984 |
| | BiLSTM | 1.30 | 0.00534 | 0.073 | 0.9985 |
| | CNN | 1.25 | 0.00490 | 0.070 | 0.9986 |
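The metrics reported in Table 5 (MAPE, MSE, RMSE, R²) follow their standard definitions. A minimal pure-Python sketch (function names are mine; this is the textbook formulation, not the authors' code):

```python
import math

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(y_true) * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred))

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(mse(y_true, y_pred))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Note that MAPE is undefined when the target is zero, which matters for GHI at night; the GHI tables below accordingly report only RMSE and R².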
Table 6. Evaluation metrics for 1 d-ahead wind speed prediction.

| Input | Model | MAPE | MSE | RMSE | R² |
|---|---|---|---|---|---|
| Wind speed | RNN | 1.45 | 0.00398 | 0.063 | 0.9958 |
| | GRU | 1.49 | 0.00422 | 0.065 | 0.9956 |
| | LSTM | 1.50 | 0.00400 | 0.063 | 0.9958 |
| | BiLSTM | 1.63 | 0.00428 | 0.065 | 0.9955 |
| | CNN | 1.48 | 0.00410 | 0.064 | 0.9957 |
| Wind speed and time signals | RNN | 1.48 | 0.00405 | 0.064 | 0.9957 |
| | GRU | 1.46 | 0.00385 | 0.062 | 0.9960 |
| | LSTM | 1.47 | 0.00371 | 0.061 | 0.9961 |
| | BiLSTM | 1.46 | 0.00387 | 0.062 | 0.9959 |
| | CNN | 1.42 | 0.00368 | 0.061 | 0.9961 |
| Wind speed and meteorological | RNN | 1.60 | 0.00408 | 0.064 | 0.9957 |
| | GRU | 1.49 | 0.00420 | 0.065 | 0.9956 |
| | LSTM | 1.43 | 0.00397 | 0.063 | 0.9958 |
| | BiLSTM | 1.55 | 0.00427 | 0.065 | 0.9955 |
| | CNN | 1.52 | 0.00434 | 0.066 | 0.9954 |
Table 7. Evaluation metrics for 12 h-ahead wind speed prediction.

| Input | Model | MAPE | MSE | RMSE | R² |
|---|---|---|---|---|---|
| Wind speed | RNN | 1.57 | 0.00257 | 0.051 | 0.9401 |
| | GRU | 1.61 | 0.00283 | 0.053 | 0.9339 |
| | LSTM | 1.67 | 0.00283 | 0.053 | 0.9338 |
| | BiLSTM | 1.91 | 0.00315 | 0.056 | 0.9264 |
| | CNN | 1.62 | 0.00280 | 0.053 | 0.9346 |
| Wind speed and time signals | RNN | 1.53 | 0.00236 | 0.049 | 0.9448 |
| | GRU | 1.58 | 0.00275 | 0.052 | 0.9358 |
| | LSTM | 1.51 | 0.00266 | 0.052 | 0.9379 |
| | BiLSTM | 1.64 | 0.00254 | 0.050 | 0.9408 |
| | CNN | 1.60 | 0.00293 | 0.054 | 0.9316 |
| Wind speed and meteorological | RNN | 1.47 | 0.00227 | 0.048 | 0.9470 |
| | GRU | 1.51 | 0.00233 | 0.048 | 0.9455 |
| | LSTM | 1.63 | 0.00241 | 0.049 | 0.9437 |
| | BiLSTM | 1.43 | 0.00223 | 0.047 | 0.9478 |
| | CNN | 1.42 | 0.00218 | 0.047 | 0.9490 |
Table 8. RMSE values over a 12-month span of wind speed prediction.

| Input | Model | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Wind speed | RNN | 96 | 78 | 68 | 75 | 78 | 67 | 60 | 80 | 75 | 72 | 55 | 67 |
| | GRU | 91 | 74 | 69 | 74 | 81 | 68 | 63 | 81 | 74 | 75 | 52 | 58 |
| | LSTM | 87 | 77 | 69 | 75 | 79 | 66 | 62 | 82 | 72 | 74 | 52 | 57 |
| | BiLSTM | 91 | 69 | 70 | 77 | 80 | 66 | 64 | 83 | 74 | 73 | 56 | 59 |
| | CNN | 92 | 70 | 69 | 75 | 73 | 66 | 61 | 82 | 74 | 78 | 55 | 62 |
| Wind speed and time signals | RNN | 92 | 68 | 66 | 75 | 72 | 61 | 59 | 76 | 68 | 69 | 51 | 55 |
| | GRU | 83 | 67 | 67 | 73 | 73 | 62 | 60 | 77 | 70 | 69 | 50 | 55 |
| | LSTM | 90 | 75 | 67 | 72 | 74 | 62 | 59 | 79 | 69 | 68 | 50 | 56 |
| | BiLSTM | 82 | 67 | 67 | 75 | 73 | 65 | 61 | 74 | 69 | 69 | 52 | 54 |
| | CNN | 85 | 66 | 67 | 71 | 72 | 65 | 58 | 81 | 68 | 71 | 52 | 52 |
| Wind speed and meteorological variables | RNN | 165 | 66 | 66 | 74 | 78 | 62 | 61 | 74 | 68 | 71 | 51 | 62 |
| | GRU | 109 | 71 | 68 | 74 | 77 | 62 | 58 | 82 | 68 | 74 | 51 | 59 |
| | LSTM | 126 | 84 | 67 | 73 | 77 | 68 | 63 | 75 | 73 | 74 | 52 | 57 |
| | BiLSTM | 113 | 83 | 69 | 74 | 85 | 61 | 60 | 86 | 74 | 71 | 55 | 61 |
| | CNN | 90 | 74 | 68 | 73 | 76 | 62 | 59 | 75 | 74 | 75 | 51 | 59 |

All values have been multiplied by 10³.
Table 9. Comparison of the proposed model for wind speed forecasting.

| Author | Input | Training Size | Forecast Horizon | Algorithm | R² |
|---|---|---|---|---|---|
| Ref. [60] | Wind speed only | 117,643 | 10 min | BiLSTM | 0.9894 |
| | | 89,322 | | | 0.9799 |
| Ref. [61] | Wind speed only | 42,048 | 10 min | BiLSTM | 0.9830 |
| Ref. [62] | Wind speed only | 800 | 30 min | GRU | 0.9391 |
| | | | | CNN | 0.9378 |
| Ref. [63] | Wind speed only | 6132 | 1–5 h | CNN-DCA | 0.9441 |
| Ref. [64] | Wind speed only | 12,264 | – | 1DCNN | 0.9630 |
| Proposed | Wind speed and time signals | 35,040 | 15 min–3 d | Various DL | 0.9987 |
Table 10. 3 d-ahead prediction of GHI.

| Input | Model | RMSE | R² |
|---|---|---|---|
| GHI | RNN | 17.67 | 0.9778 |
| | GRU | 18.11 | 0.9767 |
| | LSTM | 17.82 | 0.9774 |
| | BiLSTM | 18.36 | 0.9760 |
| | CNN | 17.77 | 0.9776 |
| GHI and time signals | RNN | 15.53 | 0.9829 |
| | GRU | 14.91 | 0.9842 |
| | LSTM | 15.39 | 0.9832 |
| | BiLSTM | 15.43 | 0.9828 |
| | CNN | 15.51 | 0.9829 |
| GHI and meteorological | RNN | 18.99 | 0.9744 |
| | GRU | 15.10 | 0.9838 |
| | LSTM | 14.60 | 0.9849 |
| | BiLSTM | 15.77 | 0.9823 |
| | CNN | 15.04 | 0.9839 |
Table 11. Day-ahead prediction of GHI.

| Input | Model | RMSE | R² |
|---|---|---|---|
| GHI | RNN | 8.579 | 0.9975 |
| | GRU | 5.300 | 0.9991 |
| | LSTM | 6.334 | 0.9987 |
| | BiLSTM | 10.25 | 0.9965 |
| | CNN | 9.025 | 0.9973 |
| Time signals | RNN | 4.619 | 0.9993 |
| | GRU | 5.300 | 0.9991 |
| | LSTM | 4.530 | 0.9993 |
| | BiLSTM | 5.126 | 0.9991 |
| | CNN | 4.728 | 0.9993 |
| Meteorological | RNN | 8.124 | 0.9978 |
| | GRU | 5.125 | 0.9991 |
| | LSTM | 6.547 | 0.9986 |
| | BiLSTM | 9.744 | 0.9968 |
| | CNN | 4.675 | 0.9993 |
Table 12. 12 h-ahead prediction of GHI.

| Input | Model | RMSE | R² |
|---|---|---|---|
| GHI | RNN | 9.737 | 0.9972 |
| | GRU | 7.048 | 0.9985 |
| | LSTM | 6.915 | 0.9986 |
| | BiLSTM | 11.58 | 0.9961 |
| | CNN | 10.15 | 0.9970 |
| Time signals | RNN | 4.584 | 0.9994 |
| | GRU | 5.493 | 0.9991 |
| | LSTM | 4.474 | 0.9994 |
| | BiLSTM | 5.213 | 0.9992 |
| | CNN | 4.620 | 0.9994 |
| Meteorological | RNN | 9.251 | 0.9975 |
| | GRU | 5.812 | 0.9990 |
| | LSTM | 7.441 | 0.9984 |
| | BiLSTM | 11.14 | 0.9964 |
| | CNN | 5.214 | 0.9992 |
Table 13. Evaluation metrics under different weather conditions for GHI.

| Model | Inputs | Weather | RMSE | R² | Weather | RMSE | R² |
|---|---|---|---|---|---|---|---|
| RNN | Univariate | Cloudy | 26.115 | 65.43% | Sunny | 42.339 | 98.50% |
| | Time signals | | 23.619 | 71.72% | | 37.395 | 98.83% |
| | Meteorological | | 22.382 | 74.61% | | 40.090 | 98.65% |
| GRU | Univariate | | 25.579 | 66.84% | | 43.508 | 98.41% |
| | Time signals | | 25.929 | 65.92% | | 35.801 | 98.93% |
| | Meteorological | | 25.128 | 68.00% | | 38.071 | 98.78% |
| LSTM | Univariate | | 25.276 | 67.62% | | 44.161 | 98.36% |
| | Time signals | | 25.229 | 67.74% | | 34.498 | 98.88% |
| | Meteorological | | 25.206 | 67.80% | | 37.116 | 98.85% |
| BiLSTM | Univariate | | 27.914 | 60.51% | | 43.446 | 98.42% |
| | Time signals | | 25.027 | 68.25% | | 36.366 | 98.89% |
| | Meteorological | | 21.577 | 76.40% | | 38.183 | 98.78% |
| CNN | Univariate | | 24.295 | 70.08% | | 42.760 | 98.47% |
| | Time signals | | 23.455 | 72.12% | | 36.130 | 98.91% |
| | Meteorological | | 22.550 | 74.22% | | 38.367 | 98.77% |
| RNN | Univariate | Partially Cloudy | 29.234 | 91.07% | Clear | 7.986 | 99.89% |
| | Time signals | | 26.059 | 92.91% | | 5.491 | 99.95% |
| | Meteorological | | 24.779 | 93.59% | | 9.782 | 99.83% |
| GRU | Univariate | | 31.295 | 89.77% | | 8.330 | 99.88% |
| | Time signals | | 24.791 | 93.58% | | 5.818 | 99.94% |
| | Meteorological | | 24.286 | 93.84% | | 5.939 | 99.94% |
| LSTM | Univariate | | 29.877 | 90.68% | | 6.925 | 99.88% |
| | Time signals | | 24.940 | 93.50% | | 5.927 | 99.94% |
| | Meteorological | | 24.148 | 93.91% | | 8.377 | 99.92% |
| BiLSTM | Univariate | | 29.722 | 90.77% | | 8.690 | 99.87% |
| | Time signals | | 24.537 | 93.71% | | 7.140 | 99.91% |
| | Meteorological | | 25.831 | 93.03% | | 8.632 | 99.87% |
| CNN | Univariate | | 28.539 | 91.49% | | 6.091 | 99.93% |
| | Time signals | | 25.695 | 93.10% | | 5.226 | 99.95% |
| | Meteorological | | 25.611 | 93.15% | | 5.630 | 99.94% |
Table 14. Comparison of the proposed model with recent literature across the full test dataset.

| Ref. | Algorithm | Input | Time Horizon | R² |
|---|---|---|---|---|
| Ref. [66] | LSTM, GRU, BiGRU, BiLSTM, CNN, CNN-BiLSTM | GHI | 1 d | 0.9550 |
| Ref. [67] | WT-BiLSTM | GHI | 1 h | 0.9400 |
| Ref. [68] | LSTM-CNN | GHI and meteorological variables | 1 h | 0.9701 |
| Ref. [69] | ANN, CNN, LSTM | GHI and meteorological variables | Up to 24 h | 0.9870 |
| Ref. [70] | Stacked LSTM; stacked BiLSTM | GHI; GHI and meteorological data | 1 h | 0.99; 0.99 |
| Proposed | RNN, GRU, LSTM, BiLSTM, CNN | GHI and time signals | 15 min | 0.99 |

Share and Cite

Shafiullah, M.; Katranji, A.R.; Hassan, M.; Rahman, M.M.; Shezan, S.A. Advanced Multivariate Deep Learning Methodology for Forecasting Wind Speed and Solar Irradiation. Smart Cities 2026, 9, 59. https://doi.org/10.3390/smartcities9040059
