Article

Photovoltaic Farm Power Generation Forecast Using Photovoltaic Battery Model with Machine Learning Capabilities

by
Agboola Benjamin Alao
1,*,
Olatunji Matthew Adeyanju
2,
Manohar Chamana
3,
Stephen Bayne
1 and
Argenis Bilbao
1
1
Electrical and Computer Engineering Department, Texas Tech University, Lubbock, TX 79409, USA
2
National Wind Institute, Texas Tech University, Lubbock, TX 79409, USA
3
Renewable Energy Program, Texas Tech University, Lubbock, TX 79409, USA
*
Author to whom correspondence should be addressed.
Solar 2025, 5(2), 26; https://doi.org/10.3390/solar5020026
Submission received: 9 May 2025 / Revised: 29 May 2025 / Accepted: 4 June 2025 / Published: 6 June 2025

Abstract:
This study presents a machine learning-based photovoltaic (PV) model for energy management and planning in a microgrid with a battery system. Microgrids integrating PV face challenges such as solar irradiance variability, temperature fluctuations, and intermittent generation, which impact grid stability and battery storage efficiency. Existing models often lack predictive accuracy, computational efficiency, and adaptability to changing environmental conditions. To address these limitations, the proposed model integrates an Adaptive Neuro-Fuzzy Inference System (ANFIS) with a multi-input multi-output (MIMO) prediction algorithm, utilizing historical temperature and irradiance data for accurate and efficient forecasting. Simulation results demonstrate high prediction accuracies of 95.10% for temperature and 98.06% for irradiance on dataset-1, significantly reducing computational demands and outperforming conventional prediction techniques. The model further uses ANFIS outputs to estimate PV generation and optimize battery state of charge (SoC), achieving a consistent minimal SoC reduction of about 0.88% (from 80% to 79.12%) across four different battery types over a seven-day charge–discharge cycle, providing up to 11 h of battery autonomy under specified load conditions. Further validation with four other distinct datasets confirms the ANFIS network's robustness and superior ability to handle complex data variations with consistent accuracy, making it a valuable tool for improving microgrid stability, energy storage utilization, and overall system reliability. Overall, ANFIS outperforms other models (such as curve fitting, ANN, Stacked-LSTM, RF, XGBoost, GBoostM, Ensemble, LGBoost, CatBoost, CNN-LSTM, and MOSMA-SVM) with an average accuracy of 98.65% and an RMSE of 0.45 on temperature predictions, while maintaining an average accuracy of 98.18% and an RMSE of 31.98 on irradiance predictions across all five datasets.
The ANFIS model also achieved the lowest average computational time, 17.99 s across all datasets, compared to the other models.

1. Introduction

Photovoltaic (PV) forecasting is crucial in modern power system planning and management, enabling grid operators to integrate renewable energy sources effectively. By predicting solar power generation, it supports grid stability, optimizes energy dispatch, and enhances market participation. Accurate PV forecasting is particularly important for managing the variability of solar energy, which is influenced by weather conditions, shading, and operational factors. The increasing integration of photovoltaic (PV) systems into energy grids has necessitated the development of accurate forecasting models to ensure grid stability, efficient energy management, and enhanced market participation. Advancements in forecasting methodologies, including statistical models, machine learning, and hybrid approaches, have significantly improved prediction accuracy and reliability, addressing challenges such as nonlinear dependencies, temporal correlations, and data variability. These innovations help mitigate the challenges of integrating intermittent PV generation into power grids, ensuring efficient resource allocation, reducing operational costs, and enhancing renewable energy penetration. As the adoption of PV systems continues to grow, forecasting remains essential for sustainable and reliable energy system operations.

1.1. State of the Art

Several research studies have addressed various challenges in PV forecasting. For instance, a CNN-based model was developed to predict the Maximum Power Point (MPP) voltage in pavement PV arrays under complex shading conditions, achieving minimal prediction errors (MAE: 2.54, MSE: 11.13) and outperforming ResNet and MLP in speed and accuracy [1]. Another study proposed a hybrid architecture integrating Convolutional Graph Neural Networks (ConvGNNs) and LSTM networks to model spatial–temporal dependencies in PV systems, outperforming methods like ARIMA, SVM, and CNN+LSTM [2]. A probabilistic ensemble method (PEM) improved PV forecasting under cloudy conditions by excluding statistical outliers, reducing normalized RMSE by up to 15.12% [3]. To address inconsistencies in multi-horizon forecasts, a seamless probabilistic model, the Analog Ensemble (AnEn), was proposed, delivering accurate predictions for timeframes from 5 min to 36 h [4]. Furthermore, an adaptive ML framework was introduced for behind-the-meter PV disaggregation, leveraging models like Random Forest (RF), Decision Tree (DT), and Multilayer Perceptron (MLP) to achieve R-squared values of up to 0.98 [5].
Hybrid approaches have also shown promise. A Boot-LSTM-ICSO-PP model, integrating Improved Chicken Swarm Optimization (ICSO) and Prey–Predator mechanisms, achieved over 60% error reductions in probabilistic forecasting [6]. A combination of numerical weather predictions (NWP) with satellite data for nowcasting demonstrated reductions in MAE and RMSE in diverse geographic contexts [7]. Feature selection techniques like PCA and XGBoost reduced RMSE by 30% in mid-term PV forecasting [8]. A stacking ensemble model combining GRNN, ELM, ElmanNN, and LSTM provided robust performance across weather scenarios [9]. Additionally, a scalable ANN-based method for distributed regional-scale PV forecasting reduced RMSE by 29% using non-irradiance meteorological data [10]. Further advances include comparisons of deep learning architectures, with BiLSTM achieving a 96% correlation coefficient for GHI forecasting in arid regions [11].
Ensemble tree-based methods effectively predicted SPV power in the Qassim region, achieving RMSE values as low as 19.66 W [12]. A feature-selective ensemble framework combining ARIMA and LSTM models reduced RMSE by up to 64% for long-term regional PV generation [13]. ConvLSTM demonstrated superiority in hourly solar radiation forecasting in regions with variable weather, achieving an nRMSE of 1.51% [14]. In Turkey, a comparison between Random Forest and LSTM showed the latter’s superior predictive performance for nonlinear relationships in solar datasets [15]. Efforts to enhance model transparency have focused on explainable AI (XAI) tools, such as SHAP and LIME, which improved RMSE and user trust for Random Forest models [16]. A hybrid model integrating SDS, WPT, and IBBO demonstrated ultra-short-term forecasting accuracy, achieving RMSE as low as 0.0693 kW [17]. LSTM-based models optimized with advanced preprocessing techniques improved short-term PV forecasting metrics like MAE and RMSE [18]. To address harmonics forecasting in PV-wind hybrid systems, a hybrid ANN-ANFIS model achieved significant accuracy gains across various scenarios [19]. A hybrid MOSMA-SVM model further enhanced PV forecasting, achieving MAPE reductions of up to 27.13% [20].
Recent studies have also explored multi-source data integration. Combining remote sensing techniques with NWP data in CNN-LSTM ensembles achieved MAE reductions of 33.7% [21]. A review of ML methods found Random Forest to be the most robust for PV forecasting, reducing forecasting errors by 37.33% [22]. Advanced architectures like GSTANN, which integrate graph convolutional and attention mechanisms, demonstrated MAE reductions of up to 20% in very short-term forecasts [23]. Enhancements to LSTM models with two-stage attention mechanisms achieved RMSE as low as 0.0638 [24]. Hybrid MOSMA models for uncertainty analysis further improved deterministic accuracy in PV forecasts [25].
Temporal convolutional networks optimized for PV forecasts using CEEMDAN achieved RMSE values as low as 1.206 [26]. CNN-LSTM hybrid models validated for multiscale PV systems demonstrated superior scalability [27]. A domain-adaptive learning framework enabled real-time forecasting in dynamic climates, achieving significant MAE reductions without requiring test labels [26]. Forecasting applications extended to integrated PV systems focused on operational reliability [25]. Lastly, integrating synthetic weather data with LSTM models improved short-term forecasts by 33%, outperforming traditional methods [28].

1.2. Practical Decision-Making and Operational Efficiency Needs

Despite the achievements of these previous studies, there are still some significant gaps to fill. For example, many of these existing studies focus solely on PV power generation forecasting without integrating battery management systems, which are essential for energy storage and grid reliability. For instance, refs. [6,26] proposed advanced hybrid models for PV power prediction, but these do not include mechanisms for monitoring or optimizing battery performance. Ref. [25] explored battery-integrated PV systems but lacked robust forecasting frameworks that combine PV generation predictions with battery energy management, leaving a gap in comprehensive energy management systems. Several methods, including LSTM [18] and CNN-LSTM hybrids [27], perform well under relatively stable conditions but often fail to maintain accuracy under highly variable or extreme weather conditions. For example, refs. [9,19] highlighted the limitations of hybrid models under complex meteorological scenarios, where prediction errors increase due to overfitting or limited adaptability. This underscores the need for models capable of handling such variability without compromising accuracy.
Furthermore, the computational intensity of advanced architectures like ConvGNNs [2] and MOSMA-SVM hybrids [20] presents significant challenges to their scalability and real-time applications. These models often require considerable computational resources, making them impractical for large-scale or real-time energy management systems. This limits their utility in scenarios where fast and efficient forecasting is critical. Many existing models focus on forecasting single variables, which restricts their ability to predict multiple interdependent factors simultaneously. While studies such as [24,25] have achieved high prediction accuracy, they lack the multivariate capabilities necessary for simultaneously forecasting temperature, irradiance, and PV power output. This gap reduces the efficiency and practicality of these models in real-world applications. Some studies, such as [3,26], have integrated uncertainty quantification into their models, but these approaches often rely on ensemble or statistical methods that significantly increase computational demands. Furthermore, their robustness under dynamic weather conditions remains questionable, as they often fail to adapt to unexpected variations effectively.
Moreover, deep learning models, including those in [11,23], are frequently labeled as “black-box” systems due to their lack of interpretability, which hampers operator trust and understanding. Although efforts such as [16] utilized XAI tools to improve interpretability, these tools are external and not inherently part of the forecasting models, creating additional complexity for operational use. Many studies, including [7,28], focused on improving forecasting accuracy but failed to address downstream applications such as load scheduling and resource coordination. This limitation reduces the applicability of these models in integrated energy systems, where such capabilities are essential for practical decision-making and operational efficiency.

2. Proposed ANFIS-Based PV Prediction Model

The proposed study integrates PV power forecasting with battery energy management, providing a unified approach to energy utilization and grid stability. Unlike models such as [6,26], which separate forecasting from battery management, this study ensures robust SoC stability and achieves long hours of battery autonomy under specified load conditions. This integration enables seamless energy planning and resource allocation. The ANFIS-based model demonstrates high accuracy in predicting PV power under diverse meteorological conditions, achieving better accuracy for temperature and irradiance. This adaptability addresses the limitations of hybrid models like [9,19], which struggle with fluctuating weather patterns. Additionally, the model surpasses the performance of single-variable frameworks such as LSTM [18] and CNN-LSTM [27]. By employing a MIMO algorithm, the proposed study enables simultaneous prediction of temperature, irradiance, and PV power output, enhancing prediction efficiency and utility. This capability improves upon methods like [24,25], which lack multivariate prediction capabilities and focus on single-variable outputs. The integration of MIMO with ANFIS provides a comprehensive forecasting approach. The computational efficiency of the ANFIS model makes it superior to architectures like ConvGNNs [2] and MOSMA-SVM [20], which are computationally intensive. The lightweight nature of ANFIS ensures scalability and real-time applicability, making it practical for large-scale energy management systems and real-world deployments.
Furthermore, the inherent explainability of ANFIS models, which combine neural networks with fuzzy logic, offers transparent and interpretable predictions without requiring external XAI tools like those used in [16,23]. For example, CNN-LSTM models often require post hoc tools like SHAP or LIME to approximate interpretability. In contrast, each rule in the ANFIS model corresponds to an intuitive "if–then" statement derived from the input data, making the model easier to interpret, validate, and adjust. This transparency builds trust in predictions and helps operators understand how variables such as temperature and irradiance influence PV generation and battery performance. This feature significantly enhances operator trust and usability, and it provides an effective balance of high predictive accuracy and inherent interpretability, making the model well suited for energy applications that demand both precision and transparency.
This study also focuses on battery integration and load scheduling, providing direct applications in energy planning, resource coordination, and operational efficiency. Unlike [7,28], which concentrated purely on forecasting, the proposed model extends its functionality to downstream energy management tasks, offering a holistic solution for integrated energy systems. Finally, the robustness of the ANFIS model under complex data variations and its ability to maintain high accuracy across diverse datasets make it ideal for real-world scenarios. Its performance surpasses that of [3,26], ensuring reliability under dynamic and extreme weather conditions without significant computational overhead. This work is distinguished from the existing models in Table 1.
The major contributions of this paper can be summarized as follows.
  • Integrating PV forecasting with battery SoC management, maintaining SoC stability (0.88% drop over 7 days) and providing 11 h of battery autonomy, supporting energy planning and load scheduling.
  • High prediction accuracy with ANFIS, reaching an average of 98.65% for temperature and 98.18% for irradiance, outperforming traditional methods like ANN under complex meteorological conditions.
  • Employing a MIMO algorithm to simultaneously predict temperature, irradiance, and PV power, improving forecasting efficiency over single-variable methods.
  • Computational efficiency with ANFIS, surpassing resource-intensive methods like ConvGNNs, ensuring scalability and real-time applicability for energy systems.
  • Combining fuzzy logic and neural networks for interpretable predictions, removing the need for external tools to explain model outputs, and enhancing practical utility in energy management.

3. Materials and Methods

This research utilizes a hybrid approach integrating machine learning (ML) techniques to forecast the power generation of a solar photovoltaic (PV) farm. The proposed ANFIS-based prediction and energy management framework is presented in Figure 1.
As shown in Figure 1, the research framework is divided into two major phases, namely the (i) prediction phase and (ii) energy management phase. The first phase of the research framework represents the machine learning application, which predicts irradiance and temperature for the PV farm using the ANFIS algorithm, with these variables serving as inputs for the energy management phase. In Phase 1, historical data undergoes conditioning, timestamp marking, extrapolation for missing values, FIS membership selection, and ANFIS training/prediction, resulting in predicted temperature and irradiance, which serve as inputs to Phase 2, the solar farm’s equivalent electrical model. Phase 2, modeled in Simulink, includes several key components: (1) Signal Builders, which store predicted irradiance and temperature data for the PV-array model; (2) a Boost Converter Circuit, consisting of an IGBT, diode, and filtering capacitor, where power flow is controlled via a boost signal applied to the IGBT gate and optimized using PWM signals from the MPPT algorithm; (3) a PWM Generator, which produces boost signals for PV and battery-side controls based on a duty cycle computed by the MATLAB Version 24.2 PV control function to enable precise energy management; and (4) a Bi-Directional Converter for the battery, which ensures seamless charging and discharging, allowing the battery to store excess PV energy and supply power when needed through switching signals for both positive and negative sides.

3.1. Phase 1: ANFIS-Based PV Prediction System

3.1.1. Mathematical Modeling of MIMO ANFIS Network

The mathematical modeling of the MIMO ANFIS is subdivided into five layers, as shown in Figure 2.
Let x_1 and x_2 be the irradiance and temperature historical data inputs to the ANFIS network, respectively, and let y_1 and y_2 be its predicted irradiance and temperature outputs, respectively (a MIMO configuration). In layer 1, input fuzzification is performed: each input passes through two membership functions. Two Adaptive Neuro-Fuzzy Inference System (ANFIS) membership functions, namely the Gaussian and Generalized Bell membership functions, are defined for the input variables to enhance the prediction process, as shown in Figure 3b.
The Gaussian Membership Function (GaussianMF) is defined as given in Equation (1).
μ_i(x) = exp( −(x − c_i)² / (2σ_i²) )
where c_i is the center of the Gaussian curve, and σ_i is the standard deviation of the function (controlling its width). The GaussianMF can be defined for variables x_1 and x_2 as shown in Equations (2) and (3), respectively.
μ_{1,1}(x_1) = exp( −(x_1 − c_{1,1})² / (2σ_{1,1}²) );  μ_{1,2}(x_1) = exp( −(x_1 − c_{1,2})² / (2σ_{1,2}²) )
μ_{2,1}(x_2) = exp( −(x_2 − c_{2,1})² / (2σ_{2,1}²) );  μ_{2,2}(x_2) = exp( −(x_2 − c_{2,2})² / (2σ_{2,2}²) )
The Generalized Bell Membership Function (GBellMF) can be defined as shown in Equation (4).
μ_i(x) = 1 / ( 1 + |(x − c_i)/a_i|^(2b_i) )
where a_i is the width of the membership function, b_i is the slope (controlling its smoothness), and c_i is the center of the Bell function. Similarly, for x_1 and x_2, the GBellMF is defined as shown in Equations (5) and (6), respectively.
μ_{1,1}(x_1) = 1 / ( 1 + |(x_1 − c_{1,1})/a_{1,1}|^(2b_{1,1}) );  μ_{1,2}(x_1) = 1 / ( 1 + |(x_1 − c_{1,2})/a_{1,2}|^(2b_{1,2}) )
μ_{2,1}(x_2) = 1 / ( 1 + |(x_2 − c_{2,1})/a_{2,1}|^(2b_{2,1}) );  μ_{2,2}(x_2) = 1 / ( 1 + |(x_2 − c_{2,2})/a_{2,2}|^(2b_{2,2}) )
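As an illustration, the two membership function families of Equations (1) and (4) can be sketched in a few lines of Python (the study itself implements the network in MATLAB; the parameter values below are hypothetical, chosen only to show how an irradiance sample is fuzzified):

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership function, Eq. (1): center c, width sigma."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def gbell_mf(x, a, b, c):
    """Generalized Bell membership function, Eq. (4): width a, slope b, center c."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2.0 * b))

# Degrees of membership for an irradiance sample x1 = 600 W/m^2
# in two illustrative fuzzy sets ("low" and "high" irradiance).
x1 = 600.0
mu_low = gaussian_mf(x1, c=200.0, sigma=150.0)
mu_high = gaussian_mf(x1, c=800.0, sigma=150.0)
```

Both functions return values in (0, 1] and peak at their centers, which is what makes them suitable antecedent shapes for the fuzzification layer.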
In layer 2, the rule base is defined. Each rule represents a combination of membership functions from x_1 and x_2; with two membership functions per input, there are 2 × 2 = 4 rules. The firing strength of each rule is computed using the fuzzy AND operation (i.e., multiplication) over one membership function from each input, as shown in Equation (7).
w_1 = μ_{1,1}(x_1) · μ_{2,1}(x_2);  w_2 = μ_{1,1}(x_1) · μ_{2,2}(x_2);  w_3 = μ_{1,2}(x_1) · μ_{2,1}(x_2);  w_4 = μ_{1,2}(x_1) · μ_{2,2}(x_2)
In layer 3, the firing strength is normalized using standard/classical normalization as shown in Equation (8).
w̄_i = w_i / Σ_{j=1}^{4} w_j
Standard/classical normalization is chosen in this study for its simplicity, widespread use in traditional Sugeno-type ANFIS, and computational efficiency: it reduces the complexity of the ANFIS model, helping minimize computational time and implementation cost, which is crucial in real-world scenarios involving large datasets, and the CPU can favorably handle the computation within a few seconds. Although it can be destabilized by very small or very large weights, this issue is mitigated here through proper data preprocessing to remove outliers. Alternative techniques include SoftMax-based normalization, min–max normalization, L1 normalization (sum to one), L2 normalization (unit length), logarithmic normalization, and mean normalization; the classical method offers the best balance between performance and computational feasibility for this research.
In layer 4, the rule consequences are derived. Each rule has a Sugeno-style output, which is typical of adaptive neural networks, as shown in Equation (9).
f_i = p_i·x_1 + q_i·x_2 + r_i
This is the consequent of a first-order Sugeno fuzzy model, where x_1 and x_2 are the input variables, p_i, q_i, and r_i are the consequent parameters of rule i, and f_i is the rule's output. The consequent parameters define the linear relationship between inputs and rule outputs and are learnable weights adjusted during training, typically via gradient descent or hybrid optimization; this research employs the latter, combining least squares estimation (LSE) for the consequent parameters with backpropagation for the antecedent parameters to minimize the error between predicted and actual outputs. Each fuzzy rule has its own set of p_i, q_i, and r_i, where p_i determines the contribution of x_1 to the rule's output, q_i determines the contribution of x_2, and r_i serves as a bias term adjusting the rule's baseline output. The final ANFIS output is then a weighted combination of these rule outputs, determined by the fuzzy rule activations.
In layer 5, the aggregation layer, the rule outputs are combined as shown in Equation (10).
y_n = Σ_{i=1}^{N} w̄_i · f_i
where n indexes the outputs, y_n is the final ANFIS output, Σ_{i=1}^{N}(·) denotes summation over all FIS rules, N is the number of fuzzy rules in the ANFIS model, w̄_i is the normalized firing strength of the ith rule, and f_i is the consequent function of rule i. The final predicted outputs y_1 and y_2 are obtained as shown in Equations (11) and (12), respectively.
y_1 = Σ_{i=1}^{4} w̄_i ( p_{i1}·x_1 + q_{i1}·x_2 + r_{i1} )
y_2 = Σ_{i=1}^{4} w̄_i ( p_{i2}·x_1 + q_{i2}·x_2 + r_{i2} )
where y_1 and y_2 are the final predicted outputs (irradiance and temperature), w̄_i is the estimated normalized firing strength/weight, p_{i1} and q_{i1} are the weights determining the contributions of x_1 and x_2, respectively, to the first output (y_1), p_{i2} and q_{i2} are the weights determining the contributions of x_1 and x_2, respectively, to the second output (y_2), and r_{i1} and r_{i2} are bias terms (offsets) adjusting the baseline of y_1 and y_2, respectively.
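Putting layers 1 through 5 together, a minimal forward pass of the MIMO Sugeno ANFIS can be sketched as follows (a hedged Python sketch using Gaussian premises only; all parameter values are placeholders of the kind that would be learned during training, not the trained values from this study):

```python
import numpy as np

def anfis_forward(x1, x2, prem, conseq):
    """One forward pass of a first-order Sugeno MIMO ANFIS (layers 1-5).

    prem   : dict of Gaussian premise parameters c[i][j], sigma[i][j]
             (input i, membership function j), as in Eqs. (2)-(3).
    conseq : array of shape (4, 2, 3) -- for each of the 4 rules and each
             of the 2 outputs, the consequent parameters (p, q, r) of Eq. (9).
    """
    gauss = lambda x, c, s: np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

    # Layer 1: fuzzification -- two membership degrees per input.
    mu1 = [gauss(x1, prem["c"][0][j], prem["sigma"][0][j]) for j in range(2)]
    mu2 = [gauss(x2, prem["c"][1][j], prem["sigma"][1][j]) for j in range(2)]

    # Layer 2: firing strengths -- fuzzy AND (product) over the 2 x 2 = 4 rules, Eq. (7).
    w = np.array([mu1[j] * mu2[k] for j in range(2) for k in range(2)])

    # Layer 3: classical normalization, Eq. (8).
    w_bar = w / w.sum()

    # Layers 4-5: linear consequents f_i = p*x1 + q*x2 + r, then
    # weighted aggregation per output, Eqs. (11)-(12).
    y = np.zeros(2)
    for n in range(2):
        f = conseq[:, n, 0] * x1 + conseq[:, n, 1] * x2 + conseq[:, n, 2]
        y[n] = np.dot(w_bar, f)
    return y  # [predicted output 1, predicted output 2]
```

Because the normalized firing strengths sum to one, the outputs are convex combinations of the rule consequents, which is what makes each prediction traceable back to interpretable "if–then" rules.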

3.1.2. Method Design and Data Preparation

The first stage involves the collection of historical meteorological data. The historical irradiance and temperature data are sourced from the West Texas MESONET database for the solar farm's geographical location. This dataset includes timestamps, solar irradiance, and ambient temperature values, which form the core variables for the predictive model. Two datasets were obtained from the West Texas MESONET Weather Station at the REESE Center. The first dataset, originally recorded at 1 min intervals, was aggregated to 30 min intervals for easier handling, reducing approximately 10,000 data points to 333. The second dataset, recorded directly at 30 min intervals, contains around 20,000 data points and exhibits higher uncertainty, non-linearity, and seasonal variation than the first, making it more complex for predictive modeling.
The raw data underwent preprocessing steps to enhance its suitability for modeling purposes. These steps included:
  • Data Cleaning: Removal of erroneous entries and outliers to maintain data consistency;
  • Interpolation: Estimation of missing values using statistical techniques to ensure completeness;
  • Normalization: Scaling the data to a standard range, reducing variability and improving model performance.
The processed data were structured in a matrix format and stored in an Excel file. Each entry comprises a timestamp representing the temporal aspect of the dataset, a solar irradiance value, and an ambient temperature value.
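The cleaning, interpolation, and 1 min to 30 min aggregation steps above can be sketched with pandas; the toy values and thresholds below are illustrative stand-ins, not MESONET data or the study's actual preprocessing script:

```python
import pandas as pd

# Hypothetical raw 1-min export (irradiance in W/m^2, temperature in deg C);
# constant values stand in for real MESONET measurements.
raw = pd.DataFrame(
    {"irradiance": 600.0, "temperature": 30.0},
    index=pd.date_range("2024-06-01", periods=120, freq="1min"),
)
raw.iloc[10, 0] = float("nan")   # simulate a missing irradiance entry
raw.iloc[20, 1] = 1e6            # simulate an erroneous temperature spike

# Data cleaning: flag physically implausible temperatures as missing.
clean = raw.copy()
clean.loc[~clean["temperature"].between(-40.0, 60.0), "temperature"] = float("nan")

# Interpolation: fill gaps using the time index.
clean = clean.interpolate(method="time")

# Aggregation: average the 1-min samples into 30-min intervals,
# mirroring the reduction applied to dataset-1.
agg = clean.resample("30min").mean()
```

After this pipeline the frame is gap-free and downsampled by a factor of 30, ready to be written to the Excel matrix described above.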

3.1.3. ANFIS Model Implementation

The ANFIS framework is employed to model and predict the temporal variation of solar irradiance and temperature. The design and implementation process follows a systematic workflow:
  • Data Loading: The algorithm imports the Excel-stored matrix, where the timestamp column serves as the time index and the irradiance and temperature columns represent the input variables.
  • Feature Engineering: Lag features are created to capture temporal dependencies and recognize patterns in the dataset, which are critical for accurate predictions. These features enable the model to identify trends and seasonal effects in the historical data.
  • Data Splitting: The dataset is divided into input matrices and target vectors, ensuring systematic utilization during the training phase. The input matrix comprises time-dependent features, while the target vector includes corresponding irradiance and temperature values. The cleaned data is divided into training and test sets, 80% and 20%, respectively.
  • Model Training: The ANFIS algorithm trains on the input data, employing fuzzy logic to address uncertainties and adaptive learning to refine predictive capabilities. The training process is iterative, optimizing model parameters to minimize prediction errors.
  • Prediction and Validation: Once trained, the ANFIS model forecasts the next day’s irradiance and temperature values. Validation involves comparing predictions against known historical data to ensure the model’s reliability.
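The lag-feature construction and chronological 80/20 split described in the workflow above can be sketched as follows (the synthetic sine series is a hypothetical stand-in for the half-hourly irradiance data; four lags are chosen arbitrarily for illustration):

```python
import numpy as np

def make_lagged_dataset(series, n_lags):
    """Build an input matrix X of lagged values and a one-step-ahead target y."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

# Hypothetical half-hourly irradiance-like series (333 points, as in dataset-1).
series = np.sin(np.linspace(0.0, 20.0, 333)) * 400.0 + 500.0
X, y = make_lagged_dataset(series, n_lags=4)

# Chronological 80/20 split -- no shuffling, so the test set stays in the "future".
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```

Keeping the split chronological matters for time series: shuffling would leak future samples into training and inflate the apparent accuracy.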

3.1.4. Accuracy Enhancement Techniques

To improve the accuracy of the results, additional measures are incorporated:
  • Cleansing and Conditioning: Rigorous preprocessing ensures the data is free from inconsistencies and suitable for modeling.
  • Normalization/Denormalization: Scaling reduces the effect of variable magnitudes, enhancing the stability of the training process. The normalized dataset is guided by Equation (13). The output results are then denormalized and compared with the historical data using Equation (14).
    x_norm(i) = ( x_i − x_min ) / ( x_max − x_min )
    x_i = x_norm(i) · ( x_max − x_min ) + x_min
    where x i is the historical data, x min is the minimum raw data, x max is the maximum raw data, and x norm ( i ) is the normalized dataset.
  • Pattern Recognition: Temporal lag features and multivariate analysis allow the model to learn complex interdependencies within the dataset.
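Equations (13) and (14) amount to a min–max scaling round trip, which can be sketched in a few lines (the temperature values are illustrative):

```python
import numpy as np

def normalize(x):
    """Min-max scaling to [0, 1], Eq. (13); also returns the bounds for later use."""
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), x_min, x_max

def denormalize(x_norm, x_min, x_max):
    """Inverse transform, Eq. (14), applied to model outputs before comparison."""
    return x_norm * (x_max - x_min) + x_min

temps = np.array([12.0, 18.5, 25.0, 31.5])
scaled, t_min, t_max = normalize(temps)
restored = denormalize(scaled, t_min, t_max)  # round-trips to the original values
```

Storing x_min and x_max from the training data is what allows predictions to be mapped back to physical units for comparison against the historical record.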
The flowchart of the ANFIS training and prediction network is shown in Figure 4.

3.1.5. Prediction Measurement Metrics

RMSE and Accuracy Estimation

To evaluate the prediction performance of the forecasting model, two primary metrics are used: the Root Mean Square Error (RMSE) and a relative accuracy percentage based on RMSE [29,30,31].
The RMSE is defined as follows:
RMSE = √( (1/n) Σ_{i=1}^{n} ( ŷ_i − y_i )² )
where:
  • ŷ_i is the predicted value,
  • y_i is the actual observed value,
  • n is the number of prediction samples.
This metric captures the average magnitude of prediction error, with larger errors penalized more due to squaring. It is widely used in time series and regression analysis because it preserves the unit of the predicted variable, offering interpretability [29,30,31].
To provide a scale-aware performance measure, a relative accuracy metric was also calculated:
Accuracy (%) = 100 − ( RMSE / ȳ ) × 100
where y ¯ denotes the mean of the actual observed values. This formulation enables easier interpretation of model performance in terms of percentage deviation from the average.

Application to Forecasted Variables

For temperature and solar irradiance forecasting, the following metrics were computed:
RMSE_Temp = √( (1/n) Σ_{i=1}^{n} ( T̂_i − T_i )² )
RMSE_Irrad = √( (1/n) Σ_{i=1}^{n} ( Î_i − I_i )² )
Accuracy_Temp = 100 − ( RMSE_Temp / T̄ ) × 100
Accuracy_Irrad = 100 − ( RMSE_Irrad / Ī ) × 100
where:
  • T̂_i and T_i are the predicted and true temperature values,
  • Î_i and I_i are the predicted and true irradiance values,
  • T̄ and Ī are the means of the measured temperature and irradiance values.
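The RMSE and scale-aware accuracy metrics of Equations (15)–(20) reduce to two short helper functions; the temperature samples below are illustrative, not values from the study's datasets:

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root Mean Square Error, Eq. (15)."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def accuracy_pct(y_pred, y_true):
    """Scale-aware accuracy, Eq. (16): 100 - (RMSE / mean of actuals) * 100."""
    return 100.0 - rmse(y_pred, y_true) / np.mean(y_true) * 100.0

# Illustrative temperature forecast, each prediction off by +/- 0.5 deg C.
t_true = np.array([20.0, 22.0, 25.0, 23.0])
t_pred = np.array([20.5, 21.5, 25.5, 22.5])
print(round(rmse(t_pred, t_true), 3))          # 0.5
print(round(accuracy_pct(t_pred, t_true), 2))  # 97.78
```

The same two functions serve both forecast variables; only the predicted/true arrays (temperature or irradiance) change.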

Why RMSE-Based Accuracy Is Preferred

Using RMSE provides sensitivity to large errors, making it suitable for forecasting applications where peak values are significant. The scale-normalized accuracy metric further enhances interpretability by expressing model performance as a percentage relative to typical observed values. This approach aligns with evaluation methods used in energy and environmental forecasting domains [29,30,31].

3.2. Phase 2: Energy Management System

The energy management phase consists of three major subsystems: the (i) PV array, (ii) boost converter/load, and (iii) battery control. The electrical system of the PV farm is modeled in Simulink, incorporating a PV array, boost converter, and shunt load to replicate the real PV farm configuration. The PV array simulates the I-V and P-V characteristics of the entire PV-farm configuration as shown in Figure 5, while the boost converter regulates the output voltage for efficient energy transfer between the PV array and the load. Four common battery types (lead–acid, nickel–cadmium, lithium-ion, and nickel–metal hydride) are used to validate the consistency of the system's autonomy.
Figure 5 shows the overall solar farm PV-array I-V and P-V characteristic curves at Maximum Power Point Tracking (MPPT) under standard test conditions (STC). MPPT is an optimization technique that keeps a solar array operating at its Maximum Power Point (MPP), the point where the product of voltage (V) and current (I) is maximized. The curves are used to validate the Simulink PV-array model, with the red P-V curve confirming that the solar farm's nameplate power yield (150 kW) is maintained at MPPT under STC.
The shunt load models the system’s interaction with the grid or storage, enabling analysis of performance under varying conditions. The electrical model includes the following components:
  • Signal Builders: These extract forecasted irradiance and temperature data from an Excel databank, feeding the values into the PV-array model. The model accurately represents the real PV farm with a capacity of 150 kW and utilizes SolarWorld Sunmodule SWA 320 XL mono modules.
  • Boost Converter Circuit: This circuit consists of an IGBT, a diode, and a PV-side filtering capacitor. A boost signal applied to the IGBT gate controls power flow on the PV side, while the diode ensures unidirectional current flow and enhances capacitor efficiency. The converter operates using PWM signals generated by the MPPT algorithm to optimize energy transfer.
  • Bi-Directional Converter for Battery: This component supports seamless battery charging and discharging. It ensures the battery charges when excess PV energy is available and discharges to meet load demands during deficits, using switching signals for both positive and negative sides.
  • PWM Generator: Generates boost signals for both PV and battery-side controls, based on a duty cycle computed by the MATLAB PV control function, enabling precise energy management within the system.
The shunt resistance at the maximum power point, R_sh, and the maximum load are determined by Equations (21) and (22), respectively.
R_sh = V_MPPT / I_MPPT
Maximum Load = I_MPPT² × R_sh
where V_MPPT and I_MPPT are the voltage and current at the maximum power point.
The PV-side capacitance, C_PV, and inductance, L_PV, are estimated by Equations (23) and (24), respectively.
C_PV = I_o (V_o − V_i) / (f_sw ΔV V_o)
L_PV = V_i (V_o − V_i) / (f_sw ΔI V_o)
where V_i is the boost converter input voltage, V_o is the boost converter output voltage, f_sw is the boost converter switching frequency, I_o is the output current of the converter, ΔV is the ripple voltage, and ΔI is the ripple current.
The battery side capacitance value is set to be equal to the PV side capacitance value to ensure maximum power transfer to the load side. The duty cycle is adjusted in relation to Equation (25).
R_load = R_source / (1 − D)²
where R source is the source resistance given by Equation (22).
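As a numeric sketch of Equations (21)-(25), the function below evaluates the sizing formulas for an ideal (lossless) boost stage. All input values (MPP voltage and current, switching frequency, ripple targets) are illustrative placeholders, not the paper's design parameters.

```python
def boost_design(v_mppt, i_mppt, v_out, f_sw, dv_ripple, di_ripple):
    """Size the boost stage from Equations (21)-(25); inputs are illustrative."""
    r_sh = v_mppt / i_mppt                       # Eq. (21): shunt resistance at MPP
    p_max = i_mppt ** 2 * r_sh                   # Eq. (22): maximum load power
    i_out = p_max / v_out                        # output current, lossless converter
    c_pv = i_out * (v_out - v_mppt) / (f_sw * dv_ripple * v_out)   # Eq. (23)
    l_pv = v_mppt * (v_out - v_mppt) / (f_sw * di_ripple * v_out)  # Eq. (24)
    duty = 1.0 - v_mppt / v_out                  # steady-state boost: Vo/Vi = 1/(1-D)
    return {"R_sh": r_sh, "P_max": p_max, "C_pv": c_pv, "L_pv": l_pv, "D": duty}

# Placeholder operating point roughly consistent with a 150 kW, 480 V DC bus.
design = boost_design(v_mppt=360.0, i_mppt=416.7, v_out=480.0,
                      f_sw=5e3, dv_ripple=4.8, di_ripple=20.0)
```

The duty cycle follows from the ideal boost voltage ratio; substituting it into Equation (25) recovers the load resistance reflected to the PV side.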

3.3. Use Cases

3.3.1. Data Structure

Five distinct datasets, derived from West Texas Mesonet (WTM) raw archive data, were used to validate the superiority of the ANFIS prediction model over other conventional models. The period ranges from January 2021 through March 2022, with data sampling intervals varying from 1 min to 5 min, as shown in Table 2. The first dataset was resampled at a 30 min interval for Simulink electrical model validation; this resampling accommodates the computational cost of the electrical model, which runs with a 1 × 10⁻⁵ s simulation time step. The initial simulation was carried out on a PC with 32 GB of RAM and an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz (2904 MHz, 8 cores, 16 logical processors).
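The 30 min resampling step can be reproduced with pandas. The one-day series below is a synthetic stand-in for the WTM archive, and the column names are hypothetical.

```python
import numpy as np
import pandas as pd

# One day of 1-minute readings (synthetic stand-in for the WTM raw archive;
# column names are hypothetical).
idx = pd.date_range("2021-01-01", periods=1440, freq="min")
raw = pd.DataFrame({
    "temperature_C": 10 + 8 * np.sin(np.linspace(0, 2 * np.pi, 1440)),
    "irradiance_Wm2": np.clip(
        900 * np.sin(np.linspace(-np.pi / 2, 3 * np.pi / 2, 1440)), 0, None),
}, index=idx)

# Resample to the 30-minute interval used for the Simulink validation run,
# averaging the readings that fall in each half-hour bin.
half_hourly = raw.resample("30min").mean()
```

Averaging within each bin (rather than taking every 30th sample) smooths out minute-scale noise before the data reach the electrical model.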

3.3.2. Parameter Settings

The structures of LSTM, CNN-LSTM, ConvGNNs, and ANFIS differ significantly in their design and application. LSTM is based on recurrent neural networks (RNNs) with memory cells and gates that capture long-term dependencies in sequential data, making it ideal for time-series and sequence modeling. CNN-LSTM combines CNNs for spatial feature extraction with LSTM to handle both spatial and temporal data, such as in videos or time series with spatial patterns. ConvGNNs extend this concept to graph-structured data, using convolutional operations to capture relationships between nodes and edges. In contrast, ANFIS integrates fuzzy logic with neural networks, where fuzzy inference rules are used to handle nonlinear relationships and uncertainty in data, providing greater interpretability than the data-driven, black-box approach of LSTM, CNN-LSTM, and ConvGNNs. ANFIS’s hybrid model allows for explainable predictions, making it distinct from the purely neural network-based structures of the other models, which focus on capturing complex patterns without offering inherent transparency. As a result, we did not compare additional similar machine learning models, aside from Curve Fitting, LSTM, and ANN with ANFIS, as all the other models are also black-box-oriented, lacking the transparency and explainability offered by the ANFIS model.
The ANFIS training hyperparameters for both irradiance and temperature are set at 600 epochs with three lagged features for better prediction. Prediction accuracy degraded when more than two membership functions were defined for each variable, while the number of iterations was tuned from 300 to 600 epochs for the best forecast accuracy, beyond which the performance of the algorithm remained unchanged. The selection of a two-membership FIS rule base and 600 training epochs was determined through rigorous initial training to optimize ANFIS performance during validation while minimizing training time. The two-membership choice was guided by the stochastic nature of the variables, temperature and irradiance, which exhibit highly irregular patterns. As observed during simulation, increasing the number of membership functions led to overfitting, excessive computational demands, and the generation of unnecessary rules, ultimately reducing model accuracy. The open-loop ANFIS prediction algorithm deploys temporal pattern recognition via lag features, while the closed-loop algorithm feeds each prediction back into the historical data for the next prediction, capturing the dynamics of an erratic dataset.
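The three-lag feature construction and the closed-loop feedback described above can be sketched as follows; `predict` is a stand-in for the trained ANFIS model, not its actual interface.

```python
import numpy as np

def make_lagged(series, n_lags=3):
    """Build (X, y) pairs where each row of X holds the previous n_lags samples."""
    s = np.asarray(series, float)
    X = np.column_stack([s[i:len(s) - n_lags + i] for i in range(n_lags)])
    return X, s[n_lags:]

def closed_loop_forecast(predict, history, steps, n_lags=3):
    """Feed each prediction back into the lag window for the next step."""
    window = list(history[-n_lags:])
    out = []
    for _ in range(steps):
        y_hat = predict(window)
        out.append(y_hat)
        window = window[1:] + [y_hat]   # drop the oldest lag, append the forecast
    return out

X, y = make_lagged([1, 2, 3, 4, 5, 6])
# X rows: [1,2,3], [2,3,4], [3,4,5]; targets y: [4, 5, 6]
```

In open-loop mode the window would instead be refilled from newly measured values at each step, which is why open-loop errors do not compound the way closed-loop errors do.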

4. Results

The prediction results for both the open-loop and closed-loop are presented in the following subsections. Open-loop prediction utilizes pattern learning during training and predicts the next iteration while continuously updating its historical dataset for subsequent predictions. This approach ensures that the model remains responsive to new data during the forecasting process. In contrast, closed-loop prediction uses its own predicted data to generate subsequent forecasts, with historical data updated with the new prediction. This involves a single training phase, relying on the learned patterns to iteratively predict future points.

4.1. Closed-Loop Predictions

Closed-loop prediction leverages the training knowledge of the historical data and predicts the next iteration using the feedback closed-loop data. Thus, closed-loop prediction is carried out with a historical data update using the most recent forecast. The closed-loop prediction of the irradiance and temperature at 600 epochs is presented in Figure 6 using Dataset-1.
The ANFIS network effectively predicts up to 10 h of irradiance and temperature data. However, even after historical data updates and model recalibration, the ANFIS model’s accuracy declines beyond the 10 h forecast horizon due to error accumulation and environmental variability. While dynamic updates improve short-term forecasting, longer predictions remain challenging as small inaccuracies compound over time. Additionally, the stochastic nature of irradiance and temperature introduces uncertainties that become harder to predict over extended periods. Despite these challenges, the model still provides reliable short-term forecasts, making it a valuable tool for energy management and operational planning. These predicted values are then input into the electrical model to predict the power generation of the site.
Using Dataset-2 from the West Texas MESONET meteorological data center at REESE, the predicted irradiance and temperature using curve fitting, LSTM, ANN, and ANFIS are presented in Figure 7a and Figure 7b, respectively. Their accuracy and computational speed are presented in Table 3.
The results in Table 3 indicate that ANFIS outperforms the other methods in both prediction accuracy and RMSE. Specifically, ANFIS achieves the highest irradiance accuracy of 84.55% and matches ANN in temperature accuracy at 97.27% while exhibiting the lowest RMSE for both irradiance (32.85) and temperature (0.25). This suggests that ANFIS is particularly effective in modeling the complex, nonlinear relationships inherent in the data, providing more reliable predictions. In comparison, ANN performs well with 83.01% irradiance accuracy and 97.27% temperature accuracy, but with a higher RMSE of 33.94 for irradiance. Curve-fit demonstrates lower performance, particularly for irradiance, with an accuracy of 73.15% and an elevated RMSE of 53.64, though it is computationally faster than the other models. LSTM exhibits the lowest temperature accuracy (72.87%) and the highest RMSE for both irradiance (50.55) and temperature (0.75), and it is also the most computationally expensive, taking 371 s. Overall, ANFIS still provides the best combination of prediction accuracy, reliability, and computational efficiency.

4.2. Open-Loop Predictions

For the open-loop prediction using Dataset-1, the irradiance and temperature historical data are trained using the ANFIS, ANN, LSTM, and curve-fitting algorithms at suitable numbers of epochs. The algorithms' performances are then evaluated on the test dataset, as presented in Figure 8. As can be observed, the ANFIS model outperformed the ANN and curve-fitting models.
As shown in Figure 8, both the ANN and curve-fitting techniques exhibit lower accuracy in predicting the variables, with higher root mean square errors (RMSE) of approximately 35 and 30, respectively. Such less reliable predictions make them unsuitable for operational planning and energy management. In contrast, the ANFIS model demonstrates greater robustness, efficiency, and accuracy, achieving 98% for irradiance and 95% for temperature. Additionally, ANFIS has the lowest RMSE, indicating its superior reliability in handling uncertainties and irregularities within the data structure. The other prediction algorithms suffer from accumulated learning errors resulting from overfitting of the training dataset. The raw data are noisy and inconsistent, which makes learning difficult over a long time horizon such as ours: the larger the data volume, the more outliers impede learning, and the cumulative error grows over time, which accounts for the increased RMSE of the other models.
The prediction performances, including the accuracy and root mean square error, are presented in Table 4.

4.3. Further Validation of ANFIS Model

To further investigate the robustness and effectiveness of ANFIS on a dedicated system (64 GB RAM, 8-core processor, 4501 MHz, x64-based PC), its performance was evaluated and compared with other machine/deep learning models across five datasets of varying sizes. Figure 9a,b present the predictions from the Stacked-LSTM, Random Forest, XGBoost, GBoostM, Transformer, individual/ensemble, LightGBM, CatBoost, CNN-LSTM, MOSMA-SVM, and ANFIS ML models using Data-1 (333 data points).
Figure 10a,b present the corresponding predictions from the same models using Data-2 (8760 data points).
Figure 11a,b present the corresponding predictions using Data-3 (19,954 data points).
Figure 12a,b present the corresponding predictions using Data-4 (44,579 data points).
Figure 13a,b present the corresponding predictions using Data-5 (125,074 data points).
The detailed performance of the models, tested on datasets of varying sizes, is presented in Table 5 for temperature and Table 6 for irradiance predictions. The computational time (training plus inference) is presented in Table 7.

4.4. Comparative Analysis of Forecasting Models for Temperature and Irradiance Prediction

The performance of ten forecasting models was evaluated across five datasets of varying sizes, capturing temperature and irradiance dynamics under diverse conditions. The forecast datasets ranged from small samples (62 observations) to large-scale datasets (25,011 observations), providing a comprehensive benchmark for assessing model robustness, accuracy, and computational cost.

4.4.1. General Trends and ANFIS Strengths

Overall, ANFIS consistently achieved the best average performance in terms of predictive accuracy, with the lowest mean RMSE for both temperature (0.45 °C) and irradiance (31.98 W/m2), and highest average R2 scores (98.65% for temperature and 98.18% for irradiance). Moreover, ANFIS demonstrated superior computational efficiency, with the lowest average runtime (17.99 s) among all models tested, making it particularly attractive for time-sensitive or resource-constrained environments.

4.4.2. Notable Instances Where ANFIS Underperformed

Despite its overall superiority, ANFIS did not always outperform the other models in every scenario. For instance, for temperature forecasting (Dataset 2), ANFIS achieved an RMSE of 0.97 °C, which was higher than that of several models, including the ensemble (ENS: 0.84 °C), LGB (0.86 °C), and XGB (0.92 °C). This suggests that ensemble-based gradient boosting models might generalize better on medium-sized datasets when slight nonlinearities are predominant and fuzzification offers less marginal gain. Similarly, for irradiance forecasting (Dataset 2), ANFIS registered an RMSE of 52.08 W/m2, which, although competitive, was not the lowest. MOSMA-SVM (44.72 W/m2), ENS (47.19 W/m2), and even XGB (48.07 W/m2) outperformed ANFIS in this instance, reflecting cases where metaheuristically optimized or ensemble models could better capture irradiance variability in medium-density data. For Dataset 2 irradiance, ANFIS’s R2 was 95.00%, which, while still high, was slightly lower than MOSMA-SVM (95.89%) and marginally below the ensemble and boosting models, reinforcing the point that its superiority is not universal across all data conditions. These deviations may be attributed to the sensitivity of ANFIS to data structure and fuzzification granularity. In medium-sized datasets, where the balance between data variance and rule complexity becomes delicate, ANFIS may face overfitting or under-generalization, especially if not carefully tuned.

4.4.3. Comparative Model Behavior

  • Ensemble Models (XGB, GBM, LGB): These models offered strong baseline performance across datasets, especially in irradiance prediction. Their ability to handle high-dimensional and non-linear patterns with relatively low variance makes them robust alternatives, particularly in medium-to-large datasets.
  • MOSMA-SVM: Though computationally intensive, it showed high accuracy, especially in irradiance prediction, likely due to the fine-tuning enabled by swarm metaheuristics. However, its scalability and real-time applicability remain limited by computational cost.
  • Deep Learning Models (CNN-LSTM, Stacked-LSTM): While conceptually powerful, these models did not outperform simpler architectures in this study. Their effectiveness appeared to be dataset-dependent, and they incurred the highest computational burden, making them less suitable without sufficient data volume or processing power.
The bootstrapped 95% confidence intervals of the mean R 2 values for each of the ten models on the five datasets for the two cases (temperature and irradiance prediction) are represented by the error bars in Figure 14a,b.
Here, M1 is Stacked-LSTM, M2 is Random Forest, M3 is XGBoost, M4 is GBoostM, M5 is the ensemble, M6 is LGBoost, M7 is CatBoost, M8 is CNN-LSTM, M9 is MOSMA-SVM, and M10 is ANFIS. As shown in Figure 14a, ANFIS (model M10) maintains the tightest CI [0.0098], while LGBoost has the widest CI [0.2811], indicating that it is the least consistent model. ANFIS therefore offers the highest accuracy and precision among the models for temperature prediction, as its entire CI remains above the 95% accuracy level, whereas the other models' CIs dip below it. Figure 14b reveals that M3-XGBoost, M5-ensemble, M6-LGBoost, M9-MOSMA-SVM, and M10-ANFIS are excellent models for irradiance prediction, as their CIs lie at 95% and above. This further confirms the adaptability of the ANFIS model to multi-variable prediction, applicable to MIMO system forecasting, which is important because it reduces computational time.
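The bootstrapped 95% CIs of Figure 14 can be reproduced in outline with a percentile bootstrap over each model's five per-dataset R² scores. The scores below are hypothetical placeholders, not the paper's values.

```python
import random

def bootstrap_ci(values, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a small sample
    (e.g., five per-dataset R^2 scores for one model)."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values)
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-dataset R^2 scores for one model across five datasets.
r2_scores = [0.986, 0.991, 0.984, 0.989, 0.983]
lo, hi = bootstrap_ci(r2_scores)
```

The CI width (hi minus lo) is the quantity plotted as the error bar: a tight interval indicates consistent performance across datasets, which is how the [0.0098] versus [0.2811] comparison should be read.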
Figure 15, Figure 16, Figure 17 and Figure 18 are heatmap representations of Table 5 and Table 6.
Figure 15 and Figure 16 illustrate the R² score (accuracy %) and RMSE heatmaps for the temperature results in Table 5, with ANFIS showing superior prediction accuracy, precision, and consistency over the other models; LGBoost, with the widest confidence interval [0.2811], showed the lowest consistency.
Figure 17 and Figure 18 illustrate the R² score (accuracy %) and RMSE heatmaps for the irradiance results in Table 6, in which XGBoost, the ensemble, LGBoost, MOSMA-SVM, and ANFIS were identified as excellent models for irradiance prediction.

4.4.4. Practical Implications

Although certain models, such as XGBoost, LightGBM, and MOSMA-SVM, demonstrated localized superiority in specific datasets or performance metrics, the Adaptive Neuro-Fuzzy Inference System (ANFIS) consistently outperformed all other models on average across diverse scenarios. ANFIS achieved the lowest average RMSE for both temperature (0.45 °C) and irradiance (31.98 W/m2), as well as the highest average R2 values of 98.65% and 98.18%, respectively. Crucially, ANFIS excelled in computational efficiency, recording the lowest average runtime (17.99 s) among all of the models. This is especially significant when compared to deep learning approaches such as CNN-LSTM and stacked-LSTM, which, although occasionally competitive in accuracy, incurred computational times up to 32 times greater than ANFIS. For applications involving real-time prediction, limited hardware capacity, or embedded systems in renewable energy infrastructure, this computational advantage positions ANFIS as a highly practical and scalable solution. Thus, despite some fluctuations in performance on specific subsets, ANFIS demonstrates superior generalization, accuracy, and speed, making it an ideal candidate for robust and interpretable short-term forecasting in smart energy systems.

4.5. PV-Farm Power Generation Forecast

The electrical model's accuracy is validated using actual historical data under ideal conditions of maximum generation (150 kW) at 1000 W/m2 and 25 °C. Similarly, four different battery types are used to validate the consistency of the electrical energy management model with respect to the battery SoC. Each battery type has a rated capacity of 1600 Ah, a nominal voltage of 480 V, an initial state of charge of 80%, and a battery response time of 1 s. The study assumes continuous battery charging and discharging cycles over a simulated period of 333 s, with each second representing 30 min, equivalent to approximately 7 days. These parameters are useful for expansion planning and for assessing battery autonomy (i.e., how long the battery can supply the load on its own).
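The seven-day SoC behavior can be approximated in outline with simple coulomb counting over 30 min steps. The charge/discharge current profile below is a synthetic placeholder, not the simulated PV/load balance from the Simulink model.

```python
def simulate_soc(net_current_A, capacity_Ah=1600.0, soc0=80.0, dt_h=0.5):
    """Coulomb counting: positive net current charges the battery,
    negative net current discharges it; SoC is clamped to [0, 100] %."""
    soc = soc0
    trace = [soc]
    for i_net in net_current_A:
        soc += 100.0 * i_net * dt_h / capacity_Ah
        soc = min(max(soc, 0.0), 100.0)
        trace.append(soc)
    return trace

# 7 days x 48 half-hour slots: overnight discharge, slightly weaker daytime
# charge, giving a small net SoC drift (synthetic numbers).
profile = ([-1.0] * 24 + [0.8] * 24) * 7
trace = simulate_soc(profile)
```

With a 1600 Ah pack, even amp-level imbalances move the SoC by only hundredths of a percent per half hour, which is consistent in spirit with the sub-1% drift reported over the seven-day cycle.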
As shown in Figure 19a, the generation remains within the expected limits and varies with irradiance and temperature, as anticipated. The model demonstrates excellent performance, achieving a validation accuracy of 98%. It can also be observed in Figure 19b that the nickel-cadmium battery SoC discharged from 80% to 79.24% over 7 days. Comparing the different battery types shown in Figure 20, the nickel-cadmium battery exhibited the highest autonomy, retaining 79.24% SoC, compared to 79.18% for nickel-metal hydride, 79.17% for lead-acid, and 79.12% for the lithium-ion battery.
From Figure 20 and Table 8, it is clear that the proposed model supported four major types of batteries commonly deployed in power systems, with battery autonomy maintained within a reasonable common range. Nickel–cadmium had the highest autonomy.
Figure 21a is a combined voltage view of the PV-array, DC bus, and battery voltages. The battery voltage (DC source) is linear as expected, and the bus voltage is maintained within the permissible range (>480 V). Figure 21b is the combined current view. The battery current in Figure 21b was negative during peak sunlight hours, when the PV array produced more energy than the load demanded; the excess current charged the battery, meaning the battery was in a charging cycle with current flowing into it.

5. Discussion

This research introduces a new model for energy management in a microgrid with PV resources, integrating a PV-battery system with ANFIS capabilities. The model operates in two phases: the ANFIS prediction phase and the electrical modeling phase. The ANFIS prediction model uses site-specific irradiance and temperature as inputs to forecast these variables, which are then fed into the electrical model for accurate power forecasting. The ANFIS algorithm demonstrates superior accuracy compared to ANN and curve-fitting techniques, effectively predicting real solar farm power generation. The model's predictions align closely with the actual site power capacity, making it a reliable tool for informed energy management decisions. ANFIS performs well because of its hybrid approach, combining fuzzy logic and neural networks, which allows it to handle uncertain, nonlinear, and irregular patterns in data such as temperature and irradiance. The fuzzy logic component manages imprecise data by assigning membership values, while the neural network adapts and learns from training data to minimize error and improve accuracy. This combination also enhances interpretability, making it easier to understand input-output relationships. Additionally, ANFIS requires fewer rules than other methods, reducing overfitting and computational complexity. As a result, ANFIS provides accurate predictions with high efficiency, making it highly suitable for tasks like energy management and operational planning. However, as observed in this research, ANFIS accuracy gradually declines beyond the 10 h forecast horizon, even after historical data updates and model recalibration, due to error accumulation and environmental variability. While dynamic updates improve short-term forecasting, longer predictions remain challenging as small inaccuracies compound over time.
Additionally, the stochastic nature of irradiance and temperature introduces uncertainties that become harder to predict over extended periods. Despite these challenges, the model still provides reliable short-term forecasts, making it a valuable tool for energy management and operational planning. Future work will focus on incorporating a load profile estimator to enhance load management and implement demand response strategies for improved energy planning.

6. Conclusions

In conclusion, a robust PV-battery model incorporating the ANFIS algorithm has been developed to forecast power generation in a solar farm effectively. The electrical model benefits from the ANFIS algorithm’s superior capability to accurately predict irradiance and temperature, providing reliable input for the electrical simulations. The clear distinction between the ML prediction algorithm and the electrical model highlights the modularity and adaptability of the approach, enabling precise power forecasting and facilitating informed energy management.

Author Contributions

Conceptualization, A.B.A. and O.M.A.; methodology, A.B.A.; software, A.B.A. and O.M.A.; validation, A.B.A., O.M.A. and M.C.; formal analysis, A.B.A. and O.M.A.; investigation, A.B.A. and O.M.A.; resources, A.B.A., M.C., S.B. and A.B.; data curation, A.B.A. and A.B.; writing—original draft preparation, A.B.A. and O.M.A.; writing—review and editing, A.B.A., O.M.A. and M.C.; visualization, A.B.A., O.M.A. and M.C.; supervision, A.B.A., O.M.A., M.C., S.B. and A.B.; project administration, A.B.A., O.M.A., M.C. and S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AnEn: Analog Ensemble
ANFIS: Adaptive Neuro-Fuzzy Inference System
ANN: Artificial Neural Network
ARIMA: Auto-Regressive Integrated Moving Average
CI: Confidence Interval
CNN: Convolutional Neural Network
CEEMDAN: Complete Ensemble Empirical Mode Decomposition with Adaptive Noise
ELM: Extreme Learning Machine
GNN: Graph Neural Network
GHI: Global Horizontal Irradiance
GRNN: General Regression Neural Network
GSTANN: Graph Spatial–Temporal Attention Neural Network
IBBO: Improved Binary Bat Optimization
ICSO: Improved Chicken Swarm Optimization
IGBT: Insulated-Gate Bipolar Transistor
LIME: Local Interpretable Model-Agnostic Explanations
LSE: Least Square Estimation
LSTM: Long Short-Term Memory
MAE: Mean Absolute Error
MAPE: Mean Absolute Percentage Error
MIMO: Multi-Input Multi-Output
MLP: Multi-Layer Perceptron
MPPT: Maximum Power Point Tracking
MSE: Mean Square Error
MOSMA: Multi-Objective Slime Mould Algorithm
NWP: Numerical Weather Prediction
RMSE: Root Mean Square Error
RF: Random Forest
SoC: State-of-Charge
SHAP: Shapley Additive Explanations
SVM: Support Vector Machine
STC: Standard Test Conditions
PCA: Principal Component Analysis
PV: Photovoltaic
PWM: Pulse Width Modulation
WPT: Wireless Power Transfer
XAI: Explainable Artificial Intelligence
XGBoost: Extreme Gradient Boosting
GBoostM: Gradient Boost Machine
CatBoost: Categorical Boosting
LGBoost: Light Gradient Boost Machine
N: Number of FIS rules
μ_i: Mean of weights
min(w): Minimum firing weight
max(w): Maximum firing weight
w̄_i: Original firing strength
w_i: Normalized firing strength (weight)
|w_i|: Absolute value of the firing strength of the weights
f_i: Fuzzy rules' output
x1, x2: ANFIS input variables (temperature and irradiance)
p_i: Consequent parameter; determines the contribution of x1 to the rule's output
q_i: Consequent parameter; determines the contribution of x2 to the rule's output
r_i: Consequent parameter; a bias term (offset) that adjusts the rule's baseline output
n: Number of final ANFIS outputs
y_n: Final ANFIS output

References

  1. Mao, M.; Feng, X.; Xin, J.; Chow, T.W. A Convolutional Neural Network-Based Maximum Power Point Voltage Forecasting Method for Pavement PV Array. IEEE Trans. Instrum. Meas. 2022, 72, 1–12. [Google Scholar] [CrossRef]
  2. Jiao, X.; Li, X.; Lin, D.; Xiao, W. A Graph Neural Network-Based Deep Learning Predictor for Spatio-Temporal Group Solar Irradiance Forecasting. IEEE Trans. Ind. Inform. 2022, 18, 6142–6151. [Google Scholar] [CrossRef]
  3. Pretto, S.; Ogliari, E.; Niccolai, A.; Nespoli, A. A New Probabilistic Ensemble Method for an Enhanced Day-Ahead PV Power Forecast. IEEE J. Photovoltaics 2022, 12, 581–588. [Google Scholar] [CrossRef]
  4. Carriere, T.; Vernay, C.; Pitaval, S.; Kariniotakis, G. A Novel Approach for Seamless Probabilistic Photovoltaic Power Forecasting Covering Multiple Time Frames. IEEE Trans. Smart Grid 2020, 11, 2281–2292. [Google Scholar] [CrossRef]
  5. Saeedi, R.; Sadanandan, S.K.; Srivastava, A.K.; Davies, K.L.; Gebremedhin, A.H. An Adaptive Machine Learning Framework for Behind-the-Meter Load/PV Disaggregation. IEEE Trans. Ind. Inform. 2021, 17, 7060–7070. [Google Scholar] [CrossRef]
  6. Bazionis, I.K.; Kousounadis-Knousen, M.A.; Katsigiannis, V.E.; Catthoor, F.; Georgilakis, P.S. An Advanced Hybrid Boot-LSTM-ICSO-PP Approach for Day-Ahead Probabilistic PV Power Yield Forecasting and Intra-Hour Power Fluctuation Estimation. IEEE Access 2024, 12, 43703–43720. [Google Scholar] [CrossRef]
  7. Catalina, A.; Alaíz, C.M.; Dorronsoro, J.R. Combining Numerical Weather Predictions and Satellite Data for PV Energy Nowcasting. IEEE Trans. Sustain. Energy 2020, 11, 1930–1937. [Google Scholar] [CrossRef]
  8. Mohamed, M.; Mahmood, F.E.; Abd, M.A.; Chandra, A.; Singh, B. Dynamic Forecasting of Solar Energy Microgrid Systems Using Feature Engineering. IEEE Trans. Ind. Appl. 2022, 58, 7857–7869. [Google Scholar] [CrossRef]
  9. Liu, L.; Sun, Q.; Wennersten, R.; Chen, Z. Day-Ahead Forecast of Photovoltaic Power Based on a Novel Stacking Ensemble Method. IEEE Access 2023, 11, 113593–113604. [Google Scholar] [CrossRef]
  10. Asiri, E.C.; Chung, C.Y.; Liang, X. Day-Ahead Prediction of Distributed Regional-Scale Photovoltaic Power. IEEE Access 2023, 11, 27303–27316. [Google Scholar] [CrossRef]
  11. Boubaker, S.; Benghanem, M.; Mellit, A.; Lefza, A.; Kahouli, O.; Kolsi, L. Deep Neural Networks for Predicting Solar Radiation at Hail Region, Saudi Arabia. IEEE Access 2021, 9, 36719–36730. [Google Scholar] [CrossRef]
  12. Alaraj, M.; Kumar, A.; Alsaidan, I.; Rizwan, M.; Jamil, M. Energy Production Forecasting From Solar Photovoltaic Plants Based on Meteorological Parameters for Qassim Region, Saudi Arabia. IEEE Access 2021, 9, 83241–83251. [Google Scholar] [CrossRef]
  13. Eom, H.; Son, Y.; Choi, S. Feature-Selective Ensemble Learning-Based Long-Term Regional PV Generation Forecasting. IEEE Access 2020, 8, 54620–54630. [Google Scholar] [CrossRef]
  14. Obiora, C.N.; Hasan, A.N.; Ali, A.; Alajarmeh, N. Forecasting Hourly Solar Radiation Using Artificial Intelligence Techniques. IEEE Can. J. Electr. Comput. Eng. 2021, 44, 497–507. [Google Scholar] [CrossRef]
  15. Olcay, K.; Tunca, S.G.; Özgür, M.A. Forecasting and Performance Analysis of Energy Production in Solar Power Plants Using Long Short-Term Memory (LSTM) and Random Forest Models. IEEE Access 2024, 12, 103299–103312. [Google Scholar] [CrossRef]
  16. Kuzlu, M.; Cali, U.; Sharma, V.; Güler, Ö. Gaining Insight Into Solar Photovoltaic Power Generation Forecasting Utilizing Explainable Artificial Intelligence Tools. IEEE Access 2020, 8, 187814–187823. [Google Scholar] [CrossRef]
  17. Goh, H.H.; Luo, Q.; Zhang, D.; Liu, H.; Dai, W.; Lim, C.S.; Kurniawan, T.A.; Goh, K.C. Hybrid SDS and WPT-IBBO-DNM Based Model for Ultra-Short Term Photovoltaic Prediction. Csee J. Power Energy Syst. 2023, 9, 66–76. [Google Scholar] [CrossRef]
  18. Elsaraiti, M.; Merabet, A. Solar Power Forecasting Using Deep Learning Techniques. IEEE Access 2022, 10, 31690–31698. [Google Scholar] [CrossRef]
  19. Al Hadi, F.M.; Aly, H.H.; Little, T. Harmonics Forecasting of Wind and Solar Hybrid Model Based on Deep Machine Learning. IEEE Access 2023, 11, 55413–55424. [Google Scholar] [CrossRef]
  20. Zhang, C.; Xu, M. Time-Segment Photovoltaic Forecasting and Uncertainty Analysis Based on Multi-Objective Slime Mould Algorithm to Improve Support Vector Machine. IEEE Trans. Power Syst. 2024, 39, 5103–5114. [Google Scholar] [CrossRef]
  21. Kim, B.; Suh, D. Solar PV Generation Prediction Based on Multisource Data Using ROI and Surrounding Area. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4704511–4704523. [Google Scholar] [CrossRef]
  22. Gaboitaolelwe, J.; Zungeru, A.M.; Yahya, A.; Lebekwe, C.K.; Vinod, D.N.; Salau, A.O. Machine Learning Based Solar Photovoltaic Power Forecasting: A Review and Comparison. IEEE Access 2023, 11, 40819–40845. [Google Scholar] [CrossRef]
  23. Yao, T.C.; Wang, J.; Wang, Y.; Zhang, P.; Cao, H.; Chi, X.; Shi, M. Very Short-Term Forecasting of Distributed PV Power Using GSTANN. CSEE J. Power Energy Syst. 2024, 10, 1491–1501. [Google Scholar] [CrossRef]
  24. Aslam, M.; Lee, S.-J.; Khang, S.-H.; Hong, S. Two-Stage Attention Over LSTM With Bayesian Optimization for Day-Ahead Solar Power Forecasting. IEEE Access 2021, 9, 107387–107398. [Google Scholar] [CrossRef]
  25. Huang, Y.; Wang, A.; Jiao, J.; Xie, J.; Chen, H. Short-Term PV Power Forecasting Based on CEEMDAN and Ensemble DeepTCN. IEEE Trans. Instrum. Meas. 2023, 72, 2526012. [Google Scholar] [CrossRef]
  26. Sheng, H.; Ray, B.; Chen, K.; Cheng, Y. Solar Power Forecasting Based on Domain Adaptive Learning. IEEE Access 2020, 8, 198580–198590. [Google Scholar] [CrossRef]
  27. Niccolai, A.; Dolara, A.; Ogliari, E. Hybrid PV Power Forecasting Methods: A Comparison of Different Approaches. Energies 2021, 14, 451. [Google Scholar] [CrossRef]
  28. Hossain, M.S.; Mahmood, H. Short-Term Photovoltaic Power Forecasting Using an LSTM Neural Network and Synthetic Weather Forecast. IEEE Access 2020, 8, 172524–172533. [Google Scholar] [CrossRef]
  29. Hyndman, R.J.; Athanasopoulos, G. Forecasting: Principles and Practice, 2nd ed.; OTexts: Melbourne, Australia, 2018. [Google Scholar]
  30. Mellit, A.; Kalogirou, S.A. Artificial intelligence techniques for photovoltaic applications: A review. Prog. Energy Combust. Sci. 2008, 34, 574–632. [Google Scholar] [CrossRef]
  31. Celik, A.N. A techno-economic analysis of autonomous PV-wind hybrid energy systems using different sizing methods. Energy Convers. Manag. 2004, 45, 2459–2475. [Google Scholar] [CrossRef]
Figure 1. Proposed ANFIS-based prediction and energy management framework.
Figure 2. ANFIS layers.
Figure 3. Membership functions: (a) Gaussian membership function. (b) Generalized bell membership function.
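The two membership functions in Figure 3 have standard closed forms. As a quick illustration, they can be evaluated as below; the parameter values are arbitrary placeholders, not the fitted premise parameters of the paper's ANFIS network:

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Gaussian membership function: exp(-(x - c)^2 / (2 * sigma^2))."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def gbell_mf(x, a, b, c):
    """Generalized bell membership function: 1 / (1 + |(x - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# Both functions peak at 1.0 when x equals the center c (illustrative values).
print(gaussian_mf(0.0, c=0.0, sigma=1.0))  # 1.0
print(gbell_mf(0.0, a=2.0, b=4.0, c=0.0))  # 1.0
```

During ANFIS training, the centers, widths, and slopes of these functions are the premise parameters tuned by the hybrid learning rule.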
Figure 4. Flowchart of the ANFIS training and prediction model.
Figure 5. Solar Farm PV-module characteristic curves: (a) I-V characteristic curve at 1, 0.5, and 0.1 kW/m² of irradiance. (b) P-V characteristic curve at 1, 0.5, and 0.1 kW/m² of irradiance.
Figure 6. Closed-loop predictions using ANFIS: (a) Test and predicted data—irradiance. (b) Test and predicted data—temperature.
Figure 7. Closed-loop predictions using ANFIS, ANN, CFiT, and LSTM: (a) test and predicted data—irradiance. (b) test and predicted data—temperature.
Figure 8. Open-loop predictions using ANFIS, ANN, and curve-fitting (CFiT): (a) Historical data—irradiance. (b) Historical data—temperature. (c) Test and predicted data—irradiance. (d) Test and predicted data—temperature.
Figure 9. Predictions using stacked LSTM, Random Forest, XGBoost, GBoostM, Transformer, individual/ensembles, LightGBM, CatBoost, CNN-LSTM, MOSMA-SVM, and ANFIS: (a) Forecasted data points—irradiance. (b) Forecasted data points—temperature. 62 observations forecasted.
Figure 10. Predictions using stacked LSTM, Random Forest, XGBoost, GBoostM, Transformer, individual/ensembles, LightGBM, CatBoost, CNN-LSTM, MOSMA-SVM, and ANFIS: (a) Forecasted data points—irradiance. (b) Forecasted data points—temperature. 1748 observations forecasted.
Figure 11. Predictions using stacked LSTM, Random Forest, XGBoost, GBoostM, Transformer, individual/ensembles, LightGBM, CatBoost, CNN-LSTM, MOSMA-SVM, and ANFIS: (a) Forecasted data points—irradiance. (b) Forecasted data points—temperature. 3987 observations forecasted.
Figure 12. Predictions using stacked LSTM, Random Forest, XGBoost, GBoostM, Transformer, individual/ensembles, LightGBM, CatBoost, CNN-LSTM, MOSMA-SVM, and ANFIS: (a) Forecasted data points—irradiance. (b) Forecasted data points—temperature. 8911 observations forecasted.
Figure 13. Predictions using stacked LSTM, Random Forest, XGBoost, GBoostM, Transformer, individual/ensembles, LightGBM, CatBoost, CNN-LSTM, MOSMA-SVM, and ANFIS: (a) Forecasted data points—irradiance. (b) Forecasted data points—temperature. 25,011 observations forecasted.
Figure 14. Bootstrapped 95% confidence intervals of mean R² values: (a) Temperature mean R². (b) Irradiance mean R².
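The intervals in Figure 14 can be approximated with a percentile bootstrap over the five per-dataset R² scores. The authors' exact resampling settings are not stated, so the sketch below (10,000 resamples, using the ANFIS temperature scores from Table 5) is indicative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci_mean(scores, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean of a small sample."""
    scores = np.asarray(scores, dtype=float)
    # Resample with replacement and record the mean of each resample.
    means = np.array([
        rng.choice(scores, size=scores.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), lo, hi

# ANFIS temperature R^2 scores across the five datasets (Table 5).
mean, lo, hi = bootstrap_ci_mean([95.07, 98.46, 99.85, 99.98, 99.87])
print(f"mean={mean:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

With only five observations per model, the percentile bootstrap gives a rough but distribution-free interval, which is presumably why it was preferred over a normal-approximation interval here.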
Figure 15. R² score heatmap for temperature predictions as shown in Table 5.
Figure 16. RMSE heatmap for temperature predictions as shown in Table 5.
Figure 17. R² score heatmap for irradiance predictions as shown in Table 6.
Figure 18. RMSE heatmap for irradiance predictions as shown in Table 6.
Figure 19. Energy management: (a) PV-power generation for 7 consecutive days. (b) Nickel–cadmium battery State-of-Charge (SoC).
Figure 20. Battery SoC plot using Data-1 on the model: (a) Lead acid battery SoC. (b) Lithium-ion battery SoC. (c) Nickel–cadmium battery SoC. (d) Nickel–metal–hydride battery SoC.
Figure 21. Voltages and currents: (a) PV-array, bus, and battery voltages. (b) PV and battery current.
Table 1. Summary of related work: achievements and research gaps.

| S/N | Reference | Title | Achievement | Gap |
|---|---|---|---|---|
| 1 | [6,26] | PV Forecasting/Battery Management (Separate) | Forecasting and battery management performed separately | Lacks integration, leading to suboptimal SoC stability and energy use |
| 2 | [9,19] | Hybrid PV Prediction Models | Hybrid approach for PV forecasting | Struggles with fluctuating weather, reducing accuracy |
| 3 | [18,27] | CNN-LSTM for PV Forecasting | Improved accuracy in single-variable predictions | Cannot model multivariate dependencies, limiting efficiency |
| 4 | [24,25] | Single-Variable Prediction Models | Predicts one variable at a time | Lacks multivariate capability, reducing forecasting efficiency |
| 5 | [2,20] | ConvGNNs/MOSMA-SVM | High accuracy | Computationally intensive, limiting real-time use |
| 6 | [16] | Explainable AI (XAI) for Energy Systems | Uses external tools for interpretability | Requires additional tools for explanation |
| 7 | [7,28] | Forecasting-Focused Models | Effective energy prediction | Does not include battery control or load scheduling |
| 8 | [3,26] | PV Forecasting Under Extreme Conditions | Accurate in controlled settings | Struggles with complex weather without higher computation cost |
| 9 | ANFIS (Proposed) | Integrated PV Forecasting/Battery Management | Unifies PV forecasting and SoC management, ensuring stability and long battery autonomy. Uses MIMO for multivariate prediction, improving efficiency. Computationally lightweight, scalable, and explainable, making it ideal for real-world deployment. | Addresses key limitations, offering a holistic and interpretable solution. |
Table 2. Dataset summary: size, sampling, forecast scope, and observed challenges.

| S/N | Data Size | Sampling Interval | Forecast Data Size | Observed Challenges |
|---|---|---|---|---|
| 1 | 333 | 30 min | 62 | Outliers, poor data quality, environmental variability, and missing data |
| 2 | 8760 | 1 min | 1748 | Outliers due to data noise; high temporal and spatial variability |
| 3 | 19,954 | 1 min | 3987 | Poor data synchronization, time resolution issues, and missing data |
| 4 | 44,579 | 1 min | 8911 | Outliers, poor data quality, and missing data |
| 5 | 125,074 | 1 min | 25,011 | Long-term data gap, seasonality variations, and sensor maintenance issues |
Table 3. Prediction performance—accuracy/Root Mean Square Error (RMSE).

| Method | Irradiance Accuracy % | Temperature Accuracy % | Irradiance RMSE | Temperature RMSE | Time (s) |
|---|---|---|---|---|---|
| ANN | 83.01 | 97.27 | 33.94 | 0.25 | 159 |
| Curve-Fit | 73.15 | 93.77 | 53.64 | 0.58 | 224 |
| LSTM | 80.14 | 72.87 | 50.55 | 0.75 | 371 |
| ANFIS | 84.55 | 97.27 | 32.85 | 0.25 | 58 |
Table 4. Prediction performance—accuracy/Root Mean Square Error (RMSE).

| Method | Irradiance Accuracy % | Temperature Accuracy % | Irradiance RMSE | Temperature RMSE |
|---|---|---|---|---|
| ANN | 78.25 | 63.55 | 33.87 | 34.98 |
| Curve-Fitting | 72.3 | 64.3 | 27.24 | 29.56 |
| ANFIS | 98.17 | 95.10 | 3.72 | 0.64 |
Table 5. Temperature prediction performance: R² score (%) and Root Mean Square Error (RMSE).

R² Score (Accuracy %)

| S/N | Stacked-LSTM | RForest | XGBoost | GBoostM | Ensemble | LGBoost | CatBoost | CNN-LSTM | MOSMA-SVM | ANFIS | Data Length—Forecast |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 61.73 | 73.40 | 77.52 | 77.52 | 72.43 | 53.28 | 62.26 | 56.52 | 87.75 | 95.07 | 62 |
| 2 | 97.91 | 98.41 | 98.47 | 98.44 | 98.73 | 98.67 | 98.44 | 97.91 | 98.87 | 98.46 | 1748 |
| 3 | 99.70 | 99.83 | 99.81 | 99.81 | 99.84 | 99.83 | 99.66 | 99.78 | 99.86 | 99.85 | 3987 |
| 4 | 98.59 | 95.74 | 95.25 | 95.34 | 96.04 | 95.41 | 95.04 | 99.92 | 99.79 | 99.98 | 8911 |
| 5 | 99.83 | 99.89 | 99.86 | 99.88 | 99.88 | 99.88 | 99.66 | 99.86 | 99.90 | 99.87 | 25,011 |
| Avg | 91.55 | 93.45 | 94.18 | 94.20 | 93.39 | 89.41 | 91.01 | 90.80 | 97.24 | 98.65 | |

RMSE

| S/N | Stacked-LSTM | RForest | XGBoost | GBoostM | Ensemble | LGBoost | CatBoost | CNN-LSTM | MOSMA-SVM | ANFIS | Data Length—Forecast |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 3.26 | 2.72 | 2.50 | 2.50 | 2.77 | 3.60 | 3.24 | 3.48 | 1.84 | 0.64 | 62 |
| 2 | 1.08 | 0.94 | 0.92 | 0.93 | 0.84 | 0.86 | 0.93 | 1.08 | 0.79 | 0.97 | 1748 |
| 3 | 0.36 | 0.27 | 0.28 | 0.28 | 0.26 | 0.27 | 0.38 | 0.31 | 0.24 | 0.25 | 3987 |
| 4 | 1.03 | 1.79 | 1.89 | 1.88 | 1.73 | 1.86 | 1.93 | 0.24 | 0.40 | 0.12 | 8911 |
| 5 | 0.29 | 0.24 | 0.26 | 0.24 | 0.24 | 0.25 | 0.41 | 0.26 | 0.22 | 0.25 | 25,011 |
| Avg | 1.20 | 1.19 | 1.17 | 1.17 | 1.17 | 1.37 | 1.38 | 1.07 | 0.70 | 0.45 | |
Table 6. Irradiance prediction performance: R² score (%) and Root Mean Square Error (RMSE).

R² Score (%)

| S/N | Stacked-LSTM | RForest | XGBoost | GBoostM | Ensemble | LGBoost | CatBoost | CNN-LSTM | MOSMA-SVM | ANFIS | Data Length—Forecast |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 37.79 | 97.57 | 98.79 | 98.48 | 98.91 | 98.89 | 91.81 | 77.90 | 97.70 | 99.98 | 62 |
| 2 | 93.90 | 94.72 | 95.25 | 94.14 | 95.42 | 94.84 | 95.24 | 94.32 | 95.89 | 95.00 | 1748 |
| 3 | 98.18 | 93.78 | 96.74 | 94.71 | 96.25 | 97.19 | 96.85 | 98.52 | 98.79 | 98.63 | 3987 |
| 4 | 98.52 | 96.93 | 98.57 | 96.44 | 98.18 | 98.55 | 98.34 | 98.44 | 98.47 | 98.72 | 8911 |
| 5 | 98.48 | 98.53 | 98.56 | 98.55 | 98.54 | 98.56 | 98.48 | 98.40 | 98.50 | 98.57 | 25,011 |
| Avg | 85.37 | 96.31 | 97.58 | 96.47 | 97.46 | 97.61 | 96.14 | 93.51 | 97.87 | 98.18 | |

RMSE

| S/N | Stacked-LSTM | RForest | XGBoost | GBoostM | Ensemble | LGBoost | CatBoost | CNN-LSTM | MOSMA-SVM | ANFIS | Data Length—Forecast |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 254.18 | 50.23 | 35.38 | 39.72 | 33.61 | 33.98 | 92.21 | 151.49 | 48.85 | 3.96 | 62 |
| 2 | 54.49 | 50.69 | 48.07 | 53.41 | 47.19 | 50.13 | 48.14 | 52.60 | 44.72 | 52.08 | 1748 |
| 3 | 39.83 | 73.74 | 53.34 | 67.96 | 57.26 | 49.52 | 52.47 | 35.98 | 32.58 | 34.85 | 3987 |
| 4 | 42.75 | 61.45 | 41.97 | 66.25 | 47.37 | 42.25 | 45.27 | 43.86 | 43.44 | 39.77 | 8911 |
| 5 | 30.12 | 29.62 | 29.34 | 29.35 | 29.46 | 29.27 | 30.14 | 30.88 | 29.90 | 29.25 | 25,011 |
| Avg | 84.27 | 53.14 | 41.62 | 51.34 | 42.98 | 41.03 | 53.65 | 62.96 | 39.90 | 31.98 | |
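Tables 5 and 6 report R² (as a percentage) and RMSE. Both follow directly from their textbook definitions; a minimal self-contained sketch (equivalent in effect to scikit-learn's `r2_score` and root-mean-squared-error utilities):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error between observed and predicted series."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy example: a prediction offset by a constant 1.0 from the truth.
y = [1.0, 2.0, 3.0, 4.0]
p = [2.0, 3.0, 4.0, 5.0]
print(rmse(y, p), r2_score(y, p))  # 1.0 0.2
```

Note the difference in scale between the two metrics: RMSE carries the units of the target (°C or W/m²), which is why the irradiance RMSE values in Table 6 are two orders of magnitude larger than the temperature values in Table 5 even at comparable R².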
Table 7. Model computational time—temperature and irradiance.

| S/N | Stacked-LSTM | RForest | XGBoost | GBoostM | Ensemble | LGBoost | CatBoost | CNN-LSTM | MOSMA-SVM | ANFIS | Data Length—Forecast |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 125.68 | 1.01 | 15.24 | 3.19 | 9.71 | 14.81 | 13.76 | 35.42 | 0.04 | 0.25 | 62 |
| 2 | 431.99 | 6.34 | 11.91 | 17.56 | 35.20 | 34.09 | 34.62 | 151.26 | 20.31 | 4.03 | 1748 |
| 3 | 901.81 | 30.91 | 18.56 | 52.44 | 104.55 | 104.19 | 106.26 | 306.06 | 58.61 | 9.06 | 3987 |
| 4 | 2012.62 | 64.30 | 24.55 | 114.67 | 208.89 | 211.53 | 204.89 | 624.81 | 146.07 | 20.03 | 8911 |
| 5 | 4747.04 | 243.32 | 58.88 | 310.71 | 569.46 | 552.50 | 556.30 | 1784.19 | 5917.97 | 56.58 | 25,011 |
| Avg | 1643.83 | 69.17 | 25.83 | 99.71 | 185.56 | 183.42 | 183.17 | 580.35 | 1228.60 | 17.99 | |
Table 8. SoC summary table in descending order.

| S/N | Battery Type | SoC (%) |
|---|---|---|
| 1 | Nickel–cadmium | 79.24 |
| 2 | Nickel–metal–hydride | 79.18 |
| 3 | Lead acid | 79.17 |
| 4 | Lithium ion | 79.12 |
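Combining Table 8 with the 80% initial SoC quoted in the abstract, the seven-day SoC drop per chemistry is a one-line computation. The 80% starting point is taken from the abstract, not from the table itself:

```python
# Seven-day SoC drop per battery type; the 80% initial SoC is an
# assumption carried over from the abstract (Table 8 lists only finals).
initial_soc = 80.0
final_soc = {
    "Nickel-cadmium": 79.24,
    "Nickel-metal-hydride": 79.18,
    "Lead acid": 79.17,
    "Lithium ion": 79.12,
}
drops = {name: round(initial_soc - soc, 2) for name, soc in final_soc.items()}
print(drops)  # drops range from 0.76 to 0.88 percentage points
```

The largest drop, 0.88 percentage points for lithium ion, matches the "80% to 79.12%" reduction quoted in the abstract; nickel–cadmium retains the most charge over the cycle.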

Share and Cite

MDPI and ACS Style

Alao, A.B.; Adeyanju, O.M.; Chamana, M.; Bayne, S.; Bilbao, A. Photovoltaic Farm Power Generation Forecast Using Photovoltaic Battery Model with Machine Learning Capabilities. Solar 2025, 5, 26. https://doi.org/10.3390/solar5020026
