1. Introduction
ELM (Extreme Learning Machine) [1,2] is an efficient learning algorithm for single hidden-layer feed-forward neural networks (SLFNs). Its main contribution is that its internal parameters are assigned randomly without the need for tuning and updating, thus alleviating the burden of parameter tuning associated with widely used traditional optimization algorithms [3] and saving training time.
In recent years, a substantial amount of work on ELMs has been carried out to meet specific requirements [3,4,5,6]. Some research focuses on optimizing and improving the ELM itself. The incremental ELM (I-ELM) [7,8,9,10,11] controls the number of hidden-layer neurons by adding them one by one, while the regularized ELM [12,13,14] can prune the model structure and maintain stability by controlling the trade-off between the penalty term and the training error. The kernel ELM [15,16,17] introduces kernel functions to address the situation in which the feature mappings of the hidden-layer neurons are unknown to users [18]. Another important development in ELM research is the orthogonality-based ELM. This method imposes orthogonal constraints on the transformation matrix and generates orthogonal basis functions to preserve local structure information and avoid trivial solutions [19,20,21]. To further improve performance, an orthogonal constraint was introduced into the I-ELM: by incorporating Gram–Schmidt orthogonalization, the resulting OI-ELM efficiently reduces redundant hidden-layer neurons, making the network more compact, and achieves a much faster convergence rate and better generalization [22,23].
Some studies have effectively combined multiple independent models; statistical analysis shows that the variance of the average of multiple independent models is smaller than that of any individual model, which explains why combined models outperform single ones [24,25]. Van Heeswijk et al. proposed a parallel combination model [26] that trains each ELM independently and determines the output weights based on leave-one-out cross-validation. Liao and Feng [27] proposed a new model built from ELMs, called the Meta-ELM, which is a “top” ELM whose hidden nodes are base ELMs; theoretical analysis and experiments showed that it efficiently decreases computational cost and improves performance. Zou et al. [28] introduced an error-feedback I-ELM with meta-learning as the base-learner and proposed an improved Meta-ELM, which consists of two main stages; it obtains compact neural networks and efficiently avoids the overfitting problem. Xu et al. [29] used the basic MAML framework and integrated the characteristics of ELMs and MAML into an MLELM method; during learning and training, the MLELM learns information from multiple tasks and determines the initial parameters of the ELM, enhancing its capability to handle limited samples. Song et al. [30] considered the temporal characteristics of time series data and proposed the weighted Meta-ELM (WMeta-ELM), in which different weights are assigned to each base ELM to guide the model to learn different information; the results showed that it is efficient and workable.
Motivated by these works, and in order to further explore the spatial characteristics of data and alleviate the overfitting problem, this study proposes a novel meta-learning model that incorporates a QEC-ELM as the base-learner, called the Meta-QEC-ELM. Different from the OELM, the output weight of each learner in the model is a column vector; thus, the traditional optimization problem with orthogonal constraints is transformed into one with a quadratic equality constraint, i.e., a one-column Procrustes problem. By projecting the vector onto a one-dimensional subspace, the method avoids the drawback that quadratic convergence holds only locally, i.e., only when the initial multiplier is close to the optimum [31]. The remainder of this paper is organized as follows. Section 2 briefly reviews the basic ELM and the Meta-ELM. Section 3 describes the Meta-QEC-ELM in detail. Section 4 analyzes the convergence and complexity of the proposed model. In Section 5, experiments are conducted to evaluate the performance of the Meta-QEC-ELM.
The main contributions of this study are as follows: (1) reducing computational complexity by transforming the optimization problem with orthogonal constraints into a quadratic equality constraint problem; and (2) preserving more data information by introducing orthogonal constraints.
2. Related Theory
2.1. ELM
Given $N$ distinct samples $(\mathbf{x}_i, \mathbf{t}_i)$, $i = 1, \ldots, N$, where $\mathbf{x}_i \in \mathbb{R}^{n}$ is the input and $\mathbf{t}_i \in \mathbb{R}^{m}$ is the corresponding expected output, an SLFN with $L$ hidden-layer neurons and activation function $g(\cdot)$ is modeled by the ELM as
$$\sum_{i=1}^{L} \boldsymbol{\beta}_i\, g(\mathbf{w}_i \cdot \mathbf{x}_j + b_i) = \mathbf{t}_j, \quad j = 1, \ldots, N, \tag{1}$$
where $\mathbf{w}_i$ is the input weight vector, which connects the input layer and the $i$th hidden-layer neuron; $b_i$ is the bias of the $i$th hidden-layer neuron; and $\boldsymbol{\beta}_i$ is the output weight, which connects the $i$th hidden-layer neuron and the output layer.
Equation (1) can be rewritten as follows:
$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T},$$
in which $\mathbf{H}$ is the output matrix of the hidden-layer neurons, $\boldsymbol{\beta} = [\boldsymbol{\beta}_1, \ldots, \boldsymbol{\beta}_L]^{T}$, and $\mathbf{T} = [\mathbf{t}_1, \ldots, \mathbf{t}_N]^{T}$.
For the ELM [1,2], if $L = N$, the network can approximate the training samples with zero error; however, this can lead to overfitting. In most applications, $L$ is much smaller than $N$ so as to avoid overfitting and improve performance. According to the basic theory of the ELM, the optimal solution is
$$\boldsymbol{\beta} = \mathbf{H}^{\dagger}\mathbf{T},$$
in which $\mathbf{H}^{\dagger}$ is the Moore–Penrose generalized inverse of $\mathbf{H}$.
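To make this training procedure concrete, the following is a minimal NumPy sketch of the basic ELM described in this subsection (the paper's experiments were run in MATLAB; Python is used here purely for illustration). The function names, the sigmoid activation, and the random parameter ranges are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def elm_train(X, T, L, rng=np.random.default_rng(0)):
    """Basic ELM sketch: random hidden layer, output weights via the Moore-Penrose inverse.
    Assumes a single-output target T of shape (N,)."""
    n = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n, L))   # input weights w_i, assigned randomly
    b = rng.uniform(-1.0, 1.0, size=L)        # biases b_i, assigned randomly
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # hidden-layer output matrix (sigmoid activation)
    beta = np.linalg.pinv(H) @ T              # output weight: beta = pinv(H) T
    return W, b, beta

def elm_predict(W, b, beta, X):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

These two helper functions are reused in the later sketches for the Meta-ELM and the experimental setup.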
2.2. Meta-ELM
The Meta-ELM is a hybrid model with a hierarchical structure. By configuring it with multiple base ELMs and a meta-learner, it can learn the information generated by the base ELMs [24,27,32]. Each base ELM generates one approximator, and the meta-learner generates the meta-approximator. The goal is to achieve good overall accuracy [32], not to pick the “best” base ELM.
The optimization problem of the Meta-ELM is to minimize a cost function of the following form:
$$E = \sum_{i=1}^{N}\left\| \sum_{j=1}^{M} \beta_j f_j(\mathbf{x}_i) - \mathbf{t}_i \right\|^{2},$$
in which $N$ is the size of the training dataset, $M$ is the number of base ELMs, and $f_j(\mathbf{x}_i)$ and $\beta_j$ are the output and the output weight of the $j$th base ELM, with $\mathbf{x}_i$ as input.
Minimizing this cost function amounts to solving the least-squares problem $\mathbf{H}\boldsymbol{\beta} = \mathbf{T}$, whose solution is $\boldsymbol{\beta} = \mathbf{H}^{\dagger}\mathbf{T}$, where $\mathbf{H}$ is the output matrix of the multiple base ELMs:
$$\mathbf{H} = \begin{bmatrix} f_1(\mathbf{x}_1) & \cdots & f_M(\mathbf{x}_1) \\ \vdots & \ddots & \vdots \\ f_1(\mathbf{x}_N) & \cdots & f_M(\mathbf{x}_N) \end{bmatrix}$$
and $\boldsymbol{\beta} = [\beta_1, \ldots, \beta_M]^{T}$; thus, the complete output of the Meta-ELM is
$$f(\mathbf{x}) = \sum_{j=1}^{M} \beta_j f_j(\mathbf{x}).$$
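As an illustration of this two-level structure, below is a minimal sketch of a Meta-ELM in the spirit of [27], reusing `elm_train` and `elm_predict` from the ELM sketch above: the base ELMs are trained on subsets of the training data, and the “top” ELM learns output weights over their predictions via the pseudo-inverse. The subset partitioning and all names are assumptions for illustration, not the authors' code.

```python
import numpy as np
# reuses elm_train and elm_predict from the ELM sketch above;
# assumes a single-output regression target T of shape (N,)

def train_meta_elm(X, T, M, L, rng=np.random.default_rng(0)):
    """Meta-ELM sketch: M base ELMs on data subsets, then a 'top' ELM whose hidden
    nodes are the base ELMs, with meta output weights from the pseudo-inverse."""
    parts = np.array_split(np.arange(X.shape[0]), M)            # one subset per base ELM
    bases = [elm_train(X[idx], T[idx], L, rng) for idx in parts]
    H_meta = np.column_stack([elm_predict(W, b, beta, X) for (W, b, beta) in bases])
    beta_meta = np.linalg.pinv(H_meta) @ T                       # least-squares meta weights
    return bases, beta_meta

def meta_elm_predict(bases, beta_meta, X):
    H_meta = np.column_stack([elm_predict(W, b, beta, X) for (W, b, beta) in bases])
    return H_meta @ beta_meta
```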
3. Meta-QEC-ELM
Drawing on the ideas of the OELM and the Meta-ELM, we propose a novel model in this section.
3.1. QEC-ELM
For the Meta-ELM, the output weight $\boldsymbol{\beta}$ is a column vector; thus, the optimization problem (5) with orthogonal constraints is not well posed. To obtain the same effect as the OELM (Orthogonal ELM), the constraint becomes $\boldsymbol{\beta}^{T}\boldsymbol{\beta} = 1$, i.e., $\boldsymbol{\beta}$ is a unit vector. Thus, a simple ELM with a quadratic equality constraint (abbreviated as QEC-ELM) is proposed:
$$\min_{\boldsymbol{\beta}} \|\mathbf{H}\boldsymbol{\beta} - \mathbf{T}\|^{2} \quad \text{s.t.} \quad \boldsymbol{\beta}^{T}\boldsymbol{\beta} = 1. \tag{7}$$
The optimization problem (7) is a one-column Procrustes problem, which is equivalent to solving the normal equations with the constraint:
$$(\mathbf{H}^{T}\mathbf{H} - \lambda \mathbf{I})\boldsymbol{\beta} = \mathbf{H}^{T}\mathbf{T}, \quad \boldsymbol{\beta}^{T}\boldsymbol{\beta} = 1, \tag{8}$$
with vector $\boldsymbol{\beta}$ and Lagrange multiplier $\lambda$. Thus, solving (7) or (8) amounts to solving the following secular equation in $\lambda$:
$$\left\|(\mathbf{H}^{T}\mathbf{H} - \lambda \mathbf{I})^{-1}\mathbf{H}^{T}\mathbf{T}\right\|_{2} = 1.$$
It is proven that when $\lambda < \sigma_{\min}^{2}$ [31,33], the secular equation holds and the corresponding $\boldsymbol{\beta}$ is the optimal solution, where $\sigma_{\min}$ is the minimum singular value of $\mathbf{H}$. When $\lambda = \sigma_{\min}^{2}$, the optimal solution of (7) is unique, but $\lambda$ is difficult to determine. If $\lambda > \sigma_{\min}^{2}$, there are multiple solutions. Moreover, quadratic convergence is obtained only when the initial value is close to the optimal one, i.e., the convergence is only local. Thus, the projection method [31] is used to solve the optimization problem with a quadratic equality constraint, in which the relevant vector is projected onto a one-dimensional subspace. The projection method can be expressed as an alternating iteration: given the current multiplier $\lambda$, the output weight is obtained from the constrained normal Equation (8), and $\lambda$ is then updated according to Equation (11), with an acceleration factor introduced to speed up the iteration. With a suitable acceleration factor, the projection method needs fewer iterations and converges faster.
As shown in [31], an optimal multiplier exists for which the constrained normal equation holds, and the iterates of the projection method satisfy an inequality ensuring that, with a suitable acceleration factor, each new estimate of the multiplier is closer to the optimal one; the detailed derivation follows [31] and is omitted here.
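To make the QEC-ELM solution step concrete, the following is a minimal NumPy sketch of one standard way to solve problem (7): it works with the SVD of $\mathbf{H}$ and finds the multiplier $\lambda < \sigma_{\min}^{2}$ by bisection on the secular equation, rather than using the projection method of [31]. The function name, tolerances, and the treatment of degenerate cases are illustrative assumptions; the same routine is reused in the Meta-QEC-ELM sketch after Algorithm 1.

```python
import numpy as np

def qec_elm_output_weight(H, t, tol=1e-10, max_iter=200):
    """Solve min ||H b - t||^2 subject to ||b|| = 1 (the one-column Procrustes problem).

    Secular-equation approach: with H = U diag(s) V^T and c = U^T t, the stationary
    condition (H^T H - lam I) b = H^T t gives b(lam) = V diag(s / (s^2 - lam)) c, and
    ||b(lam)|| increases with lam on (-inf, s_min^2), so lam is found by bisection.
    The degenerate 'hard case' is not handled; this is only an illustrative sketch.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    c = U.T @ np.ravel(t)

    def beta_norm(lam):
        return np.linalg.norm(s * c / (s**2 - lam))

    lam_hi = s.min() ** 2                      # lam must stay strictly below sigma_min^2
    lam_lo = lam_hi - max(1.0, lam_hi)
    while beta_norm(lam_lo) > 1.0:             # push the lower bound down until ||b|| < 1
        lam_lo -= 2.0 * (lam_hi - lam_lo)

    lam = 0.5 * (lam_lo + lam_hi)
    for _ in range(max_iter):                  # bisection on ||b(lam)|| = 1
        lam = 0.5 * (lam_lo + lam_hi)
        if beta_norm(lam) > 1.0:
            lam_hi = lam
        else:
            lam_lo = lam
        if lam_hi - lam_lo < tol:
            break

    beta = Vt.T @ (s * c / (s**2 - lam))
    return beta / np.linalg.norm(beta), lam    # renormalize to enforce the constraint exactly
```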
3.2. Meta-QEC-ELM Model
Different from the Meta-ELM, in the proposed model the QEC-ELM replaces the base ELM as the hidden node, and the other component, the meta-learner, is likewise built from the base QEC-ELM configuration. As depicted in Figure 1, the meta-learner learns knowledge from the outputs of the base QEC-ELMs.
To balance structural and empirical risk according to statistical learning theory, the norm of the output weight is introduced into the cost function. The Meta-QEC-ELM then optimizes a cost function that combines the output-weight norm with the empirical error of the meta-approximator under the unit-norm (orthogonal) constraint, where $N$ is the size of the training dataset, $M$ is the number of base QEC-ELMs in the Meta-QEC-ELM, and $f_j(\mathbf{x}_i)$ and $\beta_j$ are the output and the weight of the $j$th base QEC-ELM, which takes $\mathbf{x}_i$ as the input, as depicted in Figure 2.
To minimize this cost function, it is rewritten in its general matrix form (Equation (12)), with the base QEC-ELM outputs collected in a hidden-layer output matrix and the targets in a vector. Because $\boldsymbol{\beta}^{T}\boldsymbol{\beta} = 1$, i.e., $\boldsymbol{\beta}$ is a unit vector, the norm term is constant, and Equation (12) reduces to the least-squares error term alone. Finally, minimizing the cost function is equivalent to solving a problem of the same form as (7), with the matrix of base QEC-ELM outputs taking the place of $\mathbf{H}$.
The meta-learner and the meta-approximator are basically the same as the base QEC-ELM and its approximator; only the number of hidden-layer neurons differs. Since the solution procedure is the same as for the QEC-ELM, it is not detailed here.
Based on this analysis, the proposed method is a two-stage learning algorithm, as shown in Algorithm 1 (a brief code sketch of the two stages follows the listing). The first stage trains the base QEC-ELMs on subsets of the same training dataset; this is a parallel process that saves time during training and prediction. The second stage trains the “top” QEC-ELM, whose hidden nodes are the base QEC-ELMs.
Algorithm 1. Meta-QEC-ELM
Input: training samples; the threshold and the remaining iteration parameters.
Output: the output weight of the second stage.
First Stage:
S1: Partition the training dataset into subsets, one for each base QEC-ELM.
S2: For each base QEC-ELM:
S3: Train the base QEC-ELM on its subset:
  SS1: Randomly assign its input weights and biases and compute its hidden-layer output matrix.
  SS2: Calculate the output weight from the hidden-layer output matrix of the base QEC-ELM.
  SS3: If the stopping criterion (based on the threshold) is not satisfied, calculate the multiplier based on Equation (11) and go to step SS2; else go to step S3 for the next base QEC-ELM.
End For
Second Stage:
S4: Calculate the output matrix formed by the outputs of all base QEC-ELMs on the training data.
S5: Initialize the iteration of the “top” QEC-ELM.
S6: Calculate the output weight of the “top” QEC-ELM (whose hidden nodes are the base QEC-ELMs).
S7: If the stopping criterion is not satisfied, calculate the multiplier based on Equation (11) and go to step S6.
End If
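The following is a minimal NumPy sketch of the two stages of Algorithm 1 under simplifying assumptions: each base QEC-ELM is trained on its own subset with the unit-norm solver `qec_elm_output_weight` sketched in Section 3.1 (a secular-equation solver standing in for the projection iteration of [31]), and the “top” QEC-ELM then learns a unit-norm combination of the base outputs. Names, the partitioning scheme, and hyperparameters are illustrative, not the authors' exact implementation.

```python
import numpy as np
# reuses qec_elm_output_weight (Section 3.1 sketch); assumes a single-output target t of shape (N,)

def train_base_qec_elm(X, t, L, rng):
    """Stage 1: one base QEC-ELM with a random sigmoid hidden layer and a unit-norm output weight."""
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], L))
    b = rng.uniform(-1.0, 1.0, size=L)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta, _ = qec_elm_output_weight(H, t)          # unit-norm solution of the QEC problem
    return W, b, beta

def base_qec_output(model, X):
    W, b, beta = model
    return (1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta

def train_meta_qec_elm(X, t, M=5, L=20, seed=0):
    """Two-stage Meta-QEC-ELM: parallelizable base training, then a unit-norm 'top' combination."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(np.arange(X.shape[0]), M)                          # step S1
    bases = [train_base_qec_elm(X[idx], t[idx], L, rng) for idx in parts]     # steps S2-S3
    H_top = np.column_stack([base_qec_output(m, X) for m in bases])           # step S4
    beta_top, _ = qec_elm_output_weight(H_top, t)                             # steps S5-S7
    return bases, beta_top

def meta_qec_predict(bases, beta_top, X):
    H_top = np.column_stack([base_qec_output(m, X) for m in bases])
    return H_top @ beta_top
```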
4. Performance Analysis
Considering the convergence of the proposed model, let $\{\boldsymbol{\beta}_k\}$ and $\{\lambda_k\}$ be the sequences of the output weight and the multiplier generated during the iteration, where $k$ is the iteration index. Because of the perturbations, let $\boldsymbol{\beta}^{*}$ and $\lambda^{*}$ denote the optimal solutions closest to $\boldsymbol{\beta}_k$ and $\lambda_k$, and set $\Delta\boldsymbol{\beta}_k = \boldsymbol{\beta}_k - \boldsymbol{\beta}^{*}$ and $\Delta\lambda_k = \lambda_k - \lambda^{*}$.
Substituting $\boldsymbol{\beta}_k$ and $\lambda_k$ into Equation (8) yields a relation between the perturbation terms (Equation (16)). Multiplying both sides of Equation (16) by the appropriate vector gives, for all $k$, a bound in which the new perturbation is of the order of the square of the previous one. Therefore, the perturbation sequence satisfies a quadratic recursion, meaning that the iteration converges quadratically.
As is known, the role of ELM training is to obtain the output weight $\boldsymbol{\beta}$, that is, to compute the pseudo-inverse of $\mathbf{H}$, which requires inverting an $L \times L$ matrix; its complexity is therefore $O(L^{3})$. In most cases $L \ll N$, so its computational complexity is significantly lower than that of an LS-SVM or PSVM, which must invert an $N \times N$ matrix. Regarding the complexity of the Meta-QEC-ELM, the main cost of obtaining the optimal solution is that each iteration must compute the intermediate variables $\boldsymbol{\beta}$ and $\lambda$, which again requires a matrix inversion. Thus, the complexity of the “top” QEC-ELM with $M$ base learners is $O(k_1 M^{3})$. In addition, the proposed model trains the base QEC-ELMs in parallel, each of which must also obtain its output weight through a matrix inversion, so the complexity of each base QEC-ELM is $O(k_2 L^{3})$. Consequently, the complexity of the Meta-QEC-ELM is $O(k_1 M^{3} + k_2 L^{3})$, where $k_1$ and $k_2$ are the numbers of updates of the “top” and base output weights, respectively. In real applications, $M$ is much smaller than $L$, and $L$ is much smaller than $N$.
Since $\boldsymbol{\beta}^{T}\boldsymbol{\beta} = 1$, we have $\|\boldsymbol{\beta}\| = 1$. The spatial (Euclidean) distance between any two points $\mathbf{x}_i$ and $\mathbf{x}_j$ in the hidden-layer feature space is $\|\mathbf{h}(\mathbf{x}_i) - \mathbf{h}(\mathbf{x}_j)\|$, where $\mathbf{h}(\mathbf{x})$ denotes the hidden-layer output for input $\mathbf{x}$. Due to the quadratic equality constraint, $|\mathbf{h}(\mathbf{x}_i)\boldsymbol{\beta} - \mathbf{h}(\mathbf{x}_j)\boldsymbol{\beta}| \le \|\mathbf{h}(\mathbf{x}_i) - \mathbf{h}(\mathbf{x}_j)\|\,\|\boldsymbol{\beta}\| = \|\mathbf{h}(\mathbf{x}_i) - \mathbf{h}(\mathbf{x}_j)\|$. As $\mathbf{h}(\mathbf{x})$ is a point in the feature space of the ELM, $\|\mathbf{h}(\mathbf{x}_i) - \mathbf{h}(\mathbf{x}_j)\|$ is the distance in the feature space, so distances are not expanded when the data are mapped to the output subspace. This means that the proposed Meta-QEC-ELM is superior in preserving the data metric structure from the feature space to the reduced subspace.
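As a quick numerical check of this non-expansion property (the vectors below are arbitrary illustrative values, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
h_i, h_j = rng.normal(size=50), rng.normal(size=50)   # two hidden-layer feature vectors
beta = rng.normal(size=50)
beta /= np.linalg.norm(beta)                          # unit-norm output weight, as enforced by the QEC

lhs = abs(h_i @ beta - h_j @ beta)                    # distance after projection onto beta
rhs = np.linalg.norm(h_i - h_j)                       # distance in the feature space
print(lhs <= rhs)                                     # True: the projection never expands distances
```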
5. Experimental Verification
To verify the performance of the Meta-QEC-ELM proposed in this paper, tests were carried out to compare it with other algorithms on several benchmark datasets under the Matlab 2023a environment. These algorithms included the ELM, NOELM [19], Ensemble model [34], Meta-ELM [27], Meta-LSTM [35], Improved Meta-ELM [28], and Meta-QEC-ELM. The activation function of all algorithms is the sigmoid function, and the number of hidden-layer neurons was set to three times the input dimension. The other key parameters (biases, input weights, etc.) were generated randomly from [−1, 1].
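In code, this shared configuration amounts to the following short sketch, reusing `elm_train` from the Section 2.1 sketch; the placeholder data and names are assumptions for illustration only.

```python
import numpy as np
# shared experimental setup: sigmoid activation, 3x-input-dimension hidden layer, parameters in [-1, 1]
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))          # placeholder data; any preprocessed dataset fits here
y_train = X_train.sum(axis=1)                # placeholder target
L = 3 * X_train.shape[1]                     # number of hidden-layer neurons = 3 x input dimension
W, b, beta = elm_train(X_train, y_train, L)  # elm_train from the Section 2.1 sketch
```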
5.1. Function Approximation with Noise Data
In the experiments, the ELM, QEC-ELM, and Meta-QEC-ELM approximate the “SinC” function
$$y(x) = \begin{cases} \sin(x)/x, & x \neq 0,\\ 1, & x = 0. \end{cases}$$
First, 5000 data points are generated for the training dataset and for the testing dataset, respectively, in which the inputs are drawn randomly from a uniform distribution. Then, to make the experiments more realistic, uniformly distributed noise is generated and added to the training dataset. The training time of the Meta-QEC-ELM consists of two parts: one is the maximum time to train the base QEC-ELMs, and the other is the time to train the “top” ELM. For greater accuracy, 30 trials were conducted.
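A minimal sketch of this data generation is given below; the input interval and noise amplitude are assumed placeholder values, since the paper does not restate them.

```python
import numpy as np

rng = np.random.default_rng(42)

def sinc(x):
    """SinC target: sin(x)/x for x != 0 and 1 at x = 0."""
    return np.where(x == 0, 1.0, np.sin(x) / np.where(x == 0, 1.0, x))

N = 5000
x_train = rng.uniform(-10.0, 10.0, size=N)   # assumed input interval
x_test  = rng.uniform(-10.0, 10.0, size=N)

noise = rng.uniform(-0.2, 0.2, size=N)       # assumed noise amplitude
y_train = sinc(x_train) + noise              # noisy training targets
y_test  = sinc(x_test)                       # clean testing targets
```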
Table 1 presents the performance comparison for the approximation function “SinC”. The RMSE of the ELM is 0.00425, and its training time is 0.3125 s. The QEC-ELM algorithm obtains a performance close to that of the ELM. The Meta-QEC-ELM is a more complex algorithm than the ELM and obtains better testing accuracy, but does not take much more time to train.
Figure 3, Figure 4 and Figure 5 show the approximation results of the ELM, QEC-ELM, and Meta-QEC-ELM, respectively. The green curve is the expected result and the red curve is the actual result, and the two nearly coincide. All of the algorithms approximate the “SinC” function accurately, but the performance of the Meta-QEC-ELM is the best.
5.2. Benchmark Regression
To verify the performance of the Meta-QEC-ELM on regression problems in comparison with the NOELM, Ensemble model, Meta-ELM, Meta-LSTM, and Improved Meta-ELM, seven benchmark datasets were gathered from the UCI repository [36], as shown in Table 2. Because of potential problems such as noise and missing data, these datasets had to be preprocessed, including identifying abnormal data, deleting extreme values, and filling in missing data. For each dataset, 30 trials were conducted for all of the algorithms.
For the Meta-ELM and Meta-QEC-ELM, both the base ELMs and the “top” ELM need to be trained, so their computational complexity is relatively higher; theoretically, the training time should be noticeably longer, especially on smaller datasets. The shortest training time, 0.0228 s, is achieved by the Meta-QEC-ELM on the dataset “Auto price”: because of the small dataset and the relatively small number of nodes, its overall complexity is lower than that of the other algorithms. As the size of the dataset increases, the network structure becomes more complex, the number of hidden nodes grows, and the number of base ELMs increases significantly; even so, the training time remains shorter than that of the NOELM, Ensemble model, and Meta-LSTM. Its longest training time is about 0.62 s, which is still more than 5% shorter, as shown in Table 3. Because the quadratic equality constraint is introduced, the complexity of the Meta-QEC-ELM is slightly higher than that of the Meta-ELM, but its training time does not increase significantly; for some datasets it is even shorter than that of the Meta-ELM. The maximum difference is only 0.1887 s, on the dataset “California housing”, because of its more complex network structure, with 11 base ELMs and up to 139 nodes in each base ELM. As observed from Table 4, the RMSE of the Meta-QEC-ELM is similar to or lower than that of the Meta-ELM on all datasets except “Auto-MPG”, where its RMSE is larger, but by less than 3%. The maximum improvement in testing RMSE is about 0.006, on the dataset “Auto price”, a gain of more than 6%, showing that the model preserves more data information. The Meta-LSTM needs to optimize the LSTM, so its training time is relatively long, with its longest training time about 25% higher. Moreover, the Ensemble model involves more factors and needs more iterative training, so its training time is longer, with the average training time increased by about 30%, as shown in Table 3.
In summary, compared with the NOELM, Ensemble Model, Meta-LSTM, and Meta-ELM, our proposed Meta-QEC-ELM model achieves better results, although the network structure is more complex.
6. Conclusions
In this paper, building on the basic idea of the NOELM, we introduced orthogonal constraints into the Meta-ELM and proposed the novel Meta-QEC-ELM. Different from the original and improved Meta-ELM, orthogonal constraints are imposed on the base ELMs and the “top” ELM of the Meta-QEC-ELM, respectively. Because of the particular structure of the Meta-ELM, the Meta-QEC-ELM transforms the orthogonal Procrustes problem into a quadratic equality constraint problem over a single vector, that is, the one-column Procrustes problem, and thereby preserves more data information. Compared with the ELM, NOELM, Meta-LSTM, and Meta-ELM, the Meta-QEC-ELM achieves much better results; it is slightly weaker than the Meta-ELM on a few points, but the gap is small and acceptable. Because of these favorable characteristics, the Meta-QEC-ELM could also be applied to regression problems such as traffic flow prediction. In future work, the number of nodes in the Meta-QEC-ELM should be optimized further to improve its ability to solve regression problems and enhance its scalability across diverse problem domains.
Author Contributions
Conceptualization, L.C. and H.Z.; methodology, L.C.; validation, H.Z.; writing—original draft preparation, L.C.; writing—review and editing, L.C. and H.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded in part by the Liaoning Provincial Natural Science Foundation of China, grant numbers 2022-MS-420, 2023JH2/101300134, and 2024-MSLH-203; in part by the Key Laboratory Project of Intelligent Application of Liaoning Province Public Security Big Data, grant number 2022JH13/10200047; and in part by the technical research program project of the Ministry of Public Security, grant number 2023JSZ11.
Data Availability Statement
Conflicts of Interest
The authors declare no competing interests.
References
- Ding, S.; Zhao, H.; Zhang, Y.; Xu, X.; Nie, R. Extreme Learning Machine: Algorithm, Theory and Applications. Artif. Intell. Rev. 2015, 44, 103–115. [Google Scholar] [CrossRef]
- Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 513–529. [Google Scholar] [CrossRef] [PubMed]
- Wang, H.; Zhang, Y.; Mu, L. Short-Term Electrical Load Forecasting Using an Enhanced Extreme Learning Machine Based on the Improved Dwarf Mongoose Optimization Algorithm. Symmetry 2024, 16, 628. [Google Scholar] [CrossRef]
- Dokur, E.; Erdogan, N.; Yuzgec, U. Swarm Intelligence-Based Multi-Layer Kernel Meta Extreme Learning Machine for Tidal Current to Power Prediction. Renew. Energy 2025, 243, 122516. [Google Scholar] [CrossRef]
- Filelis-Papadopoulos, C.K.; Morrison, J.P.; O’Reilly, P. Adaptive Multilayer Extreme Learning Machines. Math. Comput. Simul. 2025, 231, 71–98. [Google Scholar] [CrossRef]
- Perfilieva, I.; Madrid, N.; Ojeda-Aciego, M.; Artiemjew, P.; Niemczynowicz, A. A Critical Analysis of the Theoretical Framework of the Extreme Learning Machine. Neurocomputing 2025, 621, 129298. [Google Scholar] [CrossRef]
- Wong, H.-T.; Leung, H.-C.; Leung, C.-S.; Wong, E. Noise/Fault Aware Regularization for Incremental Learning in Extreme Learning Machines. Neurocomputing 2022, 486, 200–214. [Google Scholar] [CrossRef]
- Xiao, R.; Yin, Y.; Chen, G.; Pan, N.; Jia, X. Online State of Charge Estimation for Lithium-Ion Batteries Based on Incremental Learning under Real-World Driving Conditions. J. Energy Storage 2025, 118, 116234. [Google Scholar] [CrossRef]
- Adnan, R.M.; Liang, Z.; Trajkovic, S.; Zounemat-Kermani, M.; Li, B.; Kisi, O. Daily Streamflow Prediction Using Optimally Pruned Extreme Learning Machine. J. Hydrol. 2019, 577, 123981. [Google Scholar] [CrossRef]
- Pan, Z.; Meng, Z.; Chen, Z.; Gao, W.; Shi, Y. A Two-Stage Method Based on Extreme Learning Machine for Predicting the Remaining Useful Life of Rolling-Element Bearings. Mech. Syst. Signal Process. 2020, 144, 106899. [Google Scholar] [CrossRef]
- Yang, Y.; Wang, Y.; Yuan, X. Bidirectional Extreme Learning Machine for Regression Problem and Its Learning Effectiveness. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1498–1505. [Google Scholar] [CrossRef]
- Yang, S.; Wang, S.; Sun, L.; Luo, Z.; Bao, Y. Output Layer Structure Optimization for Weighted Regularized Extreme Learning Machine Based on Binary Method. Symmetry 2023, 15, 244. [Google Scholar] [CrossRef]
- Zhang, Y.; Dai, Y.; Li, J. Incremental and Sequence Learning Algorithms for Weighted Regularized Extreme Learning Machines. Appl. Intell. 2024, 54, 5859–5878. [Google Scholar] [CrossRef]
- Tian, D.; Shi, J.; Zhang, H.; Wei, X.; Zhao, A.; Feng, J. Regularized Extreme Learning Machine-Based Temperature Prediction for Edible Fungi Greenhouse. Appl. Eng. Agric. 2024, 40, 483–499. [Google Scholar] [CrossRef]
- Xiao, Y.; Qi, S.; Guo, S.; Zhang, S.; Wang, Z.; Gong, F. Rockburst Intensity Prediction Based on Kernel Extreme Learning Machine (KELM). Acta Geol. Sin. Engl. Ed. 2025, 99, 284–295. [Google Scholar] [CrossRef]
- Gao, H.; Xu, T.; Zhang, N. Robust Kernel Extreme Learning Machines for Postgraduate Learning Performance Prediction. Heliyon 2025, 11, e40919. [Google Scholar] [CrossRef]
- Zhang, M.; Sun, R.; Cui, T.; Ren, Y. An Enhancing Multiple Kernel Extreme Learning Machine Based on Deep Learning. In Proceedings of the 39th Youth Academic Annual Conference of Chinese Association of Automation (YAC 2024), Dalian, China, 7–9 June 2024; pp. 1340–1345. [Google Scholar]
- Afzal, A.L.; Nair, N.K.; Asharaf, S. Deep Kernel Learning in Extreme Learning Machines. Pattern Anal. Appl. 2021, 24, 11–19. [Google Scholar] [CrossRef]
- Cui, L.; Zhai, H.; Lin, H. A Novel Orthogonal Extreme Learning Machine for Regression and Classification Problems. Symmetry 2019, 11, 1284. [Google Scholar] [CrossRef]
- Taylor-Melanson, W.; Ferreira, M.D.; Matwin, S. SGORNN: Combining Scalar Gates and Orthogonal Constraints in Recurrent Networks. Neural Netw. 2023, 159, 25–33. [Google Scholar] [CrossRef]
- Xu, X.; Wei, F.; Yu, T.; Lu, J.; Liu, A.; Zhuo, L.; Nie, F.; Wu, X. Embedded Multi-Label Feature Selection via Orthogonal Regression. Pattern Recognit. 2025, 163, 111477. [Google Scholar] [CrossRef]
- Ying, L. Orthogonal Incremental Extreme Learning Machine for Regression and Multiclass Classification. Neural Comput. Appl. 2016, 27, 111–120. [Google Scholar] [CrossRef]
- Zou, W.; Xia, Y.; Li, H. Fault Diagnosis of Tennessee-Eastman Process Using Orthogonal Incremental Extreme Learning Machine Based on Driving Amount. IEEE Trans. Cybern. 2018, 48, 3403–3410. [Google Scholar] [CrossRef] [PubMed]
- Guo, Y.; Ruger, S.M.; Sutiwaraphun, J.; Forbes-millott, J. Meta-Learning for Parallel Data Mining. In Proceedings of the Seventh Parallel Computing Workshop, Vienna, Austria, 7–11 July 1997. [Google Scholar]
- Tabealhojeh, H.; Adibi, P.; Karshenas, H.; Roy, S.K.; Harandi, M. RMAML: Riemannian Meta-Learning with Orthogonality Constraints. Pattern Recognit. 2023, 140, 109563. [Google Scholar] [CrossRef]
- Van Heeswijk, M.; Miche, Y.; Oja, E.; Lendasse, A. GPU-Accelerated and Parallelized ELM Ensembles for Large-Scale Regression. Neurocomputing 2011, 74, 2430–2437. [Google Scholar] [CrossRef]
- Liao, S.; Feng, C. Meta-ELM: ELM with ELM Hidden Nodes. Neurocomputing 2014, 128, 81–87. [Google Scholar] [CrossRef]
- Zou, W.; Yao, F.; Zhang, B.; Guan, Z. Improved Meta-ELM with Error Feedback Incremental ELM as Hidden Nodes. Neural Comput. Appl. 2018, 30, 3363–3370. [Google Scholar] [CrossRef]
- Xu, Z.; Gao, X.; Fu, J.; Li, Q.; Tan, C. A Novel Fault Diagnosis Method under Limited Samples Based on an Extreme Learning Machine and Meta-Learning. J. Taiwan Inst. Chem. Eng. 2024, 161, 105522. [Google Scholar] [CrossRef]
- Song, Y.; Zang, S.; Ma, J.; Li, H.; Lv, J. Stock Price Prediction Based on Weighted Meta-Extreme Learning Machine. In Proceedings of the 39th Youth Academic Annual Conference of Chinese Association of Automation (YAC 2024), Dalian, China, 7–9 June 2024; pp. 1028–1033. [Google Scholar]
- Zhang, Z.; Huang, Y. A Projection Method for Least Squares Problems with a Quadratic Equality Constraint. SIAM J. Matrix Anal. Appl. 2004, 25, 188–212. [Google Scholar] [CrossRef]
- Chan, P.K.; Stolfo, S.J. Experiments on Multistrategy Learning by Meta-Learning. In Proceedings of the Second International Conference on Information and Knowledge Management, Washington, DC, USA, 1–5 November 1993; pp. 314–323. [Google Scholar]
- Gander, W. Least Squares with a Quadratic Constraint. Numer. Math. 1980, 36, 291–307. [Google Scholar] [CrossRef]
- Zhou, L.; Yan, P.; Li, X.; Liu, T.; Liu, Z.; Jia, W. Research on Prediction Model of High Geothermal Tunnels Temperature Based on CNN-SVM. Energy Build. 2025, 347, 116285. [Google Scholar] [CrossRef]
- Cai, K.; He, J.; Li, Q.; Shangguan, W.; Li, L.; Hu, H. Meta-LSTM in Hydrology: Advancing Runoff Predictions through Model-Agnostic Meta-Learning. J. Hydrol. 2024, 639, 131521. [Google Scholar] [CrossRef]
- Bache, K.; Lichman, M. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ (accessed on 18 August 2019).