What Are Recurrent Expansion Algorithms? Exploring a Deeper Space than Deep Learning
Abstract
1. Introduction
2. REMR Background
Algorithm 1 REMR Algorithm
Inputs: training data, a deep network, and the number of rounds
Outputs: the trained REMR model and its final estimated targets
For each round                                   % Start training process
    Train the deep network and collect the IMTs;
    Evaluate the AULC;
    Reinitialize parameters and stack the IMTs into the next round's inputs;
End (For)
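For concreteness, the following is a minimal sketch of this loop, not the authors' implementation. It assumes scikit-learn's MLPRegressor as a stand-in for the deep network, the trapezoidal area under its loss_curve_ as the AULC, and reduces the IMTs to inputs plus estimated targets, since scikit-learn does not expose hidden feature maps directly.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def remr(X, y, rounds=3, seed=0):
    Z = X                                    # round-0 representation: the raw inputs
    best_aulc, best_model = np.inf, None
    for _ in range(rounds):                  # start training process
        # a fresh network each round also "reinitializes the parameters"
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                           random_state=seed)
        net.fit(Z, y)                        # train the deep network
        y_hat = net.predict(Z)               # ...and collect estimated targets
        aulc = np.trapz(net.loss_curve_)     # evaluate the AULC
        if aulc < best_aulc:                 # keep the best round so far
            best_aulc, best_model = aulc, net
        Z = np.column_stack([X, y_hat])      # stack IMTs as the next round's inputs
    return best_model, best_aulc

# toy regression usage
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = X.sum(axis=1) + 0.1 * rng.standard_normal(200)
model, aulc = remr(X, y)
print(f"best AULC: {aulc:.3f}")
```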
3. Some Illustrative Examples
3.1. Classification Problem
3.2. Regression Problems
3.3. Advantages of REMR
- Improving feature representations through understanding the different IMTs of different deep networks;
- Providing a new source of information, such as estimated targets, which introduces additional knowledge into the system through a form of transductive transfer learning (see the sketch after this list);
- The REMR pseudocode in Algorithm 1 shows that building an REMR algorithm does not require much intervention and is simple to design, with only a few hyperparameters;
- REMR not only improves feature representations but also corrects outliers.
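As a loose illustration of the transductive-transfer point above, the following hypothetical sketch appends a base learner's estimated targets, computed on both labeled and unlabeled inputs, as an extra feature for a second learner. The models and variable names here are assumptions for illustration only, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_lab = rng.random((100, 4))                      # labeled inputs
y_lab = X_lab @ np.array([1.0, 2.0, -1.0, 0.5])   # labeled targets
X_unlab = rng.random((50, 4))                     # unlabeled pool seen at training time

base = Ridge().fit(X_lab, y_lab)                  # stand-in for the first deep network
# estimated targets become one extra input dimension for the next learner
X_lab2 = np.column_stack([X_lab, base.predict(X_lab)])
X_unlab2 = np.column_stack([X_unlab, base.predict(X_unlab)])
refined = Ridge().fit(X_lab2, y_lab)              # second learner sees the extra knowledge
print(refined.predict(X_unlab2)[:3])
```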
3.4. Disadvantages of REMR
- Building an REMR model by merging complex deep networks increases computational complexity and demands more computing power than ordinary deep learning, making the architecture computationally expensive;
- Convergence of the AULC function strongly depends on IMT initialization: stacking inappropriate IMTs causes the AULC function to diverge, which can lead to a worse outcome at each round (see the divergence guard sketched after this list);
- Thus far, IMT initialization has been evaluated by trial and error, so rebuilding the model at this stage requires considerable manual intervention.
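One way to act on the divergence risk noted above is to monitor the AULC across rounds and stop as soon as it rises. This guard is a sketch under the same assumptions as the earlier example (MLPRegressor as the network, trapezoidal AULC, IMTs reduced to inputs plus estimated targets); the stopping rule is an assumption, not a prescription from the REMR literature.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def remr_guarded(X, y, rounds=5, seed=0):
    Z, best_aulc = X, np.inf
    for r in range(rounds):
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                           random_state=seed).fit(Z, y)
        aulc = np.trapz(net.loss_curve_)           # area under this round's loss curve
        if aulc > best_aulc:                       # AULC is diverging
            print(f"round {r}: AULC rose to {aulc:.3f}; stopping early")
            break
        best_aulc = aulc
        Z = np.column_stack([X, net.predict(Z)])   # stack IMTs for the next round
    return best_aulc
```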
3.5. Future Opportunities of REMR
- Targeting IMT initialization problems by experimenting with IMT selection and optimization over a few preliminary rounds. This would help decide whether to adopt the model without consuming extensive computing resources across all training rounds (a screening sketch follows this list);
- Exploring REMR architectures inspired by ensemble learning and parallel processing to help deliver even better initial IMTs.
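A hypothetical version of the preliminary-round screening suggested above might look as follows: each candidate IMT construction gets a couple of cheap probe rounds, and the one with the lowest AULC is kept before committing to a full training run. Everything here (the candidate names, the probe-round count, the models) is an assumption for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def screen_candidates(X, y, candidates, probe_rounds=2, seed=0):
    """Run each candidate IMT construction for a few cheap rounds; return the best."""
    scores = {}
    for name, build_imt in candidates.items():
        Z = X
        for _ in range(probe_rounds):              # a few inexpensive probe rounds
            net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=300,
                               random_state=seed).fit(Z, y)
            Z = build_imt(X, net.predict(Z))       # candidate-specific IMT stacking
        scores[name] = np.trapz(net.loss_curve_)   # AULC of the last probe round
    return min(scores, key=scores.get), scores

# example candidate IMT constructions (both are illustrative assumptions)
candidates = {
    "inputs+targets": lambda X, yh: np.column_stack([X, yh]),
    "targets_only": lambda X, yh: yh.reshape(-1, 1),
}
```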
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest