Exploring Underlying Features in Hidden Layers of Neural Network
Abstract
1. Introduction
- How is knowledge represented in neural network weights?
- Is there a meaningful notion of least and most significant weights?
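One way to make the second question concrete, borrowing the magnitude heuristic from the pruning literature rather than anything this paper prescribes, is to rank a layer's weights by absolute value. A minimal PyTorch sketch, with an arbitrary untrained layer as a stand-in:

```python
import torch
import torch.nn as nn

# Hypothetical layer sizes; nothing here is taken from the paper.
layer = nn.Linear(64, 32)

with torch.no_grad():
    w = layer.weight.abs().flatten()
    # Under the magnitude heuristic, large |w| reads as "most significant"
    # and small |w| as "least significant".
    order = torch.argsort(w, descending=True)
    print("most significant |w|:", w[order[:5]].tolist())
    print("least significant |w|:", w[order[-5:]].tolist())
```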
2. Related Work
Neural Networks and Knowledge Discovery
3. Component-Based Model
3.1. Proposed Model
- Layers near the input hold raw, low-level features: discrete pieces that each reflect some form of the input.
- Layers near the output hold high-level representations that closely resemble the class labels responsible for the classification.
- The transformation of features, i.e., the learning mechanism, occurs between the input and output layers (see the sketch after this list).
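As a hedged illustration of these three points, the sketch below attaches forward hooks to a toy fully connected PyTorch network so that each layer's representation can be inspected in turn; the architecture, layer sizes, and dummy input are assumptions for illustration, not the network used in the paper.

```python
import torch
import torch.nn as nn

# A toy fully connected network; depth and widths are illustrative only.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # near the input: raw, low-level features
    nn.Linear(256, 64), nn.ReLU(),   # middle: where features are transformed
    nn.Linear(64, 10),               # near the output: class-like features
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # store this layer's representation
    return hook

for i, module in enumerate(model):
    module.register_forward_hook(make_hook(f"layer_{i}"))

x = torch.randn(1, 784)  # dummy input standing in for real data
model(x)
for name, act in activations.items():
    print(name, tuple(act.shape))
```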
3.2. Components and Factor Analysis
3.3. Extracting Components
- Should all the layers be used for component extraction?
- How can the number of components, or an exit criterion, be determined to ensure that sufficient knowledge is extracted?
- P represents the projection of the features F as a vector component;
- ε is the stochastic error;
- S represents the underlying functional values responsible for changes in the transformation of input features into components (see the sketch after this list).
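A minimal sketch of how such a decomposition might be computed, assuming scikit-learn's FactorAnalysis as a stand-in for the paper's own procedure and a random matrix in place of trained layer weights; the fitted scores, loadings, and residual correspond loosely to S, the projection P, and the stochastic error ε above.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Stand-in for a trained layer's weight matrix (units x incoming features);
# in practice these would come from a trained network.
W = rng.normal(size=(64, 32))

fa = FactorAnalysis(n_components=5, random_state=0)
S_hat = fa.fit_transform(W)           # scores: underlying functional values (S)
P_hat = fa.components_                # loadings: the projection (P)
eps = W - (S_hat @ P_hat + fa.mean_)  # residual: the stochastic error
print(S_hat.shape, P_hat.shape, float(np.abs(eps).mean()))
```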
Determining Components
- λ is the factor based on variance (see the sketch below).
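One common exit criterion, offered here as an assumption rather than the paper's own rule, keeps components until the cumulative explained-variance ratio crosses a threshold; scikit-learn's PCA exposes these variance factors directly.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 32))  # stand-in for a layer's weight matrix

pca = PCA().fit(W)
# Cumulative share of variance explained by the leading components.
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.95)) + 1  # 95% is an assumed cutoff
print(f"keep {k} components to explain {cum[k - 1]:.1%} of the variance")
```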
4. Experiments, Results, and Discussion
4.1. Evaluating the Relationship
4.2. Exploring Components
4.3. Limitations
5. Conclusions
Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Tirumala, S.S. Exploring neural network layers for knowledge discovery. Procedia Comput. Sci. 2021, 193, 173–182.
- Yu, F.; Xiu, X.; Li, Y. A survey on deep transfer learning and beyond. Mathematics 2022, 10, 3619.
- Sohail, A. “Transfer Learning” for Bridging the Gap Between Data Sciences and the Deep Learning. Ann. Data Sci. 2024, 11, 337–345.
- Chen, X.; Yang, R.; Xue, Y.; Huang, M.; Ferrero, R.; Wang, Z. Deep transfer learning for bearing fault diagnosis: A systematic review since 2016. IEEE Trans. Instrum. Meas. 2023, 72, 1–21.
- Iman, M.; Arabnia, H.R.; Rasheed, K. A review of deep transfer learning and recent advancements. Technologies 2023, 11, 40.
- Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A survey of transfer learning. J. Big Data 2016, 3, 9.
- Cao, Y.; Nunoya, S.; Suzuki, Y.; Suzuki, M.; Asada, Y.; Takahashi, H. Classification of real estate images using transfer learning. In Proceedings of the Tenth International Conference on Graphics and Image Processing (ICGIP 2018), Chengdu, China, 12–14 December 2018; Li, C., Yu, H., Pan, Z., Pu, Y., Eds.; SPIE: Bremerhaven, Germany, 2019; Volume 11069, pp. 435–440.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Tirumala, S.S. A novel weights of weights approach for efficient transfer learning in artificial neural networks. Procedia Comput. Sci. 2022, 212, 295–303.
- Abdualgalil, B.; Abraham, S. Efficient machine learning algorithms for knowledge discovery in big data: A literature review. Database 2020, 29, 3880–3889.
- Samek, W.; Montavon, G.; Lapuschkin, S.; Anders, C.J.; Müller, K.R. Explaining deep neural networks and beyond: A review of methods and applications. Proc. IEEE 2021, 109, 247–278.
- Tsang, M.; Cheng, D.; Liu, Y. Detecting statistical interactions from neural network weights. arXiv 2017, arXiv:1705.04977.
- Sexton, R.S.; McMurtrey, S.; Cleavenger, D. Knowledge discovery using a neural network simultaneous optimization algorithm on a real world classification problem. Eur. J. Oper. Res. 2006, 168, 1009–1018.
- Sremath Tirumala, S. A Component Based Knowledge Transfer Model for Deep Neural Networks. Ph.D. Thesis, Auckland University of Technology, Auckland, New Zealand, 2020.
- Liu, X.; Gao, J.; He, X.; Deng, L.; Duh, K.; Wang, Y.Y. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, CO, USA, 31 May–5 June 2015.
- Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
- Härdle, W.K.; Simar, L.; Fengler, M.R. Principal component analysis. In Applied Multivariate Statistical Analysis; Springer: Berlin/Heidelberg, Germany, 2024; pp. 309–345.
- Greenacre, M.; Groenen, P.J.; Hastie, T.; d’Enza, A.I.; Markos, A.; Tuzhilina, E. Principal component analysis. Nat. Rev. Methods Prim. 2022, 2, 100.
- Ali, S.; Verma, S.; Agarwal, M.B.; Islam, R.; Mehrotra, M.; Deolia, R.K.; Kumar, J.; Singh, S.; Mohammadi, A.A.; Raj, D.; et al. Groundwater quality assessment using water quality index and principal component analysis in the Achnera block, Agra district, Uttar Pradesh, Northern India. Sci. Rep. 2024, 14, 5381.
- Aicha, A.B.; Guerfel, M.; Ali, R.B.H.; Mansouri, M. Sensor Fault Detection and Isolation with Interval Principal Component Analysis: Application to a Heat Exchangers System. IEEE Sens. J. 2025, 25, 31020–31029.
- Kolla, T.; Vishnu, G.L.P.; Uvais, S.M.; Jahnavi, G.; Sungeetha, A. Privacy-Preserving Face Recognition for Smart Locks using TensorFlow Lite and BLE. In Proceedings of the 2025 3rd International Conference on Self Sustainable Artificial Intelligence Systems (ICSSAS), Erode, India, 11–13 June 2025; IEEE: New York, NY, USA, 2025; pp. 201–206.
- Clifford, G.D. Blind source separation: Principal & independent component analysis. In Biomedical Signal and Image Processing; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–47.
- Olden, J.D.; Jackson, D.A. Illuminating the “black box”: A randomization approach for understanding variable contributions in artificial neural networks. Ecol. Model. 2002, 154, 135–150.
- Niu, S.; Liu, Y.; Wang, J.; Song, H. A decade survey of transfer learning (2010–2020). IEEE Trans. Artif. Intell. 2021, 1, 151–166.
- Pan, S.J. Transfer learning. Learning 2020, 21, 1–2.
- Torrey, L.; Shavlik, J. Transfer learning. In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques; IGI Global Scientific Publishing: Hershey, PA, USA, 2010; pp. 242–264.
- Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A Survey on Deep Transfer Learning; Springer International Publishing: Cham, Switzerland, 2018.
- Data, M.C.; Komorowski, M.; Marshall, D.C.; Salciccioli, J.D.; Crutain, Y. Exploratory data analysis. In Secondary Analysis of Electronic Health Records; Springer: Cham, Switzerland, 2016; pp. 185–203.
- Raza, A.; Younas, F.; Siddiqui, H.U.R.; Rustam, F.; Villar, M.G.; Alvarado, E.S.; Ashraf, I. An improved deep convolutional neural network-based YouTube video classification using textual features. Heliyon 2024, 10, e35812.
- Fisher, R.A. The use of multiple measurements in taxonomic problems. Ann. Eugen. 1936, 7, 179–188.
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
| SNo | Layers | RMS Error | Accuracy (%) | t-Test (p) |
|---|---|---|---|---|
| 1 | 3 | 2.64 | 73.8 | 0.023 |
| 2 | 5 | 1.9 | 79.0 | 0.019 |
| 3 | 9 | 0.41 | 84.5 | 0.013 |
| 4 | 13 | 5.53 | 27.1 | 0.24 |

| SNo | Layers | RMS Error | Accuracy (%) | t-Test (p) |
|---|---|---|---|---|
| 1 | 3 | 0.024 | 98.6 | 0.02 |
| 2 | 5 | 0.0092 | 99.8 | 0.011 |
| 3 | 9 | 0.48 | 88.3 | 0.02 |
| 4 | 13 | 3.21 | 41.8 | 0.89 |

| SNo | Layers | RMS Error | Accuracy (%) | t-Test (p) |
|---|---|---|---|---|
| 1 | 3 | 5.5 | 58.4 | 0.12 |
| 2 | 5 | 4.24 | 43.7 | 0.18 |
| 3 | 9 | 4.09 | 45.5 | 0.06 |
| 4 | 13 | 11.9 | 16.4 | 0.08 |

| SNo | Layers | RMS Error | Accuracy (%) | t-Test (p) |
|---|---|---|---|---|
| 1 | 3 | 7.43 | 28.9 | 0.069 |
| 2 | 5 | 4.44 | 43.8 | 0.072 |
| 3 | 9 | 4.682 | 45.3 | 0.012 |
| 4 | 13 | 13.81 | 11.2 | 0.63 |
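Reading the t-Test column above as p-values, one plausible way such numbers arise is a paired t-test over per-run accuracies; the SciPy sketch below uses made-up accuracy samples, since the raw per-run results behind the tables are not reproduced here.

```python
from scipy import stats

# Hypothetical per-run accuracies for two network depths (illustrative only).
acc_3_layers = [73.1, 74.2, 73.9, 74.0, 73.8]
acc_9_layers = [84.1, 84.9, 84.3, 84.7, 84.5]

t, p = stats.ttest_rel(acc_3_layers, acc_9_layers)
print(f"t = {t:.2f}, p = {p:.4f}")
```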