Vectorial Image Representation for Image Classification
Abstract
1. Introduction
2. Materials and Methods
2.1. Texture Space
2.1.1. Texture Unit Definition
2.1.2. Graphical Representation
2.1.3. Image Representation on the Texture Space
2.2. Similarity Measurement between a Prototype Image and Test Image
- If the prototype and test texture vectors are orthogonal, the similarity measure equals zero; ergo, the two images are completely different (see Figure 5a).
- If the prototype and test texture vectors have the same direction and magnitude, the similarity measure equals one; in this case, the two images are identical (see Figure 5b).
- If the prototype and test texture vectors are neither orthogonal nor parallel within the texture space, the similarity measure lies strictly between zero and one; consequently, the two images have a certain degree of similarity between them (see Figure 5c).
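The three cases above can be illustrated with a cosine-style measure. This is a minimal sketch assuming the similarity between two texture vectors is their normalized dot product; the function name `similarity` is ours, not the paper's:

```python
import math

def similarity(tp, tt):
    """Cosine of the angle between a prototype texture vector tp and a
    test texture vector tt (illustrative; normalized dot product)."""
    dot = sum(a * b for a, b in zip(tp, tt))
    norm = math.sqrt(sum(a * a for a in tp)) * math.sqrt(sum(b * b for b in tt))
    return dot / norm

# Orthogonal vectors -> 0.0 (completely different images)
print(similarity([1.0, 0.0], [0.0, 1.0]))              # 0.0
# Same direction and magnitude -> 1.0 (identical images)
print(similarity([3.0, 4.0], [3.0, 4.0]))              # 1.0
# Non-parallel, non-orthogonal vectors -> strictly between 0 and 1
print(0.0 < similarity([1.0, 0.2], [0.5, 1.0]) < 1.0)  # True
```

The values in the first two confusion matrices (e.g., 1.0000 on the diagonal) are consistent with a measure bounded in this way.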
2.3. Image Classification in the Texture Space
3. Experimental Work and Results
3.1. Transformation of an Image Onto a Texture Vector
3.2. Image Recognition in the Texture Space
- In the transformation, the image is completely characterized through its local texture characteristics, and these are represented by its texture vector.
- A digital image is essentially a field of randomness, given the nature of the light source and the noise detected by the system; hence, for each image, a unique vector is generated in the texture space, with a particular direction and magnitude that differ for each class.
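Given these two observations, recognition reduces to a nearest-prototype rule in the texture space. A minimal self-contained sketch, assuming cosine similarity as the comparison measure; the class labels and prototype vectors are hypothetical:

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two texture vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def classify(test_vector, prototypes):
    """Assign the test image's texture vector to the class whose
    prototype vector is most similar (nearest-prototype rule)."""
    return max(prototypes, key=lambda label: cos_sim(test_vector, prototypes[label]))

# Hypothetical prototype vectors for three illustrative classes.
prototypes = {"bark": (0.9, 0.1, 0.2), "leaf": (0.1, 0.95, 0.1), "soil": (0.2, 0.2, 0.9)}
print(classify((0.15, 0.9, 0.05), prototypes))  # leaf
```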
4. Discussion
- The image is fully characterized by its transformation onto the texture space, where it is represented by its texture vector. The new transformation can be termed Vectorial Image Representation on the Texture Space (VIR-TS) because, under this transformation, the image comes to be represented by a vector.
- Due to the irregular nature of the light source and the noise during the photodetection process, the image is considered a field of randomness; consequently, a unique vector is generated for each digital image (see Table 1).
- The texture vector retains all local texture characteristics of the image under study, given that it is calculated as the sum of all of the radius vectors, where each radius vector is a texture unit.
- The texture unit possesses a vectorial character because it is calculated by solving a homogeneous system of equations.
- The value of λ employed in the solution of the homogeneous equation system does not affect the results of the image recognition.
- The transformation has a potential application in the development of artificial vision systems focused on the recognition of digital images.
- In the experimental work, the number of classes does not affect the results of the classification efficiency, given that each digital image is represented by its own vector in the texture space (see point 2).
- Because medical images contain local textural features that can be extracted through local analysis [3,4,26,27], and knowing that the technique reported in this work also extracts texture features through local analysis, the VIR-TS transform and the classifier described in Section 2.3 can be applied in medical image recognition. The benefit would be the development of highly efficient medical diagnostic systems that are easy to implement, because the definition of the texture unit is based on a linear transformation rather than on pattern encoding [21,28], where overflow of the computer's physical memory is possible [29].
- The statistical texture extraction techniques reported in reference [21] and the VIR-TS technique based on linear transformations are very different. In the statistical techniques, the texture unit is calculated by encoding discrete random patterns located on the digital image; the texture unit is considered a random event, and the texture characteristics are represented through a discrete histogram. In VIR-TS, the texture unit is calculated through a linear transformation; the texture unit is a radius vector, and the texture features are represented in the texture space through a random vector.
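To make the contrast concrete, the pipeline described in these points can be sketched end to end. Everything specific below is an assumption for illustration only: the 2×2 neighborhood, the particular 2×3 coefficient matrix, and the way λ enters as a row scale; the paper's actual homogeneous system is the one defined in Section 2.1.1. The sketch does show why λ cannot affect recognition: scaling a row of a homogeneous system rescales the solution vector but leaves its direction unchanged, so any direction-based similarity is invariant.

```python
def texture_unit(p, lam=2.0):
    """Hypothetical texture unit for a 2x2 neighborhood p = (a, b, c, d):
    the nonzero solution (radius vector) of a 2x3 homogeneous system
    built from the pixel values. The coefficient matrix below is invented
    for illustration; lam scales the first equation only."""
    a, b, c, d = p
    r1 = (lam * a, lam * b, lam * c)   # lam*(a*x + b*y + c*z) = 0
    r2 = (d, a, b)                     #      d*x + a*y + b*z  = 0
    # The null-space direction of the 2x3 system is the cross product
    # of its two rows; scaling r1 by lam only rescales this vector.
    return (r1[1] * r2[2] - r1[2] * r2[1],
            r1[2] * r2[0] - r1[0] * r2[2],
            r1[0] * r2[1] - r1[1] * r2[0])

def texture_vector(image, lam=2.0):
    """Texture vector of the image: the sum of the texture units
    (radius vectors) over all 2x2 windows."""
    h, w = len(image), len(image[0])
    tx = ty = tz = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            ux, uy, uz = texture_unit((image[i][j], image[i][j + 1],
                                       image[i + 1][j], image[i + 1][j + 1]), lam)
            tx, ty, tz = tx + ux, ty + uy, tz + uz
    return (tx, ty, tz)

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
v2, v25 = texture_vector(img, lam=2.0), texture_vector(img, lam=25.0)
# v25 is a scalar multiple of v2: same direction, so identical recognition.
```

Under this sketch, λ = 2 and λ = 25 yield parallel texture vectors for every image, which is consistent with the identical confusion matrices reported for the two values.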
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Laws, K.I. Textured Image Segmentation. Ph.D. Thesis, University of Southern California, Los Angeles, CA, USA, 1980. [Google Scholar]
- de Matos, J.; Soares de Oliveira, L.E.; Britto, A.d.S., Jr.; Lameiras Koerich, A. Large-margin representation learning for texture classification. Pattern Recognit. Lett. 2023, 170, 39–47. [Google Scholar] [CrossRef]
- Aguilar Santiago, J.; Guillen Bonilla, J.T.; Garcia Ramírez, M.A.; Jiménez Rodríguez, M. Identification of Lacerations Caused by Cervical Cancer through a comparative study among texture-extraction techniques. Appl. Sci. 2023, 13, 8292. [Google Scholar] [CrossRef]
- Sharma, R.; Kumar Mahanti, G.; Panda, G.; Rath, A.; Dash, S.; Mallik, S.; Hu, R. A framework for detecting thyroid cancer from ultrasound and histopathological images using deep learning, meta-heuristics and MCDM algorithms. J. Image 2023, 9, 173. [Google Scholar] [CrossRef]
- Almakady, Y.; Mahmoodi, S.; Bennett, M. Adaptive volumetric texture segmentation based on Gaussian Markov random fields features. Pattern Recognit. Lett. 2020, 140, 101–108. [Google Scholar] [CrossRef]
- Qiu, J.; Li, B.; Liao, R.; Mo, H.; Tian, L. A dual-task region-boundary aware neural network for accurate pulmonary nodule segmentation. J. Vis. Commun. Image Represent. 2023, 96, 103909. [Google Scholar] [CrossRef]
- Anderson, K.; Richardson, J.; Lennartson, B.; Fabian, M. Synthesis of hierarchical and distributed control functions for multi-product manufacturing cells. In Proceedings of the 2006 IEEE International Conference on Automation Sciences and Engineering, Shanghai, China, 8–10 October 2006. [Google Scholar] [CrossRef]
- Elber, G. Geometric texture modeling. IEEE Comput. Graph. Appl. 2005, 25, 66–76. [Google Scholar] [CrossRef] [PubMed]
- Sánchez Yáñez, R.; Kurmyshev, E.K.; Cuevas, F.J. A framework for texture classification using the coordinated clusters representation. Pattern Recognit. Lett. 2003, 24, 21–31. [Google Scholar] [CrossRef]
- Fuentes Alventosa, A.; Gómez Luna, J.; Medina Carnicer, R. GUD-Canny: A real-time GPU-based unsupervised and distributed Canny edge detector. J. Real-Time Image Process. 2022, 19, 591–605. [Google Scholar] [CrossRef]
- Elhanashi, A.; Saponara, S.; Dini, P.; Sheng, Q.; Morita, D.; Raytchev, B. An integrated and real-time social distancing, mask detection, and facial temperature video measurement system for pandemic monitoring. J. Real-Time Image Process. 2023, 20, 95. [Google Scholar] [CrossRef]
- Marin, Y.; Miteran, J.; Dubois, J.; Herryman, B.; Ginhac, D. An FPGA-based design for real-time super-resolution reconstruction. J. Real-Time Image Process. 2020, 17, 1765–1785. [Google Scholar] [CrossRef]
- Xu, Y.; Fermuller, C. Viewpoint invariant texture description using fractal analysis. Int. J. Comput. Vis. 2009, 83, 85–100. [Google Scholar] [CrossRef]
- Yapi, D.; Nouboukpo, A.; Said Allili, M. Mixture of multivariate generalized Gaussians for multi-band texture modeling and representation. Signal Process. 2023, 209, 109011. [Google Scholar] [CrossRef]
- Zou, C.; Ian Kou, K.; Yan Tang, Y. Probabilistic quaternion collaborative representation and its application to robust color face identification. Signal Process. 2023, 210, 109097. [Google Scholar] [CrossRef]
- Shu, X.; Pan, H.; Shi, J.; Song, X.; Wu, X.J. Using global information to refine local patterns for texture representation and classification. Pattern Recognit. 2022, 131, 108843. [Google Scholar] [CrossRef]
- Chen, Z.; Quan, Y.; Xu, R.; Jin, L.; Xu, Y. Enhancing texture representation with deep tracing pattern encoding. Pattern Recognit. 2024, 146, 109959. [Google Scholar] [CrossRef]
- Scabini, L.; Zielinski, K.M.; Ribas, L.C.; Goncalves, W.N.; De Baets, B.; Bruno, O.M. RADAM: Texture recognition through randomized aggregated encoding of deep activation maps. Pattern Recognit. 2023, 143, 109802. [Google Scholar] [CrossRef]
- Sánchez Yáñez, R.E.; Kurmyshev, E.V.; Fernández, A. One-class texture classifier in the CCR feature space. Pattern Recognit. Lett. 2003, 24, 1503–1511. [Google Scholar] [CrossRef]
- Lee, H.H.; Park, S.; Im, J. Resampling approach for one-class classification. Pattern Recognit. 2023, 143, 109731. [Google Scholar] [CrossRef]
- Fernández, A.; Álvarez, M.X.; Bianconi, F. Texture description through histograms of equivalent patterns. J. Math. Imaging Vis. 2013, 45, 76–102. [Google Scholar] [CrossRef]
- Ghoneim, A.; Muhammad, G.; Hossain, M.S. Cervical cancer classification using convolutional neural networks and extreme learning machines. Future Gener. Comput. Syst. 2020, 102, 643–649. [Google Scholar] [CrossRef]
- Padilla Leyferman, C.E.; Guillen Bonilla, J.T.; Estrada Gutiérrez, J.C.; Jiménez Rodríguez, M. A novel technique for texture description and image classification based in RGB compositions. IET Commun. 2023, 17, 1162–1176. [Google Scholar] [CrossRef]
- Kurmyshev, E.V.; Sanchez-Yanez, R.E. Comparative experiment with colour texture classifiers using the CCR feature space. Pattern Recognit. Lett. 2005, 26, 1346–1353. [Google Scholar] [CrossRef]
- Guillen Bonilla, J.T.; Kurmyshev, E.; Fernandez, A. Quantifying a similarity of classes of texture image. Appl. Opt. 2007, 46, 5562–5570. [Google Scholar] [CrossRef] [PubMed]
- González-Castro, V.; Cernadas, E.; Huelga, E.; Fernández-Delgado, M.; Porto, J.; Antunez, J.R.; Souto-Bayarri, M. CT Radiomics in Colorectal Cancer: Detection of KRAS Mutation Using Texture Analysis and Machine Learning. Appl. Sci. 2020, 10, 6214. [Google Scholar] [CrossRef]
- Park, Y.R.; Kim, Y.J.; Ju, W.; Nam, K.; Kim, S.; Kim, K.G. Comparison of machine and deep learning for the classification of cervical cancer based on cervicography images. Sci. Rep. 2021, 11, 16143. [Google Scholar] [CrossRef]
- Kurmyshev, E.V. Is the Coordinated Clusters Representation an analog of the Local Binary Pattern? Comput. Sist. 2010, 14, 54–62. [Google Scholar]
- Kurmyshev, E.V.; Guillen Bonilla, J.T. Complexity reduced coding of binary pattern units in image classification. Opt. Lasers Eng. 2011, 49, 718–722. [Google Scholar] [CrossRef]
Table 1. Texture vector generated for each of the ten tree stem images (vector components lost in extraction).
Experimental Results for λ = 2 (First Confusion Matrix) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Tree stem images (prototypes) | |||||||||||
Tree stem images (test) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | |
1 | 1.0000 | 0.9919 | 0.9453 | 0.9997 | 0.9985 | 0.9583 | 0.9923 | 0.9915 | 0.9898 | 0.9880 | |
2 | 0.9919 | 1.0000 | 0.9759 | 0.9916 | 0.9970 | 0.9868 | 0.9999 | 0.9999 | 0.9998 | 0.9996 | |
3 | 0.9453 | 0.9759 | 1.0000 | 0.9424 | 0.9576 | 0.9938 | 0.9745 | 0.9764 | 0.9780 | 0.9803 | |
4 | 0.9997 | 0.9916 | 0.9424 | 1.0000 | 0.9986 | 0.9577 | 0.9922 | 0.9912 | 0.9896 | 0.9878 | |
5 | 0.9985 | 0.9970 | 0.9576 | 0.9986 | 1.0000 | 0.9714 | 0.9973 | 0.9968 | 0.9958 | 0.9945 | |
6 | 0.9583 | 0.9868 | 0.9938 | 0.9577 | 0.9714 | 1.0000 | 0.9861 | 0.9872 | 0.9890 | 0.9908 | |
7 | 0.9923 | 0.9999 | 0.9745 | 0.9922 | 0.9973 | 0.9861 | 1.0000 | 0.9999 | 0.9998 | 0.9995 | |
8 | 0.9915 | 0.9999 | 0.9764 | 0.9912 | 0.9968 | 0.9872 | 0.9999 | 1.0000 | 0.9999 | 0.9996 | |
9 | 0.9898 | 0.9998 | 0.9780 | 0.9896 | 0.9958 | 0.9890 | 0.9998 | 0.9999 | 1.0000 | 0.9999 | |
10 | 0.9880 | 0.9996 | 0.9803 | 0.9878 | 0.9945 | 0.9908 | 0.9995 | 0.9996 | 0.9999 | 1.0000 |
Experimental Results for λ = 25 (Second Confusion Matrix) | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Tree stem images (prototypes) | |||||||||||
Tree stem images (test) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | |
1 | 1.0000 | 0.9919 | 0.9453 | 0.9997 | 0.9985 | 0.9583 | 0.9923 | 0.9915 | 0.9898 | 0.9880 | |
2 | 0.9919 | 1.0000 | 0.9759 | 0.9916 | 0.9970 | 0.9868 | 0.9999 | 0.9999 | 0.9998 | 0.9996 | |
3 | 0.9453 | 0.9759 | 1.0000 | 0.9424 | 0.9576 | 0.9938 | 0.9745 | 0.9764 | 0.9780 | 0.9803 | |
4 | 0.9997 | 0.9916 | 0.9424 | 1.0000 | 0.9986 | 0.9577 | 0.9922 | 0.9912 | 0.9896 | 0.9878 | |
5 | 0.9985 | 0.9970 | 0.9576 | 0.9986 | 1.0000 | 0.9714 | 0.9973 | 0.9968 | 0.9958 | 0.9945 | |
6 | 0.9583 | 0.9868 | 0.9938 | 0.9577 | 0.9714 | 1.0000 | 0.9861 | 0.9872 | 0.9890 | 0.9908 | |
7 | 0.9923 | 0.9999 | 0.9745 | 0.9922 | 0.9973 | 0.9861 | 1.0000 | 0.9999 | 0.9998 | 0.9995 | |
8 | 0.9915 | 0.9999 | 0.9764 | 0.9912 | 0.9968 | 0.9872 | 0.9999 | 1.0000 | 0.9999 | 0.9996 | |
9 | 0.9898 | 0.9998 | 0.9780 | 0.9896 | 0.9958 | 0.9890 | 0.9998 | 0.9999 | 1.0000 | 0.9999 | |
10 | 0.9880 | 0.9996 | 0.9803 | 0.9878 | 0.9945 | 0.9908 | 0.9995 | 0.9996 | 0.9999 | 1.0000 |
Experimental Results for λ = 2 | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Tree stem images (prototypes) | |||||||||||
Tree stem images (test) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
3 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
4 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | |
5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | |
6 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | |
7 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | |
8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | |
9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | |
10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
Experimental Results for λ = 25 | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Tree stem images (prototypes) | |||||||||||
Tree stem images (test) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
3 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
4 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | |
5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | |
6 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | |
7 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | |
8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | |
9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | |
10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sánchez-Morales, M.-E.; Guillen-Bonilla, J.-T.; Guillen-Bonilla, H.; Guillen-Bonilla, A.; Aguilar-Santiago, J.; Jiménez-Rodríguez, M. Vectorial Image Representation for Image Classification. J. Imaging 2024, 10, 48. https://doi.org/10.3390/jimaging10020048