Vectorial Image Representation for Image Classification

This paper proposes the transformation S → →C, where S is a digital gray-level image and →C is a vector expressed in the texture space. The proposed transformation is denominated Vectorial Image Representation on the Texture Space (VIR-TS), given that the digital image S is represented by the texture vector →C. This vector →C contains all of the local texture characteristics of the image of interest, and the texture unit →T has a vectorial character, since it is defined through the solution of a homogeneous equation system. For the application of this transformation, a new multiclass classifier is proposed in the texture space, where the vector →C is employed as a characteristics vector. To verify its efficiency, it was experimentally deployed for the recognition of digital images of tree barks, obtaining an effective performance. In these experiments, the parametric value λ employed to solve the homogeneous equation system does not affect the results of the image classification. The VIR-TS transform possesses potential applications in specific tasks, such as locating missing persons and the analysis and classification of medical and diagnostic images.


Introduction
Visual texture is an important element for component classification in scenes and is commonly used for the processing of visual information. The surfaces of all materials are characterized through their texture properties, which can be described as follows: (a) visual texture is a spatial distribution of gray levels; (b) visual texture can be perceived at different scales or resolutions; (c) texture is an area property and not a point property; (d) a region is perceived as texture when the number of primitive objects within it is large. On the other hand, according to reference [1], some important perceptions of the quality of a texture are uniformity, density, rugosity, linearity, direction, frequency, and phase. Hence, a texture can be considered fine, rough, soft, regular, irregular, or linear. The degree of irregularity, or the properties of a texture, can be found scattered throughout the entire image.

In the field of texture analysis, there exist three major problems: (a) texture classification, focused on determining to which class a sampled texture belongs [2][3][4]; (b) texture segmentation, where an image is sectioned into multiple regions and each region has a specific type of texture [5,6]; and (c) texture synthesis, which focuses on constructing a model that can be employed to produce artificial textures for specific applications such as computer graphics [7,8]. Furthermore, according to reference [9], characteristic extraction techniques can be classified into three categories: geometrical methods, signal processing, and statistical models. Geometrical methods are based on the analysis of primitive textures. Some geometrical methods for primitive extraction include adaptive region extraction, mathematical morphology, structural methods, and border detection [10,11]. The model-based methods hypothesize the underlying texture, constructing a parametric model that can generate the intensity distribution of interest. Ergo, these models can also be employed for texture synthesis. Some of the models applied for texture synthesis are stochastic spatial interaction models, random field models, and fractals [12,13]. The signal processing methods perform an analysis of the frequency components of the images; these are also known as filtering methods, and, to mention only some of them, we cite spatial domain filters, frequency analysis, and spatial/spatial-frequency methods [14,15]. Last but not least, the statistical methods offer an analysis of the spatial distribution of the local texture characteristics. Such characteristics are represented through a histogram whose dimension depends on the procedure employed to calculate the texture unit [16][17][18]. This histogram presents the occurrence frequency of the estimated texture units within the digital image, and its dimension is dependent on the texture unit definition. Selection of the texture extraction method is conducted in agreement with the problem under consideration.

There are two types of classifiers for image classification in an a priori knowledge scheme: one-class and multiclass. For one-class classifiers [19,20], an unequivocal class is clearly defined, while the remaining classes are of no interest. In this situation, a region is defined within the characteristics space; this region represents the textural characteristics of the known class and is employed as the acceptance zone, or prototype, for the class of interest. On the other hand, in multiclass classifiers [21][22][23], the characteristics space is divided into multiple regions, each region corresponding to the characteristics of a class; frequently, the class (image) is represented by a characteristics vector known as a prototype vector. The classification of multiclass images consists of comparing the characteristics vector of a test image with the characteristics vectors of the known classes. The test image is then assigned to the class with the most similar characteristics. This discrimination is performed by finding the distance between the vectors within the characteristics space.
To our knowledge, the texture unit has not previously been defined through a homogeneous equation system posed over an observation window. In this paper, the local texture characteristics are extracted from grayscale images S. To extract the texture characteristics, a mobile observation window of size W = 3 × 3 is employed to detect local random patterns of P pixels across the image. In each detected position, the pixel values are considered the constants of a homogeneous equation system whose solution is the vectorial texture unit →T. This unit →T is represented in a new texture space as a radius vector that extends from the origin to the position →T, such that each random pattern of P pixels has a corresponding texture unit vector →T (radius vector). By adding together all of the components of the radius vectors, →C is calculated; this latter vector contains all of the local texture characteristics of the image under study, S. Ergo, the transformation represents a gray-level image S through a vector →C, whose direction and magnitude depend entirely on the textures of the image. This transformation has been denominated Vectorial Image Representation on the Texture Space (VIR-TS), due to the representation of a digital image S through the vector →C. The efficiency of the VIR-TS transform was experimentally corroborated through the classification of tree stem images with a multiclass classifier, where the →C vector is employed as a characteristics vector.

The report has the following structure. Materials and methods are presented in Section 2. In Section 2.1, the texture space is described in three subsections: in Section 2.1.1, the texture unit is defined; in Section 2.1.2, the texture unit is represented graphically; and in Section 2.1.3, the representation of a digital image in the texture space is described. In Section 2.2, the procedure to measure similarity in the texture space between a prototype vector and a test vector is explained. Section 2.3 describes a classifier for multiple classes in the texture space, where the VIR-TS vector is used as a feature vector. In Section 3, the experimental work is developed. In Section 3.1, a digital image database is vector-represented in the texture space, where each vector has its own direction and magnitude; furthermore, using the vectors obtained in the transformation, the similarity between images is measured. In Section 3.2, experimental results of image classification are reported, which demonstrate the high efficiency of the VIR-TS technique. A discussion of our work is provided in Section 4. Finally, in Section 5, the most relevant conclusions are presented.

Texture Unit Definition
In texture analysis, a mobile observation window W frequently bears a size of W = I × J = 3 × 3 [21,24,25]; it is deployed to extract the local texture characteristics of an image under study. This window is shifted pixel-by-pixel across the whole image and, at each position, the window detects a discrete pattern, which is employed to generate a decimal code called a texture unit. Afterward, the texture unit is interpreted as a discrete variable and is then taken as an index to generate a discrete histogram h(k). Such a histogram h(k) is interpreted as a texture spectrum and is then deployed as a characteristics vector in image classifiers [21]. Now, bearing in mind the structure of the mobile observation window, and considering the gray-level image as a random matrix S = {s_mn} (m = 1, 2, . . ., M; n = 1, 2, . . ., N) of size M × N, at each position a discrete pattern P = {p_ij} (i = 1, 2, 3; j = 1, 2, 3) is detected through the window, as shown in Figure 1. If the pattern elements are considered the coefficients of a homogeneous equation system, the system will be:

p11·t1 + p12·t2 + p13·t3 = 0
p21·t1 + p22·t2 + p23·t3 = 0    (1)
p31·t1 + p32·t2 + p33·t3 = 0

where →T = (t1, t2, t3) is called the texture unit vector. The trivial solution of the homogeneous equation system occurs when all of the elements of the vector →T have a value of zero: t1 = 0, t2 = 0, t3 = 0.
Nonetheless, this solution is not functional for our interests; thus, a nontrivial solution must be found. Based on a linear algebra concept, a nontrivial solution is possible only when the determinant of the system is equal to zero; as a consequence, there are infinite solutions. To achieve this, a term K is introduced into the equations so that the determinant becomes equal to zero, as shown in Equation (2):

det C_P = | p11  p12  p13 ; p21  p22  p23 ; p31  p32  K·p33 | = 0    (2)

Hence, the problem becomes that of finding a value K so that the condition det C_P = 0 is satisfied. From Equation (2), in terms of the elements of the matrix C_P, K has the value:

K = [p11·p23·p32 − p12·p23·p31 − p13·(p21·p32 − p22·p31)] / [p33·(p11·p22 − p12·p21)]    (3)

Once the value K is determined, it is introduced into the equation system; Equation (1) then takes the following form:

p11·t1 + p12·t2 + p13·t3 = 0
p21·t1 + p22·t2 + p23·t3 = 0    (4)
p31·t1 + p32·t2 + K·p33·t3 = 0

where the value K is determined by Equation (3).
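As a numerical check of Equations (2) and (3), the following Python sketch (a minimal illustration in our notation; the experiments in this paper were carried out in MatLab, and both the function name and the pattern values here are hypothetical) computes K for a sample 3 × 3 pattern and verifies that replacing p33 with K·p33 drives the determinant to zero:

```python
import numpy as np

def compute_k(P):
    """K from Equation (3): the value that makes det(C_P) = 0 once
    p33 is replaced by K*p33 in the pattern matrix."""
    p11, p12, p13 = P[0]
    p21, p22, p23 = P[1]
    p31, p32, p33 = P[2]
    num = p11 * p23 * p32 - p12 * p23 * p31 - p13 * (p21 * p32 - p22 * p31)
    den = p33 * (p11 * p22 - p12 * p21)
    return num / den

# hypothetical gray-level pattern detected by the 3x3 window
P = np.array([[120.0,  95.0,  83.0],
              [101.0, 130.0,  77.0],
              [ 90.0, 115.0, 140.0]])

C_P = P.copy()
C_P[2, 2] = compute_k(P) * P[2, 2]   # insert K into the third equation
print(abs(np.linalg.det(C_P)) < 1e-6)   # True: the system now admits nontrivial solutions
```

With the determinant annulled, the rank of the system drops to two and the infinite family of solutions described below becomes available.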
Afterward, to determine the texture unit →T, the nontrivial solution of Equation (4) must be found. As a first step, the first two linear equations are rewritten in terms of t3:

p11·t1 + p12·t2 = −p13·t3
p21·t1 + p22·t2 = −p23·t3    (5)

Employing Cramer's rule, the solutions for t1 and t2 are obtained:

t1 = t3·(p12·p23 − p13·p22) / (p11·p22 − p12·p21)    (6)
t2 = t3·(p13·p21 − p11·p23) / (p11·p22 − p12·p21)    (7)

Taking t3 as a free parameter, t3 = λ, the texture unit is:

t1 = λ·(p12·p23 − p13·p22) / (p11·p22 − p12·p21),  t2 = λ·(p13·p21 − p11·p23) / (p11·p22 − p12·p21),  t3 = λ    (8)

Observing Expression (8), for each real value of λ, a unique solution among the infinite solutions is found. For example, when λ = 0, the trivial solution of the equation system is obtained (t1 = 0, t2 = 0, and t3 = 0); hence, the nontrivial solution is obtained when λ ≠ 0.
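Expression (8) can be exercised directly. The sketch below (Python, with an illustrative pattern; the function name is ours) solves the first two equations by Cramer's rule with t3 = λ and confirms that the resulting →T satisfies them:

```python
import numpy as np

def texture_unit(P, lam=2.0):
    """Nontrivial solution of Expression (8): the first two equations of the
    homogeneous system are solved for t1, t2 with the free parameter t3 = lam."""
    p11, p12, p13 = P[0]
    p21, p22, p23 = P[1]
    d = p11 * p22 - p12 * p21              # 2x2 determinant (assumed nonzero)
    t1 = lam * (p12 * p23 - p13 * p22) / d
    t2 = lam * (p13 * p21 - p11 * p23) / d
    return np.array([t1, t2, lam])

P = np.array([[120.0,  95.0,  83.0],
              [101.0, 130.0,  77.0],
              [ 90.0, 115.0, 140.0]])
T = texture_unit(P, lam=2.0)
print(np.allclose(P[:2] @ T, 0.0))   # True: the first two equations hold
```

Note that doubling λ doubles the whole vector, anticipating the scale-factor role of λ discussed in the experimental section.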

Graphical Representation
Based on Equation (1) and Expression (8), the texture unit vector is defined by:

→T = t1·û1 + t2·û2 + t3·û3    (9)

where û1, û2, û3 are the unit vectors that indicate the axis directions in a three-dimensional rectangular coordinate system (Figure 2a). From (9), the magnitude of →T is:

|→T| = sqrt(t1² + t2² + t3²)    (10)

with |→T| being the magnitude of the vector →T; its graphic representation is displayed in Figure 2b.
Based on Figure 2b, the texture unit →T is a radius vector that extends from the origin to the coordinates (t1, t2, t3) given by Expression (8).


Image Representation on the Texture Space
Given that a grayscale image S has a size of M × N and this image is analyzed through an I × J window, there are (M − I + 1) × (N − J + 1) patterns P. Furthermore, given that each pattern P (in the image domain) generates a texture unit →T (in the texture space), when the image S is analyzed locally through the observation window, for the n-th pattern P_n (n = 1, 2, 3, . . ., N_P = (M − I + 1) × (N − J + 1)) the n-th texture unit →T_n is calculated (a radius vector in the texture space); as a consequence, the image S can be represented through a series of radius vectors. Thus, adding together all of the components of all of these radius vectors in the texture space, the image S is represented by the vector →C, defined as:

→C = a1·û1 + a2·û2 + a3·û3    (13)

The directions are given by û1, û2, û3, and the components a1, a2, a3 are calculated with:

a1 = Σ_{n=1..N_P} t1n,  a2 = Σ_{n=1..N_P} t2n,  a3 = Σ_{n=1..N_P} t3n    (14)

where t1n is the n-th value of the component t1, t2n is the n-th value of the component t2, and t3n is the n-th value of the component t3. Equation (9) was considered, and N_P is the total number of patterns found in the digital image under study. Figure 3 depicts the vector →C in the texture space. Considering Figure 3 and Equation (13), the magnitude of the vector →C is:

|→C| = sqrt(a1² + a2² + a3²)    (15)

where its directing cosines are given by:

cos α = a1/|→C|,  cos β = a2/|→C|,  cos γ = a3/|→C|    (16)

holding the equivalence:

cos²α + cos²β + cos²γ = 1    (17)

Based on the performed analysis, the image S can be represented as a radius vector →C in the texture space, whose magnitude and direction depend on the randomness in the image under study.
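The construction of →C in Equations (13) and (14) amounts to sliding the 3 × 3 window over the image and summing the texture units componentwise. The following Python sketch (our own illustration, run on a small synthetic image rather than the 3120 × 4160 photographs used in the experiments) implements this and checks the directing-cosine identity of Equation (17):

```python
import numpy as np

def texture_unit(P, lam=2.0):
    # Expression (8): nontrivial solution with t3 = lam
    p11, p12, p13 = P[0]
    p21, p22, p23 = P[1]
    d = p11 * p22 - p12 * p21
    return np.array([lam * (p12 * p23 - p13 * p22) / d,
                     lam * (p13 * p21 - p11 * p23) / d,
                     lam])

def vir_ts_vector(S, lam=2.0):
    """Equations (13)-(14): C = (a1, a2, a3) is the componentwise sum of the
    texture units of all (M-2)*(N-2) patterns of the image S."""
    M, N = S.shape
    C = np.zeros(3)
    for m in range(M - 2):
        for n in range(N - 2):
            C += texture_unit(S[m:m + 3, n:n + 3], lam)
    return C

rng = np.random.default_rng(0)
S = rng.uniform(1.0, 255.0, size=(16, 16))   # synthetic gray-level image
C = vir_ts_vector(S, lam=2.0)
cosines = C / np.linalg.norm(C)              # directing cosines, Eq. (16)
print(abs((cosines ** 2).sum() - 1.0) < 1e-9)   # True: Eq. (17) holds
```

By Expression (8), the third component equals N_P·λ (here 14 × 14 × 2 = 392), since every pattern contributes the same t3 = λ.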


Similarity Measurement between a Prototype Image and Test Image
With the knowledge that the S → → C transformation is possible, then the measurement of similarity between a prototype image and an unknown test image can be performed in the texture space.
Given a digital image S_c of a class c whose texture vector is →C_c, and given an unknown test image S_Test whose vector is →C_Test, the difference between the S_c and S_Test images in the texture space can be calculated through the subtraction of the vector of the unknown image, →C_Test, minus the vector of the prototype image, →C_c:

→C_dif = →C_Test − →C_c    (18)

where →C_dif is the difference vector between the texture images.
Figure 4 depicts the →C_c and →C_Test vectors. Considering the law of cosines and the geometry present in Figure 4, we obtain:

|→C_dif|² = |→C_Test|² + |→C_c|² − 2·|→C_Test|·|→C_c|·cos φ    (19)

Due to the geometry of the problem, the squared magnitude of the difference vector can also be written as a scalar product:

|→C_dif|² = →C_dif · →C_dif    (20)

If (18) is substituted in (20), we obtain:

|→C_dif|² = (→C_Test − →C_c) · (→C_Test − →C_c)    (21)

On applying the distributive law:

|→C_dif|² = |→C_Test|² − 2·(→C_Test · →C_c) + |→C_c|²    (22)

On equating (19) and (22) and reducing, we reach:

→C_Test · →C_c = |→C_Test|·|→C_c|·cos φ    (23)

From (23), the following relationship can be achieved:

cos φ = (→C_Test · →C_c) / (|→C_Test|·|→C_c|)    (24)

where the symbol "·" indicates a scalar product, |→C_Test| and |→C_c| are the magnitudes of the vectors, and cos φ is the cosine of the angle formed between the →C_Test and →C_c vectors. With the knowledge that Expression (24) can be employed to measure the similarity between vectors, the following equivalence is achieved:

sim(S_Test, S_c) = cos φ = (→C_Test · →C_c) / (|→C_Test|·|→C_c|)    (25)

where sim(S_Test, S_c) is the similarity measurement between the S_Test and S_c images. Thus, based on Figure 4 and Equation (25), the following conditions (as points) can be indicated:

1.
If cos φ = 0, then sim(S_Test, S_c) = 0, because →C_Test and →C_c are orthogonal (φ = 90°). Ergo, the S_Test and S_c images are completely different (see Figure 5a).

2.
If cos φ = 1, then sim(S_Test, S_c) = 1, because →C_Test and →C_c have the same direction and magnitude (φ = 0°). For this case, the S_Test and S_c images are identical (see Figure 5b).

3.
If 0 < cos φ < 1, then 0 < sim(S_Test, S_c) < 1; consequently, the S_Test and S_c images have a certain degree of similarity between them, given that the →C_Test and →C_c vectors are not parallel within the texture space. Therefore, the condition 0° < φ < 90° is satisfied (see Figure 5c).

Based on conditions 1-3 and on Figure 5, it is possible to measure the similarity between images within the texture space; therefore, texture image classification is also a possibility.
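Equation (25) is the normalized scalar product, and conditions 1-3 can be checked numerically. Below is a minimal Python sketch with hand-picked vectors (illustrative values only, not measured texture vectors):

```python
import numpy as np

def sim(C_test, C_c):
    """Equation (25): cosine of the angle between two texture vectors."""
    return float(np.dot(C_test, C_c) /
                 (np.linalg.norm(C_test) * np.linalg.norm(C_c)))

# condition 2: identical direction and magnitude -> similarity 1
print(sim(np.array([3.0, 4.0, 0.0]), np.array([3.0, 4.0, 0.0])))  # 1.0
# condition 1: orthogonal vectors -> similarity 0
print(sim(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))  # 0.0
# condition 3: partially similar -> a value strictly between 0 and 1
print(0.0 < sim(np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0])) < 1.0)  # True
```

Because the measure is normalized, it depends only on the directions of the two texture vectors, not on their magnitudes.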

Image Classification in the Texture Space
Figure 6 schematically displays the proposed multiclass classifier for image recognition within the texture space. The classifier consists of two phases: learning and recognition. During the learning phase, a human expert identifies and names a known image database S_c (c = 1, 2, 3, . . ., C), where each image is considered an independent class; for each class, a series of radius vectors →T is calculated, and with these radius vectors, the prototype vector →C_c is obtained. This →C_c vector represents all of the local texture characteristics of the image S_c within the class c. In the recognition phase, an unknown test image S_Test is represented through a series of radius vectors →T, and the →C_Test vector is calculated with these. Afterward, the similarity between the test image S_Test and the prototype images S_c is measured in the texture space employing Expression (25). The test image S_Test is then assigned to the most similar class; such a condition is achieved when the angle φ is the smallest during the comparison between the →C_Test and →C_c vectors (see Figure 5) and when the following condition is satisfied:

sim(S_Test, S_c) = max{sim(S_Test, S_1), sim(S_Test, S_2), . . ., sim(S_Test, S_C)}    (26)

Ergo, the image S_Test is assigned to the class c when the projection of the vector →C_Test onto the →C_c vector is the unit, or that closest to the unit. The classifier results are displayed in a confusion matrix H = {h_cc}; the rows show the prototype images, the columns show the test images, the elements of the main diagonal correspond to the correct classification hits, and the elements outside of the diagonal represent the classification errors. The classification efficiency in terms of percentage is calculated with:

Ef% = (Σ diag({h_cc}) / Σ_c Σ_c h_cc) × 100    (27)

where Ef% is the efficiency in terms of percentage, Σ diag({h_cc}) is the sum of all of the elements of the main diagonal in the confusion matrix, and Σ_c Σ_c h_cc is the sum of all of the elements within the confusion matrix.

Additionally, the S_c → →C_c transformation was performed applying the following steps: (a) the RGB image acquired with the Smartphone LG 50 was transformed into a grayscale image S_c deploying MatLab 2016b® scientific software; (b) an observation window with a size of W = 3 × 3 is selected; (c) the window W is displaced element-by-element across the entire gray-level image S_c, with a size of M × N = 3120 × 4160; (d) for each pattern P, a homogeneous equation system is proposed, and then its unit →T is calculated; (e) all units →T are represented in the texture space as radius vectors; and (f) by adding together all of the radius vectors, the vector →C_c is estimated. Exercising steps a-f, the images in Figure 7 were represented through a texture vector →C_c (c = 1, 2, 3, . . ., 10). The results are presented in Table 1. Considering Figure 7 and Table 1, the digital image S_c is represented in the texture space through a radius vector →C_c, whose components are dependent on the texture characteristics of the image and on the parametrization value λ. During the transformation, the texture characteristics of the image render the →C_c vector unique in the texture space, while the parameter λ operates as a scale factor.
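The decision rule (26) and the efficiency measure (27) can be sketched as follows (Python, with hypothetical prototype vectors; the actual experiments used the measured →C_c vectors of Table 1):

```python
import numpy as np

def classify(C_test, prototypes):
    """Condition (26): assign the test vector to the class whose prototype
    maximizes the similarity of Equation (25)."""
    sims = [np.dot(C_test, C) / (np.linalg.norm(C_test) * np.linalg.norm(C))
            for C in prototypes]
    return int(np.argmax(sims))

def efficiency(H):
    """Equation (27): percentage of hits on the main diagonal of the
    confusion matrix H."""
    return 100.0 * np.trace(H) / H.sum()

# hypothetical prototype vectors for three classes
protos = [np.array([10.0, 1.0, 4.0]),
          np.array([1.0, 12.0, 4.0]),
          np.array([3.0, 3.0, 9.0])]

H = np.zeros((3, 3))
for c, C in enumerate(protos):       # re-classify the prototypes themselves
    H[c, classify(C, protos)] += 1.0
print(efficiency(H))   # 100.0
```

Since each prototype is most similar to itself, all hits fall on the main diagonal and the efficiency is 100%, mirroring the structure of the experiment reported below.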

Experimental Work and Results

Transformation of an Image S Onto a Texture Vector →C

In this section, a database comprising 10 digital images of tree stems S_c (c = 1, 2, . . ., 10) is represented through texture vectors →C_c, employing the values λ = 2 and λ = 25 and an observation window of size W = 3 × 3. The database is presented in Figure 7. Each image S_c was acquired with a Smartphone LG 50; rotation and scale were controlled, under natural illumination and with a fixed resolution of M × N = 3120 × 4160 pixels.
To verify the uniqueness of each vector in Table 1, the similarity between these vectors is measured employing the scalar product in Equation (25). The results are displayed in a confusion matrix, where the elements of the main diagonal correspond to the similarity measurement of a vector with itself; hence, their value is the unit (marked in blue). Otherwise, the elements outside of the main diagonal correspond to the similarity measurement between two different vectors →C_c and →C_m; consequently, such elements have a value lower than the unit. Tables 2 and 3 present these results. Based on the results of Tables 2 and 3, both confusion matrixes are identical, given that the elements in their respective diagonals are the unit, and the elements outside of their diagonals are lower than the unit. This corroborates that a digital image S_c is represented in the texture space through a unique vector →C_c, and that the parameter λ, which operates as a scale factor, does not affect the results. Furthermore, the similarity measurements between different images above 0.94 are attributed to the parametrization of the homogeneous equation system due to its resolution. This causes the third component of every texture unit to bear the same value λ, while the remaining two components (the first and the second) are the only components scaled by the value of λ (see Equation (8)).
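That λ acts as a pure scale factor follows from Expression (8): t1 and t2 are proportional to λ and t3 = λ, so changing λ rescales →T (and hence →C) without altering its direction, leaving the similarity of Equation (25) unchanged. A quick Python check (with an illustrative random pattern, not one of the experimental images):

```python
import numpy as np

def texture_unit(P, lam):
    # Expression (8): nontrivial solution with t3 = lam
    p11, p12, p13 = P[0]
    p21, p22, p23 = P[1]
    d = p11 * p22 - p12 * p21
    return np.array([lam * (p12 * p23 - p13 * p22) / d,
                     lam * (p13 * p21 - p11 * p23) / d,
                     lam])

rng = np.random.default_rng(1)
P = rng.uniform(1.0, 255.0, size=(3, 3))
T2 = texture_unit(P, 2.0)
T25 = texture_unit(P, 25.0)
# the two units differ only by the scale factor 25/2, so their directions coincide
print(np.allclose(T25, (25.0 / 2.0) * T2))                              # True
print(np.allclose(T2 / np.linalg.norm(T2), T25 / np.linalg.norm(T25)))  # True
```

This is consistent with Tables 2-5, where λ = 2 and λ = 25 produce identical similarity and classification results.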

Image Recognition in the Texture Space
Knowing that each digital image can be represented in the texture space through a vector, the goal of this section is to prove that digital images can be classified in the texture space. As previously presented in Figure 7, the database consists of 10 digital images with a size of M × N = 3120 × 4160 pixels; these images show the bark of tree stems and were acquired under natural lighting with controlled scale and rotation. The classifier employed for image recognition was described in Section 2.3. The same images are employed in both the learning and recognition phases, along with the same observation window of size W = 3 × 3 pixels. The similarity measurement in the texture space is performed considering the maximal likeness between the →C_Test and →C_c vectors (Equations (25) and (26)). To conclude, the classification results are presented through two confusion matrixes: Table 4 displays the confusion matrix for λ = 2, and Table 5 presents the confusion matrix for λ = 25. It is worth recalling that the elements of the main diagonal in these matrixes represent the correct classification hits, and the elements outside of the main diagonal are the identification errors. In this manner, based on Equation (27) and Tables 4 and 5, the classification efficiency is:

Ef_{λ=2}% = 100%,  Ef_{λ=25}% = 100%

where Ef_{λ=2}% is the image classification efficiency in terms of percentage for λ = 2, and Ef_{λ=25}% is the image classification efficiency for λ = 25. The efficiency is 100% in both cases. This further confirms that the proposed transformation in Section 2, along with the classifier described in Section 2.3, entertains a high efficiency and that the recognition of the images can be performed in the texture space. The high efficiency is attributed to the following points:

1.
In the S → →C transformation, the image S is completely characterized through its local texture characteristics, and these are represented by the texture vector →C.

2.
The digital image is essentially a field of randomness, given the nature of the light source and the noise detected by the system; henceforth, for each image S c , a unique vector → C c is generated in the texture space with a particular direction and magnitude that differ for each class.
Nonetheless, the efficiency of our proposal can be reduced if the digital images are classified dynamically (in real time). This is due to the temporal and spatial fluctuation of the light source. Consequently, at each instant of time, the pixels of the digital camera vary in intensity. In other words, the noise during the acquisition of the image increases; thus, the texture vector →C changes, causing recognition errors.

Discussion
In this paper, the S → →C transformation is proposed, where S is a grayscale image and →C is a vector in a new space, denominated the texture space. Essentially, the transformation consists of representing the image S through a series of radius vectors in the texture space, with each radius vector a texture unit →T, which is calculated by solving a homogeneous equation system. Afterward, the vector →C is calculated as the sum of all of the radius vectors and, subsequently, all of the local texture characteristics of the image under study are contained in it. Its direction and magnitude are in agreement with the randomness in the digital image and, for each image S_c, a unique vector →C_c is generated. Additionally, a multiclass classifier is proposed and applied within the texture space, where the vector →C_c is employed as a characteristics vector, demonstrating its potential application for image classification. Based on these results, the following points are worth mentioning:

1.

The image S is fully characterized by the transformation S → →C, where the image is represented in the texture space by the texture vector →C. The new transformation can be termed Vectorial Image Representation on the Texture Space (VIR-TS) because, in the transformation, the image S comes to be represented by the vector →C.

2.
Due to the irregular nature of the light source and the noise during the photodetection process, the image S is considered a field of randomness; consequently, a unique vector → C is generated for each digital image (see Table 1).

3.

The vector →C withholds all of the local texture characteristics of the image under study, given that the vector is calculated as the sum of all of the radius vectors, where a radius vector is defined as the texture unit →T.

4.
The texture unit → T possesses a vectorial character because it is calculated by solving a homogeneous equation system of 3 × 3.

5.
The texture vector →C can be employed as a characteristics vector in classifiers with a high efficiency (see Tables 4 and 5).

6.
The value λ employed for the solution of the homogeneous equation system does not affect the results of the image recognition.

7.
The S → →C transformation has a potential application in the development of artificial vision systems focused on the recognition of digital images.

8.
In the experimental work, the number of classes does not affect the results of the classification efficiency, given that each digital image is represented by its own vector →C in the texture space (see point 2).

9.
Because medical images contain local textural features that can be extracted through local analysis [3,4,26,27], and knowing that the technique reported in this work also extracts texture features based on local analysis, the VIR-TS transform and the classifier described in Section 2.3 can be applied to medical image recognition. The benefit would be the development of medical diagnostic systems with high efficiency that are easy to implement, because the definition of the texture unit is based on a linear transformation and not on pattern encoding [21,28], where an overflow of the physical memory of the computer is possible [29].

10.

Comparing the statistical texture extraction techniques reported in reference [21] with the VIR-TS technique based on linear transformations, both texture extraction techniques are very different. In the statistical techniques, the texture unit is calculated based on the encoding of discrete random patterns located on the digital image, the texture unit is considered a random event, and the texture characteristics are represented through a discrete histogram. In our technique, called VIR-TS, the texture unit is calculated based on a linear transformation, the texture unit is a radius vector, and the texture features are represented in a texture space through a random vector.
The Vectorial Image Representation on the Texture Space (VIR-TS) transformation is very different from the statistical techniques reported in reference [21]. In the VIR-TS transformation, the texture unit is a radius vector, the vector is calculated by solving a homogeneous system of equations, and its graph can be visualized in the texture space. With the transformation, the digital image S is expressed in the texture space by the random vector →C, which consists of three components, a1, a2, a3, whose directions are û1, û2, û3. Because the image is vector-represented, image classification in the texture space is performed by measuring the similarity between the prototype vectors and the test vector. Their similarity is calculated through the projection between both vectors. Finally, the test image is assigned to the most similar class. Based on the experimental work, the VIR-TS transformation has a high classification efficiency because its texture feature extraction efficiency is very high. Furthermore, its implementation is very easy, because the digital image is represented through a three-component random vector.
With the knowledge that our proposal has potential application in image recognition, our future lines of research will include rendering the VIR-TS transform invariant to rotation and scale; proposing the VIR-TS transform for color image classification; applying the VIR-TS transform in the recognition of biomedical images; and performing an efficiency study of classification in images with noise.

Conclusions
In this paper, the Vectorial Image Representation on the Texture Space (VIR-TS) transform is proposed and applied. The VIR-TS transform is based on the extraction of the local texture characteristics of the image S and represents these through the vector →C in the texture space. Each radius vector is a texture unit →T, which is estimated by solving a 3 × 3 homogeneous equation system. In the texture space, each image has a corresponding unique vector, given that the image is a random field of pixels. Experimentally, the vector →C was employed as a characteristics vector in a new multiclass classifier; thus, the high efficiency of the VIR-TS transform was corroborated through the classification of tree stem digital images. The efficiency reached 100%; however, in applications under natural environments, its efficiency may be significantly lower due to the noise in photodetection and the random nature of light.
The VIR-TS transform has potential application in locating missing persons and classifying medical images.


Figure 1 .
Figure 1. A pattern P detected in the grayscale image S through an observation window of 3 × 3 elements.


Figure 2 .
Figure 2. Representation of the unit →T in the texture space: (a) graphic representation of the unit vectors û1, û2, û3; (b) graphic representation of the texture unit →T and its components t1·û1 + t2·û2 + t3·û3.



Figure 3 .
Figure 3. Graphic representation of the texture vector →C with its directing cosines.




Figure 4 . Figure 5 .
Figure 4. Geometry employed for the similarity measurement between S_c and S_Test. Figure 5. Similarity conditions between the →C_Test and →C_c vectors: (a) completely different images; (b) identical images; (c) partially similar images.



Figure 6 .
Figure 6. A schematic representation of the multiclass classifier.



Figure 7 .
Figure 7. Digital images of the tree stems employed in the experiments.


Table 4. Confusion matrix obtained for the image classification when λ = 2.
Table 5. Confusion matrix obtained for the image classification when λ = 25.



Table 1 .
Table 1. Texture vectors →C_c obtained from the digital images shown in Figure 7.