Estimation of Symmetry in the Recognition System with Adaptive Application of Filters

The aim of this work is to study the influence of lighting on different types of filters in order to create adaptive perception systems in the visible spectrum. The problem is solved by estimating symmetry operations (operations responsible for image-to-image transformations). Namely, the authors are interested in an objective assessment of how well the image of an object can be reproduced (the objective symmetry of filters) after the application of filters. This paper investigates and reports the results of the most common edge detection filters as a function of the light level; that is, the behavior of the system was studied in a room with indirect natural lighting and with standard electric lighting (according to the requirements of the educational process in Ukraine). The Sobel, Sobel x, Sobel y, Prewitt, Prewitt x, Prewitt y, and Canny methods were used and compared in the experiments. The conclusions provide a subjective assessment of the performance of each filter under the given conditions. Dependencies are defined that make it possible to give priority to certain filters (among those studied) depending on the lighting.


Introduction
In the last decade, the possibilities of autonomous decision making by computer vision systems have been actively studied. The topic of autonomous decision-support agents that are able to adapt to the environment is relevant in a variety of applications: medicine [1][2][3] and systems and means of artificial intelligence [4][5][6]. The problem includes the question of where fuzzy logic is needed [7], security issues [8][9][10][11], biometric recognition systems [12,13], systems to support people with disabilities and related technologies and applications aimed at wayfinding and navigation [5,13], land research, space research, agriculture, industrial climate issues, smart things, etc.
The task of automated selection of key characteristics for the classification of images using computer tools is not a trivial problem [14][15][16], especially for a variable field of attention [17,18]. There are many methods and algorithms for identifying key characteristics for image classification [19][20][21][22][23][24], but each has its disadvantages and advantages. Most of the existing methods that solve this problem are effective only for individual objects, such as human faces, simple geometric shapes, and handwritten or printed symbols, and only under certain conditions, including particular lighting, a particular position of the object relative to the wearable camera, and a particular background. An urgent problem now is the creation of automated systems that compensate for the limited capabilities of people at different levels. When artificial neural networks are used [19,20], reducing the amount of computation during training is an important element of the processing pipeline. This paper considers the problem of comparing methods for the selection of characteristic features under different external conditions in order to identify the best method for given conditions. Adaptive algorithms for detecting, classifying, and tracking the edges of objects have been developed.

Table 1. Lighting on the object near the window, in lux.

Intensity scale, lux	Lighting condition
10^8	The Sun disc at noon
10^6	Glitter of water and metal under sunlight

The aim of the work is to create adaptive systems of perception in the visible spectrum by constructing the dependences of the quality of the applied filters in dynamically changing conditions. In particular, the paper studies the behavior (in terms of symmetry of object representation) of the most popular edge detection methods under different lighting conditions.

Sobel's Operator
The Sobel operator is a discrete differential operator that calculates an approximate value of the image gradient [28]. The result of applying the Sobel operator at each point of the image is either the brightness gradient vector at that point or its norm.

Description
The image convolution on which the Sobel operator is based is performed by small separable integer filters in the vertical and horizontal directions. The Sobel operator uses a gradient approximation that is not accurate, and this becomes especially noticeable at high-frequency image oscillations.
At each point of the image, the Sobel operator calculates the brightness gradient, giving the direction of the greatest increase in brightness and the magnitude of this change. The result shows how smoothly or sharply the brightness changes at each point, and therefore the probability that the point lies on an edge, as well as the orientation of the contour. Computing the gradient magnitude is more reliable and easier than computing the orientation.
The gradient of a function of two variables is, at each image point, a two-dimensional vector whose components are the derivatives of the image brightness in the horizontal and vertical directions. At a point within a region of constant brightness, the Sobel operator yields a zero vector; at a point lying on a boundary between regions, it yields a vector pointing in the direction of increasing brightness.

Formulation
Strictly speaking, the operator uses 3 × 3 kernels to calculate the output values. If A is the original image, and Gx and Gy are two images whose points contain the approximate derivatives in x and y, they are computed as

Gx = [[+1, 0, −1], [+2, 0, −2], [+1, 0, −1]] * A,    Gy = [[+1, +2, +1], [0, 0, 0], [−1, −2, −1]] * A,

where * denotes the two-dimensional convolution operation. The x-coordinate here increases "to the right", and y "down". At each point of the image, the approximate magnitude of the gradient can be calculated from the obtained approximate derivatives (element by element):

G = sqrt(Gx^2 + Gy^2).

Using this information, we can also calculate the direction of the gradient:

Θ = atan2(Gy, Gx),

where, for example, the angle Θ is zero for a vertical boundary whose dark side is on the left. The brightness function is known only at discrete points, so we need a differentiable function that passes through these points. The derivative at any single point is then a function of the brightness at all points of the image, and it can be calculated with a certain degree of accuracy.
Sobel's operator is an inexact approximation of the image gradient, but its accuracy is sufficient for practical application in many problems. Specifically, the operator uses only the intensity values in a 3 × 3 neighborhood of each pixel to approximate the image gradient, and only integer weights for the brightness values.
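As an illustration of the formulation above, the 3 × 3 Sobel gradient can be sketched in a few lines of NumPy. This is our own minimal sketch, not the implementation used in the paper's experiments; the kernels are applied as cross-correlation, which is equivalent to convolution with the flipped kernel:

```python
import numpy as np

# Standard Sobel kernels in cross-correlation form
# (equivalent to convolution with the flipped kernel).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def filter2d(img, kernel):
    """Cross-correlate `img` with a small kernel, zero-padded borders."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def sobel(img):
    """Gradient magnitude and direction; the angle is 0 for a vertical
    boundary with the dark side on the left, as in the text."""
    gx = filter2d(img, SOBEL_X)
    gy = filter2d(img, SOBEL_Y)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

For a synthetic image that is dark on the left and bright on the right, the direction returned at the edge is 0, matching the sign convention described above.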

Extension to Another Number of Dimensions
The Sobel operator consists of two separate operations:

•	Smoothing with a triangular filter perpendicular to the direction of the derivative: h(−1) = 1, h(0) = 2, h(1) = 1;
•	Finding a simple central difference in the direction of the derivative: h′(−1) = +1, h′(0) = 0, h′(1) = −1.
Sobel filters for image derivatives in different dimensions (x, y, z, t ∈ {−1, 0, 1}) are built as products of these one-dimensional filters. For example, the three-dimensional Sobel kernel for the z-axis is h′(z)·h(y)·h(x).

Technical Details
As follows from the definition, the Sobel operator can be implemented with simple hardware and software: approximating the gradient vector requires only the eight pixels around each point of the image and integer arithmetic. Moreover, both discrete filters described above are separable:

[[+1, 0, −1], [+2, 0, −2], [+1, 0, −1]] = [1; 2; 1] · [+1 0 −1],
[[+1, +2, +1], [0, 0, 0], [−1, −2, −1]] = [+1; 0; −1] · [1 2 1],

and the two derivatives, Gx and Gy, can therefore be calculated as

Gx = [1; 2; 1] * ([+1 0 −1] * A),    Gy = [+1; 0; −1] * ([1 2 1] * A).

This decomposition can reduce the number of arithmetic operations per pixel.
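The separability claim is easy to check directly; the short sketch below (our own illustration) shows that the full 3 × 3 kernel is exactly the outer product of the smoothing column and the differencing row:

```python
import numpy as np

# Triangular smoothing filter, applied perpendicular to the derivative.
smooth = np.array([[1], [2], [1]])
# Central-difference filter, applied along the derivative direction.
diff = np.array([[1, 0, -1]])

# Their outer product reproduces the full 3x3 Sobel x-kernel,
# so one 2-D convolution can be replaced by two cheap 1-D passes.
sobel_x = smooth @ diff
```

Two 1-D passes cost 6 multiply-adds per pixel instead of 9 for the full 2-D kernel, which is the saving the text refers to.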

Canny's Operator
Canny's operator (the Canny boundary detector, Canny algorithm) is an edge detection operator in the field of computer vision. It was developed in 1986 by John F. Canny and uses a multistage algorithm to detect a wide range of boundaries in images [16,29]. Canny showed that the optimal filter can be well approximated by the first derivative of a Gaussian. He also introduced the concept of non-maximum suppression, which means that only pixels at which a local maximum of the gradient is reached in the direction of the gradient vector are declared boundary pixels.

The detector must satisfy three criteria:
•	Good detection (Canny interpreted this property as increasing the signal-to-noise ratio);
•	Good localization (the correct determination of the position of the boundary);
•	A single response to one edge.
From these criteria, a target error-cost function was then constructed, the minimization of which yields the "optimal" linear operator for convolution with images.
The boundary detector algorithm is not limited to calculating the gradient of the smoothed image. Only the maximum points of the image gradient remain in the border contour, while the maximum points lying near the border are removed. The algorithm also uses information about the direction of the border in order to remove points immediately next to the border without breaking the border itself near the local gradient maxima. Then, with the help of two thresholds, weak borders are removed. Each fragment of the border is processed as a whole: if the gradient value anywhere on the observed fragment exceeds the upper threshold, the fragment remains a "permissible" edge, even in places where the gradient falls below the upper threshold, as long as it does not fall below the lower threshold; if there is no point on the whole fragment with a value greater than the upper threshold, the fragment is deleted. This hysteresis reduces the number of breaks in the resulting edges. The inclusion of noise attenuation in Canny's algorithm increases the stability of the results on the one hand but, on the other hand, increases computational costs and leads to distortion and even loss of boundary details. For example, the algorithm rounds the corners of objects and destroys boundaries at connection points.
The main stages of Canny's algorithm are:
Smoothing: the image is blurred to remove noise. Canny's operator uses a filter that can be closely approximated by the first derivative of a Gaussian with σ = 1.4.
Finding gradients: the angle of the direction of the gradient vector [30] is rounded and can take the values 0, 45, 90, and 135 degrees (see Figure 1).


Suppression of non-maxima: only local maxima are marked as edges.
Double threshold filtering: potential edges are determined by thresholds.
Tracing the area of ambiguity: the final edges are determined by suppressing all edges that are not connected to definite (strong) edges.
Before using the detector, the image is usually converted to grayscale to reduce computational costs. This stage is typical of many image-processing methods.
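The double-threshold hysteresis step described above can be sketched as follows. This is a simplified NumPy illustration under our own naming (not the implementation used in the experiments): strong pixels seed a flood fill that promotes 8-connected weak pixels, while isolated weak pixels are suppressed.

```python
import numpy as np
from collections import deque

def hysteresis(magnitude, low, high):
    """Double-threshold edge linking: pixels with gradient magnitude
    above `high` are strong edges; pixels between `low` and `high`
    are kept only if 8-connected to a strong pixel."""
    strong = magnitude >= high
    weak = (magnitude >= low) & ~strong
    edges = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))  # seed with strong pixels
    h, w = magnitude.shape
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < h and 0 <= nj < w
                        and weak[ni, nj] and not edges[ni, nj]):
                    edges[ni, nj] = True     # promote linked weak pixel
                    queue.append((ni, nj))
    return edges
```

Calling `hysteresis(mag, 50, 100)` on a gradient-magnitude map keeps weak responses only where they form a chain back to a strong response, which is exactly the break-reducing behavior described in the text.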

Prewitt's Operator
The Prewitt operator is a method of edge detection in image processing which calculates the maximum response over a set of convolution kernels to find the local orientation of the edge at each pixel. It was created by Dr. Judith Prewitt to identify the boundaries of objects in medical images [31].
Different kernels are used for this operation. From one kernel, eight can be obtained by rearranging its coefficients in a circle. Each resulting kernel is sensitive to an edge direction from 0 to 315 degrees in steps of 45 degrees, where 0 corresponds to a vertical edge. The maximum response over all kernels becomes the value of the corresponding pixel in the output image, and the edge direction is given by the number, from 1 to 8, of the kernel that produced the maximal response.
This method of edge detection is also called edge template matching because the image is mapped to a set of templates, and each represents some boundary orientation. The size and orientation of the border in a pixel is then determined by the pattern that best matches the local neighborhood of the pixel.
While a differential gradient detector requires a time-consuming calculation of the orientation estimate from the magnitudes in the vertical and horizontal directions, the Prewitt edge detector obtains the direction directly from the kernel with the maximal response. The set of kernels is limited to 8 possible directions, although experience shows that most direct orientation estimates are not very accurate either. On the other hand, the kernel set requires 8 convolutions for each pixel, whereas the gradient method requires only 2: one sensitive to vertical and one to horizontal edges.

Formulation
The operator uses two 3 × 3 kernels, convolving the original image to calculate the approximate values of the derivatives: one horizontal and one vertical. Let A be the original image and Gx and Gy be two images in which each point contains the horizontal and vertical approximations of the derivative, which are calculated as

Gx = [[+1, 0, −1], [+1, 0, −1], [+1, 0, −1]] * A,    Gy = [[+1, +1, +1], [0, 0, 0], [−1, −1, −1]] * A,

where * denotes the two-dimensional convolution operation.
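The template-matching ("compass") variant described earlier, in which eight kernels are produced by rearranging the coefficients in a circle, can be sketched as below. The helper names are our own illustration, not the authors' code:

```python
import numpy as np

# Prewitt x-kernel in cross-correlation form.
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])

# Positions of the eight outer cells of a 3x3 kernel, listed clockwise.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def rotate_kernel(k):
    """Shift the eight outer coefficients one step clockwise,
    keeping the centre cell fixed."""
    vals = [k[p] for p in RING]
    vals = vals[-1:] + vals[:-1]          # rotate by one ring position
    out = k.copy()
    for (i, j), v in zip(RING, vals):
        out[i, j] = v
    return out

def compass_kernels(base):
    """Generate the eight direction-sensitive kernels from one base kernel."""
    kernels = []
    k = base.copy()
    for _ in range(8):
        kernels.append(k)
        k = rotate_kernel(k)
    return kernels
```

At each pixel, the compass detector takes the maximum of the eight kernel responses as the edge magnitude and the index of the winning kernel (1 to 8) as the direction, as described in the text.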

Experiment №1. 16:00, Action at 67-70 lux
A smartphone application was used to measure the illuminance in lux, since the study was intended to be relevant to the most common video cameras.

Experiment №3. 16:50, Action at 100 Lux (Including Electric Lighting)
The third experiment (lighting 100 lux) is presented in Figure 10 (original input image and the result of processing by the Sobel method); Figure 11 (normalized Sobel gradient on the X-axis and gradient on the Y-axis); Figure 12


Experiment №4. 13:00, Action at 260 Lux
The fourth experiment (lighting 260 lux) is presented in Figure 14 (original input image and the result of processing by the Sobel method); Figure 15 (normalized Sobel gradient on the X-axis and gradient on the Y-axis); Figure 16


Experiment №5. 13:30, Action at 2300 Lux
The fifth experiment (lighting 2300 lux) is presented in Figure 18 (original input image and the result of processing by the Sobel method); Figure 19 (normalized Sobel gradient on the X-axis and gradient on the Y-axis); Figure 20

Experiment №6. 13:30, Action at 2300 Lux + Human Shadow
The sixth experiment (lighting 2300 lux) is presented in Figure 22 (original input image and the result of processing by the Sobel method); Figure 23 (normalized Sobel gradient on the X-axis and gradient on the Y-axis); Figure 24


Experiment №7. 13:30, Action at 2350 Lux, Cam Front to Light
The seventh experiment (lighting 2350 lux) is presented in Figure 26 (original input image and the result of processing by the Sobel method); Figure 27 (normalized Sobel gradient on the X-axis and gradient on the Y-axis); Figure 28

The experiment was performed as follows. The original image was illuminated at 50 lux, 70 lux, 100 lux, 260 lux, and 2300 lux. These images are presented in Figures 2, 6, 10, 14, 18, 22 and 26, respectively. Canny, Sobel, and Prewitt filters were applied to these images, and the filtered images were compared with each other to assess the performance of each filter and to make recommendations for the use of certain filters. Figure 30 shows the dependences of the objective comparison of distances (PSNR) between the original pixel brightness values and the values obtained after applying the corresponding filters; the data in the chart are summarized. Diagram 1 shows three graphs of the relationship between the signal-to-noise ratios of the filtered images. The blue color shows the values of the signal-to-noise ratio between img_sobel and img_sobel_x. This value is 7.23 and is the largest value in this chart; therefore, these images are the most similar to each other. The second value is the result of comparing img_sobel and img_sobel_y and is 6.92. We also observe high PSNR values compared to the others, which indicates that very similar transformations were used. The remaining values are comparisons between the pairs img_sobel and img_canny, img_sobel and img_prewitt_x, and img_sobel and img_prewitt_y. Because filtering was applied with different filters, the images are less similar, which makes sense. The charts of other colors, for img_sobel_x and img_sobel_y, behave similarly. The slightly higher values between the pairs img_sobel_x and img_prewitt_y are explained by the better finding of contours in the image, which should be taken into account when choosing a filtering method.
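The PSNR metric used for these comparisons can be computed as below. This is the standard definition; the helper name and the 8-bit peak value of 255 are our assumptions about the images used:

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized images, in dB.
    Higher values mean the two images are more similar."""
    mse = np.mean((img_a.astype(float) - img_b.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Note that PSNR is a global measure: a single mean squared error is computed over the whole image, unlike SSIM, which is evaluated in a sliding local window (see the Discussion).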

Results
We performed an experiment in which the edges of the source objects were marked on the input image in one of the raster graphics editors. Figure 31a shows the image taken at 50 lux, Figure 31b shows the image filtered by the Sobel operator, Figure 31c by the Canny operator, and Figure 31d by Prewitt's operator. As a result of filtering, more edges from the original image were preserved by Prewitt and Canny, which can be seen subjectively. An experiment was performed to compare the input image with the filtered images using PSNR, with the following results: input image vs. img_sobel, 11.7; input image vs. img_canny, 13.76; input image vs. img_prewitt, 14.67. Prewitt's operator performs best here, as its PSNR value is the highest.

Discussion
The experiments show that to create an adaptive system, you can use the technique of selecting filters, for which you need to choose a filter that provides the best symmetry of the images under appropriate conditions. The success of the experiments inspires the authors to further test all the most widely used filters in different conditions, which will allow (according to an objective assessment) choosing filters that provide the best symmetry of image display for specific conditions.
In image segmentation evaluation, the structural similarity index (SSIM) estimates the visual impact of shifts in an image [26]. The SSIM consists of three local comparison functions, namely luminance comparison, contrast comparison, and structure comparison, between two signals, excluding other remaining errors. The SSIM is computed locally by moving an 8 × 8 window over each pixel, unlike the peak signal-to-noise ratio (PSNR) or root-mean-square error (RMSE), which are measured at the global level. "Even though SSIM can be applied in the case of an edge detection evaluation, in the presence of too many areas without contours, the obtained score is not efficient or useful (in order to judge the quality of edge detection with the SSIM, it is necessary to compare with an image having detected edges situated throughout the image areas)" [26]. This is why PSNR was chosen as a simple and widespread method of global assessment. Other popular evaluation methods [26] do not have a strong superiority and are less widely used. For this reason, this paper took on the previously unsettled task of studying the behavior of filters in a particular educational environment (where the evaluation methods do not differ strongly).
The success of this study encourages the authors to expand the number of methods and conditions in the next study, in particular, to conduct experimental studies of additional methods [27]; that work analyzes processing speed, in contrast to the lighting-level focus of the present paper.

Conclusions
The conducted experiments show that, among the considered methods, variations of the Sobel operator compete for the title of the best under all conditions. Under different lighting, different variations can be considered the best. In addition, although there is a difference in quality between the variations of the Sobel operator, they all give a good result under all types of lighting. All the other methods highlight the characteristic features rather weakly. The resulting dependence presented in the diagram makes it possible to fully automate the selection of a filter for image preprocessing when designing adaptive computer vision systems.