BPG-Based Automatic Lossy Compression of Noisy Images with the Prediction of an Optimal Operation Existence and Its Parameters

Abstract: With the resolution improvement, the size of modern remote sensing images increases, and it becomes desirable to compress them, mostly by lossy compression techniques. Often the images to be compressed (or some component images of multichannel remote sensing data) are noisy. The lossy compression of such images has several peculiarities dealing with specific noise filtering effects and the evaluation of compression performance. In particular, an optimal operation point (OOP) may exist, at which the quality of the compressed image is closer to the corresponding noise-free (true) image than the quality of the uncompressed (original, noisy) image, according to a certain criterion (metric). In such a case, it is reasonable to automatically compress the image of interest in the OOP neighborhood; however, since the true image is not available in practice, it is impossible to determine directly whether the OOP exists. Here we show that, by a simple and fast preliminary analysis and pre-training, it is possible to predict the OOP's existence and the metric values in it with appropriate accuracy. The study is carried out for the better portable graphics (BPG) coder and additive white Gaussian noise (AWGN), focusing mainly on one-component (grayscale) images. The results allow concluding that the prediction of an improvement (or reduction) in the quality metrics PSNR and PSNR-HVS-M is possible. In turn, this allows decision-making about the existence or absence of an OOP. If an OOP is absent, a more "careful" compression is recommended. Having such rules, it becomes possible to carry out the compression automatically. Additionally, possible modifications for the cases of signal-dependent noise and the joint compression of three-component images are considered, and the possible existence of an OOP for these cases is demonstrated.


Introduction
Remote sensing (RS) systems and other imaging tools currently provide valuable data for agriculture, forestry, hydrology, ecological monitoring, non-destructive testing in industry, etc. [1-4]. Using imaging data produced by modern systems, it is possible to estimate the parameters of large sensed territories, to control their changes in time, to detect objects of interest, and to solve many other important tasks. Meanwhile, a better spatial resolution and more frequent observations result in a fast increase in the volume of image data that have to be transferred, processed, stored, and interpreted [5,6].
At the stages of data transfer and storage, image compression is often applied [7-9]. A lossless compression [7,8,10] preserves all the information contained in RS and imaging data, but the compression ratio (CR) attained by the corresponding methods can be insufficient in practice. To obtain a larger CR, near-lossless and lossy compression is mostly applied [7,9-12]; however, distortions are then inevitably introduced, and controlling them becomes an important task. The contributions of this paper are the following. First, the dependence of the OOP position on the noise standard deviation is established; this dependence is shown to be logarithmic, as opposed to the linear dependences established in [24] for other coders. Second, a modification of the procedure from [24] is proposed and its thorough analysis is carried out. In particular, it is proposed to use rational functions in curve fitting into scatter-plots. This analysis shows that there are at least two statistical parameters, quickly and easily calculated in the DCT domain, that can be used for predicting an improvement or reduction in two metrics. Third, the accuracy of the prediction is analyzed, and such factors as the noise realization and the number of blocks that influence the accuracy are considered. Based on this analysis, practical recommendations are given that allow compression automation. Fourth, initial data concerning the lossy compression of single-channel images corrupted by Poisson noise and three-channel images corrupted by AWGN are presented, demonstrating the possible existence of OOPs for these cases. Finally, we show that the BPG coder is more efficient compared to the other coders considered earlier in [24].
The paper is structured as follows. Section 2 describes the image/noise model, the considered metrics, and the basic dependences. Section 3 analyzes these dependences in more detail. The methodology of prediction and the analysis of its accuracy are given in Section 4. Decision-making and practical recommendations are discussed in Section 5; the cases of signal-dependent noise and three-channel image compression are also briefly studied there. Finally, the conclusions are given.

Image/Noise Model and Compression Efficiency Criteria
It is a well-known fact that compression characteristics essentially depend on image properties. Depending on the image complexity, the CR can vary by several (or even tens of) times for the same PSNR or, equivalently, the quality of compressed images can be rather different for the same CR. Because of this, to ensure the universality of the conclusions and recommendations, the corresponding analysis and method design should be performed for a set of images with a very wide variation of properties. A set of test images needs to contain images of simple, middle, and complex structure, where the complexity can be characterized by the percentage of pixels that belong to homogeneous image regions, by the entropy of the noise-free image, and so on. For a better understanding, we give six images as examples of different complexity (Figure 1). More than a half of the pixels in Figure 1a belong to quasi-homogeneous regions, a certain part of the pixels in Figure 1c-f relate to homogeneous regions, whilst there are practically no homogeneous regions in the image in Figure 1b.
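The entropy used below as a complexity indicator can be computed, for instance, from the first-order intensity histogram (a minimal sketch; the paper does not specify the exact estimator used, so this is an assumption):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of the 8-bit intensity histogram."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins: 0*log2(0) := 0
    return float(-np.sum(p * np.log2(p)))
```

A constant image gives an entropy of 0 bits, while an image using all 256 gray levels equally often gives the maximum of 8 bits.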
There are different ways to characterize image complexity. We have noticed that the performance of the lossy compression of noisy images correlates with the image entropy for the noise-free case. The entropy E is the following: 5.82 for Frisco, 7.33 for Diego, 7.46 for Fr01, 7.40 for Fr02, 7.38 for Fr03, and 7.29 for Fr04. It will be shown later that the plots for test images having similar E values of their noise-free versions usually behave similarly. Note that the marginal cases are of the most interest; however, "average" cases (of middle complexity) are also worth considering. Because of this, there is a tendency in the image processing community for the creation of image databases and for their use in the design and verification of image processing methods [38,39].
As a starting point for analyzing the lossy compression of noisy images using BPG, we concentrate on the case of grayscale images. As a noise model, we consider AWGN, which is known to be the simplest model. Just such a model is commonly used as a starting point in research, for example, in the prediction of the potential efficiency of image denoising [40]; thus, one can present an observed noisy grayscale image as

I^n_ij = I^true_ij + n_ij, i = 1, ..., I_Im, j = 1, ..., J_Im, (1)

where I^true_ij is the true (noise-free) image, n_ij denotes the AWGN in the ij-th pixel, and I_Im and J_Im define the considered image size. The noise mean is assumed equal to zero and the AWGN variance equal to σ². Moreover, we assume that σ² is either a priori known or pre-estimated with high accuracy [41].
The quality of the original noisy image can be characterized in different ways. One standard way is to calculate the peak signal-to-noise ratio as

PSNR_n = 10 log10(255²/σ²), (2)

under the assumption that an image is represented as an 8-bit two-dimensional (2D) data array. Another way is to employ visual quality metrics; here, we use two of them. The first one is PSNR-HVS-M [26] (a peak signal-to-noise ratio taking into account the human vision system (HVS) and masking (M)); the second is the multi-scale structural similarity metric (MS-SSIM) [27]. Both are among the best in characterizing the visual quality of images with distortions typical for remote sensing [42], in particular those due to noise and lossy compression. They can be calculated quickly, are applicable to single-channel (grayscale) images, and are based on different principles. The latter is important since no elementary full-reference metric is perfect and, thus, while carrying out an analysis and making conclusions, it is desirable to rely on the results obtained for several visual quality metrics. PSNR-HVS-M is defined as

PSNR-HVS-M_n = 10 log10(255²/MSE_HVS-M), (3)

where MSE_HVS-M is determined in 8 × 8 blocks in the DCT domain, taking into account the masking effect and the lower sensitivity of the human eye to distortions in high spatial frequencies compared to distortions in low spatial frequencies. Note that PSNR and PSNR-HVS-M [26] are both expressed in dB, with larger values relating to better quality. Usually, for AWGN and similar distortions, PSNR-HVS-M is slightly larger than PSNR due to the masking effect.
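As an illustration, PSNR between a reference and a distorted 8-bit image can be computed as follows (a minimal NumPy sketch; clipping to the [0, 255] range is ignored here for simplicity):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# For pure AWGN with variance 64, the MSE tends to sigma^2, so
# PSNR_n approaches 10*log10(255^2/64), i.e., about 30 dB.
rng = np.random.default_rng(0)
true = np.full((512, 512), 128.0)
noisy = true + rng.normal(0.0, 8.0, true.shape)
```

This is exactly the relation behind Equation (2): for AWGN, the MSE between the noisy and true images equals the noise variance.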
For these two metrics, it is important that the distortion visibility thresholds have been determined [38]. Distortions are usually invisible if PSNR exceeds 36 dB and PSNR-HVS-M is larger than 41 dB; this happens if the noise variance is smaller than about 15-20. Because of this, in this paper we concentrate on the cases with PSNR_n smaller than 36 dB, which are typical for the aforementioned applications.
The metric MS-SSIM [27] extracts and employs structural information from the scene. Its values lie in the limits from zero to unity, where larger values correspond to a better visual quality. The distortion invisibility threshold is approximately equal to 0.99.
If a lossy compression method is applied, one obtains a compressed image I^c_ij, i = 1, ..., I_Im, j = 1, ..., J_Im, that differs from I^n_ij and depends on the compression controlling parameter (CCP). For different coders, different parameters play the role of the CCP: the quality factor, scaling factor, quantization step, or bits per pixel [12,24,31,32]. For the BPG coder, the quality parameter Q allows changing the image quality and compression ratio. Performing a multiple compression of I^n using different Q (integers in the limits from 1 to 51 for the BPG encoder), the rate-distortion curve Metr_nc(Q) can be obtained for any image, where Metr_nc is a metric of interest calculated between the noisy (original) and compressed images.
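Such a sweep over Q can be scripted around the reference BPG tools; a sketch is given below, assuming the bpgenc command-line tool (with its -q and -o options) is installed. Decoding each file with bpgdec and computing Metr_nc against the original would complete the rate-distortion curve; here only the compressed sizes are collected.

```python
import os
import shutil
import subprocess
import tempfile

def bpg_size_curve(png_path, qs=range(1, 52), bpgenc="bpgenc"):
    """Compress one image with every Q and record (Q, file size) pairs.

    Returns [] when the bpgenc tool is not available, so the sketch
    degrades gracefully on systems without libbpg installed.
    """
    if shutil.which(bpgenc) is None:
        return []
    points = []
    with tempfile.TemporaryDirectory() as tmp:
        for q in qs:
            out = os.path.join(tmp, f"q{q}.bpg")
            subprocess.run([bpgenc, "-q", str(q), "-o", out, png_path],
                           check=True)
            points.append((q, os.path.getsize(out)))
    return points
```

The range of Q follows the text (integers from 1 to 51); the function and file names are ours, not part of the BPG distribution.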
Such metrics behave in a reasonable manner, i.e., their values become worse (smaller for all three considered metrics) if Q increases. This is clearly seen in all four plots presented in Figure 2. In Figure 2a,b, the dependences PSNR_nc(Q) for six test images for two values of the noise variance are presented. For very small Q < 7, the original and compressed images practically do not differ and the compression is near-lossless. Then, one has a practically linear part of all curves; it starts at Q = 7 and ends at such Q that PSNR_nc(Q) ≈ PSNR_n. For larger Q, the dependences for different images diverge, with larger PSNR_nc values observed for simpler structure images. For the middle part, it is possible to approximate the curves by a linear function

PSNR_nc(Q) ≈ c1 − c2·Q, (4)

where c1 and c2 are fitted constants. The dependences PSNR-HVS-M_nc(Q) and MS-SSIM_nc(Q) are monotonous. They "go jointly" for all test images for Q < 25. A joint analysis of all four dependences shows that for Q < 29 the introduced distortions are invisible. This means that it is possible to carry out a visually lossless compression and then, after image transferring (storage) and decompression, to perform noise post-filtering if necessary.
In the case of simulations, i.e., if one has a true image, adds noise to it, and then compresses this image, it is also possible to calculate the metric Metr_tc between the compressed and true images and to get the dependence Metr_tc(Q). The properties of such dependences are the most interesting. Two examples are given in Figure 3. For Q < 29, the image quality is "stable", i.e., it is practically the same as for the uncompressed image. Then, with a further increase in Q, three options are possible: (1) the image quality starts to improve and, after attaining the OOP (associated with the dependence maximum), the quality starts to decrease quickly; (2) the quality remains almost the same, even local maxima are possible, and then a fast reduction takes place; (3) the quality starts to decrease. The first situation takes place for the metric MS-SSIM_tc for all five images except the test image Diego (Figure 3b), and for the metric PSNR-HVS-M_tc for the test image Frisco (Figure 3a). The second situation is observed with the metric PSNR-HVS-M_tc for some middle complexity test images (Figure 3a). The third situation takes place for the complex structure image Diego according to both visual quality metrics (Figure 3a,b). Obviously, compression in the OOP can be recommended for the first situation, lossy compression in the neighborhood of the local maxima is reasonable in the second situation, and it is unclear what to do in the third situation.
Thus, we come to several questions. The first and most complicated one deals with the fact that, in practice, one does not have the true image; thus, the dependence Metr_tc(Q) cannot be obtained, the Q corresponding to the OOP cannot be determined, and it is impossible to understand whether the OOP exists or not. Another, less important, problem is what to do if an OOP does not exist.
These questions will become clearer after the additional analysis performed below. The plots in Figure 4 visualize what CR can be provided by a BPG encoder applied to noisy images. Note that the plots in Figure 4 are given for six single-channel RS images of different complexity. The conclusions are the following. First, the CR (for the same Q) depends on the image complexity. For example, for Q = 28 and a noise variance equal to 64, the attained CRs for the simplest and the most complex test images, Frisco and Diego, differ noticeably (they are about 4.3 and 3.7, respectively, i.e., one deals with a near-lossless compression). For Q = 35, the situation is the following: the CR for Frisco is about 35.3 whilst the CR for Diego is about 6.8, i.e., the compression is far from near-lossless and the CR is considerably larger for the simple structure image. Second, comparing the data in Figure 4a,b, it is possible to state that the CR for noisier images is smaller. For example, for σ² = 196 and Q = 35, the CR values for the images Frisco and Diego are equal to 5.5 and 6.4, respectively. A sharp increase in the CR starts when Q becomes larger than Q_OOP, especially for simple structure images.
Third, there is no essential difference in the general tendencies for images of different origin. To partly prove this, we have obtained the dependences of the considered visual quality metrics on Q for two well-known test images, Lenna and Baboon, as well as an artificial image, RSA (the images are presented in Figure 5). These dependences are given in Figure 3c,d. As one can see, the results for the highly textural images, Diego and Baboon, are very close. Similarly, the results for the simple structure images, RSA, Frisco and Lenna, are similar as well.


Properties of Optimal Operation Point
We have already mentioned that we focus on applying the BPG encoder [35]. It has several obvious advantages that motivate considering its application to RS and other types of images. First, BPG provides a higher compression ratio than JPEG and many other encoders for the same quality. Second, BPG is supported by most web browsers. Third, it supports the same formats as JPEG (grayscale, YCbCr 4:2:0, 4:2:2, and 4:4:4), and the most popular RGB, YCgCo and CMYK color spaces are supported as well. The available versions are able to work with data from 8 to 14 bits per channel. In this paper, we present the results obtained using the grayscale and color (4:2:2) BPG version 0.9.8 available at https://bellard.org/bpg/, accessed on 25 July 2022.
Our main intention in this section is to understand what the Q_OOP is and how it can be determined for a given image, metric, and noise variance. The preliminary observations that follow from the plots in Figure 3 are the following. For all images for which OOPs are observed, they take place for approximately the same Q. The OOPs for the metrics PSNR-HVS-M and MS-SSIM are observed for practically the same Q. This is a useful property that allows one to assume that the Q_OOP does not depend on the image at hand and the metric used but, probably, depends on the noise variance.
This hypothesis is based on the results obtained earlier for other coders [24,43-45]. In [43], it has been demonstrated that for an OOP (according to PSNR_tc) the following condition is satisfied:

PSNR_nc(Q_OOP) ≈ 10 log10(255²/σ²), (5)

i.e., the MSE introduced by the compression is approximately equal to the noise variance. This means that for σ² = 64 PSNR_nc(Q_OOP) ≈ 30 dB, for σ² = 100 PSNR_nc(Q_OOP) ≈ 28 dB, and for σ² = 196 PSNR_nc(Q_OOP) ≈ 25 dB. Taking into account the expression (4), this should occur for Q = 33, 35, and 38, respectively. The plots presented in Figure 6 for noise variances equal to 64 and 100 show that this really is the case. Additional studies carried out for other variance values (25, 144, 196, and 289) and other test images have shown that expression (5) is really valid for Q = Q_OOP. Substituting (4) into (5) and carrying out simple transformations, it is also possible to obtain (for 8-bit images) an expression (6) for Q_OOP that depends logarithmically on the noise standard deviation. Then, knowing the noise variance a priori or estimating it with appropriate accuracy, it is possible to predict for what value of Q the OOP is possible. Meanwhile, the values of PSNR_tc for the Q calculated according to (6) can differ significantly. For example, for the data in Figure 6a, PSNR_tc ranges from 28.5 dB to 36.5 dB, where the first case (image Diego) corresponds to an OOP absence whilst the second case (image Frisco) relates to an "obvious" OOP.
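The substitution of the linear fit (4) into condition (5) can be sketched as follows; the numerical constants below are illustrative, back-fitted by us from the reported points Q_OOP = 33, 35, 38 for σ² = 64, 100, 196, and are not taken from the paper:

```latex
% Equating the mid-range fit (4) to the OOP condition (5):
c_1 - c_2\,Q_{OOP} \approx 10\log_{10}\!\frac{255^2}{\sigma^2}
\quad\Rightarrow\quad
Q_{OOP} \approx \frac{c_1 - 10\log_{10}255^2}{c_2}
          + \frac{10}{c_2}\,\log_{10}\sigma^2 .
```

Hence Q_OOP is linear in log10 σ², i.e., logarithmic in σ. The three reported points are consistent with, e.g., c2 ≈ 0.97 and c1 ≈ 62, giving Q_OOP ≈ 14.4 + 10.3 log10 σ² (hypothetical constants for illustration only: 14.4 + 10.3·log10(64) ≈ 33, and similarly 35 and 38 for σ² = 100 and 196).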
The following has been demonstrated in [45] for DCT-based coders. First, OOPs according to visual quality metrics are observed less often than OOPs according to PSNR. Second, an OOP according to visual quality metrics is observed for slightly (by about 5-10%) smaller quantization steps than for an optimal PSNR.
The same is observed for the BPG encoder. Figure 7 shows the dependences PSNR-HVS-M_tc(Q) and MS-SSIM_tc(Q) for a noise variance equal to 100. They can be compared to the plots in Figure 6b. As one can see, the OOP (if it is observed) takes place for the same Q. Fewer OOPs are observed according to the visual quality metric PSNR-HVS-M_tc (Figure 7a) than according to the standard metric PSNR_tc (Figure 6b). Similar effects have been observed for other test images and noise variances.
Thus, we can suppose that OOPs according to all considered metrics take place for the same Q determined according to (6); however, it might be so that, for a given image and noise variance, an OOP exists according to PSNR_tc but does not exist according to PSNR-HVS-M_tc. Even if the image is compressed in the OOP and the noise is partly removed, residual distortions can be clearly visible (PSNR-HVS-M_tc is smaller than 41 dB and MS-SSIM_tc is smaller than 0.99).
The carried-out analysis also shows that it is worth analyzing many test images and many values of noise variance to obtain statistical data that allow for making certain conclusions.


The Main Idea and Preliminary Results
Our idea of OOP prediction is based on several assumptions. First, we assume that we can predict (estimate) the difference ∆Metr = Metr_tc(Q_OOP) − Metr_n. If this difference is positive, the OOP exists; if it is negative but quite close to zero, then we deal with situation 2 described in Section 2 (see the examples for four test images in Figure 6a); if it is negative and its absolute value is large, then, being compressed with Q_OOP (6), such an image can be significantly degraded. Second, we suppose that such a prediction can be accurate enough to make reliable decisions (to avoid wrong ones). Third, it is assumed that such a prediction can be performed easily and quickly, i.e., considerably faster than the compression itself.
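The three situations can be mapped to a simple decision rule; the sketch below uses hypothetical thresholds (the function name, the tolerance, and the "careful" fallback Q = 29, taken from the visually lossless range mentioned in Section 2, are ours, not the paper's):

```python
def choose_q(delta_metr_pred, q_oop, q_careful=29, tol=0.3):
    """Choose the BPG quality parameter from the predicted dMetr.

    delta_metr_pred > 0: an OOP is predicted to exist, compress at Q_OOP;
    slightly negative (within tol): local maxima possible, Q_OOP acceptable;
    clearly negative: no OOP, fall back to a more 'careful' compression.
    Thresholds are illustrative, not taken from the paper.
    """
    if delta_metr_pred > -tol:
        return q_oop
    return q_careful
```

Such a rule is what makes the compression automatic once the prediction of ∆Metr is available.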
In fact, we already have experience in solving the aforementioned task: a procedure for predicting the OOP's existence and the metric values in the OOP has been proposed in [24]. In more detail, two statistical parameters, P_2σ and P_2.7σ, that can be easily computed in a set of 8 × 8 pixel blocks in the DCT domain were considered. For both of them, expressions that allow for calculating ∆PSNR and ∆PSNR-HVS-M using P_2σ or P_2.7σ as the argument were obtained. In this paper, we employ the ideas of [24] for the BPG coder, keeping in mind that it is based on the DCT and, in this sense, statistics in the DCT domain can be highly correlated with the BPG performance characteristics.
The parameter P_2σ is calculated as

P_2σ = N_2σ/(64M),

where M is the number of the considered blocks and N_2σ is the number of DCT coefficients D(k, l, m), k, l = 0, ..., 7, m = 1, ..., M, whose absolute values are smaller than 2σ. In other words, P_2σ is an estimate of the probability that the absolute values of the DCT coefficients in blocks are smaller than 2σ, where σ is supposed to be a priori known. Similarly, P_2.7σ is the estimated probability that the absolute values of the DCT coefficients are smaller than 2.7σ. Both statistical parameters have come from the theory of DCT-based denoising [42]. It has been shown there that such parameters are able to jointly characterize the image complexity and noise intensity. For example, P_2σ is small for complex structure images and/or low intensity noise.
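The two parameters can be estimated over non-overlapping 8 × 8 blocks as sketched below (pure NumPy; note the assumption that all 64 coefficients per block are counted, including the DC term, whereas the paper's exact definition may exclude it):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix for n-point transforms."""
    k = np.arange(n).reshape(-1, 1)
    j = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def p_thresholds(img, sigma, factors=(2.0, 2.7), block=8):
    """Fractions of block-DCT coefficients with |D| below factor*sigma."""
    c = dct_matrix(block)
    h, w = (s - s % block for s in img.shape)   # trim to whole blocks
    counts = np.zeros(len(factors))
    total = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            d = np.abs(c @ img[i:i+block, j:j+block].astype(np.float64) @ c.T)
            for t, f in enumerate(factors):
                counts[t] += np.count_nonzero(d < f * sigma)
            total += block * block
    return tuple(counts / total)

# For a pure-noise image most AC coefficients fall below 2*sigma,
# so P_2sigma is close to unity.
rng = np.random.default_rng(1)
pure_noise = rng.normal(128.0, 10.0, (64, 64))
p2, p27 = p_thresholds(pure_noise, 10.0)
```

By construction P_2σ ≤ P_2.7σ, since the second threshold is larger.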
The next task is to obtain the dependences that connect ∆PSNR and ∆PSNR−HVS−M with the input parameters, P 2σ or P 2.7σ . This stage is carried out off-line and can be treated as a specific training. The main task, as in machine learning, is to take into account all possible situations, i.e., a wide variety of image properties and noise intensities.
At this stage, scatter-plots of ∆Metr on the input parameters have been formed. Each scatter-plot has been obtained as follows: noise with a given variance has been added to a considered test image, which has then been compressed using Q OOP (6); after that, the input and output parameters have been determined. In total, eleven test images of different complexity have been used and the noise variance has been varied in the limits from 0.25 to 400. The obtained scatter-plots are presented in Figures 8-11.
Preliminary analysis shows the following:
1. The points of the scatter-plots ∆PSNR vs. P 2σ (Figure 8) and ∆PSNR vs. P 2.7σ (Figure 10) are placed in a very compact manner, clearly showing monotonously increasing and decreasing dependences of the output parameter on the input parameter, respectively; both input parameters lie in the limits from zero to unity; for a small P 2σ and a large P 2.7σ , ∆PSNR is negative and close to −2 dB (as an example, see the data for the test image Diego in Figure 5a); in this case, the OOP is absent and it is not worth compressing the images using Q OOP (6);
2. The points of the scatter-plots ∆PSNR−HVS−M vs. P 2σ (Figure 9) and ∆PSNR−HVS−M vs. P 2.7σ (Figure 11) are placed in a less compact manner; meanwhile, there are obvious tendencies of ∆PSNR−HVS−M increasing with P 2σ increasing and P 2.7σ decreasing; a visual analysis allows supposing that these dependences are monotonous; the scatter-plot points are placed more compactly for a large P 2σ and a small P 2.7σ , which correspond to simpler structure images and/or more intensive noise;
3. It is possible to expect that a good curve fitting is possible in both cases, which allows establishing approximate analytic dependences between the output and input parameters (examples of the fitted curves are given in Figures 8-11);
4. There are points where these curves cross the zero level: P 2σ ≈ 0.73 and P 2.7σ ≈ 0.16 for ∆PSNR; P 2σ ≈ 0.84 and P 2.7σ ≈ 0.05 for ∆PSNR−HVS−M.

Figure 11. The scatter-plot ∆PSNR−HVS−M vs. P 2.7σ and the fitted curve.
Curve Fitting Details
Having, in general, shown the possibility of a good curve fitting, we come to the next questions: what the best (appropriate) fitting is and how to characterize its accuracy. There are well-developed theories of LMSE and robust fitting [46,47]. A visual analysis of the scatter-plots in Figures 8-11 shows that there are no obvious outliers in the data; thus, there is no obvious necessity to apply a robust fitting. LMSE regression is implemented in many software tools including MATLAB and others; therefore, we can use one of them (the MATLAB Curve Fitting Tool in our case). To characterize the fitting accuracy, it is common [46] to use the root mean square error (RMSE) that should be as small as possible, as well as a goodness-of-fit R 2 and/or adjusted goodness-of-fit AdjR 2 that have to be as large (as close to unity) as possible.
Standard curve fitting tools usually provide several options of fitting functions, including polynomials, exponentials or weighted sums of exponentials, rational functions, Fourier series, etc. Finding the best fitting function is more an engineering and heuristic task than a scientific one. Moreover, some solutions can be very close to each other according to quantitative criteria, which does not allow for giving a unique practical recommendation.
The curves fitted for the scatter-plots in Figures 8-11 have been obtained using rational functions. For the data in Figure 8, one has an RMSE of about 0.37 and R 2 and AdjR 2 of about 0.98. For the scatter-plot in Figure 9, the RMSE is about 1.72 and the R 2 and AdjR 2 are about 0.85. For the data in Figure 10, one has an RMSE of about 0.43 and R 2 and AdjR 2 of about 0.975. For the scatter-plot in Figure 11, the RMSE is about 1.66 and the R 2 and AdjR 2 are about 0.86. Supposing that the fitting is good enough, it is possible to conclude that the fitting for ∆PSNR is considerably more accurate than for ∆PSNR−HVS−M (this is not because of bad fitting but because of the larger sparsity of the data for ∆PSNR−HVS−M). It is better to use P 2σ in predicting the ∆PSNR and P 2.7σ in predicting the ∆PSNR−HVS−M. The parameters of the fitted curves are presented in Table 1.
One might be interested in whether or not other approximations are able to provide better results. The Fourier models, e.g., the Fourier model 3, are able to provide quite a good fitting in the sense of quantitative criteria (see Figure 12), but the obtained curves may not be monotonous, which is not in agreement with our assumptions. The same relates to polynomial approximations. For polynomials of order 2 and 3, the fitting accuracy parameters are worse than those reported above, whilst for polynomials of order 4 and 5, the curves become non-monotonous. Finally, good approximations can be obtained for the weighted sums of two exponentials (see Figure 13 and compare it to Figure 9); thus, the use of rational functions or the sums of two exponentials can be considered an appropriate choice.
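As an illustration of the rational-function option, the sketch below fits the Table 1 form f(x) = (p 1 x + p 2 )/(x 3 + q 1 x 2 + q 2 x + q 3 ) by linearized least squares; the coefficients and "scatter-plot" data here are synthetic stand-ins, not the paper's fitted values:

```python
import numpy as np

def fit_rational(x, y):
    """LMSE fit of f(x) = (p1*x + p2) / (x^3 + q1*x^2 + q2*x + q3).
    Multiplying through by the denominator gives a system linear in
    (p1, p2, q1, q2, q3):  p1*x + p2 - y*(q1*x^2 + q2*x + q3) = y*x^3."""
    A = np.column_stack([x, np.ones_like(x), -y * x**2, -y * x, -y])
    b = y * x**3
    p1, p2, q1, q2, q3 = np.linalg.lstsq(A, b, rcond=None)[0]
    return lambda t: (p1 * t + p2) / (t**3 + q1 * t**2 + q2 * t + q3)

# Synthetic "scatter-plot": a curve of the same rational form crossing
# zero near x = 0.73, loosely mimicking Delta-PSNR vs. P_2sigma.
true = lambda t: (10.0 * t - 7.3) / (t**3 + t**2 + t + 1.0)
x = np.linspace(0.05, 0.95, 60)
f = fit_rational(x, true(x))
```

The linearization makes the fit a single least-squares solve; for noisy scatter data it weights residuals by the denominator, so a nonlinear refinement (as a curve-fitting tool performs internally) can slightly improve on it.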

Table 1. Dependence: ∆PSNR on P 2σ . Expression: f(x) = (p 1 x + p 2 )/(x 3 + q 1 x 2 + q 2 x + q 3 ).

Having the fitted curves at hand, it is possible to predict ∆PSNR and ∆PSNR−HVS−M for any image to be compressed. For this purpose, one has to calculate the input parameter and to substitute it into the approximating curve defined in Table 1. For example, suppose the estimated P 2σ is equal to 0.6. Then, according to the data in Figure 8 (or Figure 12), the predicted ∆PSNR is about −0.8 dB, and the predicted ∆PSNR−HVS−M is about −6.5 dB according to the data in Figure 9 and about −6.1 dB according to the approximation in Figure 13. In any case, one can conclude that the OOP for this image is absent and that it is not worth compressing it with Q OOP . An example of such a situation is given in Figure 14, where both the predicted and the true ∆PSNR and ∆PSNR−HVS−M are negative (the predicted values are equal to −0.71 dB and −6.34 dB whilst the true values are equal to −0.73 dB and −5.09 dB, respectively). As one can see, the compressed image is slightly smeared.

One can argue that the prediction is not accurate for ∆PSNR−HVS−M when P 2σ is about 0.2. Consequently, in this case, ∆PSNR−HVS−M can be from −16 dB to −7 dB depending on the image at hand whilst the predicted values are about −10.5 dB (see Figure 13). Such an accuracy of prediction seems low since the errors can be up to 5.5 dB; however, in practice, this is not a problem for several reasons. First, such a small P 2σ usually corresponds to images corrupted by low intensity noise which is invisible. Then, no positive effect of noise filtering can be seen anyway. Second, such ∆PSNR−HVS−M values usually correspond to PSNR−HVS−M n about 55...60 dB. Then, after compression, one has a PSNR−HVS−M tc about 45...50 dB, i.e., the introduced distortions are invisible.
Analyzing the scatter-plots for ∆PSNR−HVS−M, one can expect that the prediction accuracy for P 2σ > 0.5, which is of more interest, is considerably better than for P 2σ ≤ 0.5 (see, e.g., the scatter-plot in Figure 9). To check this hypothesis, we have calculated the RMSE of the fitted curves in the considered intervals. The RMSE for P 2σ ≤ 0.5 equals 2.34 whilst it equals 1.21 for P 2σ > 0.5, i.e., our assumption is valid. Then, one can carry out quite an accurate prediction in the interval P 2σ > 0.5 where it is especially important to make decisions on the OOP's existence and the Q setting.
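The interval-wise RMSE check can be reproduced along these lines; the arrays below are synthetic placeholders for the real scatter-plot data and fitted-curve predictions:

```python
import numpy as np

def interval_rmse(p, delta_true, delta_pred, split=0.5):
    """RMSE of prediction errors computed separately for input-parameter
    values below/above the split point (P_2sigma = 0.5 in the text)."""
    err = np.asarray(delta_true) - np.asarray(delta_pred)
    mask = np.asarray(p) <= split
    rmse_low = np.sqrt(np.mean(err[mask] ** 2))
    rmse_high = np.sqrt(np.mean(err[~mask] ** 2))
    return rmse_low, rmse_high

# Toy data: larger scatter around the fitted curve below the split point,
# smaller scatter above it, mimicking the behavior seen in Figure 9.
rng = np.random.default_rng(1)
p = rng.uniform(0.0, 1.0, 400)
pred = np.zeros_like(p)                      # stand-in fitted curve
true_vals = np.where(p <= 0.5, rng.normal(0.0, 2.3, p.size),
                     rng.normal(0.0, 1.2, p.size))
rmse_low, rmse_high = interval_rmse(p, true_vals, pred)
```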

Factors Affecting the Accuracy of Prediction
There are also other factors affecting the accuracy of prediction. The first factor is the noise realization. It is clear that, for a given test image and AWGN variance, the ∆PSNR, ∆PSNR−HVS−M, and even Q OOP can vary from one noise realization to another. To study this aspect, we have analyzed three images of different complexity corrupted by AWGN with three different variance values. The values of ∆PSNR and ∆PSNR−HVS−M have been measured for Q OOP and the variances of these parameters have been calculated. It has been established that the variances of these parameters are about 0.001; even in the worst case, the maximal variance was equal to 0.0033 for ∆PSNR and 0.0046 for ∆PSNR−HVS−M. This means that the influence of the noise realization can be neglected (recall that the RMSE of fitting is about 0.4 for ∆PSNR and about 1.2 for ∆PSNR−HVS−M in the area of interest). If the Q OOP was different for different realizations, the differences were equal to 1 (e.g., an OOP was observed for Q equal to either 33 or 34) and the difference in the values of ∆PSNR or ∆PSNR−HVS−M in the neighboring points was negligible (e.g., analyze the dependences in Figure 3 near their peaks). Thus, in practice, the influence of the noise realization on the parameters of the OOP can be ignored.
One more factor that can influence the prediction is the possible variation of the input parameter. Having an approximation ∆Metr(P) and a variance σ 2 P of the input parameter P, the variance of the predicted ∆Metr can be estimated as

σ 2 ∆Metr ≈ σ 2 P (d∆Metr(P)/dP) 2 .

The derivatives d∆Metr(P)/dP are large for a large P 2σ or, equivalently, a small P 2.7σ ; however, we are more interested in the places where the approximation curves cross the zero level. The absolute values of the derivatives there are about 50; thus, we need to understand what the values of σ 2 P are. It is possible to expect that σ 2 P might depend on several factors, namely, the number of blocks M, the properties of images and the noise variance, and how the blocks are positioned in a considered image. To avoid possible problems with images having some regular structures, we propose to apply a random positioning of blocks where the indices of their left upper corners are (rounded-off) random variables in the limits from i = 1 till i = I Im − 7 and from j = 1 till j = J Im − 7, with a uniform distribution. Within this approach, we have first analyzed three test images of different complexity with three different noise variance values. A set of realizations of the random block positions has been generated for M = 500. The variance σ 2 P for the input parameter P 2σ was in the limits from 5 × 10 −6 till 8 × 10 −5 . There is no obvious dependence on the image complexity, but there is a tendency of σ 2 P decreasing if the AWGN variance increases (σ 2 P is about 0.00001 for an AWGN variance of about 200, which usually corresponds to P 2σ ≥ 0.5). The latter is a positive factor since (d∆Metr(P)/dP) 2 is larger just for a large P 2σ . Then, σ 2 P (d∆Metr(P)/dP) 2 is about 2.5 × 10 −2 (the standard deviation is about 0.16 and it is sufficiently smaller than the RMSE of prediction due to fitting).
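This error-propagation step can be verified numerically; the derivative magnitude and input-parameter variance below are the values quoted in the text:

```python
import numpy as np

# Delta-method propagation of the input-parameter variance to the
# predicted metric change: var(Delta_Metr) ~ var(P) * (dDelta_Metr/dP)^2.
var_P = 1e-5      # typical variance of P_2sigma for M = 500 random blocks
dfdP = 50.0       # |derivative| of the fitted curve near its zero crossing
var_metric = var_P * dfdP ** 2
std_metric = np.sqrt(var_metric)
# var_metric comes out to 2.5e-2 and std_metric to about 0.16 dB, i.e.,
# well below the fitting RMSE (about 0.4 dB for Delta-PSNR), so the
# input-parameter fluctuation is a negligible error source.
```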
We have also carried out a similar study for the input parameter P 2.7σ . The limits of its variance variation are similar-from 3 × 10 −6 till 8 × 10 −5 . The σ 2 P values are smaller for a larger AWGN variance.
In addition, we have also fixed the positions of the blocks and calculated the variances σ 2 P for a set of noise realizations. This time the variances were even smaller (from 2 × 10 −6 till 6 × 10 −6 ) and, hence, we can neglect this factor.

Other Practical Aspects
Note that it is quite easy and fast to carry out a prediction. A calculation of the DCT in 8 × 8 blocks is a standard operation in image processing [48] that can be performed efficiently using both hardware and software. Then, easy comparisons and elementary arithmetic operations are needed to calculate the input and output parameters.
One can also be interested in whether or not the lossy compression by BPG in the OOP produces some benefits compared to other lossy compression techniques applied to noisy images. Table 2 contains data that allow for comparing BPG to the coder AGU [49] for some test images and some noise variances. First, a comparison has been carried out for two images used in obtaining the approximating dependences (fitted curves), namely, the images Frisco and Fr01. As one can see, the BPG encoder provides benefits in three senses: (1) it produces an about 0.8 dB better PSNR nc in the OOP; (2) it provides a better visual quality according to the visual quality metric MS-SSIM nc ; and (3) a slightly (by a few percent) larger CR is usually ensured. Meanwhile, it can also be interesting to check whether this happens for images not used in obtaining the scatter-plots (for image processing approaches based on learning, a verification stage is obligatory). For this purpose, we have obtained simulation data for two test images, Aerial and Airfield, that have not been used in the previous analysis. The results are presented in Table 2. Their analysis shows the following. First, again, the results for the BPG coder are better for both metrics and, at least, not worse according to the CR. An interesting situation has been observed for the test image Aerial corrupted by AWGN with a variance equal to 64 and the image Airfield corrupted by AWGN with a noise variance equal to 100. For the encoder AGU, an OOP is absent whilst, for the encoder BPG, an OOP exists for both the considered metrics.
Thus, we can state that the BPG encoder outperforms AGU and this takes place for both images that have been used in obtaining scatter-plots and images that have been employed for verification.

Decision-Making and Other Practical Cases
Supposing that a prediction has been carried out for a given image, one obtains ∆PSNR or ∆PSNR−HVS−M or both. The question is what to do next. There are different options: to rely on only one of the two parameters or on both. The choice is largely heuristic without an obvious preference (e.g., special studies with customers assessing the compressed image quality can be carried out to obtain a reliable answer). We propose the following practical algorithm:
1. If ∆PSNR + ∆PSNR − HVS − M ≤ −1 dB, use Q = 28 to provide the invisibility of introduced distortions and the possibility of noise removal by applying post-filtering to the decompressed images.
An example in Figure 14 corresponds to case 3. An example for situation 1 is presented in Figure 15. The positive effect of a lossy compression is seen, especially in the image's homogeneous regions where the noise removal is obvious. Finally, an example for situation 2 is demonstrated in Figure 16. In this case, the positive effect of noise suppression can be noticed as well; however, edge/detail smearing takes place too. If situation 1 is falsely recognized as situation 2 or vice versa, practically nothing happens since the recommended Q values differ only by unity. If situation 2 is falsely recognized as situation 3, it is not a problem since a "careful" compression is carried out (at the expense of a smaller CR). If situation 3 is falsely recognized as situation 2, slightly more smearing of the compressed image can be observed (however, the CR is larger). Finally, situations 1 and 3 are practically never misclassified.
Thus, the fully automatic procedure for a given noisy image is as follows:
1. Estimate the noise variance by some blind method of noise variance assessment (if the noise variance is not known in advance);
2. Calculate the input parameter P 2σ (and/or P 2.7σ ) in a set of 8 × 8 blocks in the DCT domain;
3. Calculate ∆PSNR and ∆PSNR−HVS−M using the expressions and parameters in Table 1;
4. Make a decision on the recommended Q OOP according to the algorithm described above;
5. Carry out the compression using the recommended Q OOP .
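The procedure above can be sketched end-to-end as follows. The blind noise estimator chosen here (MAD of Haar diagonal details) is one common option, not the specific method of [50,51]; the prediction curve is a hypothetical stand-in for the Table 1 fit (matching only its zero crossing near P 2σ ≈ 0.73); and the decision rule is the Q = 28 fallback from the text:

```python
import numpy as np

def estimate_sigma(image):
    """Blind noise std estimate via the MAD of Haar diagonal details
    (an illustrative choice of blind estimator)."""
    x = image.astype(float)
    d = (x[0::2, 0::2] - x[1::2, 0::2] - x[0::2, 1::2] + x[1::2, 1::2]) / 2.0
    return np.median(np.abs(d - np.median(d))) / 0.6745

def predict_delta_psnr(p2sigma):
    """Hypothetical stand-in for the fitted curve: negative for small
    P_2sigma, crossing zero near 0.73 (the crossing point in the text)."""
    return 8.0 * (p2sigma - 0.73)

def recommend_q(delta_psnr, delta_psnr_hvs_m, q_oop):
    """Decision rule from the text: fall back to a 'careful' Q = 28 when
    the predicted improvements sum below -1 dB; otherwise use Q_OOP."""
    if delta_psnr + delta_psnr_hvs_m <= -1.0:
        return 28
    return q_oop

# Steps 1 and 3-4 on toy inputs (step 2 would use the DCT statistics above).
sigma = estimate_sigma(np.random.default_rng(2).normal(0.0, 10.0, (256, 256)))
dpsnr = predict_delta_psnr(0.60)             # the P_2sigma = 0.6 example
q = recommend_q(-0.71, -6.34, q_oop=34)      # predicted values from Figure 14
```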
As it has been stated in Section 2, AWGN can be a simplified noise model for which the noise variance or standard deviation can be estimated in a blind manner (automatically) [50,51]. A more general case is the signal-dependent noise model [52]. Let us demonstrate that the OOP is also possible for images corrupted by Poisson noise compressed by BPG. Note that two approaches to such a compression are possible. The first one is to apply BPG directly. The second one is to apply a proper VST (Anscombe transform in the considered case [53]) and some pre-normalization before the compression with inverse operations at the decompression stage. In this case, the signal-dependent noise converts to an almost pure additive and we arrive at the image/noise model studied above; thus, let us concentrate on the first approach.
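The VST route for the Poisson case can be sketched as follows (forward Anscombe transform and the simple algebraic inverse; an unbiased inverse would be preferable in practice for low counts):

```python
import numpy as np

def anscombe(x):
    """Forward Anscombe transform: converts Poisson noise into
    approximately unit-variance additive Gaussian noise (accurate
    for means above roughly 10)."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased for low counts)."""
    return (np.asarray(y) / 2.0) ** 2 - 3.0 / 8.0

# Variance stabilization check on Poisson counts with mean 50: after the
# transform, the standard deviation should be close to 1 regardless of
# the mean, so the AWGN-oriented prediction machinery becomes applicable.
rng = np.random.default_rng(3)
counts = rng.poisson(50.0, 100_000)
stabilized = anscombe(counts)
```

Pre-normalization of the stabilized data to the coder's input range (and the inverse operations after decompression) completes the second approach described above.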
The analysis has been carried out for six RS test images. The obtained rate-distortion curves PSNR−HVS−M tc (Q) and MS-SSIM tc (Q) are given in Figure 17. As one can see, the OOP exists for the metric MS-SSIM for the images Frisco and Fr04, and for the metric PSNR-HVS-M for the image Frisco (similar situations have been observed earlier for the AWGN case, see the plots for the test image Fr02 in Figure 7). In all three cases, the OOP is observed for Q = 37. According to PSNR tc , the OOP exists as well and this happens for Q = 37, while ∆PSNR can reach 3 dB; thus, we can expect that automatic procedures for lossy compression can be designed for BPG for images corrupted by signal-dependent noise, including the speckle typical for synthetic aperture radar images.
It might also be true that two, three or more image components are corrupted by the noise and these images have to be compressed in a lossy manner. Certainly, a componentwise compression for which all the results obtained above are valid is possible; however, the joint compression of several component noisy images is possible as well. For example, available BPG software allows for compressing color, i.e., three-channel images; thus, it is easily possible to test this practical situation. As an initial case, let us assume that all the component images are corrupted by AWGN with the same noise variance. Since the results can be of interest for both three-channel and color images, four test images have been considered: the RS three-channel images, Frisco and Diego, and the widely used color test images, Lena and Baboon. The original images of size 512 × 512 pixels were presented in RGB, an AWGN with variance equal to 100 was independently added to each component image, a 4:2:2 version of BPG was applied, and the metrics were calculated independently for each component image. The obtained dependences are presented in Figure 18.
It follows from the preliminary analysis that the OOP can also exist and it takes place practically for the same Q for all component images. Meanwhile, there are also some specific results, such as more obvious OOPs for the G component; thus, additional studies are needed.

Conclusions
The task of the lossy compression of images corrupted by AWGN by the BPG coder is considered. The main attention is paid to the single-channel image case. It is shown that an OOP might exist according to such standard criteria as PSNR (and MSE). Moreover, an OOP might exist for visual quality metrics such as MS-SSIM and PSNR-HVS-M, although this occurs more rarely. It is demonstrated that OOP existence depends on the noise variance and image complexity, where an OOP exists with a higher probability for simpler structure images and/or higher intensity noise. With a noise intensity increase, the Q OOP increases as well, and the corresponding expression is obtained. An approach to predicting the OOP's existence and the metric values in it is proposed. It is based on obtaining the approximating dependences in advance using scatter-plots and curve fitting. Then, for a given image, a simple statistical parameter has to be estimated, its value has to be used as the approximator input, and the prediction with decision-making needs to be completed. Its accuracy is analyzed and shown to be appropriate for practice. The recommendations on an automatic Q setting for different practical situations are given and illustrated.
The possibility of OOP existence for signal-dependent noise and for three-channel images corrupted by AWGN is shown. These cases can be directions of future research. Another direction deals with improving the prediction accuracy for visual quality metrics by the joint processing of several input parameters, which can be performed by a trained neural network.