Article

A New Adaptive Spatial Filtering Method in the Wavelet Domain for Medical Images

by
Maria Simona Răboacă
1,2,3,*,
Cătălin Dumitrescu
4,*,
Constantin Filote
1 and
Ioana Manta
2,5
1
Faculty of Electrical Engineering and Computer Science, “Stefan cel Mare” University of Suceava, 720229 Suceava, Romania
2
National R & D Institute for Cryogenic and Isotopic Technologies, 240050 Rm. Valcea, Romania
3
Faculty of Building Services, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
4
Faculty of Transports, “Polytechnic” University of Bucharest, 060042 Bucharest, Romania
5
Faculty of Power Engineering, “Polytechnic” University of Bucharest, 060042 Bucharest, Romania
*
Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(16), 5693; https://doi.org/10.3390/app10165693
Submission received: 5 June 2020 / Revised: 6 August 2020 / Accepted: 14 August 2020 / Published: 17 August 2020

Abstract:
Although the literature contains many methods for removing noise from images, finding new methods remains a challenge in the field, and despite their complexity, many existing methods do not reach a sufficient level of applicability, most often because of relatively high computation time. In addition, most existing methods perform well when the processed image suits the algorithm, but otherwise fail or produce significant artifacts. Since removing noise from images is closely related to image enhancement, some notions needed to understand the proposed method will be revisited. An adaptive spatial filter in the wavelet domain is proposed, based on soft truncation of the wavelet coefficients with a threshold value adapted to the local statistics of the image and a correction based on the hierarchical correlation map. The filter exploits, in a new way, both the inter-band and the intra-band dependence of the wavelet coefficients, while keeping the required computational resources to a minimum.

1. Introduction

1.1. State of the Art

Several statistical models for images have been proposed in the wavelet specialized literature. The simplest and most widespread model considers wavelet coefficients to be independent and identically distributed according to a generalized Gaussian distribution [1,2,3,4], a model successfully used in noise removal and image restoration applications. This model approximates the first-order statistics of wavelet coefficients, but does not take into account higher-order statistics, which leads to some limitations. The existing dependencies between the wavelet coefficients have been studied in order to implement efficient compression algorithms and they were explicitly formulated in [5] or implicitly in [6]. Most statistical models in the wavelet domain of images can be grouped into two categories: models that exploit inter-scale dependencies and models that exploit intra-scale dependencies.
Despite the processing complications, it has been shown [7,8,9,10] that using both types of dependencies leads to better results in white noise removal, compared to cases where only one of the two dependencies is exploited.
Adaptive spatial filters are easy to implement in hardware, as shown by Xuejun and Zeulu [11], who used this type of filter to enhance block-coded images during the decoding stage.
Nagaoka et al. used in their work [12] the same type of filter combined with PCA (Principal Component Analysis) in order to reduce noise in photo-acoustic signals. The noise sources are considered to come from auxiliary equipment, such as a stepping motor or a power supply. In this case, the adaptive spatial filter was significantly more effective than, for example, a band-pass filter.
Two variants of adaptive spatial filtering with selectable ROIs (Region of Interest) were studied and compared by Liefhold et al., in [13]: one where the ROIs were chosen based on known information and one where the ROIs were optimized numerically. The filters were applied for Electro-Encephalo-Graphy (EEG) data type and the latter one achieved an efficiency of 97.7% compared to 93.7% obtained by applying the first variant.
The implementation of this filter in the medical and biological field was also of interest for Ostlund et al., in their study [14] related to electromyogram signals. For example, in the case of weak contact between skin and electrode, this filter achieved 19 dB improvement, compared to other filters. Applications related to spinal cord injury were also studied in [15], giving clear information about muscle paralysis and neuromuscular changes.
In [16], Bissmeyer and Goldsworthy took advantage of the filter’s capability and used it for noise reduction in speech recognition for hearing-impaired persons, with good preservation of the original signal and without distortions.
The application fields of the adaptive spatial filter are numerous. Morin used such techniques in his work [17] in order to automatically detect targets in infrared images. Again, like in [12], the filter is compared against a band-pass filter which does not prove to be as efficient as the adaptive spatial filter.
Delisle-Rodriguez et al., conducted in [18] a study related to the use of the filter in EEG (Electro-Encephalo-Graphy) signals for the preservation of useful neural information during the processing stage. Thus, they were able to achieve accuracies of over 86%. The use of the adaptive spatial filter in the neurology domain was also deeply studied by Sekihara and Nagarajan in their book [19]. They cover topics like brain imaging, neural signals, and the effects that different parameters may have on the filter’s efficiency. They consider adaptive spatial filters to be a real breakthrough for the neural imaging techniques.
In [20], Saleem and Razak emphasize the importance of image enhancement in order to obtain a suitable image for certain usage. In this manner, they conduct a comparative study on various techniques used for the noise reduction in images, reaching the conclusion that spatial adaptive filters are not only the most efficient ones, but are also the easiest to implement.
Yuksel and Olmez implement in [21] an optimal spatial filter design for the classification of motor images derived from BCI (Brain Computer Interface). They propose an innovative extraction of features and consider an additional classification layer linked to the filter through mapping functions, leading to a higher classification accuracy. Spatial filtering was also implemented by Mourino et al., in [22] for the training process of a BCI.
Another comparison of spatial filters is conducted by Cohen in [23] with the purpose of detecting oscillatory patterns in multichannel data from EEG signals. His innovative method is based on eigen decomposition and achieves a maximization of the SNR (signal to noise ratio). As a proof of its vast field of application, eigenvector adaptive spatial filtering was also studied by McCord et al., in [24] for the estimation of a house price, in which they analyzed the spatial heterogeneity and the local variation. Similar techniques were also used in [25] for the trade sector, using a Poisson type regression or in [26] for the unemployment data in Germany.
Wu et al., even performed a MATLAB simulation of various adaptive spatial filtering algorithms in [27] for solving the problem of weak satellite signal power subject to strong interferences. They compared three types of adaptive spatial filtering algorithms: LMS (Least Mean Square), RLS (Recursive Least Square) and NLMS (Normalized Least Mean Square). In [28], Zhang et al., designed an optimal spatial filter for the preprocessing stage of array signals. They considered a total of five methods, amongst which the one based on the Lagrange multiplier theory proved to be the optimal one.
Spatial filtering was also applied in lidar technologies by Tang et al. [29]. Their objective was to remove background solar noise which the lidar instruments are sensitive to, and, therefore, their solution was to implement a voxel-based spatial filtering algorithm. Image capturing equipment can also be affected by transmission errors, atmospheric conditions or camera settings. Rajamani and Krishnaveni solve these issues in [30] by applying spatial filtering algorithms.
The many applications of adaptive filtering techniques are further discussed in more depth by Patuelli et al., in [31]. One of them is related to pattern recognition studied also by Gorr and Olligschlaeger in [32]. They propose a weighted spatial adaptive filtering in which the parameters are automatically detected and estimated.
Another situation suitable for using temporal and spatial filtering methods is related to assistive communication and the sensorimotor cortex [33]. Impaired persons can control a prosthetic limb through EEG signals which, of course, are subject to noise and need to be filtered [34]. Through appropriate spatial filtering techniques of the EEG signals, mental workload can be estimated and classified [35].
Al-Khayyat proposes [36] an innovative solution for solving Maxwell’s equation by implementing a spatial filtering frequency dependent model for 1D, 2D and 3D dimensions.
Medical imaging is also applied in the ophthalmology domain, in the study of the retinal fundus [37]. This type of image is mostly affected by Gaussian or salt-and-pepper noise, which can, of course, be removed through appropriate filtering methods. The most suitable one is concluded to be the adaptive median filter.
Roy and Shukla conduct an in-depth review [38] related to image denoising using spatial filtering methods. They reach the conclusion that the best method for de-noising, but also preserving image details, is by using the wavelet transform domain.
Another type of spatial filtering method was proposed by Saa et al., in [39]. Their filter is coherence-based and it is used to predict stimulus features extracted from brain signals. The parameters used are phase, frequency and spatial distribution of features.
Electronics and telecommunications domain can also benefit on the adaptive spatial filtering applications. In [40], an analog spatial filtering method is proposed in order to improve the power consumption of a cognitive radio.
Reiss et al. used the wavelet domain in [41] in order to generate predictive methods based on brain imaging. Their method is able to successfully predict whether ADHD (attention deficit hyperactivity disorder) is present or absent. The wavelet transform domain can be used in denoising both signals and images, and this was thoroughly studied in [42] by Xu et al. They compared spatial filtering to Wiener filters and obtained a higher performance for the former.
Document security and privacy can be ensured through watermarking. There are several algorithms for embedding a watermark, but not all of them generate a perfect solution. Based on the wavelet domain, an optimum watermark can be obtained, having high imperceptibility and good robustness [43]. This was also studied by Venkat et al., in [44] in the case of blind digital watermarking; also, wavelet domain proved to be a suitable scheme for generating a robust watermark.
Super resolution images used in the medical field can be obtained by estimating wavelet coefficients at given high resolutions [45]. These coefficients are then fed into a convolutional neural network in order to reconstruct an image with improved quality.
Cheng et al., proposed in [46] a hybrid Frequency Wavelet Domain Deconvolution for retrieving the impulse function from terahertz reflection imaging, which is further used to extract the spectroscopy features of the image.
The B-spline wavelet domain was used in [47] by Zhang et al. in order to extract useful exon information from eukaryotes’ DNA. Inter-scale correlation of coefficients is used in order to eliminate background noise.
During decompression of an image, unwanted noise may appear. Hill et al., solve this issue in [48] by using a wavelet-based approximate message passing structure. Using wavelet domain leads to a twice as fast method compared to methods not based on the wavelet domain. Thresholding is also of great importance in the wavelet transforms, especially for de-noising signals. Alfaouri and Daqrouq propose [49] a method of choosing the optimal threshold value for de-noising an electrocardiogram (ECG) signal.
X-ray imaging can also benefit from the advantages of wavelet domain. In [50], Purisha et al., use a controlled orthonormal wavelet domain for the reconstruction of a tomographic image.
Unwanted noise in clinical images and operations can be measured using several techniques [51,52]. These measurements are useful in assessing the potential denoising power when applying different denoising methods.
Dabov et al., extended the applications of denoising algorithms to color images through the use of a collaborative Wiener filter [53], based on block-matching fragments of the image into 3-dimensional group arrays (BM3D), thus obtaining very good PSNR (peak signal to noise ratio) results. A fast method for denoising images is the Non-Local Means (NLM) strategy, which was presented in [54], taking into consideration three aspects: noise, quality and MSE (Mean Square Error). This algorithm performed best in the case of textured images.
The methods proposed in [53,54] are applied for noise reduction in images. The results obtained in [53,54] are good, with applicability to the analysis of macroscopic images: the noise, especially white Gaussian noise, is significantly reduced, but the contours are degraded. The algorithm proposed in this article represents a first stage in medical image processing, and the results obtained will be used in the development of a sub-pixelation algorithm with neural networks, which will be developed in the second stage.
Currently, medical imaging analyzes tumors on a macroscopic level. The interest in medical imaging is the transition from macroscopic to microscopic analysis by using image sub-pixelation algorithms so that the exact tumor contours in medical images can be determined.
The algorithm proposed by us achieves a good ratio between noise reduction and conservation of contours in medical images, taking into account the conservation of Hounsfield coefficients (radiodensity) used in medical imaging. The results obtained with this algorithm will be used in the future to develop a method of sub-pixelation of tumor regions to develop a new algorithm called re-segmentation using neural networks.
Applying the described algorithm to medical images obtained by CT and magnetic resonance imaging, the noise was reduced by up to 80%, while at least 80% of the details in the images were maintained. The proposed filter is superior to the filters proposed in [53,54] due to its sensitivity to contours and other important features in the images, retaining some high-frequency content in the image. The direct spatial correlation of the undecimated wavelet transform over several adjacent scales clearly enhances the important contours in the image, while the noise and non-essential (small) features are suppressed.
The compromise that this filter makes is between suppressing noise and retaining smaller details in the image. Those characteristics that induce coefficients higher than the noise are easily retained, as they are not degraded. Characteristics that induce coefficients of the same magnitude as noise are suppressed because they are not distinct from noise.

1.2. Scope and Objectives

Imaging techniques for the human body have grown increasingly popular in recent times, with the development of equipment in this field, as well as with the development of hardware and software systems in the field, capable of managing the volumes of data involved in medical imaging. This has allowed today more and more advanced methods in the field of MR (Magnetic Resonance) imaging, CT (Computer Tomography) or PET (Positron Emission Tomography). All of these methods require advanced image processing in order to extract useful diagnostic information from the available volume of data. For example, 3D rendering techniques allow better visualization of three-dimensional volumes, feature extraction techniques from images allow better assisted diagnosis, image segmentation techniques allow the separation of different areas or tissues of interest, and noise elimination techniques from images allow the improvement of image quality [55]. Some of these image processing techniques can be used in a processing chain. For example, in order to segment certain parts of the human body and obtain good segmentation results, the noise from the images must be removed [56,57,58].
In this article we propose, and characterize through simulations, a novel algorithm for reducing additive white Gaussian noise in images. The algorithm exploits both types of dependencies between wavelet coefficients: the intra-scale dependence, by determining the local dispersion of the signal, and the inter-scale dependence, by determining the contours based on the hierarchical correlation map. The results obtained by applying this algorithm show that noise reduction is achieved while the quality of the contours is maintained or even improved. At the same time, by choosing a parameter, it is possible to set the compromise between the amount by which the noise is reduced and the quality of preserving the details in the image [59].
This article is divided into four sections: Introduction, Materials and Methods, Results and Discussions, and Conclusions. In the Introduction, we introduce the subject of our research and present the state of the art related to the topic. The second part, Materials and Methods, presents all the steps of the filtering technique: adaptive spatial filtering based on the inter-band and intra-band correlation of the wavelet coefficients, and adaptive spatial soft truncation based on the hierarchical correlation map, with the threshold value established from local statistics. In the third part, Results and Discussions, we report studies both on usual test images and on CT images. Finally, in the last part, we present the Conclusions drawn from the obtained results.

2. Materials and Methods

This study was carried out using documentation based on papers and articles presented in journals or at conferences, scientific literature related to adaptive spatial filtering and wavelet domain topics, scientific on-line databases such as Google Scholar, Google Academic, MDPI, Scopus, Science Direct and other web pages or research platforms. Table 1 presents the types of images considered as inputs for the filter, together with their peak signal to noise ratios (PSNR) and the noise dispersion (sigma):
This article makes use of and reviews a large number of papers, official documents and information related to denoising methods in the wavelet domain, as well as findings from other research institutes relevant to our topic [60,61,62,63], such as the National Research and Development Institute for Cryogenic and Isotopic Technologies ICSI, Ramnicu Valcea, Romania, Automotive Department.

2.1. Theoretical Background

Based on both inter-band and intra-band correlation, J. Liu and P. Moulin [64] proposed an algorithm for reducing white Gaussian additive noise in images, whose results are compared by the authors with those obtained by the following methods:
-
Hard truncation with universal threshold;
-
Maximum a posteriori (MAP) probability estimator, also proposed by the authors in [65];
-
A maximum probability estimator modeled by hidden Markov chains, proposed by J. Romberg in [66];
-
Adaptive Wiener filtering in the wavelet domain, proposed in [67,68].
The authors compared their proposed method with the maximum probability estimator and with adaptive Wiener filtering in the wavelet domain, showing that, depending on the processed image, significant improvements were obtained both in terms of MSE and visually: the MSE gains were estimated between 8% and 26% over the maximum probability estimator and between 1% and 7% over the Wiener filter. The authors also compared the proposed method with the spatially adaptive method proposed by Chang [69], presented in the previous paragraph, in which case the results are lower (by almost 15% in MSE terms).
In this method, the inter-band correlation is used to obtain a map containing the position of those wavelet coefficients that have a high informational content (contours and texture), called the significance map. The wavelet coefficients thus classified are processed by soft truncation using threshold values adapted to each sub-band, obtained by modeling their distribution by a Laplace distribution.
The intra-band correlation is used to estimate the local signal scatter in the case of coefficients that do not belong to the significance map, which are processed with a maximum probability estimator.
Considering a three-level orthogonal wavelet decomposition, the algorithm proposed by J. Liu and P. Moulin [70] can be described as follows:
-
As in many other noise reduction methods using dyadic wavelet transformation, the lowest resolution sub-band is not processed (represented in blue in Figure 1);
-
Each of the three low-resolution detail sub-bands (corresponding to wavelet decomposition level 3, red in Figure 1) is processed by soft truncation; the threshold value for each sub-band is determined by modeling the distribution of its coefficients by a Laplace distribution of zero mean and dispersion:
$\sigma_{x,j}^2 = \max\left(0, \sigma_t^2 - \sigma^2\right)$
where:
$\sigma_t^2 = \frac{1}{M}\sum_i w^2(i)$
is the dispersion of the noisy wavelet coefficients in the considered sub-band, $\sigma^2$ is the Gaussian noise dispersion, $M$ is the number of coefficients in the considered sub-band, and $w(i)$ are the wavelet coefficients of the noise-degraded image.
In this case, the threshold value corresponding to sub-band j will be:
$\lambda_j = \frac{\sqrt{2}\,\sigma^2}{\sigma_{x,j}}$
-
in the case of the other detail sub-bands (corresponding to wavelet decomposition levels 2 and 1 and represented by the colors green and white in Figure 1), the significance map is drawn up based on the absolute value of the parent coefficient corresponding to each coefficient belonging to these sub-bands (in Figure 1, the relation descending coefficient—parent coefficient is also represented); thus, if the value of the parent coefficient is higher than a threshold value T, the coefficient considered is classified as significant, otherwise it is classified as insignificant; the two classes of coefficients, having different informational significance, have different statistical properties, therefore they are processed differently:
o
the significant classified coefficients are soft truncated, the threshold value used is determined based on the modeling of their distribution with a Laplace distribution of zero mean and dispersion:
$\sigma_{x,sig}^2 = \max\left(0, \sigma_{t,s}^2 - \sigma^2\right)$
where:
$\sigma_{t,s}^2 = \frac{1}{M_s}\sum_{i \in sig} w^2(i)$
$sig$ representing the significance map, and $M_s$ the number of coefficients in this map.
o
insignificant classified coefficients have small values and represent smooth regions; for each such coefficient the dispersion is estimated using a window of dimensions 5 × 5, from which the significant coefficients are eliminated; if $\hat{\sigma}_i^2$ represents the thus estimated dispersion of the signal, then the a posteriori maximum probability estimator will be:
$\hat{w}(i) = \frac{\hat{\sigma}_i^2}{\hat{\sigma}_i^2 + \sigma^2}\, w(i)$
As can be seen, the algorithm, just like other algorithms that determine the contours, introduces a threshold value T as a parameter.
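For illustration, the sketch below is our own Python reimplementation of the processing just described, not the authors' code: NumPy/SciPy, the function names and the assumption that the parent sub-band has already been replicated to the child grid are our choices. It soft-truncates the significant coefficients with the Laplace-modeled sub-band threshold and applies the local MAP (Wiener-like) estimator, over a 5 × 5 window from which significant coefficients are excluded, to the insignificant ones.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def soft(w, lam):
    """Soft truncation of wavelet coefficients w with threshold lam."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def laplace_threshold(w, sigma_noise):
    """Sub-band threshold from a zero-mean Laplace model of the clean coefficients."""
    sigma_t2 = np.mean(w ** 2)                        # dispersion of the noisy coefficients
    sigma_x = np.sqrt(max(sigma_t2 - sigma_noise ** 2, 0.0))
    if sigma_x == 0.0:                                # pure-noise sub-band: suppress everything
        return np.inf
    return np.sqrt(2.0) * sigma_noise ** 2 / sigma_x  # lambda = sqrt(2) * sigma^2 / sigma_x

def process_fine_subband(w, parent, sigma_noise, T, win=5):
    """Process one fine detail sub-band using the significance map drawn from its parent.

    w, parent : 2-D arrays of identical shape (parent already replicated to the child grid).
    T         : threshold applied to |parent| to build the significance map.
    """
    sig = np.abs(parent) > T                          # significance map (contours and texture)
    out = np.empty_like(w, dtype=float)

    # significant coefficients: soft truncation with a Laplace-modeled sub-band threshold
    lam_sig = laplace_threshold(w[sig], sigma_noise) if sig.any() else np.inf
    out[sig] = soft(w[sig], lam_sig)

    # insignificant coefficients: local MAP (Wiener-like) estimator over a win x win window,
    # with the significant coefficients excluded from the local dispersion estimate
    w_masked = np.where(sig, 0.0, w)
    frac = uniform_filter((~sig).astype(float), size=win)
    local_e2 = uniform_filter(w_masked ** 2, size=win) / np.maximum(frac, 1e-12)
    sigma_i2 = np.maximum(local_e2 - sigma_noise ** 2, 0.0)
    shrink = sigma_i2 / (sigma_i2 + sigma_noise ** 2)
    out[~sig] = (shrink * w)[~sig]
    return out
```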

2.2. Algorithm Proposed in This Work

In this article we propose a spatially adaptive soft truncation algorithm based on the hierarchical correlation map, with the threshold value established from local statistics. The proposed algorithm exploits the intra-scale dependence of the wavelet coefficients by determining the local dispersion of the signal, and the inter-scale dependence by determining the contours based on the hierarchical correlation map. Results obtained by applying this algorithm show that noise reduction is achieved while the quality of the contours is maintained or even improved. At the same time, by choosing a parameter, it is possible to set the compromise between the amount by which the noise is reduced and the quality of preserving the details in the image.
The filtering technique proposed by us is based on the fact that the abrupt variations of the pixel intensity in the image determine wavelet coefficients that have high absolute values in several scales, while the noise induces wavelet coefficients that decrease in absolute value with increasing scale.
We define the direct spatial correlation of the undecimated wavelet transform over several adjacent scales in order to accurately determine the locations of contours or other important features in the image:
$Corr_l(m,n) = \prod_{i=0}^{l-1} w(m+i,n), \qquad n = 1, 2, \ldots, N$
where $l$ is the number of scales considered in the direct multiplication, $M$ is the total number of scales, $N$ is the number of samples in the signal, and $w(j,n)$ are the wavelet coefficients of the noise-degraded signal corresponding to scale $j$.
The absence of contours or other significant features in a particular region will allow background noise to be removed from that region. The direct spatial correlation on several adjacent scales of the data will lead to the precise location of the important contours in the image.
If we consider an undecimated wavelet decomposition on three levels (so we have three scales), then the direct spatial correlation will be:
$Corr_2(1,n) = w(1,n)\, w(2,n)$
The data representing $\{Corr_2(1,n)\}$ are rescaled as a function of $\{w(1,n)\}$.
The most important contours are identified by comparing the absolute values of C o r r 2 ( 1 , n ) and w ( 1 , n ) . It is assumed that the position n belongs to a contour if:
$|Corr_2(1,n)| > |w(1,n)|$
The coefficient $w(1,n)$ corresponding to a position $n$ identified as belonging to a contour is stored in a new array $w_{new}(1,n)$, the contour position is also stored, and the value of $w(1,n)$ is cancelled in the initial array. Finally, all determined contours are extracted from $Corr_2(1,n)$ and $w(1,n)$ by cancelling their values at the locations identified as belonging to contours. We thus obtain two new data sets, $Corr_2(1,n)$ and $w(1,n)$, which form the basis of the next round of contour determination. We then proceed to a new scaling of $\{Corr_2(1,n)\}$ and a new contour identification.
This procedure, consisting of normalization, identification of contour locations, retention and cancellation, is performed iteratively until the power of the remaining coefficients in $\{w(1,n)\}$ is less than a given value.
All contour information contained in the original data that is extracted from $w(1,n)$ during the iterative process is stored in the new vector $w_{new}(1,n)$. By replacing $w(1,n)$ with $w_{new}(1,n)$ we obtain the spatially filtered version of the first scale of the undecimated wavelet transform, the scale that actually contains most of the noise. This type of filter can be seen as a spatially dependent filter.
The block diagram of the proposed algorithm is presented further in Figure 2 and is discussed in more detail in the subsequent paragraphs.
The whole procedure can be described with the following algorithm:
  • For each scale m = 1, …, M;
  • We determine the data set $w_{new}(m,n)$ with the following iterative process:
    B1.
    We compute $Corr_l(m,n)$ and $w(m,n)$.
    B2.
    We then compute the correlation and wavelet powers:
    $P_{Corr_l}(m) = \sum_n Corr_l^2(m,n)$
    $P_W(m) = \sum_n w^2(m,n)$
    B3.
    We perform the scaling:
    $Corr_l(m,n) \leftarrow Corr_l(m,n)\sqrt{\dfrac{P_W(m)}{P_{Corr_l}(m)}}$   for $n = 1, \ldots, N$.
    B4.
    We identify the contour locations by verifying whether the following condition is met:
    $|Corr_l(m,n)| > |w(m,n)|$
    Additionally, for each location identified as a contour, we execute:
    $w_{new}(m,n) = w(m,n), \quad w(m,n) = 0, \quad Corr_l(m,n) = 0$
    B5.
    We then test the stopping condition of the iterative process:
    $P_W(m) < \delta$
If this condition is not fulfilled, the process is resumed from step B2; if it is fulfilled, the algorithm passes to the next scale.
At the end of this procedure, $w_{new}(m,n)$ contains the filtered values of the wavelet coefficients, corresponding to the undecimated wavelet transform of the estimated signal.
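A minimal NumPy sketch of steps B1–B5, in the one-dimensional formulation used above, is given below; the array w is assumed to hold the undecimated detail coefficients with one scale per row, and the correlation depth l, the power threshold delta, the iteration cap and the handling of the coarsest scales are our own illustrative choices rather than values prescribed by the method.

```python
import numpy as np

def correlation_contour_filter(w, l=2, delta=1e-3, max_iter=50):
    """Iteratively move contour-related coefficients of each scale into w_new (steps B1-B5).

    w     : ndarray of shape (M, N), undecimated detail coefficients, one scale per row.
    l     : number of adjacent scales multiplied in the direct spatial correlation.
    delta : stop once the power remaining at the current scale drops below this value.
    """
    w = np.array(w, dtype=float)
    M, N = w.shape
    w_new = np.zeros_like(w)

    for m in range(M - l + 1):                 # scales with l coarser neighbours available
        # B1: direct spatial correlation over l adjacent scales starting at scale m
        corr = np.prod([w[m + i] for i in range(l)], axis=0)

        for _ in range(max_iter):
            # B2: powers of the correlation and of the wavelet data at scale m
            p_corr = np.sum(corr ** 2)
            p_w = np.sum(w[m] ** 2)
            if p_w < delta or p_corr == 0.0:   # B5: stopping condition
                break
            # B3: rescale the correlation to the power of the wavelet data
            corr = corr * np.sqrt(p_w / p_corr)
            # B4: positions where the rescaled correlation dominates are contour locations
            edge = np.abs(corr) > np.abs(w[m])
            if not edge.any():
                break
            w_new[m, edge] = w[m, edge]        # retain the contour coefficients
            w[m, edge] = 0.0                   # and cancel them in the working data
            corr[edge] = 0.0

    w_new[M - l + 1:] = w[M - l + 1:]          # coarsest scales left unprocessed (our choice)
    return w_new
```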
We tested the proposed procedure for noise reduction on images obtained from medical applications, using a three-level wavelet decomposition (l = 2), which corresponds to 8 sub-bands. The results obtained by this method were compared with results obtained by Wiener filtering. It was thus observed that the images filtered by the proposed method are affected by less noise, are better contoured, do not show artifacts generated by the Gibbs effect or other artifacts, and have a better visual quality compared to images filtered with the Wiener filter.
Maintaining the idea of establishing the threshold value pointwise and of exploiting the inter-band dependence by determining a contour map using the concept of hierarchical correlation, we propose a new adaptive spatial filter.
The proposed algorithm has three stages. In the first stage, the threshold value for each coefficient, necessary for the soft truncation of the wavelet coefficients, is determined based on local statistics. The difference from the method described in the previous paragraph lies primarily in simplifying the determination of the local signal dispersion: we no longer select the coefficients used to determine the signal dispersion based on the random variable z, but take into account all wavelet coefficients contained in a given window of much smaller dimensions, and we no longer use the parent coefficient to evaluate the local dispersion.
In this case, the local dispersion corresponding to the coefficient $w(i,j)$ will be given by:
$\hat{\sigma}_x^2(i,j) = \max\left(\sigma_s^2(i,j) - \sigma^2, 0\right)$
where:
$\sigma_s^2(i,j) = \frac{1}{M}\sum_{(k,l) \in B(i,j)} w^2(k,l)$
$B(i,j)$ being the set of indices of the coefficients in the current window, and $M$ the number of coefficients contained in the window.
The current threshold value will be given by:
$\hat{\lambda}(i,j) = \frac{\sigma^2}{\hat{\sigma}_x(i,j)}$
Furthermore, in this stage, the contour map is determined based on the hierarchical correlation corresponding to the degraded image.
In the second stage, all wavelet coefficients of the detail sub-bands are subjected to a soft truncation operation, determining the coefficients:
$w_s(i,j) = \eta_s\left(w(i,j), \hat{\lambda}(i,j)\right)$
$\eta_s$ being the soft truncation operator with threshold $\hat{\lambda}(i,j)$.
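A compact sketch of the first two stages, assuming NumPy/SciPy, a square sliding window and our own function naming, could look as follows; the window size is a free parameter of the method, not a prescribed value.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_soft_truncation(w, sigma_noise, win=3):
    """Stages 1-2: per-coefficient threshold from local statistics, then soft truncation.

    w           : one detail sub-band of the noisy image (2-D array).
    sigma_noise : dispersion of the additive white Gaussian noise.
    win         : side of the square window B(i, j) used for the local statistics.
    """
    # local dispersion of the noisy coefficients over the window B(i, j)
    sigma_s2 = uniform_filter(w.astype(float) ** 2, size=win)
    # local signal dispersion: sigma_x^2(i, j) = max(sigma_s^2(i, j) - sigma^2, 0)
    sigma_x = np.sqrt(np.maximum(sigma_s2 - sigma_noise ** 2, 0.0))
    # per-coefficient threshold lambda(i, j) = sigma^2 / sigma_x(i, j)
    # (where no signal energy is detected, the threshold is infinite, i.e. full suppression)
    lam = np.where(sigma_x > 0.0, sigma_noise ** 2 / np.maximum(sigma_x, 1e-12), np.inf)
    # soft truncation eta_s(w, lambda)
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
```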
In the third stage, a correction is made on the $w_s(i,j)$ coefficients obtained in the second stage, depending on the contour map. Thus, the coefficients established as belonging to a contour are estimated by the arithmetic mean between the initial value and the value obtained in the second processing stage, the rest of the coefficients remaining unchanged:
$\hat{w}(i,j) = \begin{cases} \dfrac{w_s(i,j) + w(i,j)}{2}, & \text{if } (i,j) \text{ belongs to a contour} \\ w_s(i,j), & \text{otherwise} \end{cases}$
We call this correction switched correction based on the hierarchical correlation map.
Another proposed possibility of correction on the $w_s(i,j)$ coefficients according to the hierarchical correlation map, this time a continuous one, is given by:
$\hat{w}(i,j) = \rho\left(\tfrac{i}{2}, \tfrac{j}{2}\right) w(i,j) + \left(1 - \rho\left(\tfrac{i}{2}, \tfrac{j}{2}\right)\right) w_s(i,j)$
where $\rho(k,l)$ represents the normalized value of the hierarchical correlation corresponding to the coefficient at coordinates $(k,l)$.
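Both corrections reduce to a few lines, as in the sketch below; is_contour and rho stand for the contour map and the normalized hierarchical correlation resampled to the sub-band grid, which are assumed to be available from the first stage.

```python
import numpy as np

def switched_correction(w, w_s, is_contour):
    """On contour positions, take the mean of the original and truncated coefficients."""
    return np.where(is_contour, 0.5 * (w + w_s), w_s)

def continuous_correction(w, w_s, rho):
    """Weighted sum driven by the normalized hierarchical correlation rho in [0, 1]."""
    return rho * w + (1.0 - rho) * w_s
```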

3. Results and Discussions

Table 2 compares the results obtained when processing the detail sub-bands corresponding to the first scale of a wavelet transform by the method described by Chang and Vetterli [71] (adaptive soft truncation with contextual modeling), presented in the previous paragraph, and by applying the first step of the proposed method.
As it can be seen from the data presented in Table 3, even in the case of applying only the first two steps of the proposed algorithm, better results are obtained both in terms of PSNR and in terms of contour conservation, given by the C coefficient. This coefficient is computed based on the reduction factor and represents how much of the contours were preserved from the original image.
Regarding the proposed method, it can be seen that, as the window size increases, the results become weaker both in terms of PSNR and in terms of the C coefficient. This can be justified by the fact that signal-induced wavelet coefficients are grouped, and as the window grows their influence on the local dispersion decreases, approaching the noise dispersion. A first parameter of the proposed algorithm is the size of the window. The effect of this parameter, as well as of the number of wavelet decomposition levels used, on the image quality obtained after the second processing step is presented in Table 4, compared with the results obtained by soft truncation with optimal threshold values. A third parameter that influences the performance of the algorithm is the threshold value used to obtain the contour map from the hierarchical correlation map, whose effect on the whole filtering process is presented in Appendix A.
It can be observed that in all the considered cases, better results are obtained in PSNR terms than in the case of soft truncation with optimal threshold values, in many cases obtaining even higher C coefficients than the initial ones, hence an improvement of the contours.
Considering an ideal contour map, determined based on the image which is not degraded with noise, the simulations show (Appendix A) that, by the proposed method, an improvement of the contours can be obtained, expressed by higher values of the C coefficient, in the conditions in which the noise reduction is not affected, which is expressed by constant PSNR values. In the real case, only the noise-degraded image is available, the contour map being determined based on it.
As it can be seen in Figure 3, false contour points appear in this case, resulting in a smaller amount of noise removed. Therefore, on the hierarchical correlation map, before determining the contours, we proceed to process it with a median-hybrid filter, described in [72,73,74]. As it can be seen in Figure 3 and from the data in the Appendix A, this filtering operation leads to images characterized by PSNR values closer to those of the images obtained after the first two processing steps of the proposed algorithm, in the conditions under which an improvement of the contours takes place.
The proposed algorithm was tested on different images with dimensions of 256 × 256 pixels and 512 × 512 pixels, degraded with additive white Gaussian noise of dispersion σ = 0.05 and σ = 0.1. The results are presented in Figure 4.
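For reference, the degradation and the PSNR measurement used in these tests can be reproduced as in the following sketch, assuming images normalized to [0, 1]; the contour-conservation coefficient C is not reproduced here.

```python
import numpy as np

def add_white_gaussian_noise(img, sigma, seed=0):
    """Degrade an image normalized to [0, 1] with additive white Gaussian noise."""
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, sigma, img.shape)

def psnr(reference, estimate):
    """Peak signal-to-noise ratio in dB for images normalized to [0, 1]."""
    mse = np.mean((reference - estimate) ** 2)
    return np.inf if mse == 0.0 else 10.0 * np.log10(1.0 / mse)
```

With this normalization, σ = 0.05 corresponds to an initial PSNR of about 26 dB and σ = 0.1 to about 20 dB, which is consistent with the initial-image values reported in Appendix A.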
The analysis of the data presented in the Appendix A shows that the effect of the threshold value used to obtain the contour map from the hierarchical correlation map does not depend on the processed image (as in the case of optimal threshold values in the case of soft truncation), but only on dispersion of the disturbing noise. Thus, for a noise dispersion σ = 0.05 , the optimal value is between 0.1 and 0.15, and for σ = 0.1 , between 0.15 and 0.175.
The interest in such filters is determined by the fact that much of the existing information in the images is provided by contours, and most of the noise removal methods cause them to fade. Such filters can be designed starting from the finding that near the contours the noise is less noticeable than in the smooth regions, thus being possible to change the filter parameters depending on the level of activity of the processed region. In this case, at the cost of removing a smaller amount of noise in the regions containing contours, they can be better preserved. There is thus a compromise between the amount of noise removed and the quality of preserving the contours in the image.
In Table 4, we exemplify the results obtained for the proposed filtering method in two working variants. The first variant segments the LL (low-low) sub-band of the lowest resolution from a dyadic wavelet decomposition and, depending on the result of the segmentation operation, chooses a threshold value for the soft truncation of the wavelet coefficients. This method does not require a large computational effort, but the performance obtained by using it is not remarkable. The second method also uses soft truncation of the coefficients, but this time the threshold value is calculated for each coefficient based on local statistics in the wavelet domain. On the wavelet coefficients thus obtained, a correction is made according to a contour map determined on the basis of a hierarchical correlation map.
Two variants of correction are proposed: a switched correction, in which either the truncated coefficient is chosen, or the average between the truncated and the unprocessed coefficient, depending on whether or not the respective coefficient belongs to a contour; and a continuous correction, in which a weighted sum of the two wavelet coefficients is performed, the values of the weights being calculated based on the hierarchical correlation coefficient. In the second case, images with higher PSNR values are obtained, especially in conditions of higher noise level, but they have more blurred contours.
We approached this algorithm in translation-invariant form in two ways:
(A)
By applying the proposed algorithm in full, in translation-invariant form, averaging the results obtained by applying all three processing steps on the cyclic displacements of the input image;
(B)
By partially applying the proposed algorithm (only steps 1 and 2) in the translation invariant form and performing the correction according to the contour map only on the result thus obtained.
It can be observed from the data presented in Table 5 that the translation-invariant approach to the proposed algorithm leads to good results both in terms of PSNR and in terms of contour conservation. Only in one of the cases presented in Table 5 does the initial image have a higher C coefficient than the processed image; otherwise, the processed images have a higher C coefficient. This means that, together with the removal of the noise, an improvement of the contours is obtained.
It is also found that variant A leads to better overall results than variant B, but the calculation effort is higher in this case.
The proposed algorithm leads, both in the translation-invariant approach and in the simple one, to good results in terms of PSNR and the C coefficient, as well as visually, as can be seen in Figure 5 (Comparative Performance in Standard Images).
Table 6 compares the two correction methods based on the hierarchical correlation map. It can be seen that in the case of continuous type correction, better results are obtained in terms of PSNR than in the case of switched type correction, especially for higher noise levels. However, this is done at the cost of poor preservation of the contours in the image, which is highlighted by lower values of the C coefficient.
Currently, imaging is present in all branches of science, and the acquisition of images that are not disturbed by external factors is of great interest in scientific research. Digital images unaffected by disturbing elements, such as noise, low contrast and uneven lighting, allow a correct understanding of their content.
Following the review of the specialized literature and the implemented applications, this article contributes to the extraction of useful information from medical images.
The objectives of the research are to model, process and extract useful information from digital medical images using image segmentation and fusion methods. These techniques generate higher quality images than the original ones.
To analyze the useful informational content of medical images, using hemodynamic modeling and image processing to improve diagnosis, we propose the following objectives:
  • Improving image quality by selectively eliminating disturbing information, such as noise, and eliminating other defects caused by the acquisition device by using adaptive filters.
  • Highlighting areas of interest by adjusting light intensity and contrast and accentuating contours and textures.
  • Improving the ability to detect edges by using the image fusion method based on the wavelet algorithm.
  • Developing new methods for evaluating segmentation.
  • Evaluating the performance of edge detection methods using operators that analyze the structural similarity of images.
  • Using high-performance Artificial Intelligence algorithms to perform sub-pixelation calculations and to make the transition from the currently used macroscopic analysis to the microscopic analysis of cancerous tumors in medical images.
The innovative nature of the research will be based on the development of a high-performance algorithm to improve the results of medical image filtering using Gaussian functions, multidimensional wavelet decompositions and image fusion. We will also implement a high-performance algorithm by using a matched filtering that correlates a known signal, or a pattern, with an unknown signal to detect the presence of the pattern in the unknown signal, in order to improve the analyzed images. We will focus on using the method of merging wavelet images and neural networks, as it exceeds the limits of classical edge detection methods and provides a much more accurate map of decompositions at the sub-pixel level. Last but not least, we will develop an algorithm for the automatic selection of the detection threshold for the classification of brain medical images obtained from CT and MRI, using clustering algorithms.
From the research carried out in this article we propose an algorithm that responds to the first three points mentioned above.
The proposed image enhancement method using spatial filtering algorithm can be successfully applied to low contrast CT (Computer Tomography) images. Figure 6 shows the results obtained for such images.
As can be seen, the proposed method provides robust results in eliminating noise from images [75], as well as in improving image contrast and contour detection. Obtaining images in which the contours are best preserved comes primarily at the cost of increased computational complexity and of a decrease in the amount of noise removed from the image.
Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 show the noise elimination results obtained for real CT images, using Daubechies scaling filters of order 2 and order 4 and the Haar wavelet transform [76]:
As it can be seen, the proposed method and the determined operators offer a better efficiency in terms of eliminating noise and preserving the contours of an image, allowing the implementation of the proposed method on existing hardware structures.

4. Conclusions

The interest in wavelet filters is determined by the fact that much of the existing information in the images is provided by contours, and most of the noise removal methods cause them to fade. Such filters can be designed starting from the finding that near the contours the noise is less noticeable than in the smooth regions, thus being possible to change the filter parameters depending on the level of activity of the processed region. In this case, at the cost of removing a smaller amount of noise in the regions containing contours, they can be better preserved. There is a trade-off between the amount of noise removed and the quality of preserving the contours in the image.
The article classifies the methods of selective filtering in the wavelet domain according to the statistical modeling of the wavelet coefficients on which these methods are based. Thus, selective filtering methods in the wavelet domain are divided into three groups: methods based on the intra-band dependence of the wavelet coefficients, methods based on the inter-band dependence of the wavelet coefficients and methods based on both the intra-band dependence of the wavelet coefficients and the inter-band one.
The proposed method uses soft truncation of coefficients, where the threshold value is calculated for each coefficient separately based on local statistics in the wavelet domain. On the wavelet coefficients thus obtained, a correction is made according to a contour map determined on the basis of a hierarchical correlation map.
The proposed algorithm uses noise dispersion as a parameter, but the switched version needs another parameter, namely the threshold value to obtain the contour map from the hierarchical correlation map.
Unlike the methods presented in the literature, which for images very rich in details lead to weaker results than soft truncation with optimal threshold values, the proposed method, by soft truncation of the wavelet coefficients with a threshold value adapted to local statistics and a correction based on the hierarchical correlation map, always leads to better results both in terms of PSNR and in terms of contour conservation.
Filtering methods mentioned in the literature prove to be very effective in terms of the amount of noise removed from the image, generally leading to high PSNR values, but this comes at the cost of significant contour degradation. The proposed method leads to close, although somewhat lower, PSNR values than other methods, but ensures a much better preservation of the contours.

Author Contributions

Conceptualization, M.S.R., C.D. and I.M.; methodology, C.D. and I.M.; software, C.D. and I.M.; validation, M.S.R. and C.F.; formal analysis, C.D.; investigation, I.M. and C.F.; resources, M.S.R.; data curation, M.S.R., C.D., I.M. and C.F.; writing—original draft preparation, C.D.; writing—review and editing, I.M.; visualization, C.D.; supervision, M.S.R. and C.F.; project administration, M.S.R. and C.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was supported by a grant from the Romanian Ministry of Research and Innovation, CCCDI-UEFISCDI, project number PN-III-P1-1.2-PCCDI-2017-0776/No. 36 PCCDI/15.03.2018, within PNCDI III.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Dependence on the threshold value used to obtain the contour map from the hierarchical correlation map, in the case of the proposed method. Results obtained using the ideal, real and filtered hierarchical correlation maps are presented. In each sub-table, the first three result columns correspond to σ = 0.05 and the last three to σ = 0.1; each entry gives PSNR (dB) / C (%).
a. Boat image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.07 dB, C = 64.61%; second processing stage PSNR = 30.88 dB, C = 64.29%.
σ = 0.1: initial image PSNR = 20.03 dB, C = 47.09%; second processing stage PSNR = 27.10 dB, C = 49.53%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 30.93/65.17 | 30.29/64.38 | 30.39/65.03 | 26.94/51.75 | 25.04/51.75 | 24.85/51.15
0.075 | 30.94/64.87 | 30.68/64.90 | 30.79/64.98 | 27.12/50.16 | 25.47/50.21 | 25.43/50.00
0.100 | 30.93/64.90 | 30.82/65.12 | 30.84/64.75 | 27.16/51.37 | 25.96/50.19 | 26.24/49.90
0.150 | 30.90/64.66 | 30.86/64.27 | 30.86/64.30 | 27.16/49.56 | 26.74/49.76 | 27.00/50.23
0.175 | 30.89/64.59 | 30.87/64.58 | 30.87/64.32 | 27.14/49.24 | 26.93/50.26 | 27.06/50.42
0.200 | 30.88/64.51 | 30.87/64.45 | 30.87/64.41 | 27.14/49.03 | 27.02/49.66 | 27.07/50.03
0.225 | 30.88/64.38 | 30.87/64.32 | 30.87/64.38 | 27.13/49.11 | 27.06/50.10 | 27.08/49.81
0.250 | 30.88/64.43 | 30.87/64.40 | 30.87/64.30 | 27.12/48.95 | 27.07/49.29 | 27.08/49.00
b. Calendar image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.00 dB, C = 70.01%; second processing stage PSNR = 28.15 dB, C = 74.42%.
σ = 0.1: initial image PSNR = 21.11 dB, C = 58.73%; second processing stage PSNR = 23.11 dB, C = 58.53%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 28.24/71.23 | 28.03/70.59 | 28.03/70.82 | 23.56/58.70 | 23.13/58.37 | 23.09/58.15
0.075 | 28.25/75.22 | 28.14/71.06 | 28.15/71.19 | 23.55/58.51 | 23.23/57.60 | 23.24/58.07
0.100 | 28.24/75.40 | 28.17/75.40 | 28.18/75.20 | 23.50/58.32 | 23.31/57.54 | 23.33/57.34
0.150 | 28.21/74.99 | 28.18/75.55 | 28.18/75.49 | 23.40/57.61 | 23.33/57.38 | 23.31/57.35
0.175 | 28.20/74.72 | 28.17/75.23 | 28.17/75.15 | 23.35/57.44 | 23.30/57.52 | 23.27/57.34
0.200 | 28.19/74.72 | 28.17/74.66 | 28.17/74.59 | 23.31/57.25 | 23.27/57.35 | 23.22/57.30
0.225 | 28.18/74.72 | 28.17/74.75 | 28.16/74.73 | 23.27/56.55 | 23.22/57.05 | 23.19/57.08
0.250 | 28.17/74.75 | 28.16/74.77 | 28.16/74.65 | 23.23/56.58 | 23.19/56.67 | 23.16/56.92
c. Wheel image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.22 dB, C = 73.81%; second processing stage PSNR = 28.44 dB, C = 72.63%.
σ = 0.1: initial image PSNR = 20.40 dB, C = 58.49%; second processing stage PSNR = 24.61 dB, C = 57.18%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 28.51/73.90 | 28.29/73.80 | 28.29/73.44 | 24.53/60.49 | 23.91/59.35 | 23.83/59.48
0.075 | 28.53/73.98 | 28.37/73.54 | 28.39/73.53 | 24.70/60.37 | 24.08/59.07 | 24.04/59.47
0.100 | 28.51/73.49 | 28.42/73.54 | 28.43/73.13 | 24.74/59.82 | 24.25/59.45 | 24.33/59.87
0.150 | 28.48/72.83 | 28.44/72.97 | 28.44/72.73 | 24.74/58.93 | 24.51/59.25 | 24.58/59.18
0.175 | 28.47/72.83 | 28.44/72.71 | 28.44/72.54 | 24.71/58.88 | 24.56/58.93 | 24.60/58.39
0.200 | 28.46/72.54 | 28.44/72.62 | 28.44/72.54 | 24.69/58.76 | 24.60/58.07 | 24.60/57.50
0.225 | 28.45/72.83 | 28.44/72.54 | 28.44/72.57 | 24.67/58.47 | 24.61/58.19 | 24.60/57.36
0.250 | 28.45/72.68 | 28.44/72.62 | 28.44/72.57 | 24.66/58.20 | 24.61/57.97 | 24.60/57.48
d. Aerial image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.04 dB, C = 70.12%; second processing stage PSNR = 28.16 dB, C = 67.43%.
σ = 0.1: initial image PSNR = 20.15 dB, C = 54.02%; second processing stage PSNR = 24.23 dB, C = 49.76%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 28.19/69.32 | 27.96/68.23 | 27.94/68.29 | 24.12/53.90 | 23.50/52.99 | 23.43/53.65
0.075 | 28.32/68.24 | 28.05/68.23 | 28.06/68.76 | 24.29/53.75 | 23.64/52.88 | 23.61/53.66
0.100 | 28.22/67.69 | 28.11/67.86 | 28.13/67.90 | 24.34/53.45 | 23.80/52.86 | 23.85/53.01
0.150 | 28.20/67.40 | 28.15/67.12 | 28.15/67.97 | 24.34/52.27 | 24.07/52.10 | 24.16/51.93
0.175 | 28.19/67.07 | 28.15/66.92 | 28.15/66.75 | 24.32/51.55 | 24.14/51.78 | 24.20/51.21
0.200 | 28.18/68.11 | 28.15/66.73 | 28.16/67.77 | 24.30/50.97 | 24.19/51.49 | 24.22/51.04
0.225 | 28.17/68.10 | 28.15/68.04 | 28.16/67.67 | 24.28/50.65 | 24.21/50.98 | 24.22/50.59
0.250 | 28.17/68.00 | 28.15/68.02 | 28.16/67.52 | 24.26/50.51 | 24.21/50.49 | 24.22/50.11
e. Camera image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.20 dB, C = 65.80%; second processing stage PSNR = 30.77 dB, C = 64.97%.
σ = 0.1: initial image PSNR = 20.43 dB, C = 51.32%; second processing stage PSNR = 26.74 dB, C = 48.90%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 30.79/66.39 | 30.24/66.51 | 30.33/67.02 | 26.75/53.29 | 25.23/52.60 | 25.06/52.79
0.075 | 30.80/66.63 | 30.58/66.06 | 30.66/66.77 | 26.87/51.63 | 25.59/52.30 | 25.95/52.14
0.100 | 30.80/66.37 | 30.70/65.58 | 30.72/66.04 | 26.88/51.42 | 26.04/51.38 | 26.26/52.01
0.150 | 30.78/65.78 | 30.73/65.58 | 30.74/65.43 | 26.83/50.33 | 26.56/50.69 | 26.69/50.28
0.175 | 30.78/65.70 | 30.74/65.62 | 30.75/65.45 | 26.81/50.02 | 26.66/50.43 | 26.69/49.92
0.200 | 30.77/65.60 | 30.75/65.49 | 30.75/65.47 | 26.79/49.94 | 26.72/49.63 | 26.71/49.53
0.225 | 30.77/65.29 | 30.75/65.41 | 30.75/65.35 | 26.78/49.80 | 26.72/49.80 | 26.71/49.29
0.250 | 30.76/65.33 | 30.75/65.35 | 30.75/65.37 | 26.76/49.92 | 26.71/49.80 | 26.70/49.47
f. Goldhill image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.03 dB, C = 57.42%; second processing stage PSNR = 29.37 dB, C = 58.92%.
σ = 0.1: initial image PSNR = 20.11 dB, C = 42.43%; second processing stage PSNR = 25.89 dB, C = 40.30%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 29.40/60.74 | 29.02/60.44 | 29.05/60.48 | 25.72/44.06 | 24.50/42.61 | 24.37/42.81
0.075 | 29.42/59.73 | 29.22/59.39 | 29.27/59.28 | 25.92/43.59 | 24.80/41.41 | 24.78/41.94
0.100 | 29.41/58.66 | 29.30/58.70 | 29.34/58.31 | 25.95/42.36 | 25.13/42.16 | 25.31/43.27
0.150 | 29.39/58.58 | 29.35/58.44 | 29.36/58.59 | 25.94/41.24 | 25.76/41.64 | 25.83/41.31
0.175 | 29.38/58.43 | 29.36/58.40 | 29.37/58.44 | 25.92/40.63 | 25.76/41.64 | 25.88/40.85
0.200 | 29.37/58.50 | 29.36/58.48 | 29.36/58.50 | 25.91/40.42 | 25.82/41.42 | 25.88/40.57
0.225 | 29.37/58.46 | 29.36/58.51 | 29.36/58.53 | 25.90/40.43 | 25.85/40.69 | 25.89/40.39
0.250 | 29.37/58.42 | 29.36/58.46 | 29.36/58.50 | 25.89/40.52 | 25.86/40.45 | 25.89/40.43
g. Peppers image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.08 dB, C = 63.72%; second processing stage PSNR = 30.52 dB, C = 64.25%.
σ = 0.1: initial image PSNR = 20.16 dB, C = 49.46%; second processing stage PSNR = 26.74 dB, C = 47.43%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 30.54/64.62 | 29.93/65.24 | 30.01/65.57 | 26.75/51.33 | 24.94/50.40 | 24.70/50.34
0.075 | 30.55/64.11 | 30.29/64.57 | 30.39/64.11 | 26.87/50.23 | 25.33/50.11 | 25.30/50.34
0.100 | 30.56/63.99 | 30.44/64.17 | 30.47/63.80 | 26.91/49.32 | 25.82/50.90 | 26.08/50.90
0.150 | 30.54/63.80 | 30.50/63.54 | 30.50/63.52 | 26.84/48.67 | 26.50/49.79 | 26.70/48.12
0.175 | 30.53/63.74 | 30.50/63.64 | 30.51/63.62 | 26.82/48.24 | 26.67/48.97 | 26.75/47.84
0.200 | 30.53/63.97 | 30.51/63.80 | 30.51/63.84 | 26.79/47.90 | 26.71/48.08 | 26.74/47.78
0.225 | 30.52/63.94 | 30.51/63.76 | 30.51/63.78 | 26.79/47.90 | 26.74/47.63 | 26.73/47.51
0.250 | 30.52/63.82 | 30.51/63.70 | 30.51/63.74 | 26.77/47.96 | 26.73/47.63 | 26.73/47.33
h. Bridge image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.07 dB, C = 63.70%; second processing stage PSNR = 27.65 dB, C = 62.68%.
σ = 0.1: initial image PSNR = 20.14 dB, C = 47.13%; second processing stage PSNR = 23.46 dB, C = 44.56%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 27.68/64.29 | 27.50/63.58 | 27.50/63.85 | 23.41/47.22 | 22.98/46.64 | 22.93/47.30
0.075 | 27.69/63.92 | 27.56/63.52 | 27.58/63.58 | 23.52/47.01 | 23.07/46.30 | 23.04/46.91
0.100 | 27.69/63.47 | 27.61/63.58 | 27.62/63.14 | 23.55/46.20 | 23.18/45.99 | 23.21/46.22
0.150 | 27.67/63.26 | 27.64/63.16 | 27.64/62.82 | 23.52/44.57 | 23.35/44.82 | 23.42/44.68
0.175 | 27.66/62.90 | 27.64/62.88 | 27.65/62.79 | 23.50/44.42 | 23.39/44.85 | 23.45/44.27
0.200 | 27.66/62.83 | 27.64/62.96 | 27.65/62.77 | 23.49/44.11 | 23.42/44.25 | 23.45/43.85
0.225 | 27.65/62.90 | 27.65/62.89 | 27.65/62.80 | 23.48/43.83 | 23.44/44.13 | 23.46/45.66
0.250 | 27.65/62.84 | 27.65/62.85 | 27.65/62.81 | 23.48/45.50 | 23.45/44.11 | 23.46/45.55
i. Lena image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.04 dB, C = 62.97%; second processing stage PSNR = 30.30 dB, C = 61.79%.
σ = 0.1: initial image PSNR = 20.08 dB, C = 47.54%; second processing stage PSNR = 26.55 dB, C = 47.19%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 30.30/63.38 | 29.76/63.65 | 29.83/63.53 | 26.45/49.23 | 24.84/49.29 | 24.66/49.59
0.075 | 30.33/62.97 | 30.09/63.02 | 30.18/62.91 | 26.61/48.98 | 25.20/49.43 | 25.15/49.16
0.100 | 30.33/62.95 | 30.22/62.62 | 30.26/62.75 | 26.64/49.05 | 25.62/49.27 | 25.87/49.97
0.150 | 30.31/62.21 | 30.28/62.21 | 30.29/62.03 | 26.63/48.69 | 26.24/47.83 | 26.45/48.44
0.175 | 30.31/62.01 | 30.29/62.08 | 30.29/61.92 | 26.60/48.08 | 26.42/48.04 | 26.51/48.53
0.200 | 30.31/61.94 | 30.29/61.83 | 30.30/61.79 | 26.58/47.97 | 26.48/46.89 | 26.52/48.10
0.225 | 30.30/61.79 | 30.29/61.99 | 30.30/61.76 | 26.57/47.81 | 26.51/47.54 | 26.52/47.88
0.250 | 30.30/61.92 | 30.29/61.83 | 30.29/61.69 | 26.56/47.79 | 26.52/47.59 | 26.52/47.66
j. House image, 256 × 256 pixels.
σ = 0.05: initial image PSNR = 26.01 dB, C = 64.95%; second processing stage PSNR = 31.68 dB, C = 65.26%.
σ = 0.1: initial image PSNR = 20.15 dB, C = 49.17%; second processing stage PSNR = 28.13 dB, C = 51.07%.
Threshold | Ideal map | Real map | Filtered map | Ideal map | Real map | Filtered map
0.050 | 31.68/66.07 | 30.85/67.03 | 31.00/66.38 | 28.07/55.39 | 25.69/53.13 | 25.45/52.76
0.075 | 31.71/66.33 | 31.38/65.60 | 31.54/65.89 | 28.18/53.75 | 26.20/53.57 | 26.20/53.83
0.100 | 31.72/65.99 | 31.59/65.86 | 31.63/65.55 | 28.23/53.75 | 26.80/53.54 | 27.22/53.23
0.150 | 31.70/65.65 | 31.66/65.47 | 31.66/65.55 | 28.22/52.68 | 27.74/53.75 | 28.05/53.15
0.175 | 31.69/65.76 | 31.66/65.42 | 31.66/65.52 | 28.19/52.37 | 27.98/53.02 | 28.11/52.27
0.200 | 31.67/65.70 | 31.66/65.73 | 31.65/65.70 | 28.14/52.34 | 28.06/52.14 | 28.11/51.85
0.225 | 31.67/65.76 | 31.65/65.73 | 31.65/65.68 | 28.13/52.19 | 28.08/52.08 | 28.10/51.74
0.250 | 31.66/65.73 | 31.65/65.65 | 31.65/65.68 | 28.11/52.14 | 28.08/51.95 | 28.08/51.67
k. Boat image, 512 × 512 pixels. Initial image: PSNR = 26.06 dB, C = 52.62% (σ = 0.05) and PSNR = 20.13 dB, C = 39.68% (σ = 0.1). Second processing stage: PSNR = 29.66 dB, C = 53.43% (σ = 0.05) and PSNR = 26.69 dB, C = 38.70% (σ = 0.1). Entries give PSNR (dB)/C (%); the first three columns refer to σ = 0.05, the last three to σ = 0.1.
Threshold value | Ideal Contour Map | Real Contour Map | Filtered Contour Map | Ideal Contour Map | Real Contour Map | Filtered Contour Map
0.050 | 29.68/55.43 | 29.32/54.97 | 29.35/54.85 | 26.49/40.45 | 25.00/41.16 | 24.84/41.28
0.075 | 29.70/54.58 | 29.51/54.50 | 29.58/54.45 | 26.68/39.74 | 25.35/40.43 | 25.32/40.92
0.100 | 29.69/54.32 | 29.60/54.28 | 29.63/54.03 | 26.74/39.01 | 25.75/39.84 | 26.00/39.77
0.150 | 29.67/53.79 | 29.64/53.67 | 29.65/53.74 | 26.73/38.27 | 26.39/38.82 | 26.62/38.45
0.175 | 29.66/53.68 | 29.64/53.70 | 29.65/53.74 | 26.72/38.04 | 26.54/38.16 | 26.66/37.92
0.200 | 29.66/53.60 | 29.65/53.64 | 29.65/53.71 | 26.71/37.90 | 26.62/38.13 | 26.67/37.74
0.225 | 29.65/53.60 | 29.65/53.61 | 29.65/53.72 | 26.70/37.76 | 26.65/38.08 | 26.67/37.69
0.250 | 29.65/53.62 | 29.65/53.65 | 29.65/53.73 | 26.69/37.62 | 26.66/37.73 | 26.67/37.59
l. Bridge image, 512 × 512 pixels. Initial image: PSNR = 26.09 dB, C = 60.47% (σ = 0.05) and PSNR = 20.18 dB, C = 44.43% (σ = 0.1). Second processing stage: PSNR = 28.66 dB, C = 59.00% (σ = 0.05) and PSNR = 24.88 dB, C = 41.13% (σ = 0.1). Entries give PSNR (dB)/C (%); the first three columns refer to σ = 0.05, the last three to σ = 0.1.
Threshold value | Ideal Contour Map | Real Contour Map | Filtered Contour Map | Ideal Contour Map | Real Contour Map | Filtered Contour Map
0.050 | 28.69/61.24 | 28.40/60.96 | 28.42/61.17 | 24.79/44.29 | 23.95/43.86 | 23.84/44.31
0.075 | 28.72/60.58 | 28.54/61.06 | 28.59/60.67 | 24.94/43.59 | 24.14/43.47 | 24.12/43.71
0.100 | 28.71/60.27 | 28.61/60.26 | 28.64/60.22 | 24.98/42.53 | 24.37/43.03 | 24.49/43.17
0.150 | 28.68/59.70 | 28.65/59.53 | 28.66/59.28 | 24.95/41.15 | 24.72/41.92 | 24.85/41.36
0.175 | 28.67/59.46 | 28.65/59.45 | 28.66/59.35 | 24.94/42.22 | 24.80/41.46 | 24.88/40.81
0.200 | 28.67/59.36 | 28.66/59.34 | 28.66/59.30 | 24.92/42.03 | 24.84/40.89 | 24.88/41.88
0.225 | 28.66/59.36 | 28.66/59.31 | 28.66/59.30 | 24.91/41.90 | 24.86/40.49 | 24.88/41.69
0.250 | 28.66/59.34 | 28.66/59.30 | 28.66/59.32 | 24.90/41.84 | 24.87/41.87 | 24.88/41.71
m. Einstein image, 512 × 512 pixels. Initial image: PSNR = 26.07 dB, C = 44.25% (σ = 0.05) and PSNR = 20.11 dB, C = 32.23% (σ = 0.1). Second processing stage: PSNR = 32.64 dB, C = 40.13% (σ = 0.05) and PSNR = 29.17 dB, C = 27.92% (σ = 0.1). Entries give PSNR (dB)/C (%); the first three columns refer to σ = 0.05, the last three to σ = 0.1.
Threshold value | Ideal Contour Map | Real Contour Map | Filtered Contour Map | Ideal Contour Map | Real Contour Map | Filtered Contour Map
0.050 | 32.67/42.88 | 31.59/43.62 | 31.81/43.48 | 29.05/31.18 | 25.99/30.45 | 25.73/31.22
0.075 | 32.68/42.02 | 32.28/42.70 | 32.54/42.13 | 29.22/30.34 | 26.62/29.51 | 26.65/29.79
0.100 | 32.67/41.38 | 32.55/41.79 | 32.61/41.41 | 29.25/29.99 | 27.41/32.03 | 28.01/31.31
0.150 | 32.65/40.92 | 32.63/41.12 | 32.63/40.88 | 29.23/29.26 | 28.67/30.29 | 29.11/29.25
0.175 | 32.65/40.85 | 32.63/40.85 | 32.64/40.73 | 29.22/29.23 | 28.97/29.38 | 29.15/28.84
0.200 | 32.64/40.75 | 32.64/40.74 | 32.64/40.68 | 29.21/29.12 | 29.09/29.18 | 29.16/28.94
0.225 | 32.64/40.68 | 32.64/40.65 | 32.64/40.65 | 29.20/29.06 | 29.14/29.14 | 29.16/28.99
0.250 | 32.64/40.67 | 32.64/40.67 | 32.64/40.65 | 29.20/29.06 | 29.15/29.03 | 29.16/28.93
n. Lena image, 512 × 512 pixels. Initial image: PSNR = 26.02 dB, C = 53.75% (σ = 0.05) and PSNR = 20.07 dB, C = 39.89% (σ = 0.1). Second processing stage: PSNR = 32.48 dB, C = 52.27% (σ = 0.05) and PSNR = 29.13 dB, C = 37.85% (σ = 0.1). Entries give PSNR (dB)/C (%); the first three columns refer to σ = 0.05, the last three to σ = 0.1.
Threshold value | Ideal Contour Map | Real Contour Map | Filtered Contour Map | Ideal Contour Map | Real Contour Map | Filtered Contour Map
0.050 | 32.47/54.32 | 31.45/53.89 | 31.68/54.16 | 28.86/41.67 | 25.95/41.11 | 25.69/41.40
0.075 | 32.51/53.51 | 32.10/53.78 | 32.33/53.71 | 29.05/41.01 | 26.56/40.31 | 26.56/40.81
0.100 | 32.50/53.31 | 32.37/53.54 | 32.43/53.17 | 29.11/40.30 | 27.31/39.63 | 27.86/39.48
0.150 | 32.48/52.88 | 32.44/53.02 | 32.44/52.77 | 29.12/39.38 | 28.51/39.27 | 28.97/39.13
0.175 | 32.47/52.94 | 32.45/52.80 | 32.45/52.72 | 29.12/39.08 | 28.81/39.09 | 29.03/38.87
0.200 | 32.47/52.86 | 32.45/52.72 | 32.45/52.73 | 29.10/38.88 | 28.96/38.86 | 29.04/38.64
0.225 | 32.46/52.75 | 32.45/52.65 | 32.45/52.70 | 29.09/38.68 | 29.02/38.77 | 29.05/38.46
0.250 | 32.46/52.70 | 32.45/52.74 | 32.45/52.69 | 29.08/38.51 | 29.05/38.73 | 29.05/38.41
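
The PSNR values reported in these tables can be reproduced with a few lines of code. The sketch below is an illustration only (the helper names are ours, not the paper's): it degrades a [0, 1]-normalized image with additive white Gaussian noise of dispersion σ and computes the PSNR. For σ = 0.05 the noisy image comes out at roughly 26 dB and for σ = 0.1 at roughly 20 dB, which matches the "Initial image" entries above.

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=0):
    """Degrade a [0, 1]-normalized image with additive white Gaussian noise of std sigma."""
    rng = np.random.default_rng(seed)
    return np.clip(image + rng.normal(0.0, sigma, size=image.shape), 0.0, 1.0)

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio, in dB, between two images on the same intensity scale."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```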

References

  1. Mallat, S. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693. [Google Scholar] [CrossRef] [Green Version]
  2. Donoho, D.L.; Johnstone, I.M. Ideal spatial adaptation via wavelet shrinkage. Biometrika 1994, 81, 425–455. [Google Scholar] [CrossRef]
  3. Raboaca, M.S.; Dumitrescu, C.; Manta, I. Aircraft Trajectory Tracking Using Radar Equipment with Fuzzy Logic Algorithm. Mathematics 2020, 8, 207. [Google Scholar] [CrossRef] [Green Version]
  4. Donoho, D.L.; Johnstone, I.M. Adapting to unknown smoothness via wavelet shrinkage. JASA 1995, 90, 1200–1224. [Google Scholar] [CrossRef]
  5. Borsdorf, A.; Raupach, R.; Flohr, T.; Hornegger, J. Wavelet Based Noise Reduction in CT-Images Using Correlation Analysis. IEEE Trans. Med. Imaging 2008, 27, 1685–1703. [Google Scholar] [CrossRef] [PubMed]
  6. Bhadauria, H.S.; Dewal, M.L. Efficient Denoising Technique for CT images to Enhance Brain Hemorrhage Segmentation. Int. J. Digit. Imaging 2012, 25, 782–791. [Google Scholar] [CrossRef] [Green Version]
  7. Patil, J.; Jadhav, S. A Comparative Study of Image Denoising Techniques. IJIR Sci. Eng. Technol. 2013, 2, 787–794, ISSN 2319-875. [Google Scholar]
  8. Bindu, C.H.; Sumathi, K. Denoising of Images with Filtering and Thresholding. In Proceedings of the International Conference on Research in Engineering Computers and Technology, Thiruchy, India, 8–10 September 2016; pp. 142–146, ISBN 5978-81-908388-7-0. [Google Scholar]
  9. Luisier, F.; Blu, T. SURE-LET multichannel image denoising: Inter-scale orthonormal wavelet thresholding. IEEE Trans. Image Process. 2008, 17, 482–492. [Google Scholar]
  10. Motwani, M.; Gadiya, M.; Motwani, R.; Harris, F. Survey of Image Denoising Techniques. In Proceedings of the GSPX, Santa Clara, CA, USA, 27–30 September 2004; Available online: https://www.cse.unr.edu/~fredh/papers/conf/034-asoidt/paper.pdf (accessed on 17 April 2020).
  11. Xuejun, Y.; Zeulu, H. Adaptive spatial filtering for digital images. In Proceedings of the 9th International Conference on Pattern Recognition, Rome, Italy, 14–17 November 1988; Volume 2, pp. 811–813. [Google Scholar]
  12. Nagaoka, R.; Yamazaki, R.; Saijo, Y. Adaptive Spatial Filtering with Principal Component Analysis for Biomedical Photoacoustic Imaging. Phys. Procedia 2015, 70, 1161–1164. [Google Scholar] [CrossRef]
  13. Liefhold, C.; Grosse-Wentrup, M.; Gramann, K.; Buss, M. Comparison of Adaptive Spatial Filters with Heuristic and Optimized Region of Interest for EEG Based Brain-Computer-Interfaces. In Pattern Recognition; Hamprecht, F.A., Schnörr, C., Jähne, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4713, pp. 274–283. [Google Scholar]
  14. Ostlund, N.; Yu, J.; Roeleveld, K.; Karlsson, J.S. Adaptive spatial filtering of multichannel surface electromyogram signals. Med. Biol. Eng. Comput. 2004, 42, 825–831. [Google Scholar] [CrossRef]
  15. Zhang, X.; Li, X.; Tang, X.; Chen, X.; Chen, X.; Zhou, P. Spatial Filtering for Enhanced High-Density Surface Electromyographic Examination of Neuromuscular Changes and Its Application to Spinal Cord Injury. J. NeuroEng. Rehabil. 2020. [Google Scholar] [CrossRef]
  16. Bissmeyer, S.R.S.; Goldsworthy, R.L. Adaptive spatial filtering improves speech reception in noise while preserving binaural cues. J. Acoust. Soc. Am. 2017, 142, 1441. [Google Scholar] [CrossRef] [PubMed]
  17. Morin, A. Adaptive spatial filtering techniques for the detection of targets in infrared imaging seekers. In Proceedings of the AeroSense, Orlando, FL, USA, 24 April 2000; Volume 4025. [Google Scholar]
  18. Delisle-Rodriguez, D.; Villa-Parra, A.C.; Bastos-Filho, T.; Delis, A.L.; Neto, A.F.; Krishnan, S.; Rocon, E. Adaptive Spatial Filter Based on Similarity Indices to Preserve the Neural Information on EEG signals during On-Line Processing. Sensors 2017, 17, 2725. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Sekihara, K.; Nagarajan, S. Adaptive Spatial Filters for Electromagnetic Brain Imaging; Series in Biomedical Engineering; Springer: Berlin, Germany, 2008; ISSN 1864-5763. [Google Scholar]
  20. Saleem, S.M.A.; Razak, T.A. Survey on Color Image Enhancement Techniques using Spatial Filtering. Int. J. Comput. Appl. 2014, 94, 39–45. [Google Scholar]
  21. Yuksel, A.; Olmez, T. A neural Network-Based Optimal Spatial Filter Design Method for Motor Imagery Classification. PLoS ONE 2015, 10, e0125039. [Google Scholar] [CrossRef] [Green Version]
  22. Mourino, J.; Millan, J.D.R.; Cincotti, F.; Chiappa, S.; Jane, R. Spatial Filtering in the Training Process of a Brain Computer Interface. In Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Istanbul, Turkey, 25–28 October 2001; pp. 639–642. [Google Scholar]
  23. Cohen, M.X. Comparison of linear spatial filters for identifying oscillatory activity in multichannel data. J. Neurosci. Methods 2017, 278, 1–12. [Google Scholar] [CrossRef]
  24. McCord, M.J.; McCord, J.; Davis, P.T.; Haran, M.; Bidanset, P. House price estimation using an eigenvector spatial filtering approach. Int. J. Hous. Mark. Anal. 2019. ahead-of-print. [Google Scholar] [CrossRef]
  25. Metulini, R.; Patuelli, R.; Griffith, D. A Spatial-Filtering Zero-Inflated Approach to the Estimation of the Gravity Model of Trade. Econometrics 2018, 6, 9. [Google Scholar] [CrossRef] [Green Version]
  26. Patuelli, R.; Griffith, D.; Tiefelsdorf, M.; Nijkamp, P. Spatial Filtering and Eigenvector Stability: Space-Time Models for German Unemployment Data. Int. Reg. Sci. Rev. 2011, 34, 253–280. [Google Scholar] [CrossRef]
  27. Wu, F.; Li, C.; Li, X. Research of spatial filtering algorithms based on MATLAB. In AIP Conference Proceedings; AIP Publishing LLC: Melville, NY, USA, 2017; Volume 1890. [Google Scholar]
  28. Zhang, H.; Han, D.; Ji, K.; Ren, Z.; Xu, C.; Zhu, L.; Tan, H. Optimal Spatial Matrix Filter Design for Array Signal Preprocessing. J. Appl. Math. 2014, 2014, 1–8. [Google Scholar] [CrossRef]
  29. Tang, H.; Swatantran, A.; Barrett, T.; DeCola, P.; Dubayah, R. Voxel-Based Spatial Filtering Method for Canopy Height Retrieval from Airborne Single-Photon Lidar. Remote Sens. 2016, 8, 771. [Google Scholar] [CrossRef] [Green Version]
  30. Rajamani, A.; Krishnaveni, V. Survey on Spatial Filtering Techniques. Int. J. Sci. Res. 2014, 3, 153–156. [Google Scholar]
  31. Patuelli, R.; Griffith, D.; Tiefelsdorf, M.; Nijkamp, P. The use of Spatial Filtering Techniques: The Spatial and Space-Time Structure of German Unemployment Data. SSRN Electron. J. 2006. [Google Scholar] [CrossRef] [Green Version]
  32. Gorr, W.; Olligschlaeger, A. Weighted Spatial Adaptive Filtering: Monte Carlo Studies and Application to Illicit Drug Market Modeling. Geogr. Anal. 2010, 26, 67–87. [Google Scholar] [CrossRef]
  33. McFarland, D.; McCane, L.; David, S.; Wolpaw, J. Spatial filter selection for EEG-based communication. Electroencephalogr. Clin. Neurophysiol. 1997, 103, 386–394. [Google Scholar] [CrossRef]
  34. Ramoser, H.; Muller-Gerking, J.; Pfurtscheller, G. Optimal Spatial Filtering of Single Trial EEG during Imagined Hand Movement. IEEE Trans. Rehabil. Eng. 2000, 8, 441–446. [Google Scholar] [CrossRef] [Green Version]
  35. Roy, R.; Bonnet, S.; Charbonnier, S.; Jallon, P.; Campagne, A. A Comparison of ERP Spatial Filtering Methods for Optimal Mental Workload Estimation. In Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015. [Google Scholar]
  36. Al-Khayyat, A.N.M. Accelerating the Frequency Dependent Finite-Difference Time-Domain Method Using the Spatial Filtering and Parallel Computing Techniques. Ph.D. Thesis, University of Manchester, Manchester, UK, 2018. [Google Scholar]
  37. Elseid, A.A.G.; Elmanna, M.E.; Hamza, A.O. Evaluation of Spatial Filtering Techniques in Retinal Fundus Images. Am. J. Artif. Intell. 2018, 2, 16–21. [Google Scholar]
  38. Roy, V.; Shukla, S. Spatial and Transform Domain Filtering Method for Image De-noising: A Review. Int. J. Mod. Educ. Comput. Sci. 2013, 7, 41–49. [Google Scholar] [CrossRef] [Green Version]
  39. Saa, J.; Christen, A.; Martin, S.; Pasley, B.N.; Knight, R.T.; Giraud, A.-L. Using Coherence-based spectro-spatial filters for stimulus features prediction from electro-corticographic recordings. Sci. Rep. 2020, 10, 7637. [Google Scholar]
  40. Heuvel, J.H.C.; Cabric, D. Spatial filtering approach for dynamic range reduction in cognitive radios. In Proceedings of the 21st Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Istanbul, Turkey, 26–30 September 2010; Available online: https://ieeexplore.ieee.org/document/5671790 (accessed on 4 March 2020).
  41. Reiss, P.T.; Huo, L.; Zhao, Y.; Kelly, C.; Ogden, R.T. Wavelet-Domain Regression and Predictive Inference in Psychiatric Neuroimaging. Ann. Appl. Stat. 2015, 9, 1076–1101. [Google Scholar] [CrossRef]
  42. Xu, Y.; Weaver, J.; Healy, D.; Lu, J. Wavelet Transform Domain Filters: A Spatially Selective Noise Filtration Technique. IEEE Trans. Image Process. 1994, 3, 747–758. [Google Scholar] [PubMed] [Green Version]
  43. Bhowmik, D.; Abhayaratne, C. Embedding Distortion Analysis in Wavelet-domain Watermarking. ACM Trans. Multimed. Comput. Commun. Appl. 2019, 15, 108. [Google Scholar] [CrossRef] [Green Version]
  44. Nagarjuna Venkat, P.; Bhaskar, L.; Ramachandra Reddy, B. Non-decimated wavelet domain based robust blind digital image watermarking scheme using singular value decomposition. In Proceedings of the 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 11–12 February 2016; pp. 383–393. Available online: https://ieeexplore.ieee.org/document/7566725 (accessed on 18 April 2020).
  45. Kumar, N.; Verma, R.; Sethi, A. Convolutional Neural Networks for Wavelet Domain Super Resolution. Pattern Recognit. Lett. 2017, 90, 65–71. [Google Scholar] [CrossRef]
  46. Chen, Y.; Huang, S.; Pickwell-MacPherson, E. Frequency Wavelet domain deconvolution for terahertz reflection imaging and spectroscopy. Opt. Express 2010, 18, 1177–1190. [Google Scholar] [CrossRef]
  47. Zhang, X.; Zhang, G.; Yu, Y.; Pan, G.; Deng, H.; Shi, X.; Jiao, Y.; Wu, R.; Chen, Y.; Zhang, G. Multiscale Products in B-spline Wavelet Domain: A new method for Short Exon Detection. Curr. Bioinform. 2018, 13, 553–563. [Google Scholar] [CrossRef]
  48. Hill, P.; Kim, J.H.; Basarab, A.; Kouame, D.; Bull, D.R.; Achim, A. Compressive imaging using approximate message passing and a Cauchy prior in the wavelet domain. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP 2016), Phoenix, AZ, USA, 25–28 September 2016. [Google Scholar]
  49. Alfaouri, M.; Daqrouq, K. EEG Signal Denoising by Wavelet Transform Thresholding. Am. J. Appl. Sci. 2008, 5, 276–281. [Google Scholar]
  50. Purisha, Z.; Rimpelainen, J.; Bubba, T.; Siltanen, S. Controlled wavelet domain sparsity for x-ray tomography. Meas. Sci. Technol. 2017, 29, 014002. [Google Scholar] [CrossRef] [Green Version]
  51. Ria, F.; Davis, J.T.; Solomon, J.B.; Wilson, J.M.; Smith, T.B.; Frush, D.P.; Samei, E. Expanding the concept of Diagnostic Reference Levels to Noise and Dose Reference Levels in CT. AJR Am. J. Roentgenol. 2019, 213, 889–894. [Google Scholar] [CrossRef]
  52. Christianson, O.; Winslow, J.; Frush, D.P.; Samei, E. Automated Technique to Measure Noise in Clinical CT Examinations. AJR Am. J. Roentgenol. 2015, 205, W93–W99. [Google Scholar] [CrossRef]
  53. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  54. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. IEEE Comput. Vis. Pattern Recognit. 2005, 2, 60–65. [Google Scholar]
  55. Gopi Krishna, S.; Sreenivasulu Reedy, T.; Rajini, G.K. Removal of High-Density Salt and Pepper Noise through Modified Decision Based Unsymmetric Trimmed Median Filter. Int. J. Eng. Res. Appl. 2012, 2, 90–94. [Google Scholar]
  56. Mondal, A.K.; Dolz, J.; Desrosiers, C. Few-shot 3D multi-modal medical image segmentation using generative adversarial learning. arXiv 2018, arXiv:1810.12241. [Google Scholar]
  57. Wang, Z.; Ziou, D.; Armenakis, C.; Li, D.; Li, Q. A comparative analysis of image fusion methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1391–1402. [Google Scholar] [CrossRef]
  58. Sharmila, K.; Rajkumar, S.; Vijayarajan, V. Hybrid method for multimodality medical image fusion using Discrete Wavelet Transform and Entropy concepts with quantitative analysis. In Proceedings of the International Conference on Communications and Signal Processing (ICCSP), Melmaruvathur, India, 3–5 April 2013; pp. 489–493. [Google Scholar]
  59. Gupta, S.; Rajkumar, S.; Vijayarajan, V.; Marimuthu, K. Quantitative Analysis of various Image Fusion techniques based on various metrics using different Multimodality Medical Images. Int. J. Eng. Technol. 2013, 5, 133–141. [Google Scholar]
  60. Mocanu, D.A.; Badescu, V.; Bucur, C.; Stefan, I.; Carcadea, E.; Raboaca, M.S.; Manta, I. PLC Automation and Control Strategy in a Stirling Solar Power System. Energies 2020, 13, 1917. [Google Scholar] [CrossRef] [Green Version]
  61. Raboaca, M.S.; Felseghi, R.A. Energy Efficient Stationary Application Supplied with Solar-Wind Hybrid Energy. In Proceedings of the 2019 International Conference on Energy and Environment (CIEM), Timisoara, Romania, 17–18 October 2019; pp. 495–499. [Google Scholar]
  62. Răboacă, M.S.; Băncescu, I.; Preda, V.; Bizon, N. Optimization Model for the Temporary Locations of Mobile Charging Stations. Mathematics 2020, 8, 453. [Google Scholar] [CrossRef] [Green Version]
  63. Raboaca, M.S.; Filote, C.; Bancescu, I.; Iliescu, M.; Culcer, M.; Carlea, F.; Lavric, A.; Manta, I. Simulation of A Mobile Charging Station Operational Mode Based On Ramnicu Valcea Area. Prog. Cryog. Isot. Sep. 2019, 22, 45–54. [Google Scholar]
  64. Jain, A.K. Fundamentals of Digital Image Processing; Prentice-Hall: Englewood Cliffs, NJ, USA, 1989. [Google Scholar]
  65. Sahu, S.; Singh, A.K.; Ghrera, S.P.; Elhoseny, M. An approach for de-noising and contrast enhancement of retinal fundus image using CLAHE. Opt. Laser Technol. 2018, 110, 87–98. [Google Scholar]
  66. Romberg, J.; Choi, H.; Baraniuk, R.G. Bayesian wavelet domain image modeling using hidden Markov models. IEEE Trans. Image Process. 2001, 10, 1056–1068. [Google Scholar] [CrossRef] [Green Version]
  67. Hamza, A.B.; Luque-Escamilla, P.L.; Martínez-Aroza, J.; Román-Roldán, R. Removing Noise and Preserving Details with Relaxed Median Filters. J. Math. Imaging Vis. 1999, 11, 161–177. [Google Scholar] [CrossRef]
  68. Yang, R.; Yin, L.; Gabbouj, M.; Astola, J.; Neuvo, Y. Optimal weighted median filters under structural constraints. IEEE Trans. Signal Process. 1995, 43, 591–604. [Google Scholar] [CrossRef] [Green Version]
  69. Chang, S.G.; Yu, B.; Vetterli, M. Multiple Copy Image Denoising via Wavelet Thresholding. IEEE Trans. Image Process. 2000, 9, 1631–1635. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Liu, J.; Moulin, P. Image Denoising Based on Scale-Space Mixture Modeling of Wavelet Coefficients. In Proceedings of the IEEE International Conference on Image Processing, Kobe, Japan, 24–28 October 1999. [Google Scholar]
  71. Chang, S.G.; Cvetkovic, Z.; Vetterli, M. Resolution Enhancement of Images Using Wavelet transform Extrema Extrapolation. In Proceedings of the 1995 International Conference on Acoustics, Speech, and Signal Processing, Detroit, MI, USA, 9–12 May 1995; Volume 4, pp. 2379–2382. [Google Scholar]
  72. Sima, S.; Singh, H.V.; Kumar, B.; Singh, A.K. A Bayesian multiresolution approach for noise removal in medical magnetic resonance images. J. Intell. Syst. 2018, 29, 198–201. [Google Scholar]
  73. Prakash, C.; Rajkumar, S.; Chandramouli, P.V.S.S.R. Medical Image Fusion based on Redundancy DWT and Mamdani type min sum mean-of-max techniques with Quantitative Analysis. In Proceedings of the 2012 International Conference on Recent Advances in Computing and Software Systems, Chennai, India, 25–27 April 2012; pp. 54–59. [Google Scholar]
  74. Yazdani, S.; Yusof, R.; Karimian, A.; Pashna, M.; Hematian, A. Image segmentation methods and applications in MRI brain images. IETE Tech. Rev. 2015, 32, 413–427. [Google Scholar] [CrossRef]
  75. Jain, P.; Tyagi, V. LAPB: Locally adaptive patch-based wavelet domain edge-preserving image denoising. J. Inform. Sci. 2015, 294, 164–181. [Google Scholar] [CrossRef]
  76. Sheng, Y. Wavelet Transform. In The Transforms and Applications Handbook, 2nd ed.; CRC Press LLC: Boca Raton, FL, USA, 2000. [Google Scholar]
Figure 1. Wavelet decomposition (blue—lowest resolution sub-band is not processed; red—low resolution sub-bands, corresponding to wavelet decomposition level 3, are processed by soft truncation; green, white—detail sub-bands, corresponding to wavelet decomposition levels 2 and 1, are processed based on the significance map).
Figure 2. Block diagram of the proposed algorithm.
Figure 3. Hierarchical correlation maps obtained by the method proposed by Y. Li and C. Moloney for the House image, 256 × 256 pixels. (a) Hierarchical correlation map of the original image, not degraded by noise. (b) Hierarchical correlation map of the image degraded with additive white Gaussian noise of dispersion σ = 0.05 (PSNR = 26.00 dB). (c) Hierarchical correlation map obtained by processing the map from (b) with a median-hybrid filter.
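
For readers who want to experiment with such maps, the sketch below builds a simple multiscale-product map from an undecimated wavelet transform and cleans it with a plain median filter. It is only a rough stand-in for the Li–Moloney construction and for the median-hybrid filtering used in panel (c); the function names and the choice of the Haar wavelet are our own assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import median_filter

def hierarchical_correlation_map(image, wavelet="haar", levels=2):
    """Multiply detail-coefficient magnitudes across the scales of a stationary
    (undecimated) wavelet transform: contours persist across scales and
    reinforce each other, while noise does not.
    Note: pywt.swt2 requires image sides divisible by 2**levels."""
    coeffs = pywt.swt2(image, wavelet, level=levels)
    corr = np.ones_like(image, dtype=np.float64)
    for _, (cH, cV, cD) in coeffs:
        corr *= np.abs(cH) + np.abs(cV) + np.abs(cD)
    return corr / (corr.max() + 1e-12)  # normalize to [0, 1]

# A plain 3 x 3 median filter stands in for the median-hybrid correction of panel (c):
# cleaned_map = median_filter(hierarchical_correlation_map(noisy_image), size=3)
```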
Figure 4. PSNR and C dependence on the threshold value and noise dispersion (according to data presented in Appendix A).
Figure 5. Images processed by the translation-invariant approach of the proposed algorithm in the two cases considered. Sixteen circular shifts were used. (a) Image with additive white Gaussian noise. (b) Image obtained by averaging the images resulting from shifting the input image and processing each shift with the first two stages of the proposed algorithm. (c) Image obtained by applying the third stage of the proposed algorithm (correction based on the contour map) to the image from (b) (final result, case B). (d) Image obtained by applying the translation-invariant form of the proposed algorithm in full (final result, case A).
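
The translation-invariant processing of Figure 5 amounts to cycle spinning: the noisy image is shifted circularly, filtered, shifted back, and the results are averaged. A minimal sketch, assuming `denoise` stands for the first two stages of the algorithm (the shift set and the function names are illustrative):

```python
import numpy as np

def cycle_spin(image, denoise, shifts):
    """Translation-invariant filtering: shift, denoise, unshift, average."""
    acc = np.zeros_like(image, dtype=np.float64)
    for dy, dx in shifts:
        shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
        acc += np.roll(denoise(shifted), shift=(-dy, -dx), axis=(0, 1))
    return acc / len(shifts)

# Sixteen circular shifts as in the caption, e.g. a 4 x 4 grid of displacements:
# shifts = [(dy, dx) for dy in range(4) for dx in range(4)]
```

Case A in the text corresponds to running the whole algorithm (including the contour-map correction) inside `denoise`, while case B applies the correction once, on the averaged result.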
Figure 6. (a) Initial, low contrast image. (b) Scaled image. (c) Enhanced image.
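
As a hedged illustration of the "scaled image" step in Figure 6 only (not the enhancement method of the paper), a percentile-based linear stretch maps the useful intensity range of a low-contrast image onto the full [0, 1] scale:

```python
import numpy as np

def stretch_contrast(image, low_pct=1.0, high_pct=99.0):
    """Linear contrast stretch: map the [low, high] percentile range onto [0, 1]."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    return np.clip((np.asarray(image, float) - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
```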
Figure 7. Noise reduction from a computed tomography (CT) image. (a) Input noisy image. (b) Output image, using the Daubechies scale filter of order 2 (db2). (c) Output image, using the Daubechies scale filter of order 4 (db4). (d) Output image, using the Haar wavelet [76].
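
The kind of processing shown in Figures 7–11 can be sketched with PyWavelets. The snippet below soft-thresholds the detail sub-bands of a three-level decomposition and leaves the lowest-resolution sub-band untouched, as in Figure 1; it uses the universal threshold for simplicity rather than the locally adaptive threshold of the proposed method, and the `wavelet` argument selects db2, db4 or haar as in the figures.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db2", level=3, sigma=0.05):
    """Soft truncation of the detail sub-bands with a global (universal) threshold."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    t = sigma * np.sqrt(2.0 * np.log(image.size))   # universal threshold
    out = [coeffs[0]]                               # lowest-resolution sub-band kept as-is
    for cH, cV, cD in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, t, mode="soft") for c in (cH, cV, cD)))
    return pywt.waverec2(out, wavelet)

# e.g. wavelet_denoise(noisy_ct, "db4") or wavelet_denoise(noisy_ct, "haar")
```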
Figure 8. Noise reduction from a CT image, using Daubechies scale filter order 2 (db2) [76].
Figure 9. Noise reduction from a CT image, using Daubechies scale filter order 4 (db4) [76].
Figure 10. Noise reduction from a CT image, using Haar wavelet [76].
Figure 11. Noise reduction from a CT image, using Daubechies scale filter order 4 (db4) [76].
Table 1. The input images used in the algorithm and their peak signal to noise ratios (PSNR) and Sigma values.
Original Image | PSNR (dB) | Sigma (σ)
Boat 256 × 256 | 26.07 | 0.05
Calendar 256 × 256 | 26.00 | 0.05
Wheel 256 × 256 | 26.22 | 0.05
Aerial 256 × 256 | 26.04 | 0.05
Camera 256 × 256 | 26.20 | 0.05
Goldhill 256 × 256 | 26.03 | 0.05
Peppers 256 × 256 | 26.04 | 0.05
Bridge 256 × 256 | 20.13 | 0.1
Lena 256 × 256 | 26.00 | 0.05
House 256 × 256 | 26.05 | 0.05
Boat 512 × 512 | 20.18 | 0.1
Bridge 512 × 512 | 26.09 | 0.05
Einstein 512 × 512 | 26.07 | 0.05
Lena 512 × 512 | 20.06 | 0.1
CT image (Figure 6) 256 × 256 | 26.01 | 0.05
CT image (Figure 7) 256 × 256 | 20.15 | 0.1
CT image (Figure 8) 256 × 256 | 20.13 | 0.1
CT image (Figure 9) 256 × 256 | 26.09 | 0.05
CT image (Figure 10) 256 × 256 | 20.18 | 0.1
CT image (Figure 11) 256 × 256 | 26.15 | 0.05
Table 2. Comparative results obtained by adaptive soft truncation with contextual modeling and by applying the first two steps of the proposed method, when only the detail sub-bands corresponding to the first scale of a wavelet transform are processed. The side of the analysis window is 2L + 1.
L | Adaptive Soft Truncation with Contextual Modeling (Chang): PSNR (dB) | C (%) | Simplified Algorithm Based on Local Statistics: PSNR (dB) | C (%)
3 | 28.98 | 64.74 | 30.00 | 67.14
5 | 28.99 | 65.31 | 29.99 | 67.08
10 | 29.02 | 64.74 | 29.90 | 66.82
20 | 29.21 | 63.85 | 29.81 | 65.73
50 | 29.31 | 64.53 | 29.76 | 66.20
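
The "simplified algorithm based on local statistics" column corresponds to soft truncation with a threshold adapted to each coefficient's neighbourhood. Below is a sketch of the idea, using a BayesShrink-style threshold estimated in a (2L + 1) × (2L + 1) window; the exact estimator used in the paper may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_soft_threshold(band, noise_sigma, L=3):
    """Soft truncation of one detail sub-band with a locally adaptive threshold.
    The signal variance is estimated in a (2L + 1) x (2L + 1) window and the
    threshold noise_var / signal_std is applied pointwise."""
    size = 2 * L + 1
    local_energy = uniform_filter(np.asarray(band, float) ** 2, size=size)
    signal_std = np.sqrt(np.maximum(local_energy - noise_sigma ** 2, 1e-12))
    threshold = (noise_sigma ** 2) / signal_std
    return np.sign(band) * np.maximum(np.abs(band) - threshold, 0.0)
```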
Table 3. Dependence of the results obtained in the first stage of the proposed algorithm on the parameters n and L (analysis window of side 2L + 1). In all cases, better results are obtained when using local statistics compared to the optimal threshold, for both PSNR and C (improvement of the contours). Entries give PSNR (dB)/C (%).
Image | σ | Initial | Soft Truncation, Optimal Threshold | Local Statistics n = 3, L = 3 | n = 3, L = 5 | n = 3, L = 7 | n = 4, L = 3 | n = 4, L = 5
Lena | 0.05 | 26.00/63.94 | 29.74/61.96 | 30.27/65.84 | 30.18/65.51 | 30.08/65.75 | 30.27/65.67 | 30.18/65.48
Lena | 0.1 | 20.06/48.10 | 26.13/44.65 | 26.59/46.74 | 26.57/45.66 | 26.49/46.38 | 26.01/46.78 | 26.59/45.80
Bridge | 0.05 | 26.09/63.59 | 27.49/62.67 | 27.65/62.56 | 27.65/62.74 | 27.65/62.76 | 27.65/62.36 | 27.65/62.86
Bridge | 0.1 | 20.13/48.07 | 23.38/44.76 | 23.45/43.75 | 23.48/44.03 | 23.49/43.87 | 23.45/43.99 | 23.48/43.96
House | 0.05 | 26.05/65.47 | 31.17/62.89 | 31.72/66.93 | 31.57/66.59 | 31.42/66.93 | 31.73/67.03 | 31.58/66.51
House | 0.1 | 20.18/49.04 | 27.56/51.59 | 28.07/53.78 | 27.98/52.55 | 27.80/52.66 | 28.12/53.78 | 28.02/52.81
Boat | 0.05 | 26.03/64.03 | 30.44/62.72 | 30.87/65.06 | 30.74/64.75 | 30.66/65.22 | 30.87/65.00 | 30.75/64.92
Boat | 0.1 | 20.03/47.09 | 26.83/46.12 | 27.10/49.53 | 27.08/48.16 | 27.00/47.75 | 27.14/49.92 | 27.11/49.22
Peppers | 0.05 | 26.04/63.50 | 30.08/60.78 | 30.44/62.36 | 30.32/63.44 | 30.22/63.36 | 30.43/62.40 | 30.32/63.40
Peppers | 0.1 | 20.16/50.88 | 26.51/48.65 | 26.78/49.28 | 26.73/49.28 | 26.66/48.47 | 26.79/49.30 | 26.74/49.12
Table 4. Dependence of the results obtained by applying the proposed filtering algorithm on the number of wavelet decomposition levels N. The test images were 256 × 256 pixels and were degraded with white Gaussian additive noise. The threshold values used to obtain the contour map from the hierarchical correlation map are t_c = 0.1 for σ = 0.05 and t_c = 0.15 for σ = 0.1. Entries give PSNR (dB)/C (%).
Image (initial values) | Processing Stage | N = 3 | N = 4
Lena, σ = 0.05 (PSNR = 26.00 dB, C = 65.84%) | After the second stage | 30.27/65.84 | 30.27/65.67
Lena, σ = 0.05 | Using the correlation map | 30.19/66.09 | 30.19/66.31
Lena, σ = 0.05 | Filtered correlation map | 30.22/65.98 | 30.23/66.23
Lena, σ = 0.1 (PSNR = 20.06 dB, C = 48.10%) | After the second stage | 26.59/46.74 | 26.61/46.78
Lena, σ = 0.1 | Using the correlation map | 26.29/48.17 | 26.30/47.99
Lena, σ = 0.1 | Filtered correlation map | 26.50/48.20 | 26.53/48.55
Bridge, σ = 0.05 (PSNR = 26.03 dB, C = 62.96%) | After the second stage | 27.59/61.26 | 27.59/61.15
Bridge, σ = 0.05 | Using the correlation map | 27.54/62.83 | 27.53/62.96
Bridge, σ = 0.05 | Filtered correlation map | 27.56/62.49 | 27.55/62.53
Bridge, σ = 0.1 (PSNR = 20.19 dB, C = 47.44%) | After the second stage | 23.48/44.66 | 23.49/44.81
Bridge, σ = 0.1 | Using the correlation map | 23.38/44.82 | 23.38/44.97
Bridge, σ = 0.1 | Filtered correlation map | 23.46/44.49 | 23.47/44.76
Aerial, σ = 0.05 (PSNR = 26.06 dB, C = 70.14%) | After the second stage | 28.15/68.36 | 28.15/68.39
Aerial, σ = 0.05 | Using the correlation map | 28.10/69.55 | 28.10/69.37
Aerial, σ = 0.05 | Filtered correlation map | 28.13/69.39 | 28.12/69.29
Aerial, σ = 0.1 (PSNR = 20.15 dB, C = 54.39%) | After the second stage | 24.21/49.91 | 24.21/49.86
Aerial, σ = 0.1 | Using the correlation map | 24.07/52.75 | 24.06/53.13
Aerial, σ = 0.1 | Filtered correlation map | 24.17/52.02 | 24.16/52.68
Boat, σ = 0.05 (PSNR = 26.03 dB, C = 64.53%) | After the second stage | 30.88/64.61 | 30.89/64.64
Boat, σ = 0.05 | Using the correlation map | 30.80/65.22 | 30.81/65.24
Boat, σ = 0.05 | Filtered correlation map | 30.84/64.75 | 30.85/64.92
Boat, σ = 0.1 (PSNR = 20.07 dB, C = 47.50%) | After the second stage | 27.07/47.54 | 27.09/47.58
Boat, σ = 0.1 | Using the correlation map | 26.72/48.84 | 26.72/49.43
Boat, σ = 0.1 | Filtered correlation map | 26.96/48.16 | 26.99/48.76
House, σ = 0.05 (PSNR = 26.05 dB, C = 63.96%) | After the second stage | 31.71/65.99 | 31.71/69.91
House, σ = 0.05 | Using the correlation map | 31.64/67.16 | 31.67/67.24
House, σ = 0.05 | Filtered correlation map | 31.67/66.82 | 31.70/66.95
House, σ = 0.1 (PSNR = 20.10 dB, C = 49.66%) | After the second stage | 28.03/53.28 | 28.07/53.26
House, σ = 0.1 | Using the correlation map | 27.66/53.85 | 27.69/54.32
House, σ = 0.1 | Filtered correlation map | 27.98/53.26 | 28.05/53.49
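
The correction stages in Table 4 rely on a contour (significance) map derived from the hierarchical correlation map. A minimal sketch of that step, using a plain median filter as a stand-in for the map filtering and the thresholds t_c quoted in the caption (the function name and the 3 × 3 window are illustrative choices):

```python
import numpy as np
from scipy.ndimage import median_filter

def contour_map(correlation_map, t_c=0.1, filter_map=True):
    """Binary contour map: threshold the normalized correlation map at t_c
    (0.1 for sigma = 0.05, 0.15 for sigma = 0.1 in Table 4), optionally after
    a 3 x 3 median filtering of the map ('filtered correlation map' rows)."""
    m = median_filter(correlation_map, size=3) if filter_map else correlation_map
    return m >= t_c
```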
Table 5. Application of the proposed algorithm in its translation-invariant form, in the two approaches discussed. Four circular shifts were used. The test images were 256 × 256 pixels and were degraded with white Gaussian additive noise. Entries give PSNR (dB)/C (%). Method A: integral application of the algorithm in translation-invariant form. Method B: partial application of the algorithm in translation-invariant form, followed by the correction applied to this result.
Image | σ | Initial | Method A | Method B: Average of the Partial Shift Processing | Method B: After the Correction Using the Contour Map
Lena | 0.05 | 26.00/63.94 | 31.07/66.70 | 31.14/66.47 | 30.96/66.83
Lena | 0.1 | 20.08/47.64 | 27.55/50.77 | 27.59/48.69 | 27.35/51.02
Bridge | 0.05 | 26.07/63.73 | 28.08/63.89 | 28.14/63.03 | 28.01/63.84
Bridge | 0.1 | 20.17/48.02 | 24.17/46.44 | 24.17/45.25 | 24.03/46.66
Aerial | 0.05 | 26.01/69.43 | 29.00/70.50 | 29.09/69.46 | 28.88/70.55
Aerial | 0.1 | 20.10/53.90 | 25.27/56.47 | 25.32/54.23 | 25.02/56.07
Boat | 0.05 | 26.03/64.12 | 31.97/67.28 | 32.03/67.18 | 31.82/67.18
Boat | 0.1 | 20.03/47.58 | 28.19/51.26 | 28.25/50.15 | 27.94/51.12
House | 0.05 | 26.05/65.47 | 32.68/68.70 | 32.75/68.28 | 32.54/68.41
House | 0.1 | 20.13/55.13 | 29.16/56.95 | 29.21/55.13 | 28.91/56.28
Table 6. Comparison between the correction methods based on the hierarchical correlation map. The test images are 256 × 256 pixels and have been degraded with white Gaussian additive noise. Entries give PSNR (dB)/C (%).
Image | σ | Initial | Switched Type Correction | Continuous Type Correction
Lena | 0.05 | 26.00/63.94 | 30.19/66.38 | 30.26/66.07
Lena | 0.1 | 20.06/48.10 | 25.47/48.55 | 26.58/47.05
Bridge | 0.05 | 26.03/62.96 | 27.54/62.74 | 27.59/61.67
Bridge | 0.1 | 20.19/47.44 | 23.17/46.32 | 23.52/44.30
House | 0.05 | 26.05/65.47 | 31.66/67.55 | 31.72/67.40
House | 0.1 | 20.14/50.96 | 26.49/55.08 | 28.00/53.59
Camera | 0.05 | 26.24/66.29 | 30.73/65.94 | 30.82/65.72
Camera | 0.1 | 20.45/51.59 | 25.86/51.28 | 26.70/50.49
Boat | 0.05 | 26.03/64.03 | 30.80/65.84 | 30.86/65.66
Boat | 0.1 | 20.00/45.78 | 25.63/48.92 | 27.00/47.27
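
The two correction modes compared in Table 6 can be read as follows: the switched correction picks, at each pixel, either the strongly filtered estimate or the detail-preserving one according to the binary contour map, while the continuous correction blends the two with a weight derived from the correlation map. The sketch below is one plausible formulation under that reading, not the paper's exact formulas.

```python
import numpy as np

def switched_correction(filtered, detail_preserving, contour_mask):
    """Hard switch: keep the detail-preserving estimate on contours,
    the filtered estimate elsewhere."""
    return np.where(contour_mask, detail_preserving, filtered)

def continuous_correction(filtered, detail_preserving, correlation_map):
    """Soft blend: weight the two estimates by the normalized correlation map."""
    w = np.clip(correlation_map, 0.0, 1.0)
    return w * detail_preserving + (1.0 - w) * filtered
```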
