1. Introduction
Anti-forgery techniques are a range of measures designed to protect businesses and consumers from unauthorized counterfeit products [1]. With the increasing sophistication of counterfeiting techniques, digital image-based anti-counterfeiting methods have attracted widespread attention. Quick response (QR) codes have emerged as a particularly effective medium, leveraging their high-density data encoding capacity and machine-readable efficiency. Recent advancements have expanded QR code functionality through multiple security-enhanced implementations: physically unclonable functions (PUFs) [2,3], watermarks [4], copy detection patterns (CDPs) [5], and patch-wise color patterns.
Figure 1 illustrates representative implementations of security patterns. However, in the process of capturing patterns, factors such as shooting habits, equipment, and environmental conditions often result in blurred images. This operational reality necessitates robust deblurring frameworks to ensure reliability. Current research developments in image deblurring can be broadly classified into the following categories:
(1) Model-based methods: Pan et al. [6] proposed an L0 regularization method specifically for the text image deblurring problem. This method preserves the intensity and gradient information of the image by minimizing the L0 norm and is particularly suitable for text images with clear edges. However, L0 regularization can increase computational complexity. Anwar et al. [7] proposed an image deblurring method with class-specific prior knowledge for different types of content. This method provides accurate results for specific image classes but requires a large amount of annotated data and in-depth knowledge of each class. Moreover, it may be ineffective for images with mixed or unspecified classes. Bai et al. [8] proposed a single-image blind deblurring method that uses a multi-scale latent structure prior to help estimate the blur kernel and recover the image. It handles complex blur situations well, but the method is complicated and requires many parameter adjustments. Bai et al. [9] required only a single blurred image to construct a graph model that characterizes the relationship between image content and the blur process, without additional information. This method can handle complex structures and blur situations; nevertheless, it is sensitive to noise and computationally complex. Wen et al. [10] proposed a simple blind deblurring method based on a local minimum pixel intensity prior. This method uses the local minimum pixel intensity to estimate the blur kernel. It is easy to implement and provides reliable kernel estimation; however, its performance may degrade in complex scenes or for specific blur types, since it relies on the accuracy and universality of the prior. Zhang et al. [11] explored the iterative optimization process and put forward a pixel screening method designed to correct intermediate images and filter out bad pixels, thereby facilitating more precise kernel estimation. Alexander et al. [12] addressed a semi-blind image deblurring problem. By applying the proposed iterative schemes, they aimed to recover a clear image from a noisy, blurred one. Their motion deblurring schemes can compete with modern non-blind methods, despite using less information; moreover, the proposed schemes are also applicable to inverting nonlinear filters and can rival state-of-the-art black-box defiltering methods. Shao et al. [13] rethought the regularization term in the blind image deblurring problem and proposed a more realistic regularization method that models image characteristics more accurately. However, the new regularization term may require a complex optimization strategy, which increases computational complexity and time cost. Cai et al. [14] proposed a prior constraint founded on maximizing local high-frequency wavelet coefficients, incorporated into a graph-based blind deblurring model. To handle the non-convex, nonlinear model, an alternating iteration method is utilized, combined with a simple thresholding technique. Huang et al. [15] put forward a differential Lorentzian point spread function model, which can represent the line spread function of a real image as a series of Lorentzian functions with variable pulse widths and combinations of their differential functions. An algorithm for estimating the model parameters, based on an improved zeroing filter, was then presented.
(2) Deep-learning-based methods: Ren et al. [16] proposed a spatially varying neural network composed of a recurrent neural network (RNN) and three deep convolutional neural networks (CNNs). The RNN functions as a deconvolution operator on the feature maps derived from the input image. Two CNNs are utilized for feature extraction and for generating pixel-level weights for the spatially varying RNN. During the image reconstruction phase, a CNN is employed to estimate the final deblurred image. Bono et al. [17] noted that the basic RNN unit has difficulty capturing long-term temporal dependencies due to the vanishing gradient problem. Both Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU) are variants of the simple recurrent unit and can effectively address time-related issues. Ma et al. [18] proposed a deblurring network with defocus map estimation as an auxiliary task. This network improves the overall processing capability through dual-task learning. However, the extra task increases the computational burden and may affect the deblurring results if the auxiliary estimation is inaccurate. Que et al. [19] proposed a lightweight dynamic deblurring solution for IoT smart cameras that is suitable for resource-constrained environments; nevertheless, it may require further optimization in high-complexity scenarios. Wang et al. [20] proposed a special GAN architecture with multiple residual learning modules. This architecture effectively handles multiple blur situations by introducing residual structures, but it requires a large amount of training data for optimization and may not be effective under extreme blur conditions. Li et al. [21] proposed an image deblurring method that exploits the blur itself to make the network focus on hard-to-recover high-frequency details; it performs well under certain blur types but requires specific tuning for different blur types. Sharif et al. [22] proposed a deblurring method for single-frame images in low-light conditions that solves the blur problem at night or in low-light environments, but further optimization is needed for dynamic blur or complex scenes. Mao et al. [23] introduced a highly efficient single-image blind deblurring technique based on deep idempotent networks. This technique maintains processing stability through idempotent constraints, does not require auxiliary information, and has strong generalization ability; however, it has high complexity and requires careful hyperparameter tuning. Liang et al. [24] seamlessly encoded both spatial details and contextual information, decoupling the intricate deblurring task into two distinct subtasks: a primary feature encoder and a contextual encoder. The combined outputs of these encoders are then merged by a decoder to reconstruct the sharp target image. Quan et al. [25] utilized a pixel-wise Gaussian kernel mixture model to precisely and concisely parameterize spatially varying defocus point spread functions. Feng et al. [26] made use of alternating shrinkage-threshold iterations, which include updating the blur kernels and images, and put forward a learnable blur kernel proximal mapping module to enhance the accuracy of blur kernel reconstruction. This module also integrates an attention mechanism to adaptively learn the significance of prior information. Zhuang et al. [27] put forward a multi-domain adaptive deblurring network consisting of two modules: a domain adaptation module, which makes use of domain-invariant features to ensure stable performance across multiple domains, and a meta deblurring module, which utilizes an auxiliary branch to improve the deblurring capacity.
(3) Security pattern deblurring: Given that methods formulated for natural images typically do not generalize well to patterns, researchers have devised several effective priors grounded in the characteristics of QR codes. For example, Shi et al. [28] put forward a line search method grounded in binary property metrics, with the aim of reducing the time consumed in searching for the optimal blur kernel. Nevertheless, this approach is solely applicable to linear blur in conveyor belt scenarios, so its applicability is severely restricted. Rioux et al. [29] proposed a blind barcode deblurring method based on the Kullback–Leibler divergence. The method recovers the code by minimizing the KL divergence between the blurred barcode and the clear barcode, making it particularly suitable for motion blur. However, it may require significant computational resources, and its performance may degrade when dealing with low-quality or extremely blurred barcodes. Chen et al. [30] introduced a fast QR code recovery method for blur caused by inaccurate focusing. The method incorporates edge prior information into the code sensing stage, resulting in faster recovery and improved quality. However, the accuracy depends on the prior information and parameter settings; improper settings may negatively impact the recovery, particularly in cases of complex blurring or noise interference. Chen et al. [31] presented a method that directly estimates image sharpness based on image gradient features and can halt multi-scale iterative estimation at a scale enabling QR code recognition. However, in practical applications where the scene changes frequently, its implementation proves challenging. Li et al. [32] extracted the key features of blurred QR codes, binarized them using an improved adaptive thresholding method, and finally decoded them. This method is effective against blur caused by rapid camera or object movement, with a recognition rate and computational efficiency able to meet real-time demands; however, further optimization may be necessary for complex blurring patterns. Zheng et al. [33] proposed a blind deblurring method for anti-counterfeit 2D codes that combines intensity and gradient priors. This method is specifically designed for security QR codes with specific localization patterns, and it can accurately recover blurred QR codes by considering local intensity and gradient information. Xu et al. [34] proposed an approach that capitalizes on local maximum and minimum intensity priors. This algorithm adaptively binarizes the intensity within the maximum a posteriori framework, resulting in a significant enhancement in computational efficiency. Latent image estimation and kernel estimation are then solved alternately within the framework until convergence is achieved.
Although model-based deblurring methods show advantages in recovering image details and sharpening edges, their practical application is still restricted by several intrinsic drawbacks, including high computational complexity, sensitivity to noise, and demanding parameter optimization. Deep-learning-based methods, even though they perform excellently in handling various types of blur and reconstructing fine image details, face their own problems: heavy reliance on large amounts of training data, high computational costs, limited generalization ability, and subpar performance under extreme blur conditions. In the particular field of security pattern restoration, existing deblurring techniques have been effective in enhancing practical robustness; however, they still have crucial limitations when dealing with severely damaged patterns or operating under resource constraints. Significantly, color security patterns bring both opportunities and challenges. Compared with traditional grayscale patterns, they offer advantages in tamper resistance, fraud prevention, information density, visual attractiveness, and authentication reliability, but at the same time they make blur removal and sharpness restoration more complex.
To address these various challenges, this paper puts forward a comprehensive framework for security pattern restoration through fundamental theoretical innovations, as depicted in Figure 2. Firstly, we present chromatic-adaptive coupled oscillation (CACO), a physics-inspired enhancement technique. This technique establishes channel-specific nonlinear transformations and cross-channel dynamic coupling. It can achieve excellent noise suppression, while maintaining color fidelity and structural details, without the need for training data. Secondly, we propose micro-feature degradation and directional energy concentration (MDDEC), a novel blur detection algorithm. This algorithm combines spectral whitening with adaptive morphological filtering and hierarchical classification of fused global-directional features, showing high accuracy in blur-type discrimination under various imaging conditions. Thirdly, we introduce an optimized blind deblurring framework with graph-guided partition for localized kernel estimation. Regarding defocus blur, the mathematical basis is derived from unifying minimum brightness difference (MBD) and edge gradient (EG) priors within a hybrid energy functional. This enables closed-form FFT solutions that converge more rapidly than iterative alternatives. For motion blur, we prove that the MBD–EG–DEC feature triplet forms a complete basis for kernel estimation, and the second-order moments of the YCbCr luminance channel provide sufficient information for coarse-to-fine restoration. Our main contributions are as follows:
A chromatic-adaptive coupled oscillation (CACO) denoising technique is employed for preprocessing. It augments contrast and detail, thereby enhancing the overall quality of patterns. Micro-feature degradation and directional energy concentration (MDDEC) is introduced for blur detection, enabling quantitative assessment of the degree of pattern blurring.
Extra priors, a minimum brightness difference (MBD) prior and an edge gradient (EG) prior, are integrated as constraints into maximum a posteriori (MAP) and maximum likelihood estimation (MLE) frameworks. These pattern-specific priors significantly enhance the recovery of structure and details.
By partitioning the pattern into several sub-regions and employing graphical guidance to group these sub-regions, the problems of pattern content variability and non-uniform blur are effectively addressed.
The experimental results demonstrate the robustness of the proposed method and a significant reduction in runtime.
The paper is organized as follows: An efficient blur detection method for security patterns is proposed in Section 2. A novel blind deblurring algorithm is developed in detail in Section 3. Section 4 gives the experimental results and a discussion. Finally, the conclusions and future improvements are presented in Section 5.
2. Blur Detection of Security Patterns
2.1. Blur Degradation
The blur degradation process of security patterns can be conceptualized as the convolution of a clear pattern with a kernel function, accompanied by additive noise analogous to white Gaussian noise. Under the assumption that the blur is uniform and spatially invariant, the mathematical model of an individual blur pattern can be expressed as follows:

$$B = k \otimes C + n$$

where $B$, $k$, $\otimes$, $C$, and $n$ represent the blur pattern, blur kernel, convolution operation, clear pattern, and additive noise, respectively.
Progressive blur degradation simulation of security patterns is shown in Figure 3. Blur patterns can be deblurred by blur-kernel-based deconvolution, while minimizing the effect of noise on the deconvolution process. However, since the blur kernel is usually unknown, deconvolution becomes an ill-posed problem with multiple possible solutions. Therefore, it is usually necessary to perform blur estimation beforehand by utilizing the information and features of the blur pattern. The potential blur kernels are identified, and the pattern is then deblurred through deconvolution. The efficacy of the deblurring process is contingent upon the precision of the blur kernel estimation and the selected deblurring methodology. Two common types of blur in image processing are defocus blur and motion blur; the essential difference between them lies in their blur kernel functions. To investigate the blur parameters more accurately, it is necessary to categorize the different types of image blur and apply the corresponding processing methods. To facilitate this classification and subsequent processing, we first formalize the kernel functions associated with these two primary types of blur:
(1) Defocus blur is typically attributed to inaccurate focusing or a disparity in depth of field within the scene during capture. With regard to security patterns, the primary cause of defocus blur is typically inaccurate focus. Its blur kernel function can be expressed as follows:

$$k(x, y) = \begin{cases} \dfrac{1}{\pi R^2}, & x^2 + y^2 \le R^2 \\ 0, & \text{otherwise} \end{cases}$$

where $R$ represents the radius of the defocus blur spot.
(2) Motion blur is caused by the relative displacement of the camera with respect to the capturing target during the exposure time. In the study of security patterns, global motion blur due to shaking of the capturing equipment is the most prevalent cause. Since the exposure time is usually short, the relative motion of the camera and the pattern can be approximated as uniform linear motion. Its blur kernel function can be expressed as follows:

$$k(x, y) = \begin{cases} \dfrac{1}{L}, & \sqrt{x^2 + y^2} \le \dfrac{L}{2} \text{ and } y = x \tan\theta \\ 0, & \text{otherwise} \end{cases}$$

where $L$ represents the length of the uniform linear motion and $\theta$ represents the angle between the direction of motion and the horizontal.
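As a concrete illustration, the following Python sketch constructs both kernels and applies the degradation model $B = k \otimes C + n$; the radius, length, angle, noise level, and function names are illustrative choices rather than values prescribed by the paper.

```python
# Hedged sketch of the degradation model B = k (*) C + n, with the defocus
# (uniform disc of radius R) and motion (line of length L at angle theta)
# kernels formalized above. All parameter values are illustrative.
import numpy as np
from scipy.signal import convolve2d

def defocus_kernel(R):
    """Discrete disc kernel: constant inside x^2 + y^2 <= R^2, normalized to sum 1
    (the discrete counterpart of 1 / (pi R^2))."""
    y, x = np.mgrid[-R:R + 1, -R:R + 1]
    k = (x**2 + y**2 <= R**2).astype(np.float64)
    return k / k.sum()

def motion_kernel(L, theta_deg):
    """Line kernel: uniform weight along a segment of length L at angle theta."""
    k = np.zeros((L, L))
    c = (L - 1) / 2
    t = np.deg2rad(theta_deg)
    for i in range(L):
        x = int(round(c + (i - c) * np.cos(t)))
        y = int(round(c - (i - c) * np.sin(t)))   # minus: image rows grow downward
        k[y, x] = 1.0
    return k / k.sum()

def degrade(C, k, sigma=0.01, seed=0):
    """B = k (*) C + n, with additive white Gaussian noise n of std sigma."""
    rng = np.random.default_rng(seed)
    B = convolve2d(C, k, mode='same', boundary='symm')
    return B + rng.normal(0.0, sigma, C.shape)
```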
In practical imaging systems, blur degradation frequently coexists with various types of noise. To comprehensively mimic real-world image degradation, we take into account four common noise types. The mathematical formulations and statistical characteristics of each noise type are presented below, where $I$ denotes the original image and $I_n$ the generated noisy image.
(1) Brown noise degradation is modeled using a spatially varying additive noise process defined by

$$I_n(x, y) = I(x, y) + G(x, y), \quad G(x, y) \sim \mathcal{N}\left(0, \sigma^2(x, y)\right)$$

where the variance field is defined as

$$\sigma^2(x, y) = \sigma_{\max}^2\, U(x, y)$$

Here, $G(x, y)$ represents a Gaussian-distributed random variable with a mean of zero and a position-dependent variance $\sigma^2(x, y)$. The parameter $\sigma_{\max}$ is a scaling coefficient that determines the maximum noise intensity. Meanwhile, $U(x, y)$ denotes a uniformly distributed random field, which introduces spatial variability into the noise characteristics.
(2) The Gaussian noise degradation follows the classical additive formulation:

$$I_n(x, y) = I(x, y) + n_G(x, y)$$

with the probability density function given by

$$p(z) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(z - \mu)^2}{2\sigma^2}\right)$$

where $\mu$ represents the mean value that governs the direct-current offset of the noise, while $\sigma^2$ denotes the variance parameter that controls the noise power and dispersion.
(3) Salt and pepper noise follows an impulsive model:

$$I_n(x, y) = \begin{cases} I_{\max}, & \text{with probability } p_s \\ 0, & \text{with probability } p_p \\ I(x, y), & \text{with probability } 1 - p_s - p_p \end{cases}$$

where $d = p_s + p_p$ represents the total noise density. In this equation, $p_s$ and $p_p$ denote the occurrence probabilities of saturated white and black pixels, respectively. Meanwhile, $I_{\max}$ represents the maximum intensity value within the image's dynamic range.
(4) Speckle noise degradation follows a multiplicative model:

$$I_n(x, y) = I(x, y)\left(1 + n_s(x, y)\right)$$

where $n_s(x, y)$ denotes a zero-mean Gaussian random field with variance $\sigma_s^2$. This variance determines the intensity of the multiplicative noise component.
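The four noise models can be sketched in Python as follows; the parameter names mirror the formulations above, while their default values are illustrative.

```python
# Minimal sketches of the four noise models, for a float image I in [0, 1].
import numpy as np

rng = np.random.default_rng(0)

def brown_noise(I, sigma_max=0.1):
    # Spatially varying additive noise: sigma^2(x, y) = sigma_max^2 * U(x, y)
    U = rng.uniform(0.0, 1.0, I.shape)
    return I + rng.normal(0.0, 1.0, I.shape) * sigma_max * np.sqrt(U)

def gaussian_noise(I, mu=0.0, sigma=0.05):
    # Classical additive Gaussian noise
    return I + rng.normal(mu, sigma, I.shape)

def salt_pepper_noise(I, p_s=0.02, p_p=0.02, I_max=1.0):
    # Impulsive model: saturate random pixels to white (I_max) or black (0)
    out = I.copy()
    r = rng.uniform(size=I.shape)
    out[r < p_s] = I_max
    out[r > 1.0 - p_p] = 0.0
    return out

def speckle_noise(I, sigma_s=0.1):
    # Multiplicative model: I * (1 + n), n zero-mean Gaussian
    return I * (1.0 + rng.normal(0.0, sigma_s, I.shape))
```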
2.2. Denoising
The chromatic-adaptive coupled oscillation (CACO, see Algorithm 1) method aims to overcome the common limitations of existing denoising approaches (such as edge blurring, detail loss, and color distortion) when dealing with complex real-world noise. The core design draws inspiration from two crucial mechanisms in biological vision systems: chromatic adaptation and coupled oscillation. The former imitates the human retina's independent gain control across color channels, while the latter replicates how neurons in the visual cortex enhance relevant features and suppress irrelevant signals through synchronous oscillatory activity. First, a channel-adaptive nonlinear transformation mechanism is established. It independently adjusts the enhancement parameters of the three RGB channels to intelligently optimize color information. Unlike the traditional fixed-parameter approach, this mechanism makes dynamic adjustments according to the noise distribution characteristics and color sensitivities of each channel. Second, a cross-channel dynamic coupling system is designed. By analyzing the correlation matrix among channels, an adaptive weight model is established, which effectively eliminates noise while maintaining the color balance and the integrity of the image's edge structure. Finally, a morphological–statistical hybrid processing procedure is developed. Through multi-scale analysis using elliptical structural elements and the integration of local statistical features, the inherent conflict between noise suppression and detail preservation in traditional methods is resolved. In contrast to deep-learning-based methods, this processing model is entirely self-regulated and based on the internal characteristics of the image; it requires no pre-training data and is suitable for various imaging devices.
By formulating these mechanisms with a set of meticulously designed differential equations and parameters, the algorithm effectively differentiates noise from meaningful image structures. Each parameter plays a specific role: the input and output ranges improve numerical stability; the channel-specific gamma values account for the differences in SNR across color channels; the membrane time constant regulates potential decay; the cross-channel coupling coefficient enables color opponency; and the adaptive threshold and elliptical kernel further enhance structural consistency and suppress noise. The overall workflow of the algorithm consists of three main stages:
Stage 1: Chromatic-adaptive mapping
This stage conducts independent preprocessing of each color channel. Instead of directly eliminating noise, its aim is to prepare enhanced and standardized input features for the subsequent oscillation process. It intelligently adjusts the enhancement parameters for each of the three RGB channels based on color information; unlike traditional fixed-parameter methods, this mechanism dynamically adapts to the noise characteristics and color sensitivity of each channel. First, the input image is clipped to the prescribed input range to exclude outliers. Then, a nonlinear mapping function is applied, in which the channel-specific gamma parameter adaptively enhances or reduces the contrast according to the noise level in each channel. Finally, the result is locally standardized using the mean and standard deviation of a local neighborhood, followed by convolution with an elliptical kernel to preliminarily enhance structural consistency.
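A minimal Python sketch of this stage, assuming illustrative values for the clipping range, per-channel gamma, neighborhood size, and kernel size (none are prescribed numerically in this section):

```python
# Stage 1 sketch: per-channel clipping, gamma mapping, local standardization,
# and elliptical smoothing. Input img is a float HxWx3 array in [0, 1].
import cv2
import numpy as np

def chromatic_adaptive_mapping(img, gammas=(0.9, 1.0, 1.1),
                               clip_range=(0.01, 0.99), win=7):
    ellipse = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)).astype(np.float32)
    ellipse /= ellipse.sum()
    feats = []
    for c in range(3):                         # independent per-channel processing
        x = np.clip(img[..., c], *clip_range)  # exclude outliers
        x = np.power(x, gammas[c])             # channel-specific nonlinear mapping
        mu = cv2.blur(x, (win, win))           # local mean over a win x win window
        sd = np.sqrt(cv2.blur(x**2, (win, win)) - mu**2 + 1e-6)
        x = (x - mu) / sd                      # local standardization
        feats.append(cv2.filter2D(x, -1, ellipse))  # elliptical smoothing
    return np.stack(feats, axis=-1)
```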
Stage 2: Coupled oscillation suppression
This stage serves as the core of the denoising process. An oscillator is initialized for each pixel, with its state defined by a membrane potential and an inhibitory signal. The dynamics are regulated by three terms: (1) a decay term, which models the natural decay of the membrane potential over time, with the decay rate controlled by the membrane time constant; (2) a cross-channel coupling term, which is crucial for color-consistent denoising: it calculates the covariance between the oscillation signals of different channels, modulated by the coupling coefficient, enhancing the response when the signals across channels vary synchronously (indicating genuine features) and suppressing it when the variations are asynchronous (suggesting noise); and (3) a photoreceptor input term, which incorporates the normalized features from Stage 1 as a continuous external input to drive the oscillation. The inhibitory signal is updated through morphological opening and dynamic thresholding; together, these operations record the significant activity history and suppress persistent high-frequency oscillations. The final output represents the rate of change in the potential after the removal of inhibition and smoothing, thus highlighting prominent features such as edges and textures.
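One plausible form of this update, written as a simple Euler step with illustrative time constant, coupling weight, and threshold; the paper's exact differential equations are not reproduced here, so this should be read as a sketch of the mechanism rather than the authors' precise rule:

```python
# Stage 2 sketch: one oscillation step per pixel and channel.
import cv2
import numpy as np

def oscillation_step(u, h, feat, tau=10.0, lam=0.1, theta=0.5):
    """u, h, feat: HxWx3 arrays (membrane potential, inhibition, Stage-1 input)."""
    # Covariance-like cross-channel coupling term (assumed form), modulated by lam
    coupling = lam * (u - u.mean(axis=-1, keepdims=True)) \
                   * (feat - feat.mean(axis=-1, keepdims=True))
    # Decay + coupling + photoreceptor input - inhibition
    du = -u / tau + coupling + feat - h
    u_new = u + du
    # Inhibition update: morphological opening plus dynamic thresholding
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    opened = cv2.morphologyEx(u_new.astype(np.float32), cv2.MORPH_OPEN, k)
    h_new = np.where(opened > theta, h + opened, h * 0.9)
    return u_new, h_new, du   # du (post-inhibition rate of change) is the output
```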
Stage 3: Perceptual reconstruction
In this stage, the temporal sequence of oscillation outputs is aggregated to generate the final denoised image. Averaging over the iterations stabilizes the outcome and suppresses random fluctuations. Subsequently, convolution with the elliptical kernel is carried out to further enhance structural smoothness. Finally, channel-specific gain compensation is implemented through a diagonal matrix whose entries represent the ratio between the mean intensity of the original and denoised images in each channel c. This measure ensures that the denoised image maintains perceptual consistency with the original in terms of overall brightness.
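A sketch of this reconstruction, with the smoothing-kernel size assumed:

```python
# Stage 3 sketch: temporal averaging, elliptical smoothing, gain compensation.
import cv2
import numpy as np

def perceptual_reconstruction(outputs, original):
    """outputs: list of T HxWx3 oscillation outputs; original: HxWx3 image."""
    avg = np.mean(outputs, axis=0)                        # temporal averaging
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)).astype(np.float32)
    k /= k.sum()
    smooth = cv2.filter2D(avg.astype(np.float32), -1, k)  # structural smoothing
    # Diagonal gain matrix: per-channel ratio of mean intensities
    gains = original.mean(axis=(0, 1)) / (smooth.mean(axis=(0, 1)) + 1e-6)
    return smooth * gains
```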
Algorithm 1 CACO: Chromatic-Adaptive Coupled Oscillation Algorithm for Denoising
Require: Image. Ensure: Denoised image. Parameters: input range; output range; channel-specific gamma; membrane time constant; cross-channel coupling; adaptive threshold; elliptical kernel.
Stage 1: Chromatic-adaptive mapping — for each channel, clip to the input range, apply the channel-specific nonlinear mapping, and locally standardize (steps 1–5).
Stage 2: Coupled oscillation suppression — initialize the membrane potential and inhibitory signal (steps 6–8), then iterate the decay, cross-channel coupling, photoreceptor input, and inhibition updates (steps 9–13).
Stage 3: Perceptual reconstruction — average the oscillation outputs, smooth with the elliptical kernel, and apply channel-specific gain compensation (steps 14–16).
2.3. Blur Detection
Conventional image quality evaluation methodologies are inadequate for the detection of blur in security patterns, due to the unavailability of matching sample patterns on the server side. Furthermore, patterns differ significantly from natural images in terms of edges, texture, and statistical features. Analysis shows that blur degradation causes grayscale changes at the edges of the pattern to flatten. This change is manifested in the frequency domain as attenuation of the high-frequency component and enhancement of the low-frequency component. Specifically, the low-frequency information is derived primarily from the smooth grayscale change region within the color block, while the high-frequency information is derived primarily from the sudden grayscale change region at the edge of the color block. We performed a Discrete Fourier Transform (DFT) on security patterns under varying degradation conditions (see Figure 4). The spectrograms of the clear patterns and the defocus blur patterns exhibit both central and axial symmetry. As the degree of blur increases, the white region representing the high-frequency component gradually concentrates towards the center of the spectrum. Motion blur patterns exhibit changes in gradient values at specific angles, resulting in an elliptical high-frequency region of the spectrum. The long axis of the ellipse is perpendicular to the direction of the blur, while the short axis becomes progressively shorter as the length of the blur displacement increases. To enhance the efficiency of blur detection and reduce the required computation time, we compute only the high-frequency components in the central region of the spectrogram. The spectrogram of the pattern displays clear direction and intensity characteristics.
Based on these observations, this paper proposes a novel micro-feature degradation and directional energy concentration (MDDEC) algorithm for blur detection in security patterns (see Algorithm 2). Figure 5 presents a visualization of MDDEC for blur detection in security patterns. The algorithm combines frequency domain analysis with adaptive spatial processing to achieve high-precision classification of blur types and severity levels. By devising the blur detection mechanism within a systematic spectral analysis framework and using a set of meticulously defined parameters, the algorithm effectively discriminates between different blur types and quantifies their severity. Each parameter plays a specific role in this process: the defocus blur threshold range quantifies severity levels according to the degree of spectral feature degradation, while the motion blur threshold range serves the same purpose for motion-induced blur. The angle offset threshold differentiates between directional motion blur and isotropic defocus blur by defining an angular tolerance for orientation detection. Meanwhile, the energy distribution ratio error evaluates the symmetry in the spectral domain to identify the characteristic patterns of defocus blur. The overall workflow of the algorithm consists of four main stages:
Algorithm 2 MDDEC: Micro-Feature Degradation and Directional Energy Concentration Algorithm for Blur Detection
Require: Image. Ensure: Blur analysis: pair (type, degree). Parameters: defocus threshold range; motion threshold range; angle offset; energy ratio error.
Stage 1: Frequency domain analysis
1: Convert to grayscale.
2: Apply Hanning window.
3: Compute centered DFT.
4: Calculate log-magnitude spectrum.
5: Apply spectral whitening.
Stage 2: Adaptive spectral processing
6: Define the central analysis region.
7: Adaptive binarization.
8: Morphological filtering.
Stage 3: Feature extraction
9: Micro-feature degradation: compute the area ratio A.
10: Directional energy concentration: clean the binary map, get region statistics, select the main region, and extract its orientation and axis ratio.
Stage 4: Hierarchical classification
11: if the orientation is near-axial and the energy distribution is symmetric then ▹ Defocus blur condition
12:  return (defocus, slight), (defocus, severe), or (defocus, worst), according to the defocus thresholds
13: else if a significantly dominant orientation exists then ▹ Motion blur condition
14:  return (motion, slight), (motion, severe), or (motion, worst), according to the motion thresholds
15: else ▹ Clear condition
16:  return (clear, none)
17: end if
Stage 1: Frequency domain analysis
This stage converts the image into a normalized frequency representation to enhance blur-sensitive features. Converting the image to grayscale not only reduces the computational complexity but also preserves the essential structural information. Applying a Hanning window mitigates spectral leakage by attenuating edge discontinuities. The centered Discrete Fourier Transform (DFT) generates a frequency spectrum in which blur patterns are more distinguishable. The log-magnitude spectrum compresses the dynamic range to improve the visibility of high-frequency components. Spectral whitening, achieved by normalizing with a Gaussian-smoothed version of the magnitude spectrum, suppresses variations in overall intensity and highlights the spatial–frequency structures that indicate blur.
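A sketch of this stage in Python; the Gaussian smoothing scale used for whitening is an assumption, since it is not specified numerically here:

```python
# Stage 1 sketch: grayscale, Hanning window, centered DFT, log-magnitude,
# and spectral whitening. Input img is assumed to be a BGR uint8 image.
import cv2
import numpy as np

def spectral_features(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = gray.shape
    window = np.outer(np.hanning(h), np.hanning(w))      # suppress spectral leakage
    F = np.fft.fftshift(np.fft.fft2(gray * window))      # centered DFT
    logmag = np.log1p(np.abs(F))                         # log-magnitude spectrum
    smooth = cv2.GaussianBlur(logmag, (0, 0), sigmaX=5)  # Gaussian-smoothed version
    return logmag / (smooth + 1e-6)                      # spectral whitening
```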
Stage 2: Adaptive spectral processing
This stage isolates the salient frequency components related to blur. The analysis is confined to a central region R of the spectrum, both to avoid edge artifacts and to exploit the fact that blur mainly impacts mid-to-high frequencies. Adaptive binarization using Otsu's method effectively separates prominent spectral patterns from noise. Subsequently, morphological opening with an elliptical structuring element K refines the binary map by eliminating isolated noise pixels, smoothing region boundaries, and enhancing the connectivity of meaningful spectral structures.
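A sketch of this stage, operating on the whitened spectrum from the previous sketch; the central-region fraction and structuring-element size are illustrative:

```python
# Stage 2 sketch: central region, Otsu binarization, morphological opening.
import cv2
import numpy as np

def spectral_mask(white, frac=0.5):
    h, w = white.shape
    ch, cw = int(h * frac) // 2, int(w * frac) // 2
    R = white[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw]  # central region
    R8 = cv2.normalize(R, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(R8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    K = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, K)   # remove isolated pixels
```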
Stage 3: Micro-feature degradation and directional energy concentration
Here, two discriminative features are extracted. Micro-feature degradation metric (A): this metric is calculated as the ratio of non-zero pixels in the binary spectral mask to the total number of pixels in region R, and it measures the sparsity of the remaining high-frequency components; greater blur severity degrades fine details more strongly, yielding lower values of A. Directional energy concentration: the binary map is further processed to retain only the significant regions, and the orientation of the largest connected component is obtained through ellipse fitting. This orientation indicates the dominant direction of spectral energy spread, a characteristic feature of motion blur. Additionally, the axis ratio is computed by projecting the binary map onto the x- and y-axes; this ratio provides a measure of spectral symmetry. Defocus blur demonstrates high symmetry (an axis ratio close to 1), while motion blur exhibits anisotropy.
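Both features can be computed as in the following sketch; note that OpenCV's ellipse fitting requires at least five contour points, which is checked explicitly:

```python
# Stage 3 sketch: degradation ratio A, axis-projection symmetry, orientation.
import cv2
import numpy as np

def blur_features(mask):
    """mask: uint8 binary map from the Stage 2 sketch."""
    A = np.count_nonzero(mask) / mask.size               # micro-feature degradation
    px, py = mask.sum(axis=0), mask.sum(axis=1)          # x- and y-axis projections
    axis_ratio = (px.max() + 1e-6) / (py.max() + 1e-6)   # spectral symmetry measure
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    angle = None
    if contours:
        main = max(contours, key=cv2.contourArea)        # largest connected component
        if len(main) >= 5:
            (_, _), (_, _), angle = cv2.fitEllipse(main) # dominant orientation (deg)
    return A, axis_ratio, angle
```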
Stage 4: Hierarchical classification
Utilizing the extracted features, a rule-based classifier differentiates blur conditions as follows: if the dominant orientation is close to horizontal or vertical (within the angle offset threshold) and the energy distribution is symmetric (within the ratio error), the blur is categorized as defocus, and its severity is determined by comparing A with the defocus threshold range. If a significantly dominant orientation exists, the blur is classified as motion blur, and the severity is similarly evaluated against the motion threshold range. If neither condition is satisfied, the image is classified as clear.
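A sketch of this decision rule, consuming the features from the previous sketch; all threshold values are illustrative placeholders, and spectral asymmetry is used as a proxy for a "significantly dominant orientation":

```python
# Stage 4 sketch: rule-based hierarchical classifier. Higher A means less
# degradation, so severity decreases as A increases.
def classify_blur(A, axis_ratio, angle,
                  th_defocus=(0.10, 0.05), th_motion=(0.08, 0.04),
                  angle_tol=5.0, sym_err=0.2):
    near_axis = angle is not None and min(angle % 90, 90 - angle % 90) < angle_tol
    symmetric = abs(axis_ratio - 1.0) < sym_err
    if near_axis and symmetric:                          # defocus blur condition
        if A > th_defocus[0]:
            return ('defocus', 'slight')
        return ('defocus', 'severe') if A > th_defocus[1] else ('defocus', 'worst')
    if not symmetric:                                    # motion blur condition
        if A > th_motion[0]:
            return ('motion', 'slight')
        return ('motion', 'severe') if A > th_motion[1] else ('motion', 'worst')
    return ('clear', 'none')                             # clear condition
```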
It is important to note that the effectiveness of deblurring algorithms varies depending on the level of blur in the images. While deblurring may improve the accuracy of identification for images with slight or severe blur, it may not result in a significant improvement and may even lead to identification errors for the worst blur patterns. Therefore, it is not recommended to deblur the worst blur patterns, for the sake of efficiency and accuracy. Instead, such problems should be reported directly to the authentication system.