Review

Review of Quaternion-Based Color Image Processing Methods

1 Department of Mathematics, The Chinese University of Hong Kong, Hong Kong, China
2 School of Communication & Information Engineering, Shanghai University, Shanghai 200444, China
3 Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing 210049, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(9), 2056; https://doi.org/10.3390/math11092056
Submission received: 30 March 2023 / Revised: 18 April 2023 / Accepted: 23 April 2023 / Published: 26 April 2023
(This article belongs to the Special Issue Representation Learning for Computer Vision and Pattern Recognition)

Abstract

Images are a convenient way for humans to obtain information and knowledge, but they are often degraded during acquisition or transmission. Image processing therefore evolves as these needs arise, and color image processing is a broad and active field. A color image comprises three distinct but closely related channels: red, green, and blue (RGB). Compared to representing color images directly as vectors or matrices, the quaternion representation offers an effective alternative. There are numerous papers on this subject, along with many definitions, hypotheses, and methodologies. Our observations indicate that the quaternion representation is effective, and models and methods based on it have developed rapidly. The purpose of this paper is therefore to review and categorize existing methods and to examine their efficacy through computational examples. We hope that this survey will be helpful to researchers interested in quaternion representation.

1. Introduction

Image processing is a fundamental task in data science, and color image processing is particularly important because color images carry richer information. Although many image processing methods have been proposed, mainstream works still represent images as vectors and matrices. Although these methods achieve competitive results, in recent years methods based on quaternion representation have been widely used and shown to yield better results [1,2,3,4], including quaternion-based models for image segmentation [5], restoration [6,7], watermarking [8], face recognition [9,10,11], classification [12,13,14,15], super-resolution [16], etc.
With such a wide range of applications, one may wonder what a quaternion is. The quaternion was invented by Hamilton in 1843 [17]. Similar to a complex number $x = a + b\mathrm{i} \in \mathbb{C}$ ($a, b \in \mathbb{R}$, where $\mathrm{i}$ is the imaginary unit), a quaternion can be written as $\dot{x} = x_0 + x_1 i + x_2 j + x_3 k \in \mathbb{H}$ ($x_0, x_1, x_2, x_3 \in \mathbb{R}$, where $i, j, k$ are imaginary units). The quaternion $\dot{x}$ can be understood as an extension of a complex number $a + bi$ in which the coefficients $a$ and $b$ are themselves complex numbers:
$$a + bi = (x_0 + x_2 j) + (x_1 - x_3 j)i = x_0 + x_2 j + x_1 i - x_3 ji = x_0 + x_1 i + x_2 j + x_3 k,$$
where $a = x_0 + x_2 j$ and $b = x_1 - x_3 j$ are complex numbers and $ji = -k$; the detailed multiplication rules of quaternions are introduced in Section 2. In color image processing, we usually represent a color image as a matrix or a vector. However, because a color image has three channels, i.e., red, green, and blue (RGB), the correlation between the color channels cannot be represented well in this case. How to denote a color image holistically, so as to avoid errors in color image processing, is a challenge. Note that, mathematically, a color pixel can be denoted as $(u_r, u_g, u_b)$, so a quaternion with three imaginary parts may be a better way to represent it. We can then represent a color pixel as the quaternion $\dot{u} = u_0 + u_r i + u_g j + u_b k$; see Figure 1 for a better understanding. Since a quaternion has a real part $u_0$ and imaginary parts $(u_r, u_g, u_b)$, the natural quaternion representation of a color pixel is the pure quaternion $\dot{u} = u_r i + u_g j + u_b k$. However, in basic tasks such as image denoising, or in simple methods such as [18], the real part of the quaternion is not affected; therefore, for ease of exposition, we use the representation $\dot{u} = u_0 + u_r i + u_g j + u_b k$. In some methods, such as singular value decomposition, the iterations may produce a nonzero real part, so a zero constraint on the real part is then needed.
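To make this encoding concrete, the following minimal NumPy sketch (our illustration, not code from any of the reviewed works) stores each pixel as a length-4 coefficient vector with a zero real part:

```python
import numpy as np

def rgb_to_quaternion(img):
    """Map an H x W x 3 RGB image to an H x W x 4 array of quaternion
    coefficients (real, i, j, k), i.e., the pure-quaternion encoding
    u = 0 + u_r i + u_g j + u_b k described above."""
    h, w, _ = img.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1:] = img          # the imaginary parts carry the R, G, B channels
    return q                  # q[..., 0] stays zero (pure quaternion)

def quaternion_to_rgb(q):
    """Drop the real part and recover the RGB channels."""
    return q[..., 1:]
```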
By representing color images holistically, the relationship between color channels can be preserved and artifacts in the results can be avoided. Based on this observation, many models have been extended to the quaternion domain, including variational models [19,20], sparse representation-based models [21,22,23,24], and low-rank models [25,26,27,28,29]. Moreover, quaternion modules are widely used in deep convolutional neural networks (CNNs). Ordinary convolution kernels merge the color channels by summing the convolution results and outputting a single channel per kernel. With the quaternion representation, however, the complicated interrelationship between color channels and important structural information can be well preserved. This reduces the degrees of freedom in learning the convolution kernels and the number of network parameters, thus decreasing the risk of over-fitting. Based on these ideas, quaternion-based deep CNN (QCNN) and transformer (QTrans) models have been proposed, e.g., the QCNN [30,31,32,33,34,35,36], QTrans [37,38,39,40], etc.
With the quaternion representation, the information in and relationships between the color channels can be better preserved, leading to more satisfactory results. This paper provides an overview of recent quaternion-based traditional and deep learning models and their applications in image processing, as well as the challenges that still need to be addressed. To provide a comprehensive overview of related work on quaternion representation in color image processing, we searched literature databases, including journals, conferences, and book chapters in Web of Science. Since research in this area is ongoing, we set the search cutoff date to 1 March 2023. A summary of our findings, organized by task, method, and year, is shown in Figure 2.
There have already been several reviews of quaternion-based color image processing methods. Barthélemy et al. [41] provided an overview of traditional models and algorithms using quaternion sparse representation. In 2020, García-Retuerta et al. [1] discussed the challenges in QCNN models and quaternion applications in neural networks. Similarly, Parcollet et al. [2] provided a review of QCNNs and their applications in various domains, with a more detailed description of QCNN fundamentals. In 2021, Bayro-Corrochano [3] surveyed quaternion applications from the perspective of quaternion algebra; that survey covers geometric algebra and the rotation property of quaternions for applications such as kinematics, tracking, and the control of robotics, and lists packages and links for the mentioned applications [3]. Each of these surveys provides a comprehensive review of quaternion-based models from a different perspective. However, none of them includes a detailed review of color image processing with quaternion representation. In contrast, we cover traditional, deep learning, and hybrid quaternion-based color image processing models. The contributions of this work include:
  • A detailed overview of color image processing with a quaternion representation;
  • A comprehensive survey of the different algorithms along with their benefits and limitations;
  • A summary of each algorithm in detail, including objectives, goals, and weaknesses, and a discussion of recent challenges and their possible solutions.
The main objective of this paper is to comprehensively analyze the potential applications of quaternion representation and provide an overview of recent research advancements. The rest of the paper is organized as follows: Section 2 provides a review of the basic theory of quaternion and its related definitions. In Section 3, we discuss the successful implementation of traditional variation models. Section 4 reviews the variants used to connect quaternion ideas within neural networks, with a focus on the significant breakthroughs. Promising research directions and their associated challenges are discussed in Section 5. Finally, the paper concludes in Section 6.

2. Basic Theory

In this section, we first present the fundamentals of color image processing and then provide a brief introduction to quaternions and some related definitions.

2.1. Color Image Processing Model

Color image processing is generally an extension of gray-scale image processing. Since an image may be corrupted by noise and blur, the general image degradation model is
$$f = Au + b,$$
where $f$ is the observation, $u$ is the desired image, $b$ is additive noise, and $A$ is a linear operator. For image deblurring, $A$ is the blur operator associated with the blur kernel; for image denoising, $A$ is the identity operator; for image super-resolution, $A$ is a downsampling operator; for medical image reconstruction, $A$ is a sampling operator; and for image inpainting, $A$ is a projection operator. There are many methods that can restore the desired image $u$ from $f$. One classical method is the total variation (TV) model [42,43,44]:
$$u = \arg\min_u \frac{\lambda}{2}\|Au - f\|_2^2 + \|\nabla u\|_1,$$
where $\lambda > 0$ is a trade-off parameter and $\nabla$ is the gradient operator. $\|\nabla u\|_1$ is the TV term defined by
$$(\nabla u)_{j,k} = \big((\nabla u)_{j,k}^x,\ (\nabla u)_{j,k}^y\big)$$
with
$$(\nabla u)_{j,k}^x = \begin{cases} u_{j+1,k} - u_{j,k} & \text{if } j < n, \\ 0 & \text{if } j = n, \end{cases}
\qquad
(\nabla u)_{j,k}^y = \begin{cases} u_{j,k+1} - u_{j,k} & \text{if } k < n, \\ 0 & \text{if } k = n, \end{cases}$$
for $j, k = 1, \dots, n$. Here, $u_{j,k}$ refers to the $((j-1)n + k)$th entry of the vector $u$ (it is the $(j,k)$th pixel location of the image). The discrete TV of $u$ is defined by
$$\|\nabla u\|_1 := \sum_{1 \le j,k \le n} \big|(\nabla u)_{j,k}\big|_2 = \sum_{1 \le j,k \le n} \sqrt{\big((\nabla u)_{j,k}^x\big)^2 + \big((\nabla u)_{j,k}^y\big)^2},$$
where $|\cdot|_2$ is the Euclidean norm in $\mathbb{R}^2$.
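As a concrete reference, the following minimal NumPy sketch evaluates the discrete isotropic TV defined above; the forward differences and zero boundary follow the definition, while everything else is an implementation choice of ours:

```python
import numpy as np

def discrete_tv(u):
    """Isotropic discrete total variation of a 2-D image u, using forward
    differences that are set to zero at the last row/column."""
    gx = np.zeros_like(u, dtype=float)
    gy = np.zeros_like(u, dtype=float)
    gx[:-1, :] = u[1:, :] - u[:-1, :]   # (grad u)^x, zero on the last row
    gy[:, :-1] = u[:, 1:] - u[:, :-1]   # (grad u)^y, zero on the last column
    return float(np.sum(np.sqrt(gx**2 + gy**2)))
```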
We can obtain the solution to model (3) with many algorithms; a classical one is the alternating direction method of multipliers (ADMM) [45,46]. By introducing the auxiliary variable $p$, (3) can be reformulated as
$$u = \arg\min_u \frac{\lambda}{2}\|Au - f\|_2^2 + \|p\|_1, \quad \text{s.t. } p = \nabla u.$$
The augmented Lagrangian function, obtained by attaching the multiplier $\xi$, is
$$\mathcal{L}(u, p; \xi) = \frac{\lambda}{2}\|Au - f\|_2^2 + \|p\|_1 + \frac{\beta}{2}\|p - \nabla u\|_2^2 + \langle \xi, p - \nabla u \rangle,$$
where $\beta > 0$ is the penalty parameter enforcing the linear constraint. The final solution $u^{k+1}$ can be obtained from Algorithm 1. If we extend the whole model and algorithm to the quaternion system, then, at the very least, the norm, gradient, product, and division need to be defined. In Section 2.2, we provide these definitions by referring to quaternion theory.
Algorithm 1 ADMM for solving (3)
  • Initialization: let $u^0 = f$, $\xi^0 = 0$ be the initial input data
  • for $k = 0, \dots, K$ do
  •    Update $p^{k+1}$ with
$$p^{k+1} = \arg\min_p \|p\|_1 + \frac{\beta}{2}\|p - \nabla u^k\|_2^2 + \langle \xi^k, p - \nabla u^k \rangle = \arg\min_p \|p\|_1 + \frac{\beta}{2}\left\|p - \nabla u^k + \frac{\xi^k}{\beta}\right\|_2^2 = \max\left(\left|\nabla u^k - \frac{\xi^k}{\beta}\right|_2 - \frac{1}{\beta},\ 0\right)\frac{\nabla u^k - \frac{\xi^k}{\beta}}{\left|\nabla u^k - \frac{\xi^k}{\beta}\right|_2}$$
  •    Update $u^{k+1}$ with
$$u^{k+1} = \arg\min_u \frac{\lambda}{2}\|Au - f\|_2^2 + \frac{\beta}{2}\|p^{k+1} - \nabla u\|_2^2 + \langle \xi^k, p^{k+1} - \nabla u \rangle = \frac{\lambda A^* f + \beta \nabla^*\left(p^{k+1} + \frac{\xi^k}{\beta}\right)}{\lambda A^* A + \beta \nabla^* \nabla}$$
  •    Update $\xi^{k+1}$ with $\xi^{k+1} = \xi^k - \beta (p^{k+1} - \nabla u^{k+1})$
  •    $k = k + 1$.
  • end for
  • return $u^{k+1}$
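For concreteness, the following NumPy sketch follows the structure of Algorithm 1 for the denoising case ($A = I$), assuming periodic boundary conditions so that the $u$-update can be solved with FFTs; the parameter values and the sign convention of the dual update are our illustrative choices, not taken from any cited implementation.

```python
import numpy as np

def admm_tv_denoise(f, lam=10.0, beta=1.0, iters=100):
    """Minimal sketch following Algorithm 1 for the denoising case (A = identity),
    assuming periodic boundary conditions so the u-update is solved with FFTs."""
    f = f.astype(float)
    # forward differences and their adjoints (periodic boundaries)
    Dx  = lambda v: np.roll(v, -1, axis=1) - v
    Dy  = lambda v: np.roll(v, -1, axis=0) - v
    DxT = lambda v: np.roll(v, 1, axis=1) - v
    DyT = lambda v: np.roll(v, 1, axis=0) - v

    # eigenvalues of lam*I + beta*grad^T grad for the FFT-based u-update
    delta = np.zeros(f.shape); delta[0, 0] = 1.0
    denom = lam + beta * (np.abs(np.fft.fft2(Dx(delta)))**2
                          + np.abs(np.fft.fft2(Dy(delta)))**2)

    u = f.copy()
    px, py = np.zeros_like(f), np.zeros_like(f)
    xix, xiy = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        # p-update: isotropic soft-thresholding of grad(u) - xi/beta
        qx, qy = Dx(u) - xix / beta, Dy(u) - xiy / beta
        mag = np.sqrt(qx**2 + qy**2)
        scale = np.maximum(mag - 1.0 / beta, 0.0) / np.maximum(mag, 1e-12)
        px, py = scale * qx, scale * qy
        # u-update: solve (lam I + beta grad^T grad) u = lam f + beta grad^T (p + xi/beta)
        rhs = lam * f + beta * (DxT(px + xix / beta) + DyT(py + xiy / beta))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # dual update (ascent step consistent with the augmented Lagrangian in (5))
        xix = xix + beta * (px - Dx(u))
        xiy = xiy + beta * (py - Dy(u))
    return u
```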
Owing to their powerful feature representation capabilities, deep convolutional neural network (CNN)-based image processing methods have been developed and have shown remarkable performance [47,48,49]. A CNN learns by discovering intricate structures in training data and is mainly composed of convolution layers, pooling layers, and fully connected layers. The convolution layer extracts features from high-dimensional data using a set of convolution kernels. The extracted features are then passed to the pooling layer, which divides them into disjoint regions and takes the mean (or maximum) activation over each region to obtain pooled convolved features. The fully connected layer is then used for classification. For color images, there are also quaternion-based CNN (QCNN) models that are considered better than real-valued CNNs in terms of color preservation and parameter reduction [40,50,51,52]. The basic modules in the quaternion system are introduced in Section 2.3.

2.2. Quaternion

The quaternion was proposed by Hamilton in 1843 [17]. As mentioned before, the quaternion number system is an extension of the complex numbers. A quaternion $\dot{x}$ is usually represented as a linear combination of a real part and three imaginary parts, i.e.,
$$\dot{x} = x_0 + x_1 i + x_2 j + x_3 k,$$
where $x_0$ is the real part and $x_1, x_2, x_3$ are the imaginary parts of the quaternion $\dot{x}$; $i, j, k$ are the fundamental quaternion units, which satisfy
$$i^2 = j^2 = k^2 = ijk = -1$$
and
$$ij = k, \quad jk = i, \quad ki = j, \quad ik = -j, \quad kj = -i, \quad ji = -k.$$
The quaternion unit rules imply that multiplication is not commutative. The Hamilton product of two quaternions, $\dot{x} = x_0 + x_1 i + x_2 j + x_3 k$ and $\dot{y} = y_0 + y_1 i + y_2 j + y_3 k$, is
$$\dot{x}\dot{y} = x_0 y_0 - x_1 y_1 - x_2 y_2 - x_3 y_3 + (x_0 y_1 + x_1 y_0 + x_2 y_3 - x_3 y_2)\,i + (x_0 y_2 - x_1 y_3 + x_2 y_0 + x_3 y_1)\,j + (x_0 y_3 + x_1 y_2 - x_2 y_1 + x_3 y_0)\,k.$$
Physically, $\dot{x}\dot{y}$ is rotation $\dot{x}$ followed by rotation $\dot{y}$. The product $\dot{x}\dot{y}$ can also be written as
$$\dot{x}\dot{y} = \begin{bmatrix} x_0 & -x_1 & -x_2 & -x_3 \\ x_1 & x_0 & -x_3 & x_2 \\ x_2 & x_3 & x_0 & -x_1 \\ x_3 & -x_2 & x_1 & x_0 \end{bmatrix} \begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{bmatrix}.$$
The dot product of $\dot{x}$ and $\dot{y}$ is
$$\langle \dot{x}, \dot{y} \rangle = x_0 y_0 + x_1 y_1 + x_2 y_2 + x_3 y_3.$$
The conjugate of a quaternion $\dot{x}$ is $\dot{x}^* = x_0 - x_1 i - x_2 j - x_3 k$, the modulus is $|\dot{x}| = \sqrt{\dot{x}\dot{x}^*} = \sqrt{x_0^2 + x_1^2 + x_2^2 + x_3^2}$ (one can check this using Equation (8)), the inverse is $\dot{x}^{-1} = \dot{x}^* / |\dot{x}|^2$, and the inverse of the product $\dot{x}\dot{y}$ is
$$(\dot{x}\dot{y})^{-1} = \frac{(\dot{x}\dot{y})^*}{|\dot{x}\dot{y}|^2} = \frac{\dot{y}^* \dot{x}^*}{|\dot{y}|^2 |\dot{x}|^2} = \frac{\dot{y}^*}{|\dot{y}|^2}\,\frac{\dot{x}^*}{|\dot{x}|^2} = \dot{y}^{-1}\dot{x}^{-1}.$$
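The quaternion operations above translate directly into code. The following NumPy sketch (our illustration; the coefficient ordering [x0, x1, x2, x3] is an implementation choice) implements the Hamilton product, conjugate, modulus, and inverse, and checks that $\dot{x}\dot{x}^{-1} \approx 1$ and that multiplication is not commutative:

```python
import numpy as np

def qmul(x, y):
    """Hamilton product of quaternions stored as [x0, x1, x2, x3]."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    return np.array([
        x0*y0 - x1*y1 - x2*y2 - x3*y3,
        x0*y1 + x1*y0 + x2*y3 - x3*y2,
        x0*y2 - x1*y3 + x2*y0 + x3*y1,
        x0*y3 + x1*y2 - x2*y1 + x3*y0,
    ])

def qconj(x):
    return np.array([x[0], -x[1], -x[2], -x[3]])

def qmod(x):
    return float(np.sqrt(np.sum(np.asarray(x, dtype=float)**2)))

def qinv(x):
    return qconj(x) / qmod(x)**2

x = np.array([1.0, 2.0, -0.5, 3.0])
print(qmul(x, qinv(x)))          # ~ [1, 0, 0, 0]
print(qmul(x, [0, 1, 0, 0]))     # x * i
print(qmul([0, 1, 0, 0], x))     # i * x, different from x * i
```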
If $|\dot{x}| = 1$, we call $\dot{x}$ a unit quaternion. If $\Re(\dot{x}) = x_0 = 0$, we call $\dot{x}$ a pure quaternion, where $\Re(\cdot)$ denotes the real part of a quaternion. The rules of quaternion matrix derivatives required for image processing are listed in Table 1; we refer the reader to [53] for details. With the derivative rules for quaternion functions, we can solve quaternion-based models directly. Suppose the nuclear norm model is extended to the quaternion domain as
$$\min_{\dot{u}} \frac{\lambda}{2}\|\dot{A}\dot{u} - \dot{f}\|_2^2 + \|\dot{u}\|_*,$$
where $\dot{u}$ is the desired image, $\dot{A}$ is the linear operator, $\dot{f}$ is the observation, and $\|\dot{u}\|_*$ is the nuclear norm of $\dot{u}$, which sums the singular values of $\dot{u}$. Before solving (11), we give the definition of the quaternion singular value decomposition (QSVD). Let $\dot{S} \in \mathbb{H}^{m \times n}$; then there exist two unitary quaternion matrices $\dot{U} \in \mathbb{H}^{m \times m}$ and $\dot{V} \in \mathbb{H}^{n \times n}$ such that $\dot{U}\dot{S}\dot{V}^* = \Sigma$, where $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_s)$, the $\sigma_i \ge 0$ are the singular values of $\dot{S}$, and $s = \min(m, n)$.
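In practice, one common way to compute the quaternion singular values is through the complex adjoint representation: writing $\dot{S} = A + Bj$ with complex matrices $A = S_0 + S_1 i$ and $B = S_2 + S_3 i$, the $2m \times 2n$ complex matrix $\begin{bmatrix} A & B \\ -\bar{B} & \bar{A} \end{bmatrix}$ has the singular values of $\dot{S}$, each appearing twice. The sketch below is our illustration of this idea, not the implementation used in the cited works:

```python
import numpy as np

def qsvd_singular_values(S0, S1, S2, S3):
    """Singular values of the quaternion matrix S = S0 + S1 i + S2 j + S3 k,
    computed from its complex adjoint representation."""
    A = S0 + 1j * S1
    B = S2 + 1j * S3
    chi = np.block([[A, B], [-np.conj(B), np.conj(A)]])
    sv = np.linalg.svd(chi, compute_uv=False)   # each value appears twice
    return sv[::2]                              # keep one copy of each

# sanity check on a random quaternion matrix
m, n = 4, 3
parts = [np.random.randn(m, n) for _ in range(4)]
print(qsvd_singular_values(*parts))
```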
Let $\dot{p} = \dot{u}$. With the ADMM algorithm, one obtains the augmented Lagrangian function, which is similar to (5):
$$\mathcal{L}(\dot{u}, \dot{p}; \dot{\xi}) = \frac{\lambda}{2}\|\dot{A}\dot{u} - \dot{f}\|_2^2 + \|\dot{p}\|_* + \frac{\beta}{2}\|\dot{p} - \dot{u}\|_2^2 + \langle \dot{\xi}, \dot{p} - \dot{u} \rangle.$$
Then we have
$$\begin{aligned}
\dot{u}^{k+1} &= \arg\min_{\dot{u}} \frac{\lambda}{2}\|\dot{A}\dot{u} - \dot{f}\|_2^2 + \frac{\beta}{2}\left\|\dot{u} - \left(\dot{p}^k + \frac{\dot{\xi}^k}{\beta}\right)\right\|_2^2, \\
\dot{p}^{k+1} &= \arg\min_{\dot{p}} \|\dot{p}\|_* + \frac{\beta}{2}\left\|\dot{p} - \left(\dot{u}^{k+1} - \frac{\dot{\xi}^k}{\beta}\right)\right\|_2^2, \\
\dot{\xi}^{k+1} &= \dot{\xi}^k - \beta\left(\dot{p}^{k+1} - \dot{u}^{k+1}\right).
\end{aligned}$$
According to Table 1, the optimality condition for the $\dot{u}$-subproblem is
$$\frac{\lambda}{2}\cdot\frac{1}{2}\big(\dot{A}^*(\dot{A}\dot{u} - \dot{f})\big)^* + \frac{\beta}{2}\cdot\frac{1}{2}\left(\dot{u} - \left(\dot{p}^k + \frac{\dot{\xi}^k}{\beta}\right)\right)^* = \frac{\lambda}{4}(\dot{A}\dot{u} - \dot{f})^*\dot{A} + \frac{\beta}{4}\left(\dot{u} - \left(\dot{p}^k + \frac{\dot{\xi}^k}{\beta}\right)\right)^* = 0,$$
thus, the solution is
$$\dot{u}^{k+1} = \frac{\lambda \dot{A}^* \dot{f} + \beta\left(\dot{p}^k + \frac{\dot{\xi}^k}{\beta}\right)}{\lambda \dot{A}^* \dot{A} + \beta}.$$
For the $\dot{p}$-subproblem, the QSVD directly gives the closed-form solution. If the regularizer is the TV term instead, we first rewrite the $\dot{p}$-subproblem as
$$\min_{\dot{p}} \|\dot{p}\|_1 + \frac{\beta}{2}\left\|\dot{p} - \dot{u}^k + \frac{\dot{\xi}^k}{\beta}\right\|_2^2.$$
Let $\dot{p}_i$ be the $i$-th element of $\dot{p}$; then we have
$$E = \|\dot{p}_i\|_1 + \frac{\beta}{2}\left\|\dot{p}_i - \left(\dot{u}_i^k - \frac{\dot{\xi}_i^k}{\beta}\right)\right\|_2^2,$$
and
$$\frac{\partial E}{\partial \dot{p}_i} = \frac{\beta\left(\dot{p}_i - \left(\dot{u}_i^k - \frac{\dot{\xi}_i^k}{\beta}\right)\right)}{4} + \frac{\overline{\dot{p}_i}}{4|\dot{p}_i|}.$$
Setting $\frac{\partial E}{\partial \dot{p}_i} = 0$ and letting $\dot{y}_i = \dot{u}_i^k - \frac{\dot{\xi}_i^k}{\beta}$, we obtain $\frac{\overline{\dot{p}_i}}{|\dot{p}_i|} = \frac{\overline{\dot{y}_i}}{|\dot{y}_i|}$. By distinguishing the cases $|\dot{y}_i| > \frac{1}{\beta}$ and $|\dot{y}_i| \le \frac{1}{\beta}$, we have
$$\dot{p}_i = \frac{\dot{y}_i}{|\dot{y}_i|} \cdot \max\left(|\dot{y}_i| - \frac{1}{\beta},\ 0\right).$$
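The closed-form $\dot{p}$-update above is an element-wise quaternion soft-thresholding, sketched below in NumPy (our illustration; quaternions are stored as length-4 coefficient vectors and the threshold $1/\beta$ follows the subproblem above):

```python
import numpy as np

def quaternion_soft_threshold(y, beta):
    """Element-wise quaternion soft-thresholding: y has shape (..., 4) with
    coefficients [y0, y1, y2, y3]; returns (y/|y|) * max(|y| - 1/beta, 0)."""
    mag = np.sqrt(np.sum(y**2, axis=-1, keepdims=True))      # |y_i|
    scale = np.maximum(mag - 1.0 / beta, 0.0) / np.maximum(mag, 1e-12)
    return scale * y
```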
The visual performance of the quaternion-based method is shown in Figure 3 and Figure 4. We use the code at https://github.com/Huang-chao-yan/QWNNM (accessed on 11 December 2020) from [54]. In Figure 3, we add an average blur with a kernel of size 9 and Gaussian noise with noise level σ = 20; in Figure 4, we add a Gaussian blur with kernel parameters [25, 1.6] and Gaussian noise with noise level 20. The results show that color spots are still visible in the output of the real-valued weighted nuclear norm minimization (WNNM)-based method proposed in [55], whereas the quaternion-based WNNM [54] better preserves the detailed structure of the image.

2.3. Quaternion Modules

The convolution process is defined in a real-valued space by convolving a filter matrix with a vector. In a QCNN, a quaternion filter matrix and a quaternion vector are convolved. We denote the quaternion weight filter matrix as $\dot{w} = w_0 + w_1 i + w_2 j + w_3 k$ and the quaternion input as $\dot{u} = u_0 + u_1 i + u_2 j + u_3 k$; then the quaternion convolution is defined via the Hamilton product as
$$\dot{w}\dot{u} = (w_0 u_0 - w_1 u_1 - w_2 u_2 - w_3 u_3) + (w_0 u_1 + w_1 u_0 + w_2 u_3 - w_3 u_2)\,i + (w_0 u_2 - w_1 u_3 + w_2 u_0 + w_3 u_1)\,j + (w_0 u_3 + w_1 u_2 - w_2 u_1 + w_3 u_0)\,k.$$
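A minimal sketch of this operation for 2-D feature maps is given below; each output part combines four real-valued 2-D convolutions according to the Hamilton rule. This is our illustration using SciPy, not the implementation of any cited QCNN (which would typically use cross-correlation with learned kernels):

```python
import numpy as np
from scipy.signal import convolve2d

def quaternion_conv2d(u, w):
    """u and w are lists of four 2-D arrays holding the [real, i, j, k] parts
    of the quaternion input and filter; each pairwise term is an ordinary 2-D
    convolution, combined by the Hamilton-product rule."""
    u0, u1, u2, u3 = u
    w0, w1, w2, w3 = w
    c = lambda x, kern: convolve2d(x, kern, mode='same')   # output sized like x
    r = c(u0, w0) - c(u1, w1) - c(u2, w2) - c(u3, w3)
    i = c(u1, w0) + c(u0, w1) + c(u3, w2) - c(u2, w3)
    j = c(u2, w0) - c(u3, w1) + c(u0, w2) + c(u1, w3)
    k = c(u3, w0) + c(u2, w1) - c(u1, w2) + c(u0, w3)
    return [r, i, j, k]
```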
In a CNN, the fully connected layer is defined as $f = \phi(wu + b)$. In a QCNN, the quaternion fully connected layer is usually defined as
$$\dot{f} = \phi(\dot{w}\dot{u} + \dot{b}),$$
where $\dot{b}$ is the bias and $\phi(\cdot)$ is an activation function. With the rectified linear unit (ReLU) activation function applied in a split fashion, the final result is
$$\dot{f} = \mathrm{ReLU}(z_0) + \mathrm{ReLU}(z_1)\,i + \mathrm{ReLU}(z_2)\,j + \mathrm{ReLU}(z_3)\,k$$
with $\dot{z} = z_0 + z_1 i + z_2 j + z_3 k = \dot{w}\dot{u} + \dot{b}$. Due to the weight sharing induced by the Hamilton product, the number of free parameters in $\dot{w}$ is one quarter of that of the corresponding real-valued layer; thus, QCNNs can be built with $1/4$ of the parameters required by their real-valued counterparts. In this case, the design of quaternion-specific activation functions and other general quaternion modules may help to further improve QCNN models. Overall, the benefits of QCNNs can be summarized as follows:
  • Reduced network size: QCNNs can represent weights using fewer parameters than traditional CNNs, thereby reducing the overall size of the network.
  • Improved performance: QCNNs outperform traditional CNNs on numerous tasks, particularly those involving 3D data, such as video analysis and computer vision.
  • Efficient computation: Quaternion operations can be efficiently implemented using GPUs, resulting in fast training and inference times.

3. Traditional Methods

Based on the aforementioned quaternion rules and definitions, the quaternion representation is widely applied in color image processing. In this section, we will provide an overview of the main contributions in six aspects. Firstly, TV-based methods are discussed in Section 3.1; secondly, low-rank-based and sparse-based models are reviewed in Section 3.2; thirdly, moment-based models are introduced in Section 3.3; fourthly, decomposition-based models are presented in Section 3.4; fifthly, transformation-based models are reviewed in Section 3.5; finally, other significant models are summarized in Section 3.6.

3.1. TV-Based Models

As mentioned in Section 2.1, the total variation (TV)-based model is defined by Equation (3). Due to the model’s effectiveness, many works have improved it, such as non-local TV and TVp models, and quaternion-based counterparts exist as well. For example, Liu et al. [56] extended the fractional-order TV with the $l_p$ norm to the quaternion domain for image super-resolution. The non-local TV was extended to the quaternion domain with the unit transform for image denoising [57]. Jia et al. [58] applied the quaternion representation in the HSV color space and proposed a saturation-value TV (SV-TV) model for image denoising and deblurring. Voronin et al. [59] proposed an automated segmentation analysis based on the modified Chan and Vese method using a quaternion anisotropic TV algorithm on the Merced data (https://vision.ucmerced.edu/datasets/, accessed on 1 March 2023). Wu et al. [18] extended the $l_1/l_2$ regularizer to the quaternion domain for image segmentation on the Weizmann (https://www.wisdom.weizmann.ac.il/vision/Seg_Evaluation_DB/dl.html, accessed on 1 March 2023) and Berkeley (https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/, accessed on 1 March 2023) datasets. These works extended the original TV-based models into the quaternion domain, and some of them explored the theoretical properties of the proposed quaternion models. Furthermore, the experimental results illustrated the superiority of the quaternion representation: by representing color images as a whole, the color information between color channels can be well preserved.

3.2. Low-Rank-Based and Sparse-Based Models

The basic low-rank minimization problem is
$$\min_u \|u - f\|_2^2, \quad \text{s.t. } \mathrm{rank}(u) \le r,$$
where $\mathrm{rank}(u)$ is the rank of the matrix $u$ and $r$ is a positive integer. The above constraint has been reformulated with other rank functions that better capture low-rank properties, such as the Schatten-$\gamma$ norm, the nuclear norm, and the logarithm, Laplace, and Geman functions. For the quaternion representation, Chen, Xiao, and Zhou [60] extended low-rank regularizer-based models (Laplace, Geman, weighted Schatten-$\gamma$) to the quaternion domain, showing that quaternion low-rank representations outperform real-valued low-rank methods in color image inpainting and denoising. In [54,61], the weighted nuclear norm was extended to the quaternion domain for image denoising and deblurring. Other low-rank variants of the quaternion representation can be found in [62,63].
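For reference, when no further structure is imposed, the closest rank-$r$ matrix in the least-squares sense is given by the truncated SVD (Eckart–Young). The real-valued NumPy sketch below is our illustration; quaternion variants replace the SVD with the QSVD of Section 2.2:

```python
import numpy as np

def best_rank_r_approx(f, r):
    """Best rank-r approximation of a matrix f in the least-squares sense."""
    U, s, Vt = np.linalg.svd(f, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]   # keep the r largest singular triplets
```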
Let D be the dictionary matrix and α be a sparse coefficient matrix. The core sparse representation problem involves finding the sparsest α that satisfies u = D α . Mathematically, the sparse decomposition problem becomes
$$\min_\alpha \|\alpha\|_0, \quad \text{s.t. } \|u - D\alpha\|_2^2 \le \epsilon^2,$$
where $\|\alpha\|_0 = \#\{i : \alpha_i \ne 0,\ i = 1, 2, \dots, k\}$ is the $l_0$ norm of $\alpha$, which counts its non-zero entries, and $k$ denotes the number of elements. Based on this formulation, sparse representation theory was developed by Elad et al. [64] for learned dictionaries. To preserve color information, Yu et al. [65] proposed a quaternion online dictionary-learning model for image super-resolution. Reference [66] extended collaborative representation-based classification (CRC) and sparse representation-based classification (SRC) to the quaternion domain for face recognition. Xu et al. [67] proposed a quaternion sparse representation with the dictionary learning algorithm K-QSVD (generalized k-means clustering for quaternion singular value decomposition) and QOMP (quaternion orthogonal matching pursuit) for color image reconstruction, denoising, inpainting, and super-resolution. Following this work, Wu et al. [68] improved the sparse representation by combining it with the TV regularizer for image denoising. Meanwhile, the TV term was replaced by a more efficient regularizer, SV-TV, in [69]; they also improved the sparse prior by training the dictionary on the DIV2K dataset (https://data.vision.ee.ethz.ch/cvl/DIV2K/, accessed on 1 March 2023). Moreover, Liu et al. [70] combined quaternion-based total variation and sparse dictionary learning for super-resolution on the infrared LTIR dataset (http://www.cvl.isy.liu.se/research/datasets/ltir/version1.0/, accessed on 1 March 2023) and IRData (http://www.dgp.toronto.edu/nmorris/data/IRData/, accessed on 1 March 2023). Furthermore, block sparse representation was extended to the quaternion domain for face recognition in [71]. A sparse quaternion Welsch estimator was introduced to measure the quaternion residual error [72]; this estimator can largely suppress the impact of large data corruption and outliers.
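As a concrete reference for the sparse-coding step these works build on, here is a minimal real-valued orthogonal matching pursuit (OMP) sketch, assuming the dictionary columns are normalized; quaternion variants such as QOMP replace the inner products and least-squares fits with their quaternion counterparts:

```python
import numpy as np

def omp(D, u, k):
    """Greedily select k atoms of the dictionary D (columns assumed normalized)
    to approximate the signal u; returns the sparse coefficient vector."""
    residual = u.astype(float).copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ residual)))     # most correlated atom
        support.append(idx)
        coeffs, *_ = np.linalg.lstsq(D[:, support], u, rcond=None)
        residual = u - D[:, support] @ coeffs            # orthogonalized residual
    alpha[support] = coeffs
    return alpha
```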
On the other hand, principal component analysis (PCA) is another popular technique for sparse and low-rank minimization problems, which can analyze large, high-dimensional datasets. In robust PCA theory, the observed image $f$ is modeled as $f = u + s$, where $u$ is the target low-rank matrix and $s$ is a sparse matrix that usually represents the corruption. Under this assumption, one can recover $u$ by solving
$$\min_{u,s} \|u\|_* + \lambda \|s\|_1, \quad \text{s.t. } u + s = f,$$
where $\|\cdot\|_*$ is the nuclear norm, $\|\cdot\|_1$ is the $l_1$ norm, and $\lambda$ is a positive parameter. Equation (25) was extended to the quaternion domain in [73] for image inpainting with a theoretical guarantee. Shi and Funt [74] extended PCA to the quaternion domain and derived a low-dimensional basis for color texture segmentation; their model demonstrated the advantage of representing and analyzing an image as a single entity. Due to the competitive performance of quaternion-based PCA and its theoretical guarantees, there are many related works [75]. For example, Wang et al. [76] proposed a robust subspace learning method with PCA for face recognition on the Georgia Tech face dataset (https://computervisiononline.com/dataset/1105138700, accessed on 1 March 2023) and the color FERET dataset (https://www.nist.gov/itl/iad/imagegroup/color-feret-database, accessed on 1 March 2023). Sun et al. [77] suggested modified two-dimensional principal component analysis (2DPCA) and bidirectional principal component analysis (BDPCA) methods based on the quaternion matrix to recognize and reconstruct face images. Jia et al. [78] presented a quaternion-based 2DPCA for face recognition.

3.3. Moment-Based Models

Owing to their image descriptions and invariance properties, moments are scalar quantities widely used in image processing. Various types of moment functions have been constructed, such as orthogonal moments, Zernike moments, exponent moments, and Chebyshev–Fourier moments. Due to the effectiveness of moment-based models, they have been extended to the quaternion domain as well [79]. For example, Wang et al. [80] proposed a robust watermarking model with local quaternion exponent moments. In [81], the Chebyshev moment was extended to the quaternion domain by designing a quaternion radial-substituted Chebyshev moment. Moreover, the quaternion-weighted spherical Bessel–Fourier moment (QSBFM) was proposed in [82] and the application of color image reconstruction and object recognition in the CVG-UGR dataset (http://decsai.ugr.es/cvg/dbimagenes/index.php, accessed on 1 March 2023), Amsterdam Library (https://ccia.ugr.es/cvg/dbimagenes/index.php, accessed on 1 March 2023), and Columbia Library (https://www.cs.columbia.edu/CAVE/software/softlib/coil-100.php, accessed on 1 March 2023) illustrated the effectiveness of the quaternion representation. In [83], the discrete orthogonal moment was applied to neural networks for color face recognition. Other moment-based models, such as the quaternion Fourier–Mellin moment [84] and the quaternion radial moment [85], demonstrated the competitiveness of quaternion-based models. However, the computational costs of quaternion moments are high. Deriving a fast quaternion moment algorithm may be a significant challenge.

3.4. Decomposition-Based Models

For color image processing, one effective approach is to regard the color image as a matrix; with the help of matrix analysis, the desired image can be recovered. The QR decomposition has been used to handle linear least squares problems. It is a matrix factorization technique that decomposes an $m \times n$ matrix $A$ into the product of two matrices, a matrix $Q$ with orthonormal columns and an upper triangular matrix $R$:
$$A = QR,$$
where $Q$ is an $m \times n$ matrix with orthonormal columns (i.e., $Q^T Q = I$) and $R$ is an $n \times n$ upper triangular matrix. QR decomposition is also widely used in color image processing. Later, the quaternion QR decomposition was introduced for better handling of color images, where the matrices in (26) lie in the quaternion domain. The main difference from the real-valued QR decomposition is that the quaternion QR decomposition factorizes a quaternion matrix into the product of a unitary quaternion matrix and an upper triangular quaternion matrix. Generally, the quaternion QR decomposition is more computationally expensive than the real-valued one due to the additional complexity of quaternion arithmetic; however, it has applications in color image processing and usually performs better than the real-valued QR decomposition. In [86], QR decomposition in the quaternion domain was applied to watermarking, and blind watermarking in the quaternion domain with QR decomposition was proposed in [87]. Similarly, the Schur decomposition and the singular value decomposition (SVD) have also been extended to quaternions [88,89,90]. In particular, He et al. [91] applied quaternion matrix decompositions in control systems and watermarking. A face recognition method using wavelet decomposition and quaternion correlation filters was proposed in [92]. Kumar et al. [93] proposed a medical image super-resolution model with the quaternion wavelet transform (QWT) and SVD. Miao et al. [94] proposed a quaternion higher-order SVD method for image fusion and denoising and demonstrated its effectiveness on the MFFW (https://www.researchgate.net/profile/Xu-Shuang-3/publication/350965471_MFFW/data/607d2a6d881fa114b411103c/MFFW.zip, accessed on 1 March 2023) and Lytro (http://clim.inria.fr/IllumDatasetLF/index.html, accessed on 1 March 2023) datasets.
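The short NumPy example below illustrates the real-valued (reduced) QR decomposition as a reference point; it is our illustration only, and a quaternion QR routine would instead return a unitary quaternion factor:

```python
import numpy as np

A = np.random.randn(6, 4)
Q, R = np.linalg.qr(A, mode='reduced')    # Q: 6x4 with orthonormal columns, R: 4x4 upper triangular
print(np.allclose(A, Q @ R))              # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(4)))    # True: columns of Q are orthonormal
print(np.allclose(R, np.triu(R)))         # True: R is upper triangular
```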

3.5. Transformation-Based Models

One of the classical quaternion representations is the quaternion unit transform [95]. A unit quaternion $\dot{p}$ is defined as
$$\dot{p} = \cos\theta + \frac{1}{\sqrt{3}}\mu\sin\theta = \cos\theta + \frac{1}{\sqrt{3}}\big[(\sin\theta)\,i + (\sin\theta)\,j + (\sin\theta)\,k\big],$$
where $\mu = i + j + k$ is the pure imaginary axis. Then the unit transform of a color image $\dot{u} = u_r i + u_g j + u_b k$ is defined as
$$\dot{t} = \dot{p}\dot{u}\dot{p}^* = \left(\cos\theta + \frac{1}{\sqrt{3}}\sin\theta\,(i+j+k)\right)(u_r i + u_g j + u_b k)\left(\cos\theta - \frac{1}{\sqrt{3}}\sin\theta\,(i+j+k)\right) = \cos 2\theta\,(u_r i + u_g j + u_b k) + \frac{2}{3}\mu\sin^2\theta\,(u_r + u_g + u_b) + \frac{1}{\sqrt{3}}\sin 2\theta\,\big[(u_b - u_g)\,i + (u_r - u_b)\,j + (u_g - u_r)\,k\big] = Y_{RGB} + Y_{\Delta} + Y_I,$$
where $Y_{RGB} = \cos 2\theta\,(u_r i + u_g j + u_b k)$ represents the RGB space component, $Y_I = \frac{2}{3}\mu\sin^2\theta\,(u_r + u_g + u_b)$ is the intensity, and $Y_{\Delta} = \frac{1}{\sqrt{3}}\sin 2\theta\,\big[(u_b - u_g)\,i + (u_r - u_b)\,j + (u_g - u_r)\,k\big]$ denotes the color difference. With the quaternion unit transform (28), many excellent image processing works have been proposed. Geng, Hu, and Xiao [96] proposed a quaternion switching filter for impulse noise reduction; they employed the color difference to detect whether the center pixel in a filtering window is noisy. Later, a two-stage method with the quaternion unit transform was proposed for removing impulse noise [97]. In 2019, Li, Zhou, and Zhang [57] extended the classical non-local total variation to the quaternion domain with the unit transform for color image denoising. A similar representation was also applied in [98] for color image enhancement.
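The decomposition in (28) is straightforward to apply pixel-wise; the NumPy sketch below is our illustration of the formula (returning the three components as arrays of (i, j, k) coefficients), not code from the cited works:

```python
import numpy as np

def quaternion_unit_transform(img, theta):
    """Split an H x W x 3 RGB image into Y_RGB, Y_Delta, Y_I according to the
    unit transform above; each output is an H x W x 3 array of (i, j, k) parts."""
    img = img.astype(float)
    ur, ug, ub = img[..., 0], img[..., 1], img[..., 2]
    y_rgb = np.cos(2 * theta) * img
    s = (ur + ug + ub)[..., None]                              # u_r + u_g + u_b
    y_i = (2.0 / 3.0) * np.sin(theta) ** 2 * s * np.ones(3)    # along mu = i + j + k
    diff = np.stack([ub - ug, ur - ub, ug - ur], axis=-1)
    y_delta = np.sin(2 * theta) / np.sqrt(3.0) * diff
    return y_rgb, y_delta, y_i
```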
With the quaternion Fourier transform (QFT), Bas, Bihan, and Chassery [99] presented an image watermarking scheme. Later, a blind color image watermarking method based on the quaternion Fourier transform and least squares support vector machine was proposed in [100]. An image watermarking approach based on quaternion discrete Fourier transform and an improved uniform log-polar mapping was introduced in [101]. Combining the superpixel image segmentation and QWT, Niu et al. [102] proposed a novel image watermarking approach. Grigoryan and Agaian [103] proposed an image restoration model with the Wiener filter and quaternion Fourier transform, which can handle denoising and deblurring tasks. Wang et al. [104] applied the QWT for a no-reference stereoscopic image quality assessment. Other transforms, such as the quaternion polar harmonic transform [105] and discrete wavelet transform [106], were also extended to the quaternion domain with better performance.

3.6. Other Models

Besides the above models, there are other notable works in the quaternion domain. For example, the quaternion Gabor filter (QGF) [107,108] was introduced to extract local orientation information. Later, Li et al. [109] developed a multiscale QGF to describe texture attributes. Zou et al. [110] utilized linear regression classification and collaborative representation in the quaternion domain for face recognition on the SCface (https://www.scface.org/, accessed on 1 March 2023), AR (https://www2.ece.ohio-state.edu/aleix/ARdatabase.html, accessed on 1 March 2023), and Caltech (https://www.vision.caltech.edu/datasets/caltech_10k_webfaces/, accessed on 1 March 2023) datasets. Liu et al. [111] proposed a quaternion-based maximum margin criterion (QMMC) algorithm for face recognition.

4. Deep Learning

Deep convolutional neural networks (CNNs) have shown great potential in computer vision, and quaternion-based convolutional neural networks (QCNNs) extend this potential to color images. In [112], the authors proposed a quaternion-based approach for unsupervised feature learning that enables the joint encoding of intensity and color information. Later, they introduced unsupervised learning of quaternion feature filters and feature encoding [113]. Combining traditional PCA theory, Zeng et al. [114] proposed a quaternion PCA network (QPCANet) for color image classification. In [50], the basic modules, such as the convolution layer and the fully connected layer, were designed in the quaternion domain, which helped establish fully quaternion convolutional neural networks. Later, a quaternion weight initialization scheme and algorithms for quaternion batch normalization were introduced [115]. Yin et al. [116] derived quaternion batch normalization and pooling operations and incorporated the attention mechanism to boost the performance of QCNNs.
The classification and forensics results on the Uncompressed Colour Image Database (UCID) (https://qualinet.github.io/databases/image/uncompressed_colour_image_database_ucid/, accessed on 1 March 2023) illustrate the efficiency of a quaternion-based network. A similar idea was shown in [117]. By independently learning both internal and external relations, and with fewer parameters than a real-valued convolutional encoder–decoder, Reference [118] investigated the impact of the Hamilton product on a color image reconstruction task. The results on the Kodak dataset (https://r0k.us/graphics/kodak/, accessed on 1 March 2023) showed that the quaternion convolutional encoder–decoder can perfectly reconstruct unseen color information. Jin et al. [119] incorporated deformable quaternion Gabor filters into the convolutional neural network and applied the proposed model to facial expression recognition on the Oulu-CASIA (https://www.v7labs.com/opendatasets/oulu-casia, accessed on 1 March 2023), MMI (https://mmifacedb.eu/, accessed on 1 March 2023), and SFEW (https://cs.anu.edu.au/few/AFEW.html, accessed on 1 March 2023) datasets. At the same time, Zhou et al. [120] proposed a deep CNN with a Gabor attention module for facial expression recognition. Later, quaternion representations were added to attention networks for classification [38]. More specifically, axial-attention modules were supplemented with quaternion input representations to improve image classification accuracy on the ImageNet300k dataset (https://deepai.org/machine-learning-glossary-and-terms/imagenet, accessed on 1 March 2023). Considering different types of noise, Cao et al. [121] proposed a convolutional attention-denoising network to remove random-valued impulse noise. Classical real-valued CNN models were also extended to the quaternion domain. For example, EdgeNet [122] was modified by proposing an end-to-end trainable quaternion-based super-resolution network (QSRNet) [123]. The experiments on image super-resolution demonstrate that the local and global interrelationships between the channels can be better maintained with fewer parameters.
In [124], a quaternion residual unit was employed to capture the interdependencies in a multidimensional input on the DCASE19 (https://zenodo.org/record/2589280#.ZBLAuOxBzJx, accessed on 1 March 2023) and DCASE20 (https://zenodo.org/record/3670167#.ZBLAEOxBzJw, accessed on 1 March 2023) datasets. As a result, quaternion encoding can increase accuracy with fewer parameters. Frants et al. [125] proposed a quaternion-based multi-stage multiscale neural network with a self-attention module for rain streak removal; they replaced all convolutional layers with quaternion convolution layers and replaced the ReLU activation layer with its quaternion split version. Later, they proposed a single-image dehazing model based on quaternion neural networks [126]. EI et al. [83] added quaternion discrete orthogonal moments to a deep neural network to extract compact and pertinent features. Their recognition performance on datasets such as Faces94 (https://cmp.felk.cvut.cz/spacelib/faces/faces94.html, accessed on 1 March 2023), Faces95 (https://cmp.felk.cvut.cz/spacelib/faces/faces95.html, accessed on 1 March 2023), Faces96 (https://cmp.felk.cvut.cz/spacelib/faces/faces96.html, accessed on 1 March 2023), Grimace (https://cmp.felk.cvut.cz/spacelib/faces/grimace.html, accessed on 1 March 2023), Georgia Tech Face (https://computervisiononline.com/dataset/1105138700, accessed on 1 March 2023), and FEI (https://fei.edu.br/cet/facedatabase.html, accessed on 1 March 2023) showed the superior performance of the quaternion-based deep neural network. Zhou et al. [127] designed a non-iterative quaternion routing algorithm to integrate quaternion-valued capsule networks. Xu et al. [128] proposed a plug-and-play model for image denoising and inpainting by combining FFDNet [129] and a low-rank (Laplace) function in the quaternion domain; to solve the proposed hybrid non-convex model, the ADMM and the difference of convex algorithm (DCA) were used. Moreover, generative adversarial networks were extended to the quaternion domain in [130].

5. Discussion

The aforementioned quaternion-based methods are classical and representative, and some typical methods are listed in Table 2. Overall, quaternion-based algorithms can be more memory-efficient than traditional methods, which is important when dealing with large datasets. In some cases, quaternion-based methods can provide more accurate results than traditional methods, especially for problems that require rotation-invariant features. Although the quaternion-based models show better performance, there are disadvantages. First of all, the previous description shows that most quaternion-based methods directly extend real-valued methods to the quaternion domain. A model that can better reflect the advantages of the quaternion should be proposed, such as the unit transform-based model, in which an image can be divided into two parts according to the properties of the quaternion. Secondly, the quaternion representation denotes the color image as an entirety, which can keep the interrelationships between color channels and reduce the parameters in CNN-based models. However, this could slow the computation and make it more costly, which could limit their use in real-time applications. An accelerated algorithm should be proposed to optimize the quaternion-based methods. Thirdly, while quaternions are well-suited for representing three dimensions, they may not be as useful in higher dimensions.

6. Conclusions

In this paper, we reviewed classical and representative quaternion-based methods in image processing according to their model types. We divided these models into two categories: traditional and deep learning. Specifically, we introduced TV-based, low-rank-based, sparse-based, moment-based, decomposition-based, transformation-based, and deep learning-based models. We believe that this survey can help academics better understand quaternion-based models and further advance this topic.
Furthermore, quaternion representation has several potential future research directions in color image processing. Firstly, developing new color spaces based on quaternion representation could improve the accuracy and efficiency of color image processing algorithms. For instance, a quaternion-based color space may better capture the spatial and chromatic information in an image compared to traditional color spaces. Secondly, data augmentation techniques, such as rotation and scaling, are commonly used in deep learning to increase the size of training datasets. Using quaternion representations to perform these transformations could improve the robustness and generalization of models trained on color image datasets. Thirdly, QCNNs can use quaternion representations to learn features from color images more effectively than traditional CNNs. Future research could explore the performance of QCNNs by designing the quaternion modules.

Author Contributions

Conceptualization, C.H., J.L. and G.G.; investigation, C.H. and J.L.; resources, C.H. and J.L.; writing—original draft preparation, C.H.; writing—review and editing, C.H., J.L. and G.G.; visualization, C.H. and J.L.; supervision, J.L. and G.G.; funding acquisition, J.L. and G.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the Natural Science Foundation of Shanghai under grant 23ZR1422200, in part by the Shanghai Sailing program under grant 23YF1412800, in part by the Six Talent Peaks Project in Jiangsu Province under grant RJFW-011, and in part by the Open Fund Project of the Provincial Key Laboratory for Computer Information Processing Technology (Soochow University) under grant KJS2274.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
R      real space
C      complex space
H      quaternion space
a      real/complex number
ȧ      quaternion number
A      real/complex matrix
Ȧ      quaternion matrix
Q      quaternion
TV     total variation
QNLTV  quaternion non-local total variation
SV-TV  saturation-value total variation
LRQA   low-rank quaternion approximation
QWNNM  quaternion weighted nuclear norm minimization
QFT    quaternion Fourier transform
QWT    quaternion wavelet transform
CNN    convolutional neural network
QCNN   quaternion-based convolutional neural network
QTrans quaternion-based transformer model
ADMM   alternating direction method of multipliers
FISTA  fast iterative shrinkage thresholding algorithm
DCA    difference of convex algorithm

References

  1. García-Retuerta, D.; Casado-Vara, R.; Martin-del Rey, A.; De la Prieta, F.; Prieto, J.; Corchado, J.M. Quaternion neural networks: State-of-the-art and research challenges. In Proceedings of the Intelligent Data Engineering and Automated Learning–IDEAL 2020: 21st International Conference, Guimaraes, Portugal, 4–6 November 2020; pp. 456–467. [Google Scholar]
  2. Parcollet, T.; Morchid, M.; Linarès, G. A survey of quaternion neural networks. Artif. Intell. Rev. 2020, 53, 2957–2982. [Google Scholar] [CrossRef]
  3. Bayro-Corrochano, E. A survey on quaternion algebra and geometric algebra applications in engineering and computer science 1995–2020. IEEE Access 2021, 9, 104326–104355. [Google Scholar] [CrossRef]
  4. Voronin, V.; Semenishchev, E.; Zelensky, A.; Agaian, S. Quaternion-based local and global color image enhancement algorithm. In Proceedings of the Mobile Multimedia/Image Processing, Security, and Applications 2019, Baltimore, MD, USA, 15 April 2019; Volume 10993, pp. 13–20. [Google Scholar]
  5. Wang, H.; Wang, X.; Zhou, Y.; Yang, J. Color texture segmentation using quaternion-Gabor filters. In Proceedings of the 2006 International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006; pp. 745–748. [Google Scholar]
  6. Yuan, S.; Wang, Q.; Duan, X. On solutions of the quaternion matrix equation AX = B and their applications in color image restoration. Appl. Math. Comput. 2013, 221, 10–20. [Google Scholar] [CrossRef]
  7. Li, Y. Color image restoration by quaternion diffusion based on a discrete version of the topological derivative. In Proceedings of the 2014 7th International Congress on Image and Signal Processing, Dalian, China, 14–16 October 2014; pp. 201–206. [Google Scholar]
  8. Wang, C.; Wang, X.; Xia, Z.; Zhang, C.; Chen, X. Geometrically resilient color image zero-watermarking algorithm based on quaternion exponent moments. J. Vis. Commun. Image Represent. 2016, 41, 247–259. [Google Scholar] [CrossRef]
  9. Rizo-Rodríguez, D.; Méndez-Vázquez, H.; García-Reyes, E. Illumination invariant face recognition using quaternion-based correlation filters. J. Math. Imaging Vis. 2013, 45, 164–175. [Google Scholar] [CrossRef]
  10. Bao, S.; Song, X.; Hu, G.; Yang, X.; Wang, C. Colour face recognition using fuzzy quaternion-based discriminant analysis. Int. J. Mach. Learn. Cybern. 2019, 10, 385–395. [Google Scholar] [CrossRef]
  11. Ranade, S.K.; Anand, S. Color face recognition using normalized-discriminant hybrid color space and quaternion moment vector features. Multimed. Tools Appl. 2021, 80, 10797–10820. [Google Scholar] [CrossRef]
  12. Gai, S.; Yang, G.; Zhang, S. Multiscale texture classification using reduced quaternion wavelet transform. AEU-Int. J. Electron. Commun. 2013, 67, 233–241. [Google Scholar] [CrossRef]
  13. Guo, L.; Dai, M.; Zhu, M. Quaternion moment and its invariants for color object classification. Inf. Sci. 2014, 273, 132–143. [Google Scholar] [CrossRef]
  14. Kumar, R.; Dwivedi, R. Quaternion domain k-means clustering for improved real time classification of E-nose data. IEEE Sens. J. 2015, 16, 177–184. [Google Scholar] [CrossRef]
  15. Lan, R.; Zhou, Y. Quaternion-Michelson descriptor for color image classification. IEEE Trans. Image Process. 2016, 25, 5281–5292. [Google Scholar] [CrossRef] [PubMed]
  16. Liu, L.; Chen, C.P.; Li, S. Learning quaternion graph for color face image super-resolution. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2846–2850. [Google Scholar]
  17. Hamilton, W.R. Theory of quaternions. Proc. R. Ir. Acad. (1836–1869) 1844, 3, 1–16. [Google Scholar]
  18. Wu, T.; Mao, Z.; Li, Z.; Zeng, Y.; Zeng, T. Efficient color image segmentation via quaternion-based L1/L2 regularization. J. Sci. Comput. 2022, 93, 9. [Google Scholar] [CrossRef]
  19. Murali, S.; Govindan, V.; Kalady, S. Quaternion-based image shadow removal. Vis. Comput. 2022, 38, 1527–1538. [Google Scholar] [CrossRef]
  20. Wu, L.; Zhang, X.; Chen, H.; Zhou, Y. Unsupervised quaternion model for blind colour image quality assessment. Signal Process. 2020, 176, 107708. [Google Scholar] [CrossRef]
  21. Yu, L.; Xu, Y.; Xu, H.; Zhang, H. Quaternion-based sparse representation of color image. In Proceedings of the 2013 IEEE International Conference on Multimedia and Expo (ICME), San Jose, CA, USA, 15–19 July 2013; pp. 1–7. [Google Scholar]
  22. Xiao, X.; Zhou, Y. Two-dimensional quaternion PCA and sparse PCA. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 2028–2042. [Google Scholar] [CrossRef]
  23. Gai, S.; Wang, L.; Yang, G.; Yang, P. Sparse representation based on vector extension of reduced quaternion matrix for multiscale image denoising. IET Image Process. 2016, 10, 598–607. [Google Scholar] [CrossRef]
  24. Xiao, X.; Chen, Y.; Gong, Y.J.; Zhou, Y. 2D quaternion sparse discriminant analysis. IEEE Trans. Image Process. 2019, 29, 2271–2286. [Google Scholar] [CrossRef]
  25. Xiao, X.; Zhou, Y. Two-dimensional quaternion sparse principle component analysis. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 1528–1532. [Google Scholar]
  26. Bao, Z.; Gai, S. Reduced quaternion matrix-based sparse representation and its application to colour image processing. IET Image Process. 2019, 13, 566–575. [Google Scholar] [CrossRef]
  27. Yang, H.; Wang, Q.; Wang, Q.; Liu, P.; Huang, W. Facial micro-expression recognition using quaternion-based sparse representation. In Proceedings of the 2020 29th International Conference on Computer Communications and Networks (ICCCN), Honolulu, HI, USA, 3–6 August 2020; pp. 1–9. [Google Scholar]
  28. Ngo, L.H.; Luong, M.; Sirakov, N.M.; Viennet, E.; LeTien, T. Skin lesion image classification using sparse representation in quaternion wavelet domain. Signal Image Video Process. 2022, 16, 1721–1729. [Google Scholar] [CrossRef]
  29. Ngo, L.H.; Sirakov, N.M.; Luong, M.; Viennet, E.; LeTien, T. Image classification based on sparse representation in the quaternion wavelet domain. IEEE Access 2022, 10, 31548–31560. [Google Scholar] [CrossRef]
  30. Meng, B.; Liu, X.; Wang, X. Human action recognition based on quaternion spatial-temporal convolutional neural network and LSTM in RGB videos. Multimed. Tools Appl. 2018, 77, 26901–26918. [Google Scholar] [CrossRef]
  31. Shi, J.; Zheng, X.; Wu, J.; Gong, B.; Zhang, Q.; Ying, S. Quaternion Grassmann average network for learning representation of histopathological image. Pattern Recognit. 2019, 89, 67–76. [Google Scholar] [CrossRef]
  32. Wang, J.; Zhang, Y.; Ma, B.; Dui, G.; Yang, S.; Xin, L. Median filtering forensics scheme for color images based on quaternion magnitude-phase CNN. Comput. Mater. Contin. 2020, 62, 99–112. [Google Scholar] [CrossRef]
  33. Moya-Sánchez, E.U.; Xambo-Descamps, S.; Pérez, A.S.; Salazar-Colores, S.; Martinez-Ortega, J.; Cortes, U. A bio-inspired quaternion local phase CNN layer with contrast invariance and linear sensitivity to rotation angles. Pattern Recognit. Lett. 2020, 131, 56–62. [Google Scholar] [CrossRef]
  34. Singh, S.; Tripathi, B. Pneumonia classification using quaternion deep learning. Multimed. Tools Appl. 2022, 81, 1743–1764. [Google Scholar] [CrossRef] [PubMed]
  35. Wang, Z.; Xu, X.; Wang, G.; Yang, Y.; Shen, H.T. Quaternion relation embedding for scene graph generation. IEEE Trans. Multimed. 2023, 1–12. [Google Scholar] [CrossRef]
  36. Sfikas, G.; Giotis, A.P.; Retsinas, G.; Nikou, C. Quaternion generative adversarial networks for inscription detection in byzantine monuments. In Proceedings of the Pattern Recognition, ICPR International Workshops and Challenges, Virtual, 10–15 January 2021; pp. 171–184. [Google Scholar]
  37. Tay, Y.; Zhang, A.; Tuan, L.A.; Rao, J.; Zhang, S.; Wang, S.; Fu, J.; Hui, S.C. Lightweight and efficient neural natural language processing with quaternion networks. arXiv 2019, arXiv:1906.04393. [Google Scholar]
  38. Shahadat, N.; Maida, A.S. Adding quaternion representations to attention networks for classification. arXiv 2021, arXiv:2110.01185. [Google Scholar]
  39. Chen, W.; Wang, W.; Peng, B.; Wen, Q.; Zhou, T.; Sun, L. Learning to rotate: Quaternion transformer for complicated periodical time series forecasting. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 146–156. [Google Scholar]
  40. Zhang, A.; Tay, Y.; Zhang, S.; Chan, A.; Luu, A.T.; Hui, S.C.; Fu, J. Beyond fully connected layers with quaternions: Parameterization of hypercomplex multiplications with 1/n parameters. arXiv 2021, arXiv:2102.08597. [Google Scholar]
  41. Barthélemy, Q.; Larue, A.; Mars, J.I. Color sparse representations for image processing: Review, models, and prospects. IEEE Trans. Image Process. 2015, 24, 3978–3989. [Google Scholar] [CrossRef] [PubMed]
  42. Huang, Y.; Ng, M.K.; Wen, Y. A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci. 2009, 2, 20–40. [Google Scholar] [CrossRef]
  43. Ng, M.K.; Weiss, P.; Yuan, X. Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods. SIAM J. Sci. Comput. 2010, 32, 2710–2736. [Google Scholar] [CrossRef]
  44. Wen, Y.; Ng, M.K.; Huang, Y. Efficient total variation minimization methods for color image restoration. IEEE Trans. Image Process. 2008, 17, 2081–2088. [Google Scholar] [CrossRef] [PubMed]
  45. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends® Mach. Learn. 2011, 3, 1–122. [Google Scholar]
  46. Glowinski, R.; Le Tallec, P. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics; SIAM: Philadelphia, PA, USA, 1989. [Google Scholar]
  47. Gao, G.; Xu, G.; Li, J.; Yu, Y.; Lu, H.; Yang, J. FBSNet: A fast bilateral symmetrical network for real-time semantic segmentation. IEEE Trans. Multimed. 2022. [Google Scholar] [CrossRef]
  48. Gao, G.; Li, W.; Li, J.; Wu, F.; Lu, H.; Yu, Y. Feature distillation interaction weighting network for lightweight image super-resolution. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 22 February–1 March 2022; Volume 36, pp. 661–669. [Google Scholar]
  49. Li, J.; Fang, F.; Zeng, T.; Zhang, G.; Wang, X. Adjustable super-resolution network via deep supervised learning and progressive self-distillation. Neurocomputing 2022, 500, 379–393. [Google Scholar] [CrossRef]
  50. Zhu, X.; Xu, Y.; Xu, H.; Chen, C. Quaternion convolutional neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 631–647. [Google Scholar]
  51. Parcollet, T.; Ravanelli, M.; Morchid, M.; Linarès, G.; Trabelsi, C.; Mori, R.D.; Bengio, Y. Quaternion recurrent neural networks. arXiv 2018, arXiv:1806.04418. [Google Scholar]
  52. Takahashi, K.; Isaka, A.; Fudaba, T.; Hashimoto, M. Remarks on quaternion neural network-based controller trained by feedback error learning. In Proceedings of the 2017 IEEE/SICE International Symposium on System Integration (SII), Taipei, Taiwan, 11–14 December 2017; pp. 875–880. [Google Scholar]
  53. Xu, D.; Mandic, D.P. The theory of quaternion matrix derivatives. IEEE Trans. Signal Process. 2015, 63, 1543–1556. [Google Scholar] [CrossRef]
  54. Huang, C.; Li, Z.; Liu, Y.; Wu, T.; Zeng, T. Quaternion-based weighted nuclear norm minimization for color image restoration. Pattern Recognit. 2022, 128, 108665. [Google Scholar] [CrossRef]
  55. Ma, L.; Xu, L.; Zeng, T. Low rank prior and total variation regularization for image deblurring. J. Sci. Comput. 2017, 70, 1336–1357. [Google Scholar] [CrossRef]
  56. Liu, X.; Chen, Y.; Peng, Z.; Wu, J.; Wang, Z. Infrared image super-resolution reconstruction based on quaternion fractional order total variation with Lp quasinorm. Appl. Sci. 2018, 8, 1864. [Google Scholar] [CrossRef]
  57. Li, X.; Zhou, Y.; Zhang, J. Quaternion non-local total variation for color image denoising. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 1602–1607. [Google Scholar]
  58. Jia, Z.; Ng, M.K.; Wang, W. Color image restoration by Saturation-Value total variation. SIAM J. Imaging Sci. 2019, 12, 972–1000. [Google Scholar] [CrossRef]
  59. Voronin, V.; Semenishchev, E.; Zelensky, A.; Tokareva, O.; Agaian, S. Image segmentation in a quaternion framework for remote sensing applications. In Proceedings of the Mobile Multimedia/Image Processing, Security, and Applications 2020, Online, 27 April–8 May 2020; Volume 11399, pp. 108–115. [Google Scholar]
  60. Chen, Y.; Xiao, X.; Zhou, Y. Low-rank quaternion approximation for color image processing. IEEE Trans. Image Process. 2019, 29, 1426–1439. [Google Scholar] [CrossRef]
  61. Yu, Y.; Zhang, Y.; Yuan, S. Quaternion-based weighted nuclear norm minimization for color image denoising. Neurocomputing 2019, 332, 283–297. [Google Scholar] [CrossRef]
  62. Yang, L.; Miao, J.; Kou, K.I. Quaternion-based color image completion via logarithmic approximation. Inf. Sci. 2022, 588, 82–105. [Google Scholar] [CrossRef]
  63. Miao, J.; Kou, K.I.; Liu, W. Low-rank quaternion tensor completion for recovering color videos and images. Pattern Recognit. 2020, 107, 107505. [Google Scholar] [CrossRef]
  64. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef]
  65. Yu, M.; Xu, Y.; Sun, P. Single color image super-resolution using quaternion-based sparse representation. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 5804–5808. [Google Scholar]
  66. Zou, C.; Kou, K.I.; Wang, Y. Quaternion collaborative and sparse representation with application to color face recognition. IEEE Trans. Image Process. 2016, 25, 3287–3302. [Google Scholar] [CrossRef]
  67. Xu, Y.; Yu, L.; Xu, H.; Zhang, H.; Nguyen, T. Vector sparse representation of color image using quaternion matrix analysis. IEEE Trans. Image Process. 2015, 24, 1315–1329. [Google Scholar] [CrossRef]
  68. Wu, T.; Huang, C.; Jin, Z.; Jia, Z.; Ng, M.K. Total variation based pure quaternion dictionary learning method for color image denoising. Int. J. Numer. Anal. Model. 2022, 19, 709–737. [Google Scholar]
  69. Huang, C.; Ng, M.K.; Wu, T.; Zeng, T. Quaternion-based dictionary learning and saturation-value total variation regularization for color image restoration. IEEE Trans. Multimed. 2021, 24, 3769–3781. [Google Scholar] [CrossRef]
  70. Liu, X.; Chen, Y.; Peng, Z.; Wu, J. Infrared image super-resolution reconstruction based on quaternion and high-order overlapping group sparse total variation. Sensors 2019, 19, 5139. [Google Scholar] [CrossRef] [PubMed]
  71. Zou, C.; Kou, K.I.; Wang, Y.; Tang, Y.Y. Quaternion block sparse representation for signal recovery and classification. Signal Process. 2021, 179, 107849. [Google Scholar] [CrossRef]
  72. Wang, Y.; Kou, K.I.; Zou, C.; Tang, Y.Y. Robust sparse representation in quaternion space. IEEE Trans. Image Process. 2021, 30, 3637–3649. [Google Scholar] [CrossRef]
  73. Jia, Z.; Ng, M.K.; Song, G. Robust quaternion matrix completion with applications to image inpainting. Numer. Linear Algebra Appl. 2019, 26, e2245. [Google Scholar] [CrossRef]
  74. Shi, L.; Funt, B. Quaternion color texture segmentation. Comput. Vis. Image Underst. 2007, 107, 88–96. [Google Scholar] [CrossRef]
  75. Jia, Z.; Jin, Q.; Ng, M.K.; Zhao, X. Non-local robust quaternion matrix completion for large-scale color image and video inpainting. IEEE Trans. Image Process. 2022, 31, 3868–3883. [Google Scholar] [CrossRef]
  76. Wang, M.; Song, L.; Sun, K.; Jia, Z. F-2D-QPCA: A quaternion principal component analysis method for color face recognition. IEEE Access 2020, 8, 217437–217446. [Google Scholar] [CrossRef]
  77. Sun, Y.; Chen, S.; Yin, B. Color face recognition based on quaternion matrix representation. Pattern Recognit. Lett. 2011, 32, 597–605. [Google Scholar] [CrossRef]
  78. Jia, Z.; Ling, S.; Zhao, M. Color two-dimensional principal component analysis for face recognition based on quaternion model. In Proceedings of the Intelligent Computing Theories and Application: 13th International Conference, ICIC 2017, Liverpool, UK, 7–10 August 2017; pp. 177–189. [Google Scholar]
  79. Chen, B.; Shu, H.; Coatrieux, G.; Chen, G.; Sun, X.; Coatrieux, J.L. Color image analysis by quaternion-type moments. J. Math. Imaging Vis. 2015, 51, 124–144. [Google Scholar] [CrossRef]
  80. Wang, X.; Niu, P.; Yang, H.; Wang, C.p.; Wang, A.l. A new robust color image watermarking using local quaternion exponent moments. Inf. Sci. 2014, 277, 731–754. [Google Scholar] [CrossRef]
  81. Hosny, K.M.; Darwish, M.M. Resilient color image watermarking using accurate quaternion radial substituted Chebyshev moments. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2019, 15, 1–25. [Google Scholar] [CrossRef]
  82. Yang, T.; Ma, J.; Miao, Y.; Wang, X.; Xiao, B.; He, B.; Meng, Q. Quaternion weighted spherical Bessel-Fourier moment and its invariant for color image reconstruction and object recognition. Inf. Sci. 2019, 505, 388–405. [Google Scholar] [CrossRef]
  83. El Alami, A.; Berrahou, N.; Lakhili, Z.; Mesbah, A.; Berrahou, A.; Qjidaa, H. Efficient color face recognition based on quaternion discrete orthogonal moments neural networks. Multimed. Tools Appl. 2022, 81, 7685–7710. [Google Scholar] [CrossRef]
84. Guo, L.Q.; Zhu, M. Quaternion Fourier–Mellin moments for color images. Pattern Recognit. 2011, 44, 187–195. [Google Scholar] [CrossRef]
  85. Tsougenis, E.; Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G. Adaptive color image watermarking by the use of quaternion image moments. Expert Syst. Appl. 2014, 41, 6408–6418. [Google Scholar] [CrossRef]
  86. Li, M.; Yuan, X.; Chen, H.; Li, J. Quaternion discrete fourier transform-based color image watermarking method using quaternion QR decomposition. IEEE Access 2020, 8, 72308–72315. [Google Scholar] [CrossRef]
  87. Chen, Y.; Jia, Z.; Peng, Y.; Peng, Y.; Zhang, D. A new structure-preserving quaternion QR decomposition method for color image blind watermarking. Signal Process. 2021, 185, 108088. [Google Scholar] [CrossRef]
  88. Li, J.; Yu, C.; Gupta, B.B.; Ren, X. Color image watermarking scheme based on quaternion Hadamard transform and Schur decomposition. Multimed. Tools Appl. 2018, 77, 4545–4561. [Google Scholar] [CrossRef]
  89. Chen, Y.; Jia, Z.; Peng, Y.; Peng, Y. Robust dual-color watermarking based on quaternion singular value decomposition. IEEE Access 2020, 8, 30628–30642. [Google Scholar] [CrossRef]
  90. Zhang, M.; Ding, W.; Li, Y.; Sun, J.; Liu, Z. Color image watermarking based on a fast structure-preserving algorithm of quaternion singular value decomposition. Signal Process. 2023, 208, 08971. [Google Scholar] [CrossRef]
  91. He, Z.H.; Qin, W.L.; Wang, X.X. Some applications of a decomposition for five quaternion matrices in control system and color image processing. Comput. Appl. Math. 2021, 40, 205. [Google Scholar] [CrossRef]
  92. Xie, C.; Savvides, M.; Kumar, B.V. Quaternion correlation filters for face recognition in wavelet domain. In Proceedings of the ICASSP’05—IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, PA, USA, 23 March 2005; Volume 2, p. ii-85. [Google Scholar]
  93. Kumar, V.V.; Vidya, A.; Sharumathy, M.; Kanizohi, R. Super resolution enhancement of medical image using quaternion wavelet transform with SVD. In Proceedings of the 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN), Chennai, India, 16–18 March 2017; pp. 1–7. [Google Scholar]
  94. Miao, J.; Kou, K.I.; Cheng, D.; Liu, W. Quaternion higher-order singular value decomposition and its applications in color image processing. Inf. Fusion 2023, 92, 139–153. [Google Scholar] [CrossRef]
  95. Cai, C.; Mitra, S.K. A normalized color difference edge detector based on quaternion representation. In Proceedings of the 2000 International Conference on Image Processing (Cat. No. 00CH37101), Vancouver, BC, Canada, 10–13 September 2000; Volume 2, pp. 816–819. [Google Scholar]
  96. Geng, X.; Hu, X.; Xiao, J. Quaternion switching filter for impulse noise reduction in color image. Signal Process. 2012, 92, 150–162. [Google Scholar] [CrossRef]
  97. Chanu, P.R.; Singh, K.M. A two-stage switching vector median filter based on quaternion for removing impulse noise in color images. Multimed. Tools Appl. 2019, 78, 15375–15401. [Google Scholar] [CrossRef]
  98. Huang, C.; Fang, Y.; Wu, T.; Zeng, T.; Zeng, Y. Quaternion screened Poisson equation for low-light image enhancement. IEEE Signal Process. Lett. 2022, 29, 1417–1421. [Google Scholar] [CrossRef]
  99. Bas, P.; Le Bihan, N.; Chassery, J.M. Color image watermarking using quaternion Fourier transform. In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’03), Hong Kong, China, 6–10 April 2003; Volume 3, p. III-521. [Google Scholar]
  100. Wang, X.; Wang, C.; Yang, H.; Niu, P. A robust blind color image watermarking in quaternion Fourier transform domain. J. Syst. Softw. 2013, 86, 255–277. [Google Scholar] [CrossRef]
  101. Ouyang, J.; Coatrieux, G.; Chen, B.; Shu, H. Color image watermarking based on quaternion Fourier transform and improved uniform log-polar mapping. Comput. Electr. Eng. 2015, 46, 419–432. [Google Scholar] [CrossRef]
  102. Niu, P.; Wang, L.; Shen, X.; Zhang, S.; Wang, X. A novel robust image watermarking in quaternion wavelet domain based on superpixel segmentation. Multidimens. Syst. Signal Process. 2020, 31, 1509–1530. [Google Scholar] [CrossRef]
  103. Grigoryan, A.M.; Agaian, S.S. Optimal color image restoration: Wiener filter and quaternion Fourier transform. In Proceedings of the Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2015, San Francisco, CA, USA, 10–11 February 2015; Volume 9411, pp. 215–226. [Google Scholar]
  104. Wang, H.; Li, C.; Guan, T.; Zhao, S. No-reference stereoscopic image quality assessment using quaternion wavelet transform and heterogeneous ensemble learning. Displays 2021, 69, 102058. [Google Scholar] [CrossRef]
  105. Xia, Z.; Wang, X.; Zhou, W.; Li, R.; Wang, C.; Zhang, C. Color medical image lossless watermarking using chaotic system and accurate quaternion polar harmonic transforms. Signal Process. 2019, 157, 108–118. [Google Scholar] [CrossRef]
  106. Wang, C.; Li, S.; Liu, Y.; Meng, L.; Zhang, K.; Wan, W. Cross-scale feature fusion-based JND estimation for robust image watermarking in quaternion DWT domain. Optik 2023, 272, 170371. [Google Scholar] [CrossRef]
  107. Subakan, Ö.N.; Vemuri, B.C. A quaternion framework for color image smoothing and segmentation. Int. J. Comput. Vis. 2011, 91, 233–250. [Google Scholar] [CrossRef]
  108. Subakan, Ö.N.; Vemuri, B.C. Color image segmentation in a quaternion framework. In Proceedings of the Energy Minimization Methods in Computer Vision and Pattern Recognition: 7th International Conference, EMMCVPR 2009, Bonn, Germany, 24–27 August 2009; pp. 401–414. [Google Scholar]
109. Li, L.; Jin, L.; Xu, X.; Song, E. Unsupervised color–texture segmentation based on multiscale quaternion Gabor filters and splitting strategy. Signal Process. 2013, 93, 2559–2572. [Google Scholar] [CrossRef]
  110. Zou, C.; Kou, K.I.; Dong, L.; Zheng, X.; Tang, Y.Y. From grayscale to color: Quaternion linear regression for color face recognition. IEEE Access 2019, 7, 154131–154140. [Google Scholar] [CrossRef]
  111. Liu, Z.; Qiu, Y.; Peng, Y.; Pu, J.; Zhang, X. Quaternion based maximum margin criterion method for color face recognition. Neural Process. Lett. 2017, 45, 913–923. [Google Scholar] [CrossRef]
  112. Risojević, V.; Babić, Z. Unsupervised learning of quaternion features for image classification. In Proceedings of the 2013 11th International Conference on Telecommunications in Modern Satellite, Cable and Broadcasting Services (TELSIKS), Nis, Serbia, 16–19 October 2013; Volume 1, pp. 345–348. [Google Scholar]
  113. Risojević, V.; Babić, Z. Unsupervised quaternion feature learning for remote sensing image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1521–1531. [Google Scholar] [CrossRef]
  114. Zeng, R.; Wu, J.; Shao, Z.; Chen, Y.; Chen, B.; Senhadji, L.; Shu, H. Color image classification via quaternion principal component analysis network. Neurocomputing 2016, 216, 416–428. [Google Scholar] [CrossRef]
  115. Gaudet, C.J.; Maida, A.S. Deep quaternion networks. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar]
  116. Yin, Q.; Wang, J.; Luo, X.; Zhai, J.; Jha, S.K.; Shi, Y. Quaternion convolutional neural network for color image classification and forensics. IEEE Access 2019, 7, 20293–20301. [Google Scholar] [CrossRef]
117. Waghmare, S.K.; Patil, R.B.; Waje, M.G.; Joshi, K.V.; Raut, V.P.; Shrivastava, P.C. Color image processing using modified quaternion neural network. J. Pharm. Negat. Results 2023, 13, 2954–2960. [Google Scholar]
  118. Parcollet, T.; Morchid, M.; Linarès, G. Quaternion convolutional neural networks for heterogeneous image processing. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8514–8518. [Google Scholar]
  119. Jin, L.; Zhou, Y.; Liu, H.; Song, E. Deformable quaternion gabor convolutional neural network for color facial expression recognition. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 1696–1700. [Google Scholar]
  120. Zhou, Y.; Jin, L.; Liu, H.; Song, E. Color facial expression recognition by quaternion convolutional neural network with Gabor attention. IEEE Trans. Cogn. Dev. Syst. 2020, 13, 969–983. [Google Scholar] [CrossRef]
  121. Cao, Y.; Fu, Y.; Zhu, Z.; Rao, Z. Color random valued impulse noise removal based on quaternion convolutional attention denoising network. IEEE Signal Process. Lett. 2021, 29, 369–373. [Google Scholar] [CrossRef]
  122. Fang, F.; Li, J.; Zeng, T. Soft-edge assisted network for single image super-resolution. IEEE Trans. Image Process. 2020, 29, 4656–4668. [Google Scholar] [CrossRef] [PubMed]
  123. KM, S.K.; Rao, S.P.; Panetta, K.; Agaian, S.S. QSRNet: Towards quaternion-based single image super-resolution. In Proceedings of the Multimodal Image Exploitation and Learning 2022, Orlando, FL, USA, 3 April–12 June 2022; Volume 12100, pp. 192–205. [Google Scholar]
  124. Madhu, A.; Suresh, K. RQNet: Residual quaternion CNN for performance enhancement in low complexity and device robust acoustic scene classification. IEEE Trans. Multimed. 2023, 1–13. [Google Scholar] [CrossRef]
  125. Frants, V.; Agaian, S.; Panetta, K. QSAM-Net: Rain streak removal by quaternion neural network with self-attention module. arXiv 2022, arXiv:2208.04346. [Google Scholar]
  126. Frants, V.; Agaian, S.; Panetta, K. QCNN-H: Single-image dehazing using quaternion neural networks. IEEE Trans. Cybern. 2023. [Google Scholar] [CrossRef]
  127. Zhou, H.; Zhang, C.; Zhang, X.; Ma, Q. Image classification based on quaternion-valued capsule network. Appl. Intell. 2023, 53, 5587–5606. [Google Scholar] [CrossRef]
  128. Xu, T.; Kong, X.; Shen, Q.; Chen, Y.; Zhou, Y. Deep and low-rank quaternion priors for color image processing. IEEE Trans. Circuits Syst. Video Technol. 2023, 1–14. [Google Scholar] [CrossRef]
  129. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef]
  130. Grassucci, E.; Cicero, E.; Comminiello, D. Quaternion generative adversarial networks. In Generative Adversarial Learning: Architectures and Applications; Springer: Berlin/Heidelberg, Germany, 2022; pp. 57–86. [Google Scholar]
Figure 1. Representation of a color pixel as a quaternion number. The yellow dot marks a pixel in the image.
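As a companion to Figure 1, the short sketch below illustrates the encoding the figure depicts: a color pixel is treated as a pure quaternion whose i, j, and k parts carry the R, G, and B channels, with a zero real part. This is a minimal NumPy sketch for illustration only; the function names (rgb_to_quaternion, hamilton_product) are ours and do not come from any of the cited implementations.

```python
import numpy as np

def rgb_to_quaternion(img_rgb):
    """Encode an H x W x 3 RGB image as an H x W x 4 array of pure
    quaternions: q = 0 + R*i + G*j + B*k (real part is zero)."""
    h, w, _ = img_rgb.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1:] = img_rgb          # imaginary parts hold R, G, B
    return q

def hamilton_product(p, q):
    """Element-wise Hamilton product of two quaternion arrays of shape (..., 4)."""
    pw, px, py, pz = (p[..., i] for i in range(4))
    qw, qx, qy, qz = (q[..., i] for i in range(4))
    return np.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], axis=-1)

# Example: a single yellow pixel (R=255, G=255, B=0) as a pure quaternion.
pixel = rgb_to_quaternion(np.array([[[255.0, 255.0, 0.0]]]))
print(pixel[0, 0])   # -> [  0. 255. 255.   0.]
```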
Figure 2. Number of publications on quaternion-based image processing models between 2011 and 1 March 2023. (a) Number of publications on quaternion-based models. (b) Number of publications on quaternion-based CNN models. (c) Number of publications on different tasks. (d) Number of publications on different methods.
Figure 3. Image deblurring results. From left to right: original image; input image degraded by an average blur kernel H = fspecial('average',9) and Gaussian noise with noise level σ = 20; output of the real-valued low-rank and total variation regularization method [55]; output of the quaternion-based low-rank method [54]. (a) Original. (b) Input. (c) Output of [55]. (d) Output of [54].
Figure 4. Image deblurring results. From left to right: original image; input image degraded by a Gaussian blur kernel H = fspecial('Gaussian',25,1.6) and Gaussian noise with noise level σ = 20; output of the real-valued low-rank and total variation regularization method [55]; output of the quaternion-based low-rank method [54]. (a) Original. (b) Input. (c) Output of [55]. (d) Output of [54].
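For readers who wish to reproduce the degraded inputs shown in Figures 3 and 4, the following is a rough NumPy/SciPy approximation of the MATLAB-style degradation stated in the captions (a 9 × 9 average blur or a 25 × 25 Gaussian blur with standard deviation 1.6, followed by Gaussian noise with σ = 20). The function and variable names are ours, and the boundary handling of scipy.ndimage does not exactly match MATLAB's fspecial/imfilter defaults, so treat it as a sketch rather than the experimental code of [54,55].

```python
import numpy as np
from scipy import ndimage

def degrade(img, blur="average", noise_sigma=20.0, seed=0):
    """Blur a float RGB image channel-wise and add Gaussian noise,
    approximating fspecial('average', 9) or fspecial('gaussian', 25, 1.6)."""
    img = img.astype(np.float64)
    if blur == "average":
        # 9 x 9 uniform (average) kernel applied to each color channel
        blurred = ndimage.uniform_filter(img, size=(9, 9, 1))
    elif blur == "gaussian":
        # sigma = 1.6; truncate chosen so the kernel radius is 12 (25 x 25 support)
        blurred = ndimage.gaussian_filter(img, sigma=(1.6, 1.6, 0), truncate=12 / 1.6)
    else:
        raise ValueError("blur must be 'average' or 'gaussian'")
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0.0, noise_sigma, size=img.shape)
```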
Table 1. Derivatives of functions of the type $f(\dot{q})$.

$f(\dot{q})$ | $D_{\dot{q}} f$ | Note
$\dot{q}$ | $1$ | $\dot{q} \in \mathbb{H}$
$\dot{\mu} \dot{q}$ | $\dot{\mu}$ | $\dot{\mu} \in \mathbb{H}$
$\dot{q} \dot{\nu}$ | $\Re(\dot{\nu})$ | $\dot{\nu} \in \mathbb{H}$; $\Re(\dot{\nu})$ denotes the real part of $\dot{\nu}$
$\dot{\mu} \dot{q} \dot{\nu} + \dot{\tau}$ | $\dot{\mu}\,\Re(\dot{\nu})$ | $\dot{\mu}, \dot{\nu}, \dot{\tau} \in \mathbb{H}$
$\dot{q}^{*}$ | $-\frac{1}{2}$ | $\dot{q}^{*}$ denotes the conjugate of $\dot{q}$
$\dot{\mu} \dot{q}^{*}$ | $-\frac{1}{2}\dot{\mu}$ | $\dot{\mu} \in \mathbb{H}$
$\dot{q}^{*} \dot{\nu}$ | $-\frac{1}{2}\dot{\nu}^{*}$ | $\dot{\nu} \in \mathbb{H}$
$\dot{\mu} \dot{q}^{*} \dot{\nu} + \dot{\tau}$ | $-\frac{1}{2}\dot{\mu}\dot{\nu}^{*}$ | $\dot{\mu}, \dot{\nu}, \dot{\tau} \in \mathbb{H}$
$\dot{q}^{-1}$ | $-\dot{q}^{-1}\,\Re(\dot{q}^{-1})$ | $\dot{q}^{-1}$ denotes the reciprocal of $\dot{q}$
$(\dot{q}^{*})^{-1}$ | $\frac{1}{2|\dot{q}|^{2}}$ |
$(\dot{\mu} \dot{q} \dot{\nu} + \dot{\tau})^{2}$ | $\dot{g}\dot{\mu}\,\Re(\dot{\nu}) + \dot{\mu}\,\Re(\dot{\nu}\dot{g})$ | $\dot{g} = \dot{\mu} \dot{q} \dot{\nu} + \dot{\tau}$
$(\dot{\mu} \dot{q}^{*} \dot{\nu} + \dot{\tau})^{2}$ | $-\frac{1}{2}\dot{g}\dot{\mu}\dot{\nu}^{*} - \frac{1}{2}\dot{\mu}(\dot{\nu}\dot{g})^{*}$ | $\dot{g} = \dot{\mu} \dot{q}^{*} \dot{\nu} + \dot{\tau}$
$|\dot{\mu} \dot{q} \dot{\nu} + \dot{\tau}|$ | $\frac{\dot{g}^{*}}{2|\dot{g}|}\dot{\mu}\,\Re(\dot{\nu}) - \frac{1}{4|\dot{g}|}\dot{\nu}^{*}(\dot{\mu}^{*}\dot{g})^{*}$ | $\dot{g} = \dot{\mu} \dot{q} \dot{\nu} + \dot{\tau}$
$|\dot{\mu} \dot{q}^{*} \dot{\nu} + \dot{\tau}|$ | $\frac{\dot{g}}{2|\dot{g}|}\dot{\nu}^{*}\,\Re(\dot{\mu}^{*}) - \frac{1}{4|\dot{g}|}\dot{\mu}(\dot{\nu}\dot{g}^{*})^{*}$ | $\dot{g} = \dot{\mu} \dot{q}^{*} \dot{\nu} + \dot{\tau}$
$|\dot{\mu} \dot{q} \dot{\nu} + \dot{\tau}|^{2}$ | $\dot{g}^{*}\dot{\mu}\,\Re(\dot{\nu}) - \frac{1}{2}\dot{\nu}^{*}(\dot{\mu}^{*}\dot{g})^{*}$ | $\dot{g} = \dot{\mu} \dot{q} \dot{\nu} + \dot{\tau}$
$|\dot{\mu} \dot{q}^{*} \dot{\nu} + \dot{\tau}|^{2}$ | $\dot{g}\dot{\nu}^{*}\,\Re(\dot{\mu}^{*}) - \frac{1}{2}\dot{\mu}(\dot{\nu}\dot{g}^{*})^{*}$ | $\dot{g} = \dot{\mu} \dot{q}^{*} \dot{\nu} + \dot{\tau}$
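The $-\frac{1}{2}$ factors in Table 1 stem from the standard quaternion involution identity $\dot{q}^{*} = -\frac{1}{2}(\dot{q} + i\dot{q}i + j\dot{q}j + k\dot{q}k)$, which writes the conjugate as a sum of terms of the form $\dot{\mu}\dot{q}\dot{\nu}$ covered by the first rows of the table. The short check below verifies this identity numerically with a hand-written Hamilton product; it is an illustrative sketch with helper names of our own (qmul, conj), not code from any cited work.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def conj(q):
    """Quaternion conjugate: negate the three imaginary parts."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

rng = np.random.default_rng(1)
q = rng.standard_normal(4)
i, j, k = np.eye(4)[1:]                      # unit quaternions i, j, k
rhs = -0.5 * (q + qmul(qmul(i, q), i)
                + qmul(qmul(j, q), j)
                + qmul(qmul(k, q), k))
assert np.allclose(rhs, conj(q))             # q* = -1/2 (q + iqi + jqj + kqk)
```

Combining this identity with the rule $D_{\dot{q}}(\dot{\mu}\dot{q}\dot{\nu}) = \dot{\mu}\,\Re(\dot{\nu})$ and the fact that $\Re(i) = \Re(j) = \Re(k) = 0$ gives $D_{\dot{q}}\,\dot{q}^{*} = -\frac{1}{2}$, and the other conjugate rows of Table 1 follow in the same way.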
Table 2. Typical quaternion-based image processing models. The model structure, prior, algorithm, testing data, and task are listed for each method.

Year | Method | Model Structure | Prior | Algorithm | Testing Data | Task
2011–2018 | QGmF [107] | Gabor Filter | Hypercomplex Exponential Basis Functions | Closed-Form Solution | Commonly Used | Denoising/Inpainting/Segmentation
 | Xu et al. [67] | Sparse | Dictionary | K-QSVD/QOMP | Animal Images | Reconstruction/Denoising/Inpainting/Super-resolution
 | Zou et al. [66] | Sparse | CRC + Sparse RC | ADMM | AR/Caltech/SCface/FERET/LFW | Face Recognition
 | Kumar et al. [93] | Low-rank | QWT | QSVD | Biomedical Images | Super-resolution
 | QPCANet [114] | Deep Network | Principal Component Analysis Network | Deep Learning | Caltech-101/Georgia Tech Face/UC Merced Land Use | Classification
 | QCNN [50] | Deep Network | Quaternion Representation | Deep Learning | COCO/Oxford Flower102 | Classification/Denoising
2019 | QNLTV [57] | Non-local | Non-local Total Variation | Split Bregman | Commonly Used | Denoising
 | LRQA [60] | Low-rank | Laplace/Geman/Weighted Schatten-γ | Difference of Convex | Commonly Used | Denoising/Inpainting
 | QWNNM [61] | Low-rank | Nuclear Norm | QSVD | Berkeley | Denoising
 | QPHTs [105] | | Chaotic System + Polar Harmonic Transform | – | Whole Brain Atlas | Watermarking
 | QMC [73] | Low-rank | Nuclear Norm/ℓ1 Norm | ADMM | Berkeley | Inpainting
 | HOGS4 [70] | Sparse | Total Variation + High-Order Overlapping Group Sparse | ADMM | Infrared LTIR/IRData | Super-resolution
 | QSBFM [82] | Orthogonal Moment | Weighted Spherical Bessel–Fourier Moment | QSBFM | CVG-UGR/Amsterdam Library/Columbia Library | Reconstruction/Recognition
 | QCROC [110] | Linear Regression Classification | Linear Regression Classification + Collaborative Representation | Collaborative Representation Optimized Classification | SCface/AR/Caltech | Recognition
 | Yin et al. [116] | Deep Network | QCNN + Attention Mechanism | Deep Learning | UCID | Classification/Forensics
2020 | Li et al. [86] | Discrete Fourier Transform + QR Decomposition | Wavelet Transform + Just-Noticeable Difference | | Commonly Used | Watermarking
 | Voronin et al. [59] | Modified Chan and Vese Method | Anisotropic Gradient Calculation | | Merced | Segmentation
 | F-2D-QPCA [76] | Low-rank + Sparse | F-norm | Principal Component Analysis | Georgia Tech Face/FERET | Face Recognition
 | DQG-CNN [119] | Deep Network | Deformable Gabor Filter | Deep Learning | Oulu-CASIA/MMI/SFEW | Facial Expression Recognition
 | Zhou et al. [120] | Deep Network | Gabor Attention | Deep Learning | Oulu-CASIA/MMI/SFEW | Facial Expression Recognition
2021 | QBSR [71] | Sparse | Block Sparse Representation | ADMM | AR/SCface | Recognition
 | Huang et al. [69] | Sparse | Dictionary + Total Variation | ADMM | DIV2K | Denoising/Deblurring
 | Shahadat et al. [38] | Deep Network | Axial-Attention Modules | Deep Learning | ImageNet300k | Classification
 | RQSVR [72] | Sparse | Welsch Estimator | HQS/ADMM | AR/SCface | Reconstruction/Recognition
 | He et al. [91] | Low-rank | Matrix Decomposition | PSVD | Commonly Used | Watermarking
2022 | QSRNet [123] | Deep Network | Edge-Net | Deep Learning | DIV2K/Flickr2K/Set5/Set14/BSD100/Urban100/UEC100 | Super-resolution
 | Yang et al. [62] | Low-rank | Logarithmic Norm | FISTA/ADMM | Commonly Used/Berkeley | Inpainting
 | Wu et al. [18] | Total Variation | ℓ1/ℓ2-Norm | sPADMM | Weizmann/Berkeley | Segmentation
 | QSAM-Net [125] | Deep Network | QCNN + Self-Attention | Deep Learning | LOL | Rain Streak Removal
 | QDOMNN [83] | Deep Network | Discrete Orthogonal Moments | Deep Learning | Faces94/Faces95/Faces96/Grimace/Georgia Tech Face/FEI | Recognition
2023 | RQNet [124] | Deep Network | Residual CNN | Deep Learning | DCASE19/DCASE20 | Classification
 | QHOSVD [94] | Low-rank | Higher-Order SVD | Matrix Decomposition | Lytro/MFFW | Image Fusion/Denoising
 | DLRQP [128] | Plug-and-Play | FFDNet/Laplace | ADMM/DCA | Commonly Used | Denoising/Inpainting
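Many of the low-rank entries in Table 2 (e.g., QWNNM [61], LRQA [60], QMC [73]) rely on the quaternion singular value decomposition, which in practice is commonly obtained from an equivalent complex representation of the quaternion matrix rather than by working with quaternion arithmetic directly. The sketch below, with function names of our own choosing, builds the standard complex adjoint χ(Q) of Q = A + Bj, checks that quaternion matrix multiplication corresponds to ordinary matrix multiplication of the adjoints, and shows that the singular values of χ(Q) appear in pairs, each pair giving one quaternion singular value. It is a didactic illustration, not the structure-preserving algorithms developed in, e.g., [87,90].

```python
import numpy as np

def chi(A, B):
    """Complex adjoint of the quaternion matrix Q = A + B*j,
    where A = Q0 + Q1*i and B = Q2 + Q3*i are complex matrices."""
    return np.block([[A, B], [-B.conj(), A.conj()]])

def qmatmul(A1, B1, A2, B2):
    """Product of quaternion matrices P = A1 + B1*j and Q = A2 + B2*j,
    returned in the same (A, B) form."""
    return A1 @ A2 - B1 @ B2.conj(), A1 @ B2 + B1 @ A2.conj()

rng = np.random.default_rng(0)

def rand_c(m, n):
    """Random complex matrix used to build random quaternion matrices."""
    return rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

A1, B1 = rand_c(5, 3), rand_c(5, 3)      # P is a 5 x 3 quaternion matrix
A2, B2 = rand_c(3, 4), rand_c(3, 4)      # Q is a 3 x 4 quaternion matrix

# chi is multiplicative: chi(P Q) == chi(P) chi(Q)
assert np.allclose(chi(*qmatmul(A1, B1, A2, B2)), chi(A1, B1) @ chi(A2, B2))

# Singular values of chi(Q) come in pairs; each pair is one QSVD singular value.
s = np.linalg.svd(chi(A2, B2), compute_uv=False)
print(np.round(s, 4))                    # e.g., s[0] ~ s[1], s[2] ~ s[3], ...
```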