Active Contours in the Complex Domain for Salient Object Detection

Abstract: The combination of active contour models (ACMs) for both contour and salient object detection is an attractive approach for researchers in image segmentation. Existing active contour models fail when improperly initialized. We propose a novel active contour model with salience detection in the complex domain to address this issue. First, the input image is converted to the complex domain. The complex transformation provides salience cues; in addition, it is well suited for cyclic objects and speeds up the iteration of the active contour. During the process, we utilize a low-pass filter that lets the low spatial frequencies pass while attenuating, or completely blocking, the high spatial frequencies, reducing the random noise associated with higher frequencies. Furthermore, the model introduces a force function in the complex domain that dynamically shrinks a contour when it is outside the object of interest and expands it when the contour is inside the object. Comprehensive tests on both synthetic and natural images show that our proposed algorithm produces accurate salience results that are close to the ground truth. At the same time, it eliminates re-initialization and, thus, reduces the execution time.


Introduction
Salience detection is a key problem for computer vision and image processing. It plays a vital role in many applications, facilitating object tracking [1][2][3], image and video compression [4], remote sensing image segmentation [5], video surveillance [6], and image retrieval [7]. However, salience detection faces many challenges, such as noisy and complex backgrounds [8] and intensity inhomogeneity [9][10][11]. These challenges make the current solutions very uncertain. Researchers are working to address these limitations and improve the accuracy of salient object detection.
Salience detection [12][13][14] extracts important and meaningful regions from an image [15]. It finds the most informative and interesting regions. During the last few decades, salience detection has been widely investigated due to its similarity to the human visual system and it has been popularly applied in computer vision. It is closely related to selective processing in the human visual system. Humans can capture or select relevant information within a large visual field. The human visual system is sensitive to salient regions and can quickly identify saliency for additional cognitive processing. Salience detection mimics the human capability to automatically discover salient regions.
Salience detection approaches fall into two categories [16], according to how they combine object information: bottom-up and top-down. Bottom-up methods use only the visual signal; they automatically estimate meaningful objects without any prior assumptions or knowledge [16]. Top-down methods work from visual priors [17,18]. Our proposed model is a bottom-up approach, which does not require any prior knowledge.
In recent years, many models have been proposed for salience detection. Inspired by the biological model, Koch and Ullman [19] proposed a salience model that introduced the original concept of salient objects representing visual attention. Their model uses biological inspiration, namely the center-surround strategy on color and intensity, to identify the salient region in an image. Zhang et al. [20] presented a fuzzy growing method to extract regions of interest and computed saliency using local contrast. Bruce et al. [21] introduced an attentive object maximization model based on eye fixation, which uses Shannon's self-information for salient region detection. Cheng et al. [22] extracted a saliency map by taking advantage of the global region contrast of the image and defined a spatial relation on the regions for salience detection. Judd et al. [23] developed a bottom-up saliency model that merges low-level, mid-level, and high-level features to extract the salient object. However, local contrast cannot capture global influence and, thus, only detects the boundaries of the object. Xie et al. [24] proposed a model that uses a Bayesian framework to exploit low-level and mid-level cues. However, these methods suffer from high computational cost and unsatisfactory object detection results. We propose a novel, automatic, and accurate active contour model to address these limitations. Our salience detection unit utilizes a global mechanism, which depends on the global context to explore salient cues.
Existing active contour models can be classified into two categories: region-based [25][26][27][28][29][30][31][32][33][34] and edge-based [35,36]. Region-based active contour models utilize the statistical information of image intensity in different subsets. Edge-based active contour models use edge indicators to localize the boundaries. Chan and Vese [29] proposed a region-based model that adds curve length as a penalty term. The method uses intensity statistics for object segmentation and fails when inhomogeneity is present. Chan and Vese [37] proposed a piecewise smooth-function approximation for separate regions. The model deals well with the inhomogeneity issue, but wastes time in iteration. Li [38] constructed a local binary fitting model by introducing a local kernel function into the Chan and Vese (CV) model. The model utilizes spatial connections among pixels and extracts local information using a Gaussian function. However, it fails when improper initialization is performed. A new level set introduced by Li [39] successfully reduces reinitialization and avoids numerical error. Zhang [40] suggests using local fitting energy to extract local image information. Local binary fitting decreases the dissimilarity of the image. Zhang [41] introduced an active contour model with selective local or global segmentation (ACSLGS). The ACSLGS model combines the ideas of the Chan-Vese and geodesic active contour (GAC) models and utilizes statistical information. A region-based function is used to control the direction of evolution. The signed pressure function creates pressure to shrink the contour when it is outside the object or expand it when the contour is inside the object, but this model fails on intensity inhomogeneity images.
The previously discussed algorithms have severe drawbacks. Our novel active contours in the complex domain for salient object detection (ACCD_SOD) model eliminates the drawbacks of existing state-of-the-art models, including improper initialization and parameter setting. Its advantages are as follows:

•
An important consideration is that the proposed algorithm can serve as a smooth-edge detector that draws strong edges and helps preserve them. This is beneficial in implementing the edge-sensitive contour method.

•
Initialization is a critical step that significantly influences the final performance. In a complex domain, the initialization process is seamlessly carried out, which is most suitable for salient object detection.

•
In practice, the reinitialization process can be quite complex and costly and have side effects. Our algorithm eliminates reinitialization.

•
The proposed ACCD_SOD algorithm has been applied to both simulated and real images and outputs precise results. In particular, it appears to perform robustly when there are weak boundaries.

•
The complex force function has a relatively extensive capture range and accommodates small concavities.

•
The parameter setting process plays an important role in the ultimate result of ACMs and it can produce a great performance improvement if the proper parameters are given to an active contour model. Our model uses a fixed parameter and it shows good performance when compared to state-of-the-art models.
Salience detection can automatically discover salient regions, but it relies on post-processing for accurate boundary detection. The active contour model performs well at localizing boundaries, but it suffers from poor initialization. It requires extra effort to discover the significant regions in the image. The proposed method integrates the salience discovery unit into the active model and uses an active contour model in the complex domain for salient object detection, which eliminates the reinitialization of classic active contour models.
We propose a novel active contour model in the complex domain for salient object detection to address the limitations of the existing models. Our proposed model improves detection accuracy with low computational cost. The main contributions of the proposed algorithm are as follows:

•

First, we convert the image to the complex domain. The complex transformation provides salience cues. Salience detection serves as the initialization for our active contour model, which quickly converges to the edge structure of the input image.

•
We then use a low-pass filter in the complex domain to discover the objects of interest, which serves as the initialization step and eliminates reinitialization.

•

Subsequently, we define a force function in the complex domain, resulting in a complex-force function, which is used to distinguish the object from the background for the active contour.

•

Finally, we combine salient object discovery and the complex force and implement the active contours in the complex domain for salient object detection.
In the above, we discussed how previous research investigated different models to handle limitations such as improper initialization, which influences the final performance, and expensive re-initialization. Second, intensity inhomogeneity occurs in many real images of different modalities, which further hampers object detection in higher-field imaging.
Our primary aim is to introduce a model with the capability (1) to draw an automatic active contour directly on the object boundary, which in the process detects the salient object. The model is also able to (2) handle improper initialization, (3) eliminate costly re-initialization, and (4) reduce the computational cost. Figure 1 summarizes our aim and illustrates the motivating results.
Therefore, our proposed model takes advantage of complex transformations. The complex transformation creates saliency hints that serve as the initialization and overcome the difficulty of improper initialization. It also yields a smooth curve that handles the inhomogeneity problem. Our model eliminates the re-initialization process by using a low-pass filter over the discovered object of interest, and it reduces the computational cost in terms of iterations.
The rest of the paper is organized, as follows: In Section 2, we provide a brief overview of related works. Based on this, Section 3 illustrates our formulation of the active contour in the complex domain with its detailed implementation. In Section 4, we describe experiments conducted on both synthetic and natural images, which is followed by the conclusion in Section 5.

Related Work
Recently, the active contour model has been widely used in applications, such as object tracking, edge detection, and shape recognition. This section consists of a brief review of related active contour models for image segmentation.

Geodesic Active Contour (GAC) Model
The standard geodesic active contour (GAC) model uses gradient information to detect object boundaries. Let the image be a function defined on the 2D domain, $I : \Omega \subset \mathbb{R}^2 \to \mathbb{R}$. The GAC model is formulated by minimizing the energy functional

$$E(C) = \int_0^1 g(|\nabla I(C(q))|)\,|C'(q)|\,dq, \qquad (1)$$

where $C(q)$ is a closed curve parameterized by $q \in [0, 1]$. The parameterization $q$ is arbitrary: it imposes no constraints on the curve, whose motion depends only on the velocity defined by the image feature through the edge extractor

$$g(|\nabla I|) = \frac{1}{1 + |\nabla G_\sigma \otimes I|^2}, \qquad (2)$$

where $\nabla G_\sigma \otimes I$ denotes the convolution of the image $I$ with the Gaussian derivative kernel $G_\sigma$, and $\sigma$ is the standard deviation of the Gaussian kernel. Applying the calculus of variations [31] to Equation (1), we obtain the Euler-Lagrange evolution equation

$$C_t = g\,\kappa\,\vec{N} - (\nabla g \cdot \vec{N})\,\vec{N}, \qquad (3)$$

where $\kappa$ represents the curvature and $\vec{N}$ the inward normal. Usually, we add a constant velocity $C$ to increase the speed of propagation, so Equation (3) becomes

$$C_t = g\,(\kappa + C)\,\vec{N} - (\nabla g \cdot \vec{N})\,\vec{N}, \qquad (4)$$

where $C$ plays the role of a balloon force, which regulates the shrinking and expanding of the dynamic contour and increases its propagation speed. The gradient can capture the object, but at a high computational cost. The level set formulation of GAC is

$$\phi_t = g\,|\nabla\phi|\left(\operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) + C\right) + \nabla g \cdot \nabla\phi. \qquad (5)$$
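A single step of this level-set evolution can be sketched numerically. The following Python/NumPy fragment is an illustrative discretization only; using plain finite differences in place of the Gaussian-derivative convolution, and the step size, are assumptions, not the authors' implementation.

```python
import numpy as np

def edge_stop(image):
    """Edge extractor g = 1 / (1 + |grad I|^2); the Gaussian pre-smoothing
    (G_sigma) is omitted here for brevity."""
    gy, gx = np.gradient(image)
    return 1.0 / (1.0 + gx**2 + gy**2)

def gac_step(phi, g, balloon=1.0, dt=0.1):
    """One explicit Euler step of the GAC level-set evolution
    phi_t = g |grad phi| (kappa + C) + grad g . grad phi."""
    gy, gx = np.gradient(g)
    py, px = np.gradient(phi)
    norm = np.sqrt(px**2 + py**2) + 1e-8
    # curvature kappa = div(grad phi / |grad phi|)
    kappa = np.gradient(px / norm, axis=1) + np.gradient(py / norm, axis=0)
    return phi + dt * (g * norm * (kappa + balloon) + gx * px + gy * py)
```

The edge extractor is close to 1 in flat regions and drops toward 0 near strong gradients, which is what slows the contour at object boundaries.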

Chan-Vese Model
Chan and Vese [29] define a region-based model under the framework of Mumford-Shah [42]. The Chan-Vese (CV) model is based on the level set. The contour of the object corresponds to the zero-level set of the Lipschitz function, $C = \{x \in \Omega : \phi(x) = 0\}$. Chan and Vese defined the energy minimization functional as

$$E(c_1, c_2, C) = \mu L(C) + \nu A(\mathrm{In}(C)) + \lambda_1 \int_{\mathrm{In}(C)} |I(x) - c_1|^2\,dx + \lambda_2 \int_{\mathrm{Out}(C)} |I(x) - c_2|^2\,dx, \qquad (6)$$

where $\mu, \nu, \lambda_1, \lambda_2$ are positive weights, $L$ extracts the contour length, and $A$ measures the region enclosed by the contour. The constant values $c_1$ and $c_2$ are the average intensities of the foreground and background, respectively: $c_1$ is used for the region inside the contour, $\mathrm{In}(C) = \{x : \phi(x) > 0, x \in \Omega\}$, and $c_2$ for the region outside the contour, $\mathrm{Out}(C) = \{x : \phi(x) < 0, x \in \Omega\}$. Given the level set function, we can obtain $c_1$ and $c_2$ as

$$c_1(\phi) = \frac{\int_\Omega I(x)\,H(\phi(x))\,dx}{\int_\Omega H(\phi(x))\,dx}, \qquad (7)$$

$$c_2(\phi) = \frac{\int_\Omega I(x)\,(1 - H(\phi(x)))\,dx}{\int_\Omega (1 - H(\phi(x)))\,dx}. \qquad (8)$$

The optimal solution for the CV model then follows the evolution equation

$$\frac{\partial \phi}{\partial t} = \delta(\phi)\left[\mu\,\operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) - \nu - \lambda_1 (I - c_1)^2 + \lambda_2 (I - c_2)^2\right], \qquad (9)$$

where $\nabla$ is the gradient operator, $H(\phi)$ is the Heaviside function, and $\delta(\phi)$ is the Dirac delta function. $\lambda_1$ and $\lambda_2$ are fixed parameters, $\mu$ controls the smoothness of the zero-level set, and $\nu$ speeds up the dynamic contour; the driving force is controlled by $\lambda_1$ and $\lambda_2$. The Heaviside function $H$ and the Dirac measure $\delta$ are given by $H(z) = 1$ for $z \geq 0$, $H(z) = 0$ for $z < 0$, and $\delta(z) = \frac{d}{dz} H(z)$, respectively. They are usually approximated by the regularized versions

$$H_\varepsilon(z) = \frac{1}{2}\left(1 + \frac{2}{\pi}\arctan\frac{z}{\varepsilon}\right), \qquad (10)$$

$$\delta_\varepsilon(z) = \frac{\varepsilon}{\pi(\varepsilon^2 + z^2)}. \qquad (11)$$

The CV model assumes constant intensities for the background and foreground. Although it is robust against noise, it fails when inhomogeneity is present.
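A minimal gradient-descent step of the CV model, using the regularized Heaviside and Dirac approximations, can be sketched as follows; the discretization and default parameters are illustrative assumptions.

```python
import numpy as np

def cv_step(image, phi, mu=0.2, nu=0.0, lam1=1.0, lam2=1.0, dt=0.5, eps=1.0):
    """One gradient-descent step of the Chan-Vese evolution, with the
    regularized Heaviside (H_eps) and Dirac (delta_eps) approximations."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))   # H_eps
    delta = eps / (np.pi * (eps**2 + phi**2))                # delta_eps
    c1 = (image * H).sum() / (H.sum() + 1e-8)                # mean inside
    c2 = (image * (1 - H)).sum() / ((1 - H).sum() + 1e-8)    # mean outside
    py, px = np.gradient(phi)
    norm = np.sqrt(px**2 + py**2) + 1e-8
    kappa = np.gradient(px / norm, axis=1) + np.gradient(py / norm, axis=0)
    phi_new = phi + dt * delta * (mu * kappa - nu
                                  - lam1 * (image - c1)**2
                                  + lam2 * (image - c2)**2)
    return phi_new, c1, c2
```

When the positive region of phi covers a bright object on a dark background, c1 exceeds c2, and the data terms push the zero level set toward the object boundary.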

Active Contours with Selective Local or Global Segmentation (ACSLGS)
The level set formulation of K. Zhang et al. [41] uses the signed pressure force idea from [29], which assumes that object intensities are homogeneous. ACSLGS [41] proposed a signed pressure function that differs from the one in [29] and plays an important role in segmentation. It acts as a global force that drags the corresponding pixels into the foreground or background: it makes the contour shrink when it is outside the object and expand when it is inside the object. Their pressure function takes values in the range $[-1, 1]$ and has the form

$$spf(I(x)) = \frac{I(x) - \frac{c_1 + c_2}{2}}{\max\left(\left|I(x) - \frac{c_1 + c_2}{2}\right|\right)}, \quad x \in \Omega, \qquad (12)$$

where $c_1$ and $c_2$ are defined in Equations (7) and (8). Assume that both the inside and outside intensities are homogeneous. Since

$$\min(I) \leq c_1, c_2 \leq \max(I), \qquad (13)$$

the inside and outside intensities cannot coincide where the contour lies; the dynamic contour therefore has distinct intensity signs inside and outside of the object, and the level-set function can be initialized to a constant. By plugging the signed pressure function into Equation (5) in place of $g$, the solution for the ACSLGS model is

$$\frac{\partial \phi}{\partial t} = spf(I(x))\left(\operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right) + \alpha\right)|\nabla\phi| + \nabla spf(I(x)) \cdot \nabla\phi, \quad x \in \Omega. \qquad (14)$$

The term $\operatorname{div}(\nabla\phi/|\nabla\phi|)|\nabla\phi|$ serves only to regularize the level-set function $\phi$ [43]. By removing the terms $\operatorname{div}(\nabla\phi/|\nabla\phi|)|\nabla\phi|$ and $\nabla spf(I(x)) \cdot \nabla\phi$ in Equation (14), which are unnecessary, the level set formulation can finally be written as

$$\frac{\partial \phi}{\partial t} = spf(I(x))\,\alpha\,|\nabla\phi|, \quad x \in \Omega. \qquad (15)$$

The initialization of the active contour with selective local or global segmentation is

$$\phi(x, t = 0) = \begin{cases} p, & x \in \Omega_0 - \partial\Omega_0 \\ 0, & x \in \partial\Omega_0 \\ -p, & x \in \Omega - \Omega_0 \end{cases} \qquad (16)$$

where $\Omega$ is the image domain, $\Omega_0$ is a subset of $\Omega$ used for the initialization, $\partial\Omega_0$ defines the boundary of the initialized region, and $p$ is a constant with $p > 0$. The active contour with selective local and global segmentation model has the same disadvantages as the CV model.
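The signed pressure function and the simplified evolution can be sketched directly; the following Python/NumPy fragment is an illustrative sketch, with the step size and speed parameter as assumptions.

```python
import numpy as np

def spf(image, c1, c2):
    """Signed pressure function: positive where a pixel is brighter than the
    foreground/background midpoint, negative otherwise; normalized to [-1, 1]."""
    d = image - (c1 + c2) / 2.0
    return d / (np.abs(d).max() + 1e-8)

def acslgs_step(image, phi, c1, c2, alpha=20.0, dt=0.1):
    """Simplified ACSLGS evolution: phi_t = spf(I) * alpha * |grad phi|."""
    py, px = np.gradient(phi)
    return phi + dt * spf(image, c1, c2) * alpha * np.sqrt(px**2 + py**2)
```

The sign of spf alone decides whether the contour locally expands or shrinks, which is why the curvature and advection terms can be dropped without changing the steady state.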

Proposed Method
The previous active contour models suffer from improper initialization and sometimes require re-initialization. We propose a novel method, illustrated in Figure 2, to address the limitations identified in Section 2, using the active contour in the complex domain for salient object detection. In this section, we describe the complex domain transformation and then our formulation of the active contour model. We discuss and formulate a low-pass filter in the complex domain and, finally, define the complex force function.

Figure 2. Illustration of the architecture of our ACCD_SOD algorithm.

Complex Domain Transformation
The active contour model displays unstable behavior in the energy minimization problem and it suffers from improper initialization. Pixelwise salience detection provides a solution for initialization and further foreground identification. The conventional method of the active contour model has limitations in the case of noise. We propose a complex domain algorithm (ACCD_SOD) to resolve the problem in order to obtain clear and accurate salient objects. Our proposed algorithm for saliency operates in the complex domain. We combine the complex domain with the Schrödinger equation [44] to develop a fundamental solution for the linear case [45,46]. The complex domain has a general property of forward and inverse diffusion [47]. The complex domain contains salience cues and gives favorable results for cyclic objects, which makes it suitable for active contours [48]. The complex domain consists of two signals, real and imaginary. We transform the image to the complex domain by multiplying the input image by the complex number i (iota).
Let $\Omega$ define the image domain and $f : \Omega \to \mathbb{C}$ a complex function, where $\mathbb{C}$ denotes the complex values. The special symbol iota ($i$) stands for the imaginary unit and can be represented as $i = \sqrt{-1}$; it has the property that $i^2 = -1$. The standard form of a complex number is $a + ib$, $a, b \in \mathbb{R}$. Complex numbers are the extension of the real numbers: if $b = 0$, the number is real, and, if $a = 0$, it is purely imaginary. We convert a real image to the complex domain via

$$f(x) = \sqrt[n]{i} \times x,$$

where $n$ is an odd number and $\sqrt[n]{i} \times x$ gives a complex value. Since $\sqrt[n]{i} = e^{i\pi/2n}$, the output complex value preserves the characteristic that, as $n$ grows, the real part grows while the imaginary part decreases. If the number $n$ increases to infinity, then the value has the following limit:

$$\lim_{n \to \infty} \sqrt[n]{i} \times x = x.$$

We use the geometrical representation in Figure 3 to explain the limits. Multiplication by $+1$ is equivalent to rotation by 360 degrees, and it keeps the complex value at the same position. When we multiply a complex value by $-1$, it is equivalent to rotation by 180 degrees, and multiplying by $-1$ two times is equal to multiplying by $+1$. Multiplication by $i$ is equivalent to a 90-degree rotation. If we multiply by $\sqrt{i}$, this can be expressed as a rotation of 45 degrees; multiplying by $\sqrt{i}$ eight times equals a 360-degree rotation. The value on the imaginary unit circle at 45 degrees is $\sqrt{i}$. In general, multiplication by $\sqrt[n]{i}$ is the same as a $90/n$-degree rotation.
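The rotation arithmetic above can be verified directly with Python's built-in complex type; a small sketch:

```python
import cmath
import math

z = 3 + 0j                      # a point on the positive real axis
assert z * 1j == 3j             # multiplying by i rotates by 90 degrees
assert z * (-1) * (-1) == z     # two 180-degree rotations return to the start

r45 = cmath.sqrt(1j)            # sqrt(i) corresponds to a 45-degree rotation
assert math.isclose(cmath.phase(r45), math.pi / 4)

w = z
for _ in range(8):              # eight 45-degree rotations = 360 degrees
    w *= r45
assert cmath.isclose(w, z)

# in general, the n-th root of i rotates by 90/n degrees
n = 5
assert math.isclose(cmath.phase(1j ** (1 / n)), (math.pi / 2) / n)
```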


Low-Pass Filter
The algorithm uses a low-pass filter in the complex domain to suppress noise and provide a hierarchical framework for determining salience. The low-pass filter acts as a penalizing term, which works as the initialization and eliminates the re-initialization of the curve. The low-pass filter is applied to the active contour through the gradient operator, which suppresses the high spatial frequencies. In each iteration, each intermediate result is passed through the low-pass filter. The low-pass filter lets the details of the salient region in a specific frequency range pass and attenuates frequencies outside that range, giving a good representation of the salient region at multiple resolutions. The output produces detailed information in the form of salience cues. This process extracts salience results in the complex domain while damping fluctuations associated with high illumination. The low-pass formulation is written in terms of $\alpha$, the propagation speed; $\nabla$, the gradient operator; and $S$, a complex filter. Here, with a slight abuse of notation, we use $|\cdot|_{L_2}$ to denote the square root of the sum of the squared values of the real and imaginary parts. The low-pass filter $H_{LP}$ in the complex domain discovers the objects of interest and helps the contour converge on those areas.
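The paper's own filter formulation is not reproduced above. As a generic illustration of a low-pass filter acting on a complex-valued image, the following FFT-based sketch may help; the cutoff radius and the frequency-domain construction are assumptions, not the authors' filter.

```python
import numpy as np

def lowpass_complex(f, cutoff=0.15):
    """Keep spatial frequencies below `cutoff` (as a fraction of the sampling
    rate; Nyquist is 0.5) and zero out the rest; works on complex images."""
    F = np.fft.fft2(f)
    fy = np.fft.fftfreq(f.shape[0])[:, None]
    fx = np.fft.fftfreq(f.shape[1])[None, :]
    mask = np.sqrt(fx**2 + fy**2) <= cutoff
    return np.fft.ifft2(F * mask)
```

A smooth (constant or slowly varying) complex image passes through unchanged, while pure high-frequency content such as a checkerboard is removed entirely.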

Complex Force Function
The signed pressure force function plays an important role in expanding the dynamic contour [41] when the contour is inside the object boundary and shrinking it when the contour is outside the object boundary. The function introduced here differs from ACSLGS and the design of complex signed pressure in [41,49]. We propose a pressure force, also called a complex force function, which utilizes the statistical information in the complex domain to assign the intensities inside the contour to the foreground and those outside the contour to the background. The proposed complex force function is defined analogously to the signed pressure function, with the statistics computed in the complex domain. The complex force function dynamically increases the interior and exterior energies of the curves in order to reduce the chance of entrapment in a local minimum while the curve is far from the object boundary. When the curve is close to the object boundary, the energy is decreased, and the curve automatically stops near the boundary. The introduced model reduces the computational cost efficiently. The same analogy is adopted for $c_1$ and $c_2$, defined in Equations (7) and (8).
The global complex force function is created by multiplying the complex-domain image, pixel by pixel, with the low-pass filter and the complex force function. The proposed Algorithm 1 for the active contour salience formulation is adopted according to Equation (15).
This algorithm maintains a fixed value of $\lambda = 0.5$, which controls the evolution of the contour, while the ACSLGS model uses a parameter that is not fixed. During contour evolution, classic algorithms need to periodically re-initialize the level-set function $\phi$ as a signed distance function. Instead, we regularize the level-set function with a Gaussian filter, which avoids re-initialization. We define the initial level-set function $\phi$ for the proposed algorithm as

$$\phi(x, t = 0) = \begin{cases} p, & x \in \Omega_0 - \partial\Omega_0 \\ 0, & x \in \partial\Omega_0 \\ -p, & x \in \Omega - \Omega_0 \end{cases}$$

where $\Omega$ is the image domain, $\Omega_0$ is a subset of $\Omega$, and $\partial\Omega_0$ denotes the boundary of the initialized region; $p$ is a constant with $p > 0$, and $p$ controls the propagation of the moving interface over the specified interval.
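The piecewise-constant initialization can be sketched as follows; the boolean-mask interface and the erosion-based boundary extraction are implementation assumptions.

```python
import numpy as np

def init_phi(shape, region, p=1.0):
    """Piecewise-constant initial level set: +p strictly inside the seed
    region Omega_0, 0 on its boundary, -p outside. The sign convention
    follows In(C) = {phi > 0}; `region` is a boolean mask for Omega_0."""
    phi = np.full(shape, -p, dtype=float)
    phi[region] = p
    # interior = region eroded by its 4-neighborhood; the rest of the
    # region is the boundary, which carries the zero level set
    interior = np.zeros_like(region)
    interior[1:-1, 1:-1] = (region[1:-1, 1:-1]
                            & region[:-2, 1:-1] & region[2:, 1:-1]
                            & region[1:-1, :-2] & region[1:-1, 2:])
    phi[region & ~interior] = 0.0
    return phi
```

Because the values are just three constants, no signed-distance computation is needed, matching the point that a constant (binary) initialization suffices.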

Computational Efficiency
The performance, efficiency, and average running time of the proposed Algorithm 1 are tested on a desktop machine with a Core i7 2.50 GHz CPU and 8 GB RAM running Windows 7. The proposed algorithm is implemented in MATLAB. The results are summarized in the figures and tables and show that our algorithm is the fastest, has a low computational cost, and provides high-quality salient object results, as illustrated in Figure 1.

Comparison with Recent State-of-the-Art Models for Active Contours
We evaluate our algorithm qualitatively and quantitatively on IMAGE_1, IMAGE_2, and IMAGE_3, as shown in Figure 4, and on real images, by comparison with SDREL [50], RBS [51], the CV model [29], the LI model [52], and the DRLSE model [39]. The results illustrate the initialization, the initial contour, the final contour, and the salient object.

The iterations represented in Figures 5 and 6 describe the evolution time, represented in the form of a curve. The comparison results in Figure 5 demonstrate that our algorithm iterates only once for all three images to detect the initial and final contours of the object, while the other state-of-the-art models fail to detect the object boundary in the first iteration. Figure 6 also shows that our proposed algorithm has a very low computational cost compared to the other methods.
Next, we conduct experiments on real images. The proposed method is evaluated on the Microsoft Research Asia (MSRA-1000) dataset [53]. MSRA-1000 is a traditional dataset for saliency detection, designed to promote research in the field via open and metrics-based evaluation. The dataset contains 1000 images with accurate pixel-wise ground-truth saliency annotations. The MSRA-1000 images vary widely in content, but their background structures are essentially simple and smooth. The dataset is available online: https://mmcheng.net/msra10k/. Our algorithm used fixed values of sigma (σ = 4) and alpha (α = 0.4) throughout the comparison. We comparatively evaluate the salient region produced by our algorithm against state-of-the-art models on the basis of four metrics, i.e., the rand index (RI) [54], the global consistency error (GCE), the variation of information (VOI) [55], and the F-measure. We select 886 images from the MSRA dataset. Table 1 shows the results. The methods involved in the comparison are SDREL [50], RBS [51], the CV model [29], the LI model [52], and the DRLSE model [39]. Our ACCD_SOD achieves the highest RI (higher is better), the lowest GCE and VOI (lower is better), and the highest F-measure. In terms of running time, our algorithm is faster than the state-of-the-art models.
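As an illustration of how one of these metrics can be computed, the rand index for a pair of binary segmentation masks can be evaluated in closed form from the 2x2 contingency table. The sketch below is illustrative (the paper's exact implementation is not specified); function and variable names are our own:

```python
import numpy as np

def rand_index(seg, gt):
    """Rand index between two binary label images: the fraction of pixel
    pairs on whose grouping (same region vs. different regions) the two
    labelings agree. Higher is better; 1.0 means perfect agreement."""
    seg = np.asarray(seg).ravel().astype(bool)
    gt = np.asarray(gt).ravel().astype(bool)
    n = seg.size

    def comb2(x):  # number of unordered pairs among x items
        return x * (x - 1) / 2.0

    # 2x2 contingency table between the two binary labelings.
    tp = np.sum(seg & gt)
    fp = np.sum(seg & ~gt)
    fn = np.sum(~seg & gt)
    tn = np.sum(~seg & ~gt)

    sum_cells = comb2(tp) + comb2(fp) + comb2(fn) + comb2(tn)
    sum_rows = comb2(tp + fp) + comb2(fn + tn)  # pairs grouped together in seg
    sum_cols = comb2(tp + fn) + comb2(fp + tn)  # pairs grouped together in gt
    return (comb2(n) + 2 * sum_cells - sum_rows - sum_cols) / comb2(n)

seg = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 0, 0])
print(rand_index(seg, gt))  # 0.5 (3 of 6 pixel pairs agree)
```

The pair-counting form avoids enumerating all n(n-1)/2 pixel pairs explicitly, which matters for full-resolution masks.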
We also compare our ACCD_SOD algorithm with SDREL [50], LFE [40], and LI [52] on the six synthetic images given in Figure 7. The comparison results based on the number of iterations are summarized in Figure 8, and the evolution times are summarized in Figure 9. Figure 7: the first row consists of the original images; the second row shows the object contours of the SDREL model; the third row gives the SDREL salience results; the fourth row shows the contours of LFE; the fifth row shows the salience results of the LFE model; the sixth row shows the contours of the LI model, with its salience results in the seventh row; the eighth row shows the contour results of our algorithm, and the final salience maps are in the last row.

Comparison with Recent Models without Active Contour
The proposed method is qualitatively and quantitatively compared with state-of-the-art algorithms, including normalized cut-based saliency detection by adaptive multi-level region merging, published in Transactions on Image Processing and abbreviated TIP [56]; visual saliency by extended quantum cuts (EQCUT) [57]; context-aware saliency detection (CA) [58]; saliency detection using maximum symmetric surround, abbreviated AC after Achanta [59]; and saliency filters: contrast-based filtering for salient region detection (SF) [60]. The proposed method is evaluated on part of the MSRA-1000 dataset. The aim of our complex domain algorithm is to boost the salient region based on the active contour. Figure 10 shows the comparison results, and Figure 11 shows the precision and recall curves. In Figure 12, we present the curves of true positives vs. false positives. The curves show that the proposed algorithm surpasses the others, except TIP. Table 2 shows the average running time of each algorithm (seconds per image) on the MSRA-1000 dataset.
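Precision-recall curves like those in Figure 11 are typically produced by sweeping a binarization threshold over each saliency map and scoring the result against the binary ground truth. A minimal sketch, with an illustrative threshold grid and toy data (names and grid are our assumptions, not the paper's):

```python
import numpy as np

def pr_curve(saliency, gt, thresholds=None):
    """Binarize a saliency map at each threshold and score it against a
    binary ground-truth mask, yielding one (precision, recall) point
    per threshold."""
    s = np.asarray(saliency, dtype=np.float64).ravel()
    g = np.asarray(gt).astype(bool).ravel()
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 21)
    precision, recall = [], []
    for t in thresholds:
        pred = s >= t
        tp = np.sum(pred & g)
        precision.append(tp / max(np.sum(pred), 1))  # avoid division by zero
        recall.append(tp / max(np.sum(g), 1))
    return np.array(precision), np.array(recall)

# Toy example: a noisy saliency map that is brighter inside the object.
rng = np.random.default_rng(0)
gt = np.zeros((32, 32), dtype=bool)
gt[8:24, 8:24] = True
saliency = np.clip(0.7 * gt + 0.3 * rng.random((32, 32)), 0.0, 1.0)
p, r = pr_curve(saliency, gt)
```

As the threshold rises, the predicted foreground shrinks, so recall is non-increasing along the curve; precision generally rises until the prediction becomes too sparse.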

We summarize the precision-recall curves and report the F-measure, defined by

F_α = ((1 + α²) · P · R) / (α² · P + R),

where the parameter α is the tradeoff between precision P and recall R. The recall and precision are calculated by Recall = |A ∩ B| / |B| and Precision = |A ∩ B| / |A|, where A is the detected object and B is the ground truth.

Figure 11. Precision-recall (PR) curves comparing the proposed approach with existing ones. Our algorithm outperforms the others, except TIP.
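These definitions translate directly into code. The sketch below uses the common (1 + α²) weighting of the F-measure, which we assume here (α = 1 reduces to the balanced F1 score); the paper's exact α for scoring is not restated in this section:

```python
import numpy as np

def f_measure(A, B, alpha=1.0):
    """Precision, recall, and weighted F-measure for binary masks.
    A: detected object mask, B: ground-truth mask."""
    A = np.asarray(A).astype(bool)
    B = np.asarray(B).astype(bool)
    inter = np.sum(A & B)
    precision = inter / max(np.sum(A), 1)  # |A ∩ B| / |A|
    recall = inter / max(np.sum(B), 1)     # |A ∩ B| / |B|
    denom = alpha**2 * precision + recall
    f = (1 + alpha**2) * precision * recall / denom if denom > 0 else 0.0
    return precision, recall, f

A = np.zeros((8, 8), dtype=bool); A[2:6, 2:6] = True  # detected object (16 px)
B = np.zeros((8, 8), dtype=bool); B[3:7, 3:7] = True  # ground truth (16 px)
p, r, f = f_measure(A, B)
print(p, r, f)  # 0.5625 0.5625 0.5625 (9 overlapping pixels of 16)
```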

Discussion
The effectiveness and stability of the proposed algorithm are verified on both synthetic and natural images in this section. The major contribution of our ACCD_SOD model is the complex domain: the model introduces a novel algorithm that uses the complex domain to simultaneously provide salience cues and find the exact boundary. The proposed ACCD_SOD model obtains salience detection results using the global complex force function. Our primary aim is to propose a model that controls the initialization irregularity in active contours and is also capable of eliminating the expensive re-initialization. Pixel-wise salience detection provides such a solution for initialization and the subsequent foreground identification. The model not only resolves the problem of initialization through the complex domain, but also makes effective use of a low-pass filter in the complex domain to eliminate the problem of re-initialization. It is clear from the results that our algorithm detects the boundary accurately without repeated iterations or an extra initialization step. Figures 1, 4 and 7 show that the algorithm initializes directly on the object boundary.
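To make the overall pipeline concrete, the sketch below shows one hypothetical way an intensity image could be mapped to the complex domain and low-pass filtered in the frequency domain. The phase mapping and the ideal low-pass mask are stand-ins for illustration only; they are not the transform or filter defined by ACCD_SOD earlier in the paper:

```python
import numpy as np

def complex_lowpass(img, cutoff=0.1):
    """Hypothetical sketch: map intensities to unit-magnitude complex
    phasors, then keep only low spatial frequencies (ideal low-pass)."""
    img = np.asarray(img, dtype=np.float64)
    # Hypothetical complex mapping: phase proportional to normalized intensity.
    z = np.exp(1j * np.pi * img / (img.max() + 1e-12))
    # Ideal low-pass filter applied in the frequency domain.
    F = np.fft.fftshift(np.fft.fft2(z))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2, w / 2
    radius = cutoff * min(h, w)
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
    z_smooth = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return np.angle(z_smooth)  # the phase carries the smoothed structure

noisy = np.zeros((64, 64))
noisy[16:48, 16:48] = 1.0
noisy += 0.2 * np.random.default_rng(0).standard_normal((64, 64))
smooth = complex_lowpass(noisy, cutoff=0.15)
```

The recovered phase image suppresses the pixel-level noise while retaining the coarse object region, which is the kind of initialization cue the text describes.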
Among the methods involved in the comparison, the DRLSE model is overly reliant on initialization and produces numerical errors. The LI model requires a convolution before each iteration; this reduces the computation time, but its initial contour is not robust. The RBS model uses local interior and exterior statistics to initialize every pixel, which increases the computational complexity. The LFE model applies local information to handle intensity inhomogeneity and makes acceptable segmentation decisions, but it is sensitive to noise. SDREL fails on images with complex backgrounds, as investigated by previous researchers [44-47]. As our results show, the complex domain provides a smooth curve, and its implementation reduces the computational cost compared to the other models.
Our proposed model shows high robustness compared to the SDREL, RBS, CV, LI, DRLSE, and LFE models and accommodates inhomogeneous images. It avoids re-initialization and thus also reduces the computational cost. The complex force function has a relatively extensive capture range and accommodates small concavities. We observe that our complex domain approach achieves encouraging results for both strong and weak objects. The proposed approach can be implemented on and suited to different image modalities. However, in the presence of weak objects, the proposed approach detects only approximate boundaries. Its evaluation capacity is therefore reduced on poor, low-quality, or heavily blurred images. Heavy noise also reduces the estimation ability due to its impact on the edges of the object. Our proposed model is also not well suited to textured images.
The parameter setting process plays an important role in the ultimate result of ACMs, and proper parameters can produce a great performance improvement in an active contour model. If the value of the parameter α is too large, the propagation of the dynamic interface cannot be controlled; if α is too small, the contour loses its propagation speed. Choosing a proper value of α is therefore difficult: whether the value is too large or too small, the performance and results are adversely affected. We therefore set a fixed value of α = 0.4 for all synthetic and natural images. The computation parameter σ must also be chosen carefully. A very small value of σ increases sensitivity to noise, whereas a very large value increases the computational cost and causes edge leakage. Therefore, to achieve a more precise saliency result, setting σ to a comparatively small value gives stable propagation. Our model uses fixed parameters and shows good performance when compared to state-of-the-art models. Local tests show that better results could be obtained if the parameters were optimized for every single image.
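The σ tradeoff described above can be seen on a one-dimensional toy signal: a small σ keeps the edge sharp but leaves noise, while a large σ suppresses noise but blurs (leaks) the edge. A minimal sketch using a Gaussian low-pass kernel (a generic smoother, not the paper's exact filter):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel; larger sigma means stronger smoothing."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(signal, sigma):
    """Gaussian smoothing of a 1-D signal via direct convolution."""
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

# A noisy step edge.
rng = np.random.default_rng(1)
step = np.r_[np.zeros(50), np.ones(50)] + 0.1 * rng.standard_normal(100)
mild = smooth(step, sigma=1.0)   # edge preserved, some noise remains
heavy = smooth(step, sigma=8.0)  # noise gone, edge smeared across ~16 samples
```

The maximum gradient magnitude near the step is markedly larger for the small σ, which is exactly the edge-leakage effect the text warns about for large σ.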

Conclusions and Future Research
In this paper, a new complex domain model is proposed that uses a low-pass filter as a penalty term and a complex force function to distinguish the object from the background for the active contour. It is well known that there is no perfect method that succeeds for all types of images. The detection results depend on many factors, such as the texture, intensity distribution, image content, and problem domain. Therefore, it is impossible to consider a single method for all types of images. We focus on the initialization and re-initialization problem and propose a novel complex-domain-based transformation for joint salience detection and object boundary detection. First, we transform the input image to the complex domain. The complex transformation provides salience cues. We use a low-pass filter to distinguish the salient region from the background regions. Finally, the complex force function is used to fine-tune the detected boundary. We compare the proposed model with state-of-the-art techniques and find that the proposed technique outperforms the other methods and produces better results at low computational cost.
According to the analysis offered in this study, the ACM in the complex domain furnishes several advantages, and we would like to provide some recommendations for future research. First, the complex domain can be used in other areas; for example, it can help to efficiently extract more local information in medical images. Second, initialization and parameter settings play a very important role in the final results; optimization algorithms for setting parameters should be studied, and more suitable initialization methods for active contour models in the complex domain should be proposed in order to reduce the computational cost. Finally, combining the complex domain with other theories is a feasible and efficient way to address the deficiencies of ACMs; for example, the complex domain could be combined with a neural network to obtain a high level of accuracy.
Author Contributions: The authors' contributions are as follows: U.S.K. conceived the algorithm, designed and implemented the experiments, and wrote the manuscript; X.Z. provided resources; Y.S. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.