Emotion-Based Classification and Indexing for Wallpaper and Textile

This study develops a novel framework for the classification and indexing of wallpaper and textile images based on human emotions and visual impression. The method allows users to retrieve a set of similar images corresponding to a specific emotion by indexing with either a reference image or an emotional keyword. A predefined color-emotion model handles the transference between emotions and colors. Besides color and emotion, the other significant feature for indexing is texture. Therefore, two features are adopted in the method: the main (representative) colors of a color image and its foreground complexity. The foreground complexity (pattern complexity) is also called the texture of the pattern in an image. Another contribution of this study is a pair of new algorithms, Touch Four Sides (TFS) and Touch Up Sides (TUS), which aid in extracting an accurate background and foreground from color images. By transferring between emotions and colors, this approach can help non-professionals find suitable color combinations for a given emotion and imitate professional color matching in applications such as interior design, product design, advertising design, image retrieval and other related applications.


Introduction
The prediction of human emotions is very important in many applications of business, science and engineering. In recent years, there has been an accumulation of studies using images to convey emotions and opinions. Increasingly, studies of methods for image analysis, retrieval and annotation that use emotional semantics have been applied in many disciplines.
Images are an important medium for conveying human emotions and visual impressions. Serving as the main components of an image, colors and textures are the features most commonly used in research. Meanwhile, the human reaction to colors has been studied for many years: certain colors generate certain feelings. Designers who understand these color-emotion relationships can use them to represent a business appropriately. For example, Figure 1 [15] shows a color-emotion guide and a fun infographic of the emotions elicited by well-known brands.
This paper focuses on images of wallpaper and textiles, which are discussed together because they have similar inherent structure. The proposed method offers a substitute for traditional approaches to classifying and indexing images by emotion. In computer image processing, there are many effective techniques for indexing images by extracting features automatically. Unfortunately, there has been little research on image indexing derived from human emotion. However, as multimedia data on the Internet grow, with countless images, photos and videos, techniques for indexing and retrieving such content are increasingly required, and indexing based on human emotions will become increasingly necessary for users. This paper proposes the indexing of wallpaper and textiles using emotion, based on a predefined color-emotion model to identify the emotions generated by images. It offers a new, interesting and practical experience for users.
Understanding and describing the emotions aroused by an image remains a challenge. The inherent subjectivity of the user's emotional responses is also difficult to classify. This domain is challenging because emotions are subjective feelings uniquely influenced by the personal characteristics of the user. Since emotions and messages about the content of an image are usually specific, views differ with culture, race, gender, age, etc. No studies have proposed a reliable and objective solution to this challenge. For instance, an experiment by Michela et al. [8] analyzed the opinions of only a few people. Their case study analyzed emotion and opinion for natural images; a classification algorithm categorized images according to the emotions perceived by people looking at them, to understand which features were most closely related to those emotions. However, such results do not represent a majority view. The solution developed in this study uses a color-emotion model instead. Many studies of color-emotion models have been published: for instance, Kobayashi [9,10] published a text on the Color Image Scale and Eisemann [11] presented Pantone's Guide to Communication with Color. These models can be used to find reliable and objective solutions to color-emotion transfer problems.
The new color-emotion mapping method proposed in this study uses a dynamic color combination scheme: the complexity of a color image's content determines the number of colors in the combination (two, three or five). Chou's color-emotion guide [14] from the Color Scheme Bible Compact Edition, comprising 24 emotions, each covered by four kinds of color combinations (24 single colors, 24 two-color, 48 three-color and 32 five-color combinations), is used to connect color and emotion. Figure 2 shows one of these emotions. The proposed method thus supports two indexing solutions. The first allows users to choose the desired emotion directly from a predefined color-emotion model; this work uses Chou's color-emotion guide. The second allows users to index using reference images.
Figure 2. In Chou's color-emotion study [14], the emotion of softness contains two-, three- and five-color combinations.
The challenges of using dynamic color combinations in the proposed color-emotion mapping method include dynamically determining the number of colors in the image, accurately identifying the main colors of the background and foreground, and pairing these chosen main colors with destination color combinations. The method developed by Su and Chang [22] is used to select the initial representative color points in the color space. No more than three cycles of the Linde, Buzo, and Gray (LBG) algorithm [23] are then needed to obtain the final representative color points by clustering. This study also proposes two novel algorithms, Touch Four Sides (TFS) and Touch Up Sides (TUS), which enable accurate identification of the background color and the other main colors. After feature extraction or emotion selection, the system helps non-professionals accurately and easily meet their expectations of emotion, which are based on feelings. Figure 3 shows the schematic diagram of this method.
The remainder of this paper is organized as follows. Section 2 reviews previous studies in related areas. Section 3 describes the proposed architecture, the feature extraction process and the dynamic color combinations for color-emotion in detail, and finally the processing of Emotion-Based Classification and Indexing (EBCI). Section 4 demonstrates the experiment and the results. Section 5 offers conclusions and suggestions for future work.

Related Works
An automatic color-emotion transfer algorithm was first proposed by Reinhard et al. [16]. It shifts and scales the pixel values of the input image to match the mean and standard deviation of the reference image. Although the Reinhard study was performed in 2001, psychological experiments were performed as early as 1894, when Cohn proposed the first empirical method for color preference [27,28]. Despite the considerable number of studies, Norman [29] reported that the findings of these early studies were diverse and tentative. Later, Kobayashi [10] developed a color-emotion model that used data from psychological studies to identify emotional features in the field of fashion.
In the Complete Guide to Colour Psychology [30], two color-emotion models were proposed: one for a single color and one for a combination of colors. The two main categories in the study of single-color emotion are classification and quantification. The classification of single-color emotion uses principal component analysis to reduce a large number of colors to a small number of categories [31]. Early studies to quantify color-emotion were undertaken by Sato and Nakamura [32], who provided a color-emotion model quantifying three color-emotion factors: activity, weight and heat. This study was later verified by Wei-Ning et al. [19]. Studies of color combinations by Kobayashi [9] and Ou et al. [17,18] showed a simple relationship between emotion and a single color, and between emotion and a color pair. Lee et al. [33] used set theory to evaluate color combinations. Whereas previous studies of color-emotion only considered color pairs, the proposed approach uses a color-emotion scheme from the Color Scheme Bible Compact Edition [14], a predefined color-emotion guide. Compared to single- or two-color emotion, color combinations provide a simpler and more accurate description of the color-emotion in images. Tanaka [34] reported that emotion semantics are the most abstract semantic structure in images because they are closely related to the cognitive models, cultural background and aesthetic standards of users. Of all the factors affecting the emotion of images, colors are considered the most important. Tanaka found that the relative importance of heterogeneous features for attractiveness is color > size > spatial frequency, and that the relative contribution of heterogeneous features to attractiveness is color heterogeneity > size heterogeneity > shape heterogeneity > texture heterogeneity. Two methods to extract emotion features from images were soon developed. The first is a domain-based method that extracts relevant features based on specialist knowledge of the application field [19]. For instance, Itten [35] formulated a theory for the use of color and color combinations in art, including their semantics. The other method is to perform psychological experiments to identify the types of features that significantly affect users. To this effect, Mao [36] proved that the fractal dimensions of images are related to specific properties of an image.
For textiles, Kim et al. [2] presented an approach using physical feature extraction and fuzzy-rule-based evaluation to label images with human emotions or feelings, which demonstrated positive results. They then proposed a method using a neural network for emotion-based textile indexing. The proposed indexing system was tested using 160 textile images; for each image, seventy people manually annotated the emotions that it inspired. However, this experiment still analyzed the opinions of relatively few people.
Recently, human emotion has become a significant component in the design of the visual aspects of wallpaper, interior design, textiles, fashion and homepages [37]. However, because the perception and interpretation of visual information at the affective level is ambiguous, it is difficult to predict human emotions directly from visual information. Therefore, it is important to determine the relationship between human emotions and visual information. In this work, a new correspondence method for color-emotion that adopts a dynamic color combination scheme is proposed. It demonstrates the relationship between colors and emotions based on a predefined color-emotion model, applied to the classification and indexing of wallpaper and textiles.

The Proposed Scheme
This study develops a new framework for the classification and indexing of wallpaper and textiles using human emotion. It supports two solutions: the reference image and the semantic keyword. The schematic diagram in Figure 3 shows that, when the semantic option is selected, users choose a desired emotion from a predefined color-emotion model. In contrast, the reference-image option does not require the predefined color-emotion model. Both options can index and identify the intended images from the classified database according to similar emotions. Figure 4 shows the flow diagram and the components of the proposed framework for Emotion-Based Classification and Indexing (EBCI).

Feature Extraction
In the feature extraction stage, images arriving via two pathways are analyzed and their features are extracted. One pathway uses the reference image to acquire the features, which include the main colors and the complexity of the foreground. The other pathway receives images inserted by the system administrator, which are classified and added to the database based on their features.
As mentioned above, the feature extraction stage extracts the main colors from the images of both pathways. The main colors can be a single color or one of three composed color combinations: two-color, three-color or five-color. The chosen combination depends on the color complexity of the image. Furthermore, the method also supports other emotion guidebooks that allow combinations of more than five colors in the color-emotion model.
Figure 5 demonstrates some examples from the predefined color-emotion guidebook by Chou [14]. Generally, the number of main colors increases with the color complexity of the image; that is, the number of colors in the combination is chosen according to the color complexity of the image's content. In this study, the first color of the cluster is set as the background, the second color of the cluster is set as the dominant color, and the rest are deduced by analogy. Therefore, the main colors are selected dynamically, according to the color complexity of the images.
The main colors extracted from an input image or a reference image correspond to a predefined color combination of an emotion, and both must contain the same number of colors. This approach adopts the color-emotion model by Chou [14], which provides a single color and three other color combinations (two, three and five colors); accordingly, the number of main colors must lie within this range.
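As a concrete illustration of this pairing step, the sketch below matches a set of extracted main colors against predefined emotion combinations of the same size using Lab distances. The guide structure, function names and the position-wise matching rule are assumptions for illustration only; the paper uses Chou's 24-emotion guide and its own pairing procedure.

```python
import math

def lab_distance(p, q):
    """Euclidean distance between two Lab colors."""
    return math.sqrt(sum((qc - pc) ** 2 for pc, qc in zip(p, q)))

def nearest_emotion(main_colors, emotion_guide):
    """Return the emotion whose predefined combination of the same size
    best matches the extracted main colors.

    emotion_guide: {emotion_name: [combination, ...]}, where each
    combination is a list of Lab tuples.  Only combinations with the
    same number of colors as the extracted set are compared, mirroring
    the requirement stated above.
    """
    n = len(main_colors)
    best, best_cost = None, float("inf")
    for emotion, combinations in emotion_guide.items():
        for combo in (c for c in combinations if len(c) == n):
            # Position-wise distance: the paper orders colors by role
            # (background first, dominant second, ...).
            cost = sum(lab_distance(p, q) for p, q in zip(main_colors, combo))
            if cost < best_cost:
                best, best_cost = emotion, cost
    return best
```

A caller would populate `emotion_guide` with the Lab values of the guidebook's combinations and pass the extracted main colors in background-first order.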


The Color Emotion Model
A model comprising 24 emotions is proposed by Chou [14]; each emotion includes 24 two-color combinations, 48 three-color combinations and 32 five-color combinations. The labels of the 24 emotions are: fresh, soft and elegant, illusive and misty, pretty and mild, frank, soothing and lazy, lifelike, solicitous and considerate, clean and cool, excited, optimistic, ebullient, graceful, plentiful, magnificent, moderate, primitive, simple, solemn, mysterious, silent and quiet, sorrowful, recollecting, and withered. All of the colors in Chou's model are given in the RGB and CMYK color spaces. In this study, the CIELab [38] color space, converted from the RGB color space, is adopted. The CIELab color space is illustrated in Figure 6, which represents values using three axes: L, a and b.


The Dynamic Extraction of the Main Colors
The main color extraction process is demonstrated in the flowchart of Figure 7, which is part of the feature extraction stage in this framework. In this step, the representative colors, also called the main colors of the treated image, are derived from either the reference image or the input image. Both the main colors and their number are determined at this stage. For color-emotion correspondence, conventional approaches [17,18] generally provide only one color or a fixed number of color combinations to represent the relation between color and emotion. In contrast, adjustable, dynamic color combinations of one to five colors are permitted in this study, depending on the color complexity of the image's content; here, complexity is defined by the distribution of the image pixels in the Lab color space. For instance, Figure 5b demonstrates less complex images using two-color combinations and Figure 5d shows more complex images using five-color combinations.
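A minimal sketch of how the combination size might follow complexity: map the number of color clusters found in the image to the largest supported combination size (1, 2, 3 or 5) that does not exceed it. The mapping rule itself is an assumption for illustration, since the paper only states that the combination size follows the image's color complexity.

```python
def combination_size(num_clusters, allowed=(1, 2, 3, 5)):
    """Pick a supported combination size for an image that yielded
    `num_clusters` color clusters.

    The rule (largest allowed size <= num_clusters, with a floor of 1)
    is a hypothetical stand-in for the paper's own complexity criterion.
    """
    candidates = [s for s in allowed if s <= num_clusters]
    return max(candidates) if candidates else min(allowed)
```

Under this rule, an image with four clusters would be represented by a three-color combination, while one with eight clusters would use the five-color maximum.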


Identification of the Main Colors
The main colors of an image are extracted in the second step of the flowchart in Figure 7. In this step, the representative colors of the processed images are determined in the Lab color space. The processed images can be reference images or input images. Most conventional methods use clustering to identify the main colors according to the distribution of the image pixels in the color space. The k-means clustering method [39] is widely used, but its drawback is that the initial points in the color space must be chosen randomly from the source image. Another common method is clustering by vector quantization (VQ) [40], in which the nearest quantization vector centroid is moved towards the initial point by a small fraction of the distance. This classic quantization technique was initially developed for image processing. However, it is slow when the initial points are poor; more importantly, its performance depends significantly on the initially selected points.
In this study, the representative points of the image pixels are selected by a method combining two techniques. First, Independent Scalar Quantization (ISQ) [21] is adopted to acquire the initial points from the pixel distribution simply and quickly. Figure 8 shows the 3-D and 2-D diagrams: Figure 8a shows the 3-D Lab color space quantized into 8 individual cubes of the same size using ISQ, and Figure 8b shows the pixels distributed in the 2-D Lab color space. Three of the cubes hold pixels, and a cube with no pixels is called an empty cube. The centroid of each cube is computed from the pixels it contains, based on their distribution. Since an empty cube has no representative point, the number of initial points may be less than 8, but the maximum is 8. Because Chou [14] limits the maximum number of colors in a combination to five, the method appropriately modulates these initial points to match the color combinations.
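The ISQ initialization described above can be sketched as follows: each Lab pixel is assigned to one of 8 equal cubes (one cut per axis) and the centroid of each non-empty cube becomes an initial representative point. The axis midpoints used here (L = 50, a = 0, b = 0) are assumed for illustration, since the paper does not state where its cube boundaries lie.

```python
def isq_initial_points(pixels, mids=(50.0, 0.0, 0.0)):
    """ISQ sketch: split the Lab space into 8 octant cubes by cutting
    each axis once at `mids`, then return the centroid of the pixels
    in each non-empty cube as an initial representative color.
    """
    cubes = {}
    for p in pixels:
        key = tuple(p[i] >= mids[i] for i in range(3))  # octant index
        cubes.setdefault(key, []).append(p)
    # Empty cubes contribute no representative point, so at most 8 remain.
    return [
        tuple(sum(c[i] for c in pts) / len(pts) for i in range(3))
        for pts in cubes.values()
    ]
```

A set of pixels that occupies only three octants thus yields three initial points, matching the situation depicted in Figure 8b.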
Figure 8c shows how the LBG algorithm [23] is used to make the necessary modifications after ISQ. The LBG algorithm is similar to k-means clustering: it takes a set of input vectors, S = {x_i ∈ R^d | i = 1, 2, ..., n}, as the input and generates a representative subset of vectors, the codebook C = {c_j ∈ R^d | j = 1, 2, ..., K}, initially selected at random, with a user-specified K << n, as the output. Although the settings for vector quantization (VQ) are usually d = 16 and K = 256 or 512, the settings used in this study are d = 3 and K ≤ 8.

The algorithm proceeds as follows:
Step 1. Input the training vectors, S = {x_i ∈ R^d | i = 1, 2, ..., n}.
Step 2. Randomly select K vectors from S as the initial codebook, C = {c_j ∈ R^d | j = 1, 2, ..., K}.
Step 3. Set the iteration counter k = 0 and the initial distortion D_0 = ∞.
Step 4. Classify the training vectors into K clusters, according to x_i ∈ S_q if ||x_i − c_q||_p ≤ ||x_i − c_j||_p for all j ≠ q.
Step 5. Update the cluster centers: c_j = (1/|S_j|) ∑_{x_i ∈ S_j} x_i, for j = 1, 2, ..., K.
Step 6. Set k ← k + 1 and compute the distortion D_k = ∑_{j=1}^{K} ∑_{x_i ∈ S_j} ||x_i − c_j||_p.
Step 7. If (D_{k−1} − D_k)/D_k > ε, return to Step 4; otherwise, output the final codebook C.
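The steps above can be sketched as a small refinement routine, seeded with the ISQ centroids in place of the random selection of Step 2 and capped at three cycles, as the paper reports suffice. Squared Euclidean distance (p = 2) and the empty-cluster handling shown here are simplifying assumptions.

```python
def lbg_refine(pixels, codebook, eps=1e-3, max_iter=3):
    """Refine an initial codebook (e.g. the ISQ centroids) with LBG.

    Implements Steps 4-7: classify, update centers, recompute the
    distortion, and stop when the relative improvement drops below eps
    or max_iter cycles have run.
    """
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    prev_distortion = float("inf")
    for _ in range(max_iter):
        # Step 4: assign each pixel to its nearest code vector.
        clusters = [[] for _ in codebook]
        for p in pixels:
            j = min(range(len(codebook)), key=lambda k: dist2(p, codebook[k]))
            clusters[j].append(p)
        # Step 5: move each code vector to the centroid of its cluster;
        # code vectors with empty clusters are dropped.
        codebook = [
            tuple(sum(c[i] for c in pts) / len(pts) for i in range(3))
            for pts in clusters if pts
        ]
        # Step 6: distortion of the current partition.
        distortion = sum(min(dist2(p, c) for c in codebook) for p in pixels)
        # Step 7: stop when the relative improvement falls below eps.
        if prev_distortion - distortion <= eps * max(distortion, 1e-12):
            break
        prev_distortion = distortion
    return codebook
```

Seeding `codebook` with `isq_initial_points`-style centroids rather than random vectors is exactly the substitution the text describes for Step 2.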
The convergence of the LBG algorithm depends on the initial codebook, C, the distortion, D_k, and the threshold, ε. The maximum number of iterations must be defined in order to ensure convergence.
In the LBG process, Step 2 is replaced by the ISQ; that is, random selection is replaced by the initial representative points provided by the ISQ as the initial main colors. When the initial points (initial guesses) are chosen appropriately, good representative points are obtained by clustering the pixels with the LBG algorithm. However, when the initial points are poor, the final representative points may also be poor and the computation time increases; the performance of the LBG therefore depends significantly on the initial points. When applicable initial points are picked using the ISQ, the LBG algorithm performs more efficiently [22], and experiments show that no more than three cycles of the LBG are necessary to obtain a good result [22]. Figure 9 demonstrates how representative colors can be lost during processing by the LBG. In Equation (1), d(p, q) expresses the Euclidean distance between two points p = (p_L, p_a, p_b) and q = (q_L, q_a, q_b) in the Lab color space:

d(p, q) = sqrt((q_L − p_L)^2 + (q_a − p_a)^2 + (q_b − p_b)^2)    (1)
d(p, q) = √((qL − pL)² + (qa − pa)² + (qb − pb)²)    (1)

Continuing the above, the updating of the clustering centers is expressed in Step 5 of the LBG, as shown in Equation (2). Cubes with no or insufficient pixels must be re-established, and the representative colors in these cubes are eliminated; that is, the number of representative colors may be reduced based on the distribution of the pixels in the color space. For instance, Figure 9 demonstrates that a pixel can belong to cube Ci yet be further from centroid Ci than from centroid Cj of cube Cj; in that case, the LBG must be reused to make modifications. In Figure 9a,b, the stars are the representative colors for each cube and the dots represent the pixels in the color space.
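One assignment-and-update cycle of the LBG, including the elimination of empty clusters described above, can be sketched as follows. This is a minimal illustration rather than the authors' implementation; it assumes pixels and centers are NumPy arrays of Lab triples and uses the Euclidean distance of Equation (1).

```python
import numpy as np

def lbg_cycle(pixels, centers):
    """One LBG cycle (sketch): assign each Lab pixel to its nearest
    center by Euclidean distance (Equation (1)), recompute the
    centroids (Step 5), and drop centers whose cluster became empty."""
    # Squared Euclidean distances, shape (n_pixels, n_centers).
    d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    new_centers = []
    for j in range(len(centers)):
        members = pixels[labels == j]
        if len(members) > 0:               # empty clusters are eliminated
            new_centers.append(members.mean(axis=0))
    return np.array(new_centers), labels
```

Seeding `centers` with the ISQ initial points instead of random vectors is exactly the substitution of Step 2 described above, and in practice only a few such cycles are needed [22].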

Determination of the Amount of Main Colors
As mentioned previously, the representative points, which are also called the main colors in this study, are obtained from the color space of each single image. The number of main colors is determined in this phase and is allowed to range from 1 to 8 in the proposed method. The decisions on the background, the foreground and the other main colors are described in the next section.
In Figure 7, the cluster-sorting step based on the number of pixels is used to judge and sort all of the clusters obtained by the ISQ and a single cycle of the LBG. In the sorting process, every cluster is checked against an appropriate threshold on its number of pixels, and all clusters are compared. Those not meeting the threshold are eliminated, because they have less effect on the color-emotion expression. If the judgment in this step is "true" (Y), a further single cycle of the LBG renews the remaining clusters. In this implementation, the threshold is set to 1% of the total number of pixels in the color space; the threshold can be adjusted according to the circumstances, e.g., to 0.5% or 2%.
This study adopts Chou's work [14], which sets the maximum number of colors in a combination to five. Therefore, if the number of clusters exceeds this maximum, the top five largest clusters are retained and a single cycle of the LBG is used to renew these remaining clusters. Eventually, the number of clusters is adjusted dynamically to no more than five.
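The pruning described in the two paragraphs above (the 1% pixel threshold, then Chou's limit of five colors) can be sketched as follows; the `(center, pixel_count)` pair representation is an assumption for the sketch.

```python
def prune_clusters(clusters, total_pixels, threshold=0.01, max_colors=5):
    """Cluster-sorting sketch: drop clusters holding fewer than
    `threshold` (1% by default) of all pixels, then keep at most the
    `max_colors` largest clusters, following Chou's limit of five [14].
    `clusters` is a list of (center, pixel_count) pairs."""
    kept = [c for c in clusters if c[1] >= threshold * total_pixels]
    kept.sort(key=lambda c: c[1], reverse=True)   # largest first
    return kept[:max_colors]
```

In the full pipeline, a further single LBG cycle would renew the surviving clusters after this pruning, as the flowchart in Figure 7 indicates.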

Identification of the Background and the Foreground
In previous research, most methods find it difficult to extract the background color and the foreground color consistently and accurately from images, because only the numbers of pixels in the clusters are considered and the largest cluster is treated as the most important [3]. Figure 10 demonstrates that the proposed method can segment the background and the foreground easily, efficiently and accurately. Figure 10a is the source image; the color of the parts that are not leaves (dark green) is expected to be the background color cluster. In Figure 10b,c, the proposed approach uses the Touch Four Sides (TFS) algorithm to improve the identification of the background and the other main colors extracted from the images.
Usually, the images of wallpaper and textiles have many similar characteristics in terms of the background and the foreground pattern. Generally, the pattern is located near the center and the background is on the outside or at the top. In previous work by Krishnan et al. [41], dominant colors were identified according to foreground objects. Wu et al. [4] analyzed the structure and content of a scene and divided it into different regions, such as salient objects, sky and ground, similar to the proposed method.
All of the clusters of the main colors in Figures 11 and 12 are initially ready in Step 1. These clusters are checked by the TFS algorithm in Step 2: if the pixels in a cluster are in contact with the edges of the image at the upper, lower, right and left sides simultaneously (examples are illustrated in Figures 13 and 14a,b), this cluster is chosen and the process moves to the next step. When more than one cluster satisfies the TFS condition, the one with the most pixels touching the four edges is chosen as the background color. If none of the clusters touch all four sides in Step 2, the Touch Up Sides (TUS) algorithm is adopted to check all the clusters again. The TUS examines the pixels of these clusters to ascertain whether they contact the edge of the upper side; examples are shown in Figures 13 and 14c,d. When more than one cluster satisfies the TUS condition, the cluster with the most pixels touching the top is chosen as the background color.
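The TFS/TUS selection above can be sketched on a label map, i.e., a 2-D array holding one cluster index per pixel. This is a minimal illustration of the rule as described, not the authors' implementation; the tie-breaking by edge-pixel count follows the text.

```python
import numpy as np

def pick_background(label_map):
    """TFS/TUS sketch: a cluster whose pixels touch all four image edges
    (TFS) is a background candidate; ties go to the cluster with the
    most edge-touching pixels. If no cluster touches all four sides,
    fall back to the top edge only (TUS)."""
    top, bottom = label_map[0, :], label_map[-1, :]
    left, right = label_map[:, 0], label_map[:, -1]
    edges = np.concatenate([top, bottom, left, right])
    candidates = []
    for lab in np.unique(label_map):
        if (lab in top) and (lab in bottom) and (lab in left) and (lab in right):
            candidates.append((np.count_nonzero(edges == lab), lab))
    if not candidates:                      # TUS fallback: top edge only
        for lab in np.unique(top):
            candidates.append((np.count_nonzero(top == lab), lab))
    return max(candidates)[1]
```

For a typical wallpaper image, the pattern sits near the center, so only the background cluster reaches all four edges and is selected directly by TFS.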
When the background color has been identified, the other main colors are determined in the next phase; Figure 12 shows the flowchart for this procedure. First, the cluster of the background color is eliminated from the clusters of main colors, and the remaining main colors are then sorted by the number of pixels in each cluster. Normally, the dominant pattern in images of wallpaper and textiles is spread over one or more areas that combine the other main colors without the background; this dominant pattern is also called the foreground of the image.

Continuing from the previous phase, after the background color is selected, the remaining main colors are sorted. The main colors are ordered according to their importance in the image: the background color is first, the largest remaining cluster is second, and so on. This ordering is mapped to the color combinations of the predefined color-emotion guide.

Image Binarization and Pattern Analysis
In addition to the emotion, the proposed method uses the pattern, which is also called the foreground, to improve the accuracy of indexing. In Figure 14, (5) of (a-d) shows the background of the image, colored black, and (6) of (a-d) shows the pattern of the image, colored gray. In Figure 14a-d, the binary images in (5) and (6) are derived from the images in (3), based on the distribution of their main colors. This process is called image binarization; it clearly distinguishes the background and the foreground of an image, and is also called image segmentation. However, the focus is the pattern with the main colors, so the background color must be eliminated from the main colors.
When the pattern undergoes binarization, the structure of the binary foreground becomes another important feature. In Figure 15a, which comes from Figure 14b, the image is divided into 8 × 8 blocks of equal size. The proposed method then computes the ratio of foreground pixels for every block; the result is shown in Figure 15b. This feature provides two advantages: the first is for the classification of input images, and the second is for indexing with a reference image.
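The 8 × 8 block-ratio feature can be sketched as follows; this is a minimal illustration assuming a binary mask where 1 marks foreground (pattern) pixels and 0 marks background, and image dimensions divisible by the block count.

```python
import numpy as np

def block_complexity(binary, blocks=8):
    """Pattern-complexity sketch: split a binary foreground mask
    (1 = pattern, 0 = background) into `blocks` x `blocks` equal cells
    and return the foreground ratio of every cell."""
    h, w = binary.shape
    bh, bw = h // blocks, w // blocks
    ratios = np.zeros((blocks, blocks))
    for i in range(blocks):
        for j in range(blocks):
            cell = binary[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            ratios[i, j] = cell.mean()   # fraction of foreground pixels
    return ratios
```

The resulting 8 × 8 ratio table is the "complexity" feature used both for classification and, later, for distance-based indexing against a reference image.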

Classification
In the proposed framework (Figure 4), when an input image is added to the database by the system administrator, the processed image is classified based on two conditions: the "emotion" and the "complexity". That is to say, each image in the database carries both of these features. In addition to the "emotion", the "complexity" allows indexing that is better suited to the user. In the proposed method, the "complexity" is further separated into two options: the pattern complexity and the color complexity.
Figure 16 shows the procedure whereby the "complexity" feature is employed to improve the accuracy of classification. The binary images in (2) of Figure 16a-c are derived from those in (1); this is one of the two features of the proposed method. In this research, the pattern complexity is thresholded into three grades, "sparse", "middle" and "dense", depending on the complexity of the binary images.

In (1) of Figure 16a-c, TFS and TUS are adopted to determine the background color and the other main colors; in (2) and (3), the complexity feature is demonstrated using the binary image. Figure 16a shows an example of the complexity for "two colors" and "sparse", where the average complexity of the binary image is less than 0.33; the numerical details are shown in (3) of Figure 16a. An example of "three colors" and "middle" is shown in (3) of Figure 16b, where the average complexity of the binary image is between 0.33 and 0.66. Finally, Figure 16c shows an example of "five colors" and "dense", where the average complexity of the binary image is more than 0.66; the numerical details are shown in (3) of Figure 16c.
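The three-grade labeling with the 0.33 and 0.66 thresholds given above can be sketched as follows; placing the exact boundary values in the "middle" grade is an assumption, since the text only states "less than 0.33", "between 0.33 and 0.66" and "more than 0.66".

```python
def grade_complexity(block_ratios):
    """Grading sketch: average the block foreground ratios and label the
    pattern 'sparse' (< 0.33), 'middle' (0.33 to 0.66) or 'dense'
    (> 0.66), the three grades used for classification."""
    n = sum(len(row) for row in block_ratios)
    avg = sum(sum(row) for row in block_ratios) / n
    if avg < 0.33:
        return "sparse"
    if avg <= 0.66:
        return "middle"
    return "dense"
```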
Figure 17 shows the classification processes that employ the two features, emotion and complexity, acquired from the processed image. Therefore, all images in the database carry both the "emotion" and "complexity" features. In the proposed method, the "emotion" feature corresponds to one of the emotions of the predefined color-emotion guide, which has four different types of color combination; besides a single color, the others are illustrated on the upper side of Figure 17a-c. Figure 17a shows an example of two colors with the emotion feminine, Figure 17b shows the emotion lazy with three colors, and Figure 17c contains five colors with the emotion forceful. The lower side of Figure 17a-c shows the other feature, the pattern complexity, labeled "sparse", "middle" and "dense", respectively.

Reference Image
This study provides three indexing methods for the user. Figure 18 shows the first, whereby a reference image is used for indexing. The query image, also called the reference image, undergoes feature extraction using the proposed framework, from which the "emotion" and "complexity" features are obtained. The "emotion", derived from the main colors extracted from the reference image, corresponds to the predefined color-emotion guide; the emotion closest to the reference image in the color-emotion guide is then identified.
The search for the "emotion" adopts a measure based on the Euclidean distance to estimate the similarity between the reference image and the predefined color-emotion guide. The distance is computed between the main colors of the reference image and a color combination with the same number of colors in the color-emotion guide, and the emotion whose color combination is closest to the reference image is chosen from the guide.
The second way to improve the indexing is the "complexity", which is the distribution of the content of an image; in other words, it is the foreground of the image once the background is eliminated from the main colors. This feature is an 8 × 8 ratio table (see Figure 16) describing the distribution of the foreground, i.e., the "complexity" of the main colors in an image. In this phase, another measure is used for the indexing in Figure 18: the City Block Distance [42] estimates the similarity between the reference image and each database image with the same emotion. This further improves the accuracy of the indexing, so that the indexed outcome is closer to the reference image. Figure 19 demonstrates the processing for indexing using "emotion" and "complexity".
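The City Block (L1) distance between two 8 × 8 ratio tables can be sketched as follows; smaller values mean the database image's foreground distribution is closer to the reference image's.

```python
def city_block_distance(ratios_a, ratios_b):
    """Indexing sketch: City Block (L1) distance [42] between the
    block-ratio tables of the reference image and a database image
    sharing the same emotion; smaller means more similar complexity."""
    return sum(abs(a - b)
               for row_a, row_b in zip(ratios_a, ratios_b)
               for a, b in zip(row_a, row_b))
```

In the indexing pipeline, the candidates retrieved by emotion would simply be ranked by this distance in ascending order.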


Semantic with Emotion
In Figure 20, when an emotion keyword is chosen by the user, the semantic method with only one feature, the emotion, is adopted for the indexing operation. To proceed, the user first chooses the desired emotion, and this feature is then employed to search the database. Finally, the output lists all images in the database whose emotion matches the input keyword.

Semantic with Emotion and Complexity
The other semantic method adopts two features: emotion and complexity. The procedure is shown in Figure 21. In this mode, the use of both emotion and complexity improves the accuracy and is better tailored to the requirements of the user. To proceed, two items must be selected in two phases: an emotion is chosen in the first phase, and the complexity in the second. The complexity is based on the distribution of the foreground in an image, and the proposed method separates it into two options. Therefore, there are three selections in this proposed method: the "color" (emotion), the "pattern" (complexity), and the combination of the "color" and the "pattern". The results of indexing using these three are shown on the right side of Figure 21.
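The two semantic modes above can be sketched as a single query function; the record layout (`id`, `emotion`, `complexity` keys) is an assumption for the sketch, not the system's actual schema.

```python
def semantic_query(database, emotion, complexity=None):
    """Semantic-indexing sketch: `database` is a list of records like
    {'id': ..., 'emotion': ..., 'complexity': ...}. With only an emotion
    keyword, all images sharing that emotion are returned; adding a
    complexity grade ('sparse', 'middle' or 'dense') narrows the result."""
    hits = [rec for rec in database if rec["emotion"] == emotion]
    if complexity is not None:
        hits = [rec for rec in hits if rec["complexity"] == complexity]
    return hits
```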


Experimentation
The proposed method provides a novel framework that allows users three options for indexing. According to the framework (Figure 4), the three options are "indexing by emotion (semantic)", "indexing by emotion and complexity (semantic)", and "indexing by emotion and complexity (reference image)"; they are described in Sections 4.2-4.4, respectively. To assess the validity of the proposed method, the classification and indexing system was tested using 1280 wallpaper and textile images. For the semantic operation using emotion, a predefined color-emotion guide was used. First, all images put into the database need to be preprocessed (Section 4.1).

Feature Extraction for All Images in Database
In Figure 4, the left side of the proposed EBCI framework demonstrates the classification process. The testing images were added to the database and classified by the two features, "emotion" and "complexity"; that is to say, each image in the database carries both features. An example of classification is shown in Figure 22, where all images share the same emotion, "soft and elegant". In Figure 22a, the images are divided into three groups based on the degree of pattern complexity: "Sparse", "Middle" and "Dense". The results for the other option, color complexity, are illustrated in Figure 22b: "Simple" (1 or 2 colors), "Middle" (3 colors) and "Complex" (5 colors). The classification results match human visual expectations.

Experimentation
The proposed method provides a novel framework for users, allowing three options for indexing.According to the framework (Figure 4), the three options are "indexing by emotion (semantic)", "indexing by emotion and complexity (semantic)", and "indexing by emotion and complexity (reference image)".They are described in Sections 4.2-4.4,respectively.To assess the validity of the proposed method, this classification and indexing systems were tested using 1280 wallpaper and textile images.For the semantic operation using emotion, a predefined color-emotion guide was used.First, all images put in the database need to be preprocessed (Section 4.1).

Feature Extraction for All Images in Database
In Figure 4, the left side of the proposed EBCI framework demonstrates the classification process.The testing images were added to the database and these were classified by the two features, "emotion" and "complexity".That is to say, each image in the database contained two of these features.An example of classification is shown in Figure 22, which shows the same emotion: "soft and elegant".In Figure 22a, all images with the same emotion are divided into three groups, based on the degree of pattern complexity.These are "Sparse", "Middle" and "Dense".The results for another color complexity option are illustrated in Figure 22b.These are "Simple" (1 color or 2 colors), "Middle" (3 colors) and "Complex" (5 colors).The classification results measure up to human's visual expectations.

Experimentation
The proposed method provides a novel framework for users, allowing three options for indexing.According to the framework (Figure 4), the three options are "indexing by emotion (semantic)", "indexing by emotion and complexity (semantic)", and "indexing by emotion and complexity (reference image)".They are described in Sections 4.2-4.4,respectively.To assess the validity of the proposed method, this classification and indexing systems were tested using 1280 wallpaper and textile images.For the semantic operation using emotion, a predefined color-emotion guide was used.First, all images put in the database need to be preprocessed (Section 4.1).

Feature Extraction for All Images in Database
In Figure 4, the left side of the proposed EBCI framework illustrates the classification process. The test images were added to the database and classified by two features, "emotion" and "complexity"; that is, each image in the database carries both features. An example of classification is shown in Figure 22, where all images share the same emotion, "soft and elegant". In Figure 22a, the images are divided into three groups by degree of pattern complexity: "Sparse", "Middle" and "Dense". The results for the other option, color complexity, are illustrated in Figure 22b: "Simple" (1 or 2 colors), "Middle" (3 colors) and "Complex" (5 colors). The classification results match human visual expectations.
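The two-way labelling described above can be sketched as follows. The class names ("Simple"/"Middle"/"Complex", "Sparse"/"Middle"/"Dense") come from the text; the pattern-density cut-offs are illustrative assumptions, since the paper's exact thresholds are not reproduced here.

```python
def color_complexity(num_main_colors: int) -> str:
    """Label an image by the number of extracted main colors, as in Figure 22b."""
    if num_main_colors <= 2:
        return "Simple"
    if num_main_colors == 3:
        return "Middle"
    return "Complex"  # e.g., 5 main colors


def pattern_complexity(foreground_ratio: float) -> str:
    """Label an image by its fraction of foreground pixels (0..1).

    The 0.2 and 0.5 thresholds are illustrative assumptions, not the
    paper's values.
    """
    if foreground_ratio < 0.2:
        return "Sparse"
    if foreground_ratio < 0.5:
        return "Middle"
    return "Dense"
```

Each database image would then store both labels alongside its emotion, enabling the combined indexing described later.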

Indexing Using Emotion via Semantic
Figure 23a-d gives examples of different emotions indexed from the database using the predefined color-emotion guide, illustrating four emotions: "soft and elegant", "soothing and lazy", "cordial and feminine" and "silent and quiet". In all cases the proposed method retrieves results with highly similar color combinations.

Indexing Using Emotion and Complexity via Semantic
This option adds another feature: complexity. In this study, complexity has two options: pattern complexity and color complexity. Figure 24a gives an example of several images with the same emotion indexed from the database using color complexity, while Figure 24b demonstrates the adoption of pattern complexity. The two kinds of complexity can also be combined for indexing: the results are shown in Figure 24c, where the image carries both complexity features, pattern and color, and the image classified as "Sparse" and "Simple" is closest to human expectations.
(a) The images from left to right are "Simple" (2 colors), "Middle" (3 colors) and "Complex" (5 colors), using the feature of color complexity.
(c) The image is classified as "Sparse" and "Simple" using both complexity features, pattern and color.
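A semantic query over such a database reduces to filtering on the stored labels. The sketch below is illustrative: the record field names and example entries are assumptions, not the paper's data schema.

```python
def index_by_semantic(database, emotion, pattern=None, color=None):
    """Return names of images matching an emotion and, optionally, complexities."""
    hits = []
    for record in database:
        if record["emotion"] != emotion:
            continue
        if pattern is not None and record["pattern"] != pattern:
            continue
        if color is not None and record["color"] != color:
            continue
        hits.append(record["name"])
    return hits


# Hypothetical records: each image stores its precomputed labels.
db = [
    {"name": "img1", "emotion": "soft and elegant", "pattern": "Sparse", "color": "Simple"},
    {"name": "img2", "emotion": "soft and elegant", "pattern": "Dense", "color": "Complex"},
    {"name": "img3", "emotion": "silent and quiet", "pattern": "Sparse", "color": "Simple"},
]
print(index_by_semantic(db, "soft and elegant", pattern="Sparse", color="Simple"))  # ['img1']
```

Querying by emotion alone returns the broader set; adding both complexity labels narrows it, mirroring the Figure 24c behaviour.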

Indexing Using Emotion and Complexity via Reference Image
When a reference image is adopted for indexing, it is first analyzed and the two features are extracted to calculate the similarity between the reference image and the images in the database; this process is shown in Figures 18 and 19. For the "emotion" feature, the Euclidean distance is used to estimate this similarity; for the "complexity" feature, the City Block distance is used. An example of indexing using a reference image is demonstrated in Figure 25: the results for the same "emotion" are shown in Figure 25b; the results for the same "emotion" and the same "complexity" ("dense") in Figure 25c; and the results for the same "complexity" ("dense") but different "emotions" in Figure 25d. Figure 25 shows that the proposed method achieves good results.
(a) An example of a reference image with the features "cordial and feminine" and "dense". (b) The results for images indexed using the "emotion" feature "cordial and feminine". (c) The results for indexed images that are all "cordial and feminine" and "dense", using both the "emotion" and the "complexity" features. (d) An example of results that have the same "complexity" (full matching) but different "emotions".

The outcome of a user study, in which participants reported the emotions they felt from the images, is shown in Table 1. Eighty people, all ethnic Chinese, were selected to experience the system through three kinds of indexing tests: "Reference Image-Emotion and Complexity", "Semantic-Emotion", and "Semantic-Emotion and Complexity". Each test records one of three impressions: Satisfied, Partially satisfied, or Dissatisfied. In each test, every subject acquires 30 images from the database using the system. When more than 20 of the acquired images match the subject's feeling, the subject is classified as Satisfied; when between 10 and 20 match, as Partially satisfied; and when fewer than 10 match, as Dissatisfied. The outcomes, shown in Table 1, indicate that the proposed method achieves good results; in particular, indexing by "Emotion and Complexity" yields a better outcome. However, because the emotions and messages conveyed by an image vary with culture, race, gender and age, views differ among users. Obtaining a good outcome requires both a good method and a suitable color-emotion model. Therefore, in the future, we suggest adopting color-emotion models prepared beforehand for different nationalities and cultures, so that the method can be more widely and universally applied.

Table 2 gives the correct identification rate of foreground and background as a function of the color complexity of the images. For each method, 100 images of each kind of color complexity were chosen arbitrarily. The table compares the proposed method with previous methods such as RPGB [43], KBC [41], and TMHG [44]. In these previous methods, the dominant color region in an image is represented as a connected fragment of homogeneous color pixels as perceived by human vision. RPGB indexes and stores images based on dominant color regions, extracting features such as region size and location while considering only the color and spatial relations of the detected regions; its main drawback is that it never retrieves the same objects of varying sizes as similar images. For smaller objects (flowers), the background is the dominant region, as shown in Figure 26a, whereas for bigger objects (flowers) the object itself is dominant, as shown in Figure 26b; even though the semantics of the objects are the same, they are not retrieved as similar images. KBC identifies the dominant color from foreground objects and retrieves more similar images based on foreground color irrespective of size, but it shares RPGB's problem. TMHG adopts a weighted dominant color descriptor for content-based image retrieval; the technique reduces the effect of the background on the matching decision by focusing on the object's colors, but it only partially improves on the earlier methods rather than fully distinguishing foreground from background, especially for wallpaper and textile images. In Figure 26, the proposed method explicitly identifies the same objects (flowers) at varying sizes, even though some differences between the two images exist.
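The two distance measures used for reference-image indexing (Euclidean for "emotion", City Block for "complexity") can be sketched as follows. The feature-vector layouts in the example are illustrative assumptions, not the paper's exact encodings.

```python
import math


def euclidean(a, b):
    """Euclidean distance, used here for the "emotion" (color) feature."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def city_block(a, b):
    """City Block (L1) distance, used here for the "complexity" feature."""
    return sum(abs(x - y) for x, y in zip(a, b))


# Hypothetical feature vectors for a reference image and one database image:
ref_emotion, db_emotion = [120, 80, 200], [130, 90, 190]  # e.g., a main color
ref_complexity, db_complexity = [3, 1], [5, 2]            # e.g., (colors, density bin)
print(round(euclidean(ref_emotion, db_emotion), 2))  # 17.32
print(city_block(ref_complexity, db_complexity))     # 3
```

Database images would then be ranked by these distances, smallest first, to produce results such as those in Figure 25.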

Conclusions and Future Work
This system offers a new, interesting, and practical approach to the indexing of wallpaper and textiles. It substitutes emotion for color and conveys the user's feeling through a suitable color-emotion model. Because the method uses a predefined color-emotion guide, the system allows an evidence-based and objective portrayal of human emotions. The experimental results shown in Figures 22-25 reveal the advantages of the proposed system.
The proposed approach, combining the two techniques of ISQ and LBG, is more effective than LBG alone. It has several advantages, including dynamic color combinations and a predefined color-emotion guide, providing an easier and closer link between color and emotion. Furthermore, the TFS and TUS algorithms are another major contribution of the proposed method, helping determine the background and foreground more accurately.
In the future, different color-emotion models could be used in this framework to account for differences in culture, race, gender and age. Although this research covers only images of wallpaper and textiles, the proposed method may also handle other kinds of images, such as landscapes and paintings. Moreover, it can aid non-professionals in finding suitable emotion-based color combinations and in imitating professional practice in areas such as interior design, product design, advertising design, image retrieval and other related applications. In addition to color, other features such as shape and texture could be added in the future.

Figure 1 .
Figure 1.An example of a color-emotion guide.

Figure 2 .
Figure 2. In Chou's color-emotion study [14], the emotion of softness contains two-, three-, and five-color combinations.

Figure 3 .
Figure 3.The schematic diagram for this method.

(a) Two examples of single color.(b) Some examples for two-color combination (fresh).(c) Some examples of three-color combination (fresh).(d) Some examples of five-color combination (fresh).

Figure 5 .
Figure 5.The number of color combinations is chosen according to the color complexity of the image.

Figure 7 .
Figure 7.The process for the dynamic extraction of the main colors.

The LBG algorithm is similar to the K-means clustering algorithm. It takes a set of input vectors, S = {xi ∈ R^d | i = 1, 2, …, n}, as input and generates a representative subset of vectors as output: a codebook, C = {cj ∈ R^d | j = 1, 2, …, K}, initialized by random selection, with a user-specified K << n. Although the settings for Vector Quantization (VQ) are usually d = 16 and K = 256 or 512, the settings used in this study are d = 3 and K ≤ 8.
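A compact sketch of this LBG/K-means step, with d = 3 (color vectors) and a small codebook K ≤ 8 as in this study, is shown below. In the paper the initial codewords come from ISQ; here, for brevity only, the first K input vectors are used as the initial codebook.

```python
def lbg(points, k, iters=20):
    """Quantize 3-D color vectors into a codebook of k codewords (LBG/K-means sketch)."""
    codebook = [tuple(p) for p in points[:k]]  # placeholder init (ISQ in the paper)
    for _ in range(iters):
        # Assignment step: map every input vector to its nearest codeword.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, codebook[j])))
            clusters[j].append(p)
        # Update step: move each codeword to the centroid of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                codebook[j] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return codebook


# Two well-separated color clusters: a dark group and a light group.
colors = [(0, 0, 0), (1, 0, 0), (0, 1, 0),
          (250, 250, 250), (251, 250, 250), (250, 251, 250)]
codebook = lbg(colors, k=2)  # one codeword settles near each cluster
```

As Figure 9 notes, a purely random initialization can lose representative colors (empty cells), which is why the paper seeds LBG with ISQ.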

Figure 8 .
Figure 8.(a) The 3-D color space is partitioned using the ISQ algorithm; (b) using the ISQ algorithm to identify an initial point, one cube is empty; and (c) using the LBG algorithm to improve the quality further.

Figure 9 .
Figure 9.The stars are defined as the representative point for the main colors, and the small points represent the pixels of the color space: (a) the initial points are gained using ISQ; and (b) some representative colors are lost during the processing by LBG.

Determination of the Amount of Main Colors
(a) A source image. (b) Showing the background. (c) Showing the foreground.

Figure 10 .
Figure 10.This example shows that the proposed method can easily and accurately identify the background and the foreground.

Figure 11 .
Figure 11. The flowchart for identifying the background color.

Figure 12 .
Figure 12.Sorting the remainder of the main colors and identifying the foreground.

Figure 13 .
Figure 13.Some examples show the possible situations, when the TFS and TUS algorithms are used to find the background color, which is light-blue.The red and the dark-blue represent the possible solutions for the remaining main colors in the images.
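Judging from the captions, TFS (Touch Four Sides) appears to treat as the background candidate a main color whose pixels reach all four borders of the color-quantized image; the following is a minimal sketch under that assumption (TUS would presumably test a subset of the borders). The grid holds main-color labels, not raw pixels.

```python
def touch_four_sides(grid):
    """Return the main-color labels present on all four borders (assumed TFS rule)."""
    top, bottom = set(grid[0]), set(grid[-1])
    left = {row[0] for row in grid}
    right = {row[-1] for row in grid}
    return top & bottom & left & right


# A pattern "F" surrounded by background "B": only "B" touches all four sides.
img = [
    ["B", "B", "B", "B"],
    ["B", "F", "F", "B"],
    ["B", "F", "F", "B"],
    ["B", "B", "B", "B"],
]
print(touch_four_sides(img))  # {'B'}
```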

Figure 14 .
Figure 14. Some actual examples of TFS and TUS: (a) and (b) are TFS; (c) and (d) are TUS.

Figure 16 .
Figure 16. In (1) of (a-c), TFS and TUS are adopted to determine the background color and the other main colors; in (2) and (3) of (a-c), the complexity feature is demonstrated using the binary image.

(a) The feature extraction for an image with two main colors.(b) The feature extraction for an image with three main colors.(c) The feature extraction for an image with five main colors.

Figure 17 .
Figure 17. The image classification was based on the features of emotion and complexity.

Figure 18 .
Figure 18. The reference image is used for indexing.

Figure 19 .
Figure 19.Adopting two features including the "emotion" and the "complexity".

Figure 20 .
Figure 20.Indexing by semantic just utilizing the feature of emotion.

Figure 21 .
Figure 21.Indexing by semantic utilizing the features of emotion and complexity.

Figure 22 .
Figure 22.Examples of the results of classification.

Figure 23 .
Figure 23.Some examples of different emotions that are indexed from the database using the predefined color-emotion guide.

Figure 24 .
Figure 24.An example whereby the feature of complexity is added to the same emotion.

Figure 25 .
Figure 25.An example of indexing using a reference image.

Figure 26 .
Figure 26. Similar images with dominant background and dominant foreground using previous methods: (a) smaller objects (flowers); (b) bigger objects (flowers).

Table 1 .
The effects people feel from the images.

Table 2 .
The correct rate of identification of foreground and background.
