Article

Emotion-Based Classification and Indexing for Wallpaper and Textile

1
Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan
2
Research Center for Information Technology Innovation, Academia Sinica, No. 128, Section 2, Yan-Jiu-Yuan Road, Nan Gang District, Taipei 11529, Taiwan
*
Author to whom correspondence should be addressed.
Appl. Sci. 2017, 7(7), 691; https://doi.org/10.3390/app7070691
Submission received: 24 May 2017 / Revised: 22 June 2017 / Accepted: 28 June 2017 / Published: 5 July 2017
(This article belongs to the Special Issue Selected Papers from IEEE ICASI 2017)

Abstract

This study develops a novel framework for classifying and indexing wallpaper and textile images based on human emotions and visual impression. The method allows users to retrieve a number of similar images corresponding to a specific emotion by indexing with either a reference image or an emotional keyword. A predefined color–emotion model handles the transference between emotions and colors. Besides color and emotion, texture is the other significant feature used for indexing; therefore, two features are adopted: the main (representative) colors and the foreground complexity of a color image. The foreground complexity, or pattern complexity, is also called the texture of the pattern in an image. Another contribution of this study is a pair of new algorithms, Touch Four Sides (TFS) and Touch Up Sides (TUS), which help extract an accurate background and foreground from color images. Through the transference between emotions and colors, the method can support non-professionals in finding suitable color combinations based on emotion and in imitating professional color matching in applications such as interior design, product design, advertising design, and image retrieval.

1. Introduction

The prediction of human emotions is very important in many applications of business, science and engineering. In recent years, there has been an accumulation of studies using images to convey emotions and opinions [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]. Increasingly, studies of methods for image analysis, retrieval and annotation that use emotional semantics have been applied in many disciplines.
Images are an important medium for conveying human emotions and visual impression. Serving as the main components of an image, colors and textures are most commonly used in research. Meanwhile, the human reaction to colors has been studied for many years. Certain colors generate certain feelings. Designers who understand these colors and emotions can use such information to represent a business appropriately. For example, Figure 1 [15] shows a color–emotion guide and a fun infographic for the emotions elicited by well-known brands.
This paper focuses on images of wallpaper and textiles, which are discussed together because they share similar features in terms of their inherent structure. The proposed method offers a substitute for traditional approaches to classifying and indexing images by emotion.
In computer image processing, there are many effective techniques for indexing images by extracting features automatically. Unfortunately, there has been little research on image indexing derived from human emotion. However, as the amount of multimedia data on the Internet (countless images, photos and videos) grows, techniques for indexing and retrieving such content are increasingly required, and indexing based on human emotions will become increasingly valuable to users. This paper proposes the indexing of wallpaper and textiles using emotion, based on a predefined color–emotion model that identifies the emotions conveyed by images. It offers a new, interesting and practical experience for users.
Understanding and describing the emotions aroused by an image remains a challenge. The inherent subjectivity of the emotional responses of the user is also difficult to classify. This domain is challenging because emotions are subjective feelings uniquely influenced by the personal characteristics of the user. Since emotions and the messages conveyed by the content of an image are usually person-specific, views differ because of differences in culture, race, gender, age, etc. No studies have proposed a reliable and objective solution to this challenge. For instance, an experiment by Dellagiacoma et al. [8] analyzed the opinions of only a few people. Their case study analyzed emotion and opinion for natural images: a classification algorithm categorized images according to the emotions perceived by people looking at them, in order to understand which features were most closely related to such emotions. However, such a small sample cannot represent the majority view. The solution developed in this study uses a color–emotion model instead. Many studies of color–emotion models have been published; for instance, Kobayashi [9,10] published the Color Image Scale and Eisemann [11] presented Pantone's Guide to Communicating with Color. These models can be used to find reliable and objective solutions to color–emotion transfer problems.
The new color–emotion mapping method proposed in this study uses a dynamic color combination scheme: the color complexity of the image content determines the number of colors used in the combination, namely two, three or five colors. Chou's color–emotion guide [14] from the Color Scheme Bible Compact Edition, which defines 24 emotions, each with four kinds of color combinations (single colors plus 24 two-color, 48 three-color and 32 five-color combinations), is used to connect color and emotion. Figure 2 shows one of these emotions. The proposed method supports two indexing solutions. The first allows users to choose the desired emotion directly from a predefined color–emotion model; this work uses Chou's color–emotion guide. The second allows users to index with a reference image.
The challenges of using dynamic color combinations in the proposed color–emotion mapping include dynamically determining the number of colors in the image, accurately identifying the main colors of the background and the foreground, and pairing these main colors with the destination color combinations. The method developed by Su and Chang [22] is used to select the initial representative color points in the color space, and no more than three cycles of the Linde, Buzo and Gray (LBG) algorithm [23] are then needed to obtain the final representative color points by clustering. This study also proposes two novel algorithms, Touch Four Sides (TFS) and Touch Up Sides (TUS), which enable accurate identification of the background color and the other main colors. After feature extraction or emotion selection, the method helps users, including non-professionals, to easily and accurately match their emotional expectations. Figure 3 shows the schematic diagram of the method.
The remainder of this paper is organized as follows. Section 2 reviews previous studies in related areas. Section 3 describes the proposed architecture, the feature-extraction process and the dynamic color combinations for the color–emotion mapping in detail, and then presents the procedure of Emotion-Based Classification and Indexing (EBCI). Section 4 presents the experiments and results. Section 5 concludes and suggests future work.

2. Related Works

Human emotion has been studied and applied to areas such as color–emotion transference [3,4,5,6,7], emotion recognition interfaces [1,2,3,4,5,6,7], color psychological study [8,9,10,11,12,13,14,15] and content-based retrieval systems [24,25,26].
An automatic color–emotion transferring algorithm was first proposed by Reinhard et al. [16]. It shifts and scales the pixel values of the input image to match the mean and standard deviation of the reference image. Although the Reinhard study was performed in 2001, psychological experiments were performed as early as 1894, when Cohn proposed the first empirical method for color preference [27,28]. Despite the considerable number of studies, Norman [29] reported that the findings of these early studies were diverse and tentative. Later, Kobayashi [10] developed a color–emotion model, based on data obtained in psychological studies, to identify emotional features in the field of fashion.
In the Complete Guide to Colour Psychology [30], two color–emotion models were proposed: one for a single color and one for a combination of colors. The two main categories in the study of single color–emotion are classification and quantification. The classification of a single color–emotion uses principal component analysis to reduce a large number of colors to a small number of categories [31]. Early studies to quantify color–emotion were undertaken by Sato and Nakamura [32], who provided a color–emotion model to quantify three color–emotion factors: activity, weight and heat. This model was later verified by Wei-Ning et al. [19]. Studies of color combinations by Kobayashi [9] and Ou et al. [17,18] showed a simple relationship between emotion and a single color, and between an emotion and a color pair. Lee et al. [33] used set theory to evaluate color combinations. Whereas those color–emotion studies only considered single colors or color pairs, the proposed approach uses a color–emotion scheme, the Color Scheme Bible Compact Edition [14], as a predefined color–emotion guide. Compared to single- or two-color emotions, richer color combinations provide a simpler and more accurate description of the color–emotion of an image. Tanaka [34] reported that emotion semantics are the most abstract semantic structure in images because they are closely related to the cognitive models, cultural background and aesthetic standards of users. Of all of the factors affecting the emotion of images, colors are considered the most important. Tanaka found that the relative importance of heterogeneous features for attractiveness is color > size > spatial frequency, and the relative contribution of each heterogeneous feature to attractiveness is color heterogeneity > size heterogeneity > shape heterogeneity > texture heterogeneity. Two methods to extract emotion features from images were subsequently developed. The first is a domain-based method that extracts relevant features based on specialist knowledge of the application field [19]; for instance, Itten [35] formulated a theory for the use of color and color combinations in art that includes their semantics. The second is to perform psychological experiments to identify the types of features that significantly affect users. To this effect, Mao [36] showed that the fractal dimension of an image is related to its affective properties.
For textiles, Kim et al. [2] presented an approach using physical feature extraction and fuzzy-rule-based evaluation to label images with human emotions or feelings, which demonstrated positive results. They then proposed a method using a neural network for emotion-based textile indexing. The indexing system proposed by Kim et al. was tested using 160 textile images; for each image, seventy people manually annotated the emotions it inspired. However, this experiment still analyzed the opinions of only a few people.
Recently, human emotion has become a significant component in the design of the visual aspects of wallpaper, interior design, textiles, fashion and homepages [37]. However, because the perception and interpretation of visual information at the affective level is ambiguous, it is difficult to predict human emotions directly from visual information. It is therefore important to determine the relationship between human emotions and visual information. In this work, a new color–emotion mapping method that adopts a dynamic color combination scheme is proposed; it captures the relationship between colors and emotions through a predefined color–emotion model and applies it to the classification and indexing of wallpaper and textiles.

3. The Proposed Scheme

This study develops a new framework of classification and indexing for wallpaper and textiles, using human emotion. It supports two indexing solutions: by reference image and by semantic keyword. The schematic diagram in Figure 3 shows that, when the semantic option is selected, users choose a desired emotion from a predefined color–emotion model. In contrast, the reference-image option does not require the user to consult the predefined color–emotion model. Both options retrieve the intended images from the classified database according to similar emotions. Figure 4 shows the flow diagram and the components of the proposed framework for Emotion-Based Classification and Indexing (EBCI).

3.1. Feature Extraction

In the feature extraction stage, images arriving along two pathways are analyzed and their features are extracted. One pathway processes the reference image to acquire its features, namely the main colors and the complexity of the foreground. The other pathway handles images supplied by the system administrator, which are classified and added to the database based on their features.
As mentioned above, the feature extraction stage extracts the main colors from the images of both pathways. These main colors can be a single color or one of three further color combinations: two-color, three-color or five-color. The chosen combination depends on the color complexity of the image. The method can also support other emotion guidebooks that allow combinations of more than five colors in the color–emotion model.
Figure 5 demonstrates some examples from the predefined color–emotion guidebook by Chou [14]. Generally, the number of main colors increases with the color complexity of the image. That is, the number of color combinations is chosen according to the color complexity of the content of the image. In this study, the first color of the cluster is set as the background, the second color of the cluster is set as the dominant color, and the rest are then deduced by analogy. Therefore, the main colors are selected dynamically, according to the color complexity of the images.
The main colors extracted from an input image or a reference image correspond to a predefined color combination of an emotion, and both must contain the same number of colors. This approach uses the color–emotion model proposed by Chou [14], which provides a single color and three other color combinations (two, three and five colors); accordingly, the number of main colors must lie within this range.

3.1.1. The Color Emotion Model

A model with 24 emotions is proposed by Chou [14], and each emotion includes 24 two-color combinations, 48 three-color combinations and 32 five-color combinations. The labels of the 24 emotions are: fresh, soft and elegant, illusive and misty, pretty and mild, frank, soothing and lazy, lifelike, solicitous and considerate, clean and cool, excited, optimistic, ebullient, graceful, plentiful, magnificent, moderate, primitive, simple, solemn, mysterious, silent and quiet, sorrowful, recollecting and withered. All of the colors in Chou's model correspond to the RGB color space and the CMYK color space. In this study, the CIELab [38] color space, converted from the RGB color space, is adopted. The CIELab color space is illustrated in Figure 6, which represents values along three axes: L, a and b.
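As an illustration only (the paper does not publish code), the RGB-to-CIELab conversion performed before feature extraction could be carried out with a standard library routine such as scikit-image's rgb2lab; the file name in the usage comment is hypothetical.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2lab

def image_to_lab_pixels(path):
    """Load an RGB image and return its pixels as an (N, 3) array in CIELab."""
    rgb = io.imread(path)                    # H x W x 3 (or 4 with alpha), uint8
    lab = rgb2lab(rgb[..., :3] / 255.0)      # L in [0, 100]; a, b roughly in [-128, 127]
    return lab.reshape(-1, 3)                # one row per pixel

# Example (hypothetical file name):
# pixels = image_to_lab_pixels("wallpaper_sample.png")
```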

3.1.2. The Dynamic Extraction of the Main Colors

The progression of main color extraction is demonstrated in the flowchart in Figure 7, which is part of the feature extraction stage of this framework. In this stage, the representative colors, also called the main colors of the treated image, are derived from either the reference image or the input image; both the main colors and their number are determined here. For color–emotion correspondence, conventional approaches [17,18] generally provide only one color or a fixed number of color combinations to represent the relation between color and emotion. In contrast, adjustable and dynamic color combinations of one to five colors are permitted in this study, depending on the color complexity of the content of the images; here, complexity is defined as the distribution of the image pixels in the Lab color space. For instance, Figure 5b demonstrates less complex images using two-color combinations and Figure 5d shows more complex images using five-color combinations.

3.1.3. Identification of the Main Colors

The main colors of an image are extracted in the second step of the flowchart in Figure 7. In this step, the representative colors of the processed images in the Lab color space are determined; the processed images can be reference images or input images. Most conventional methods use clustering to identify the main colors, according to the distribution of the image pixels in the color space. The k-means clustering method [39] is widely used, but its drawback is that the initial points in the color space must be randomly chosen from the source image. Another common method is clustering by vector quantization (VQ) [40], in which the nearest quantization vector centroid is moved towards the initial point by a small fraction of the distance. This classic quantization technique was initially developed for image processing; however, it is slow when the initial points are poor and, more importantly, its performance depends significantly on the initially selected points.
In this study, the representative points are selected from the image pixels by a method that combines two techniques. First, Independent Scalar Quantization (ISQ) [21] is adopted to acquire the initial points simply and quickly from the pixels in the color space. Figure 8 shows the 3-D and 2-D diagrams: Figure 8a shows the 3-D Lab color space quantized into 8 cubes of the same size using ISQ, and Figure 8b shows the pixels distributed in the 2-D Lab color space. Three of the cubes hold pixels; a cube with no pixels is called an empty cube. The centroid of each cube, which reflects the distribution of the pixels, is computed from the pixels in that cube. Since an empty cube has no representative point, the number of initial points may be less than 8, but the maximum number is 8. Chou [14] limits the maximum number of colors in a combination to five, so the method appropriately adjusts these initial points to match the color combinations.
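A minimal sketch of the ISQ initialization described above, assuming the Lab axes are bisected at L = 50, a = 0 and b = 0 (the exact cube boundaries are not given in the text); it returns one centroid per non-empty cube, so at most eight initial points.

```python
import numpy as np

def isq_initial_points(lab_pixels):
    """Independent Scalar Quantization: split the Lab space into 8 equal cubes
    (one bisection per axis) and return the centroid of every non-empty cube."""
    mids = np.array([50.0, 0.0, 0.0])        # assumed axis midpoints
    bits = (lab_pixels > mids).astype(int)   # one bit per axis: above/below midpoint
    cube_ids = bits[:, 0] * 4 + bits[:, 1] * 2 + bits[:, 2]
    centroids = []
    for cube in range(8):
        members = lab_pixels[cube_ids == cube]
        if len(members) > 0:                 # empty cubes contribute no initial point
            centroids.append(members.mean(axis=0))
    return np.array(centroids)               # at most 8 initial main colors
```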
Figure 8c shows how the LBG algorithm [23] is used to make the necessary modifications after ISQ. The LBG algorithm is similar to the K-means clustering algorithm: it takes a set of input vectors, S = {xi ∈ Rd | i = 1, 2,…, n}, as input and generates a representative codebook, C = {cj ∈ Rd | j = 1, 2,…, K}, with a user-specified K << n, as output. Although the settings for Vector Quantization (VQ) are usually d = 16 and K = 256 or 512, the settings used in this study are d = 3 and K ≤ 8.
The algorithm is shown below:
  • Step 1. Input the training vectors, S = {xi ∈ Rd | i = 1, 2,…, n}.
  • Step 2. Initiate a codebook, C = {cj ∈ Rd | j = 1, 2,…, K}, which is randomly selected.
  • Step 3. Set D0 = 0 and let k = 0.
  • Step 4. Classify the training vectors into K clusters: xi ∈ Sq if ||xi − cq||p ≤ ||xi − cj||p for all j ≠ q.
  • Step 5. Update the cluster centers cj, j = 1, 2,…, K, using cj = (1/|Sj|) Σ_{xi ∈ Sj} xi.
  • Step 6. Set k ← k + 1 and compute the distortion Dk = Σ_{j=1..K} Σ_{xi ∈ Sj} ||xi − cj||p.
  • Step 7. If (Dk−1 − Dk)/Dk > ε (a small number), repeat Steps 4–6.
  • Step 8. Output the codebook, C = {cj ∈ Rd | j = 1, 2,…, K}.
The convergence of the LBG algorithm depends on the initial codebook, C, the distortion, Dk, and the threshold, ε. The maximum number of iterations must be defined in order to ensure convergence.
In the LBG procedure used here, Step 2 is replaced by the ISQ; that is, the randomly selected codebook is replaced by the initial representative points provided by the ISQ, which serve as the initial main colors. When the initial points (initial guesses) are appropriately chosen, good representative points are obtained by clustering the pixels with the LBG algorithm. However, when the initial points are poor, the final representative points may also be poor and the computation time increases; the performance of the LBG therefore depends significantly on the initial points. When suitable initial points are picked using the ISQ, the LBG algorithm can be performed more efficiently [22], and experiments show that no more than three cycles of the LBG are necessary to obtain a good result [22]. Figure 9 demonstrates that representative colors can be lost during processing by the LBG. In Equation (1), d(p, q) is the Euclidean distance between two points p = (pL, pa, pb) and q = (qL, qa, qb) in the Lab color space.
d(p, q) = √((qL − pL)² + (qa − pa)² + (qb − pb)²)  (1)
Continuing the above, the updating of the clustering centers, expressed in Step 5 of the LBG, is shown in Equation (2). Cubes with no or insufficient pixels must be removed, and the representative colors of these cubes are eliminated; in other words, the number of representative colors may be reduced based on the distribution of the pixels in the color space. For instance, Figure 9 shows that a pixel can belong to cube Ci yet lie further from centroid Ci than from centroid Cj of cube Cj; in such cases, the LBG must be reapplied to make modifications. In Figure 9a,b, the stars are the representative colors for each cube and the dots represent the pixels in the color space.
cj = (1/|Sj|) Σ_{xi ∈ Sj} xi,  j = 1, 2,…, K  (2)
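To make the combined ISQ + LBG procedure concrete, the sketch below follows Steps 3–8 above, using the Euclidean distance of Equation (1) and the centroid update of Equation (2). It is an illustrative reading of the method rather than the authors' implementation; the stopping threshold eps is an assumption, while max_iters = 3 follows the observation that no more than three cycles are needed.

```python
import numpy as np

def lbg_refine(pixels, initial_codebook, eps=1e-3, max_iters=3):
    """Refine the ISQ initial points with the LBG loop described above."""
    pixels = np.asarray(pixels, dtype=float)
    codebook = np.asarray(initial_codebook, dtype=float)
    prev_distortion = None
    for _ in range(max_iters):
        # Step 4: assign every pixel to its nearest code vector (Equation (1))
        dists = np.linalg.norm(pixels[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        distortion = dists.min(axis=1).sum()          # Step 6: total distortion
        # Step 5: move each code vector to the mean of its cluster (Equation (2));
        # clusters left empty are dropped, so representative colors can be lost
        codebook = np.array([pixels[labels == j].mean(axis=0)
                             for j in range(len(codebook))
                             if np.any(labels == j)])
        # Step 7: stop once the relative drop in distortion is small enough
        if prev_distortion is not None and (prev_distortion - distortion) <= eps * distortion:
            break
        prev_distortion = distortion
    return codebook

# Usage together with the ISQ sketch above:
# main_colors = lbg_refine(pixels, isq_initial_points(pixels))
```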

3.1.4. Determination of the Amount of Main Colors

As mentioned previously, the representative points, also called the main colors in this study, are obtained from the color space of each individual image. The number of main colors is determined in this phase and may range from 1 to 8 in the proposed method. The determination of the background, foreground and other main colors is described in the next section.
In Figure 7, the cluster sorting step based on the number of pixels is used to judge and to sort all of the clusters obtained by the ISQ and a single cycle of the LBG. In the process of sorting, every cluster is checked using the number of pixels, after establishing an appropriate threshold, and all clusters are compared. Those not meeting the threshold are eliminated because they have less effect on the color emotion expression. If the judgment in the step is “true” (Y), a further single cycle of the LBG renews the remaining clusters. In this implementation, the threshold is set to 1% of the total number of pixels in the color space. The threshold can be adjusted according to the circumstances, e.g., 0.5% or 2%.
This study adopts Chou’s work [14] that provides the maximum number of color combinations is five. Therefore, if the number of clusters exceeds the maximum number of color combinations, the top five largest clusters are retained and a single cycle of the LBG is used to renew these remaining clusters. Eventually, the number of clusters is adjusted dynamically to no more than five.

3.1.5. Identification of the Background and the Foreground

In previous research, most methods have difficulty extracting the background and foreground colors consistently and accurately, because only the number of pixels in each cluster is considered and the largest cluster is simply treated as the most important [3]. Figure 10 demonstrates that the proposed method can easily, efficiently and accurately segment the background and the foreground. Figure 10a is the source image; the color of the parts that are not leaves (dark green) is expected to form the background color cluster. In Figure 10b,c, the proposed approach uses the Touch Four Sides (TFS) algorithm to improve the identification of the background and the other main colors extracted from the images.
Usually, the images of wallpaper and textiles have many similar characteristics, in terms of the background and the foreground pattern. Generally, their pattern is located near the center and the background is on the outside or on the top. In previous work by Krishnan et al. [41], dominant colors were identified according to foreground objects. Wu et al. [4] analyzed the structure and content of a scene and divided it into different regions, such as salient objects, sky and ground, similar to the proposed method.
All of the clusters of the main colors in Figure 11 and Figure 12 are prepared in Step 1. These clusters are checked by the TFS algorithm in Step 2: if the pixels of a cluster touch the edges of the image on the upper, lower, right and left sides simultaneously (examples are shown in the upper part of Figure 13 and in Figure 14a,b), that cluster is chosen and the process moves to the next step. When more than one cluster satisfies the TFS condition, the one with the most pixels touching the four edges is chosen as the background color. If no cluster touches all four sides in Step 2, the TUS algorithm is used to check all clusters again: the TUS examines whether the pixels of each cluster touch the upper edge (examples are shown in Figure 13 and Figure 14c,d). When more than one cluster satisfies the TUS condition, the cluster with the most pixels touching the top is chosen as the background color.
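The TFS and TUS rules can be expressed compactly on a label map that assigns each pixel its main-color cluster; this data structure is an assumption, while the tie-breaking by edge-pixel count follows the description above. The code is only a sketch.

```python
import numpy as np

def background_cluster(label_map):
    """Pick the background cluster by TFS (touching all four edges), falling
    back to TUS (touching the upper edge) when no cluster satisfies TFS."""
    top, bottom = label_map[0, :], label_map[-1, :]
    left, right = label_map[:, 0], label_map[:, -1]
    border = np.concatenate([top, bottom, left, right])
    candidates = []
    for c in np.unique(label_map):
        if all((edge == c).any() for edge in (top, bottom, left, right)):  # TFS
            candidates.append((np.count_nonzero(border == c), c))
    if not candidates:                                                     # TUS
        candidates = [(np.count_nonzero(top == c), c) for c in np.unique(top)]
    # the candidate with the most pixels on the relevant edges is the background
    return max(candidates)[1]
```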
When the background color has been identified, the other main colors are determined in the next phase; Figure 12 shows the flowchart for this procedure. First, the cluster of the background color is removed from the clusters of main colors, and the remaining main colors are then sorted by the number of pixels in each cluster. Normally, the dominant pattern in wallpaper and textile images spreads over one or more regions composed of the remaining main colors, excluding the background; this dominant pattern is also called the foreground of an image.
This procedure thus first selects the background color and then sorts the remaining main colors. The main colors are ordered according to their importance in the image: the background color is first, the largest remaining cluster is second, and so on. This ordering is mapped to the color combinations of the predefined color–emotion guide.

3.1.6. Image Binarization and Pattern Analysis

In addition to the emotion, the proposed method uses a pattern, which is also called the foreground, to improve the accuracy of indexing. In Figure 14, (5) of (a–d) show the background of the image, which is colored black. In Figure 14, (6) of (a–d) show the pattern of the image, which is colored gray. In Figure 14a–d, the binary images of (5) and (6) are derived from the images of (3), based on their distribution of the main colors. This process of the proposed method is called image binarization. It clearly distinguishes the background and foreground from the images. It is also called the segmentation of images. However, the focus is the pattern with the main colors, so the color of the background must be eliminated from the main colors.
After binarization of the pattern, the structure of the binary foreground (black) becomes another important feature. In Figure 15a, which comes from Figure 14b, the image is divided into 8 × 8 blocks of equal size. The proposed method then computes the ratio of black pixels in every block; the result is shown in Figure 15b. This feature serves two purposes in the proposed method: the classification of input images and indexing with a reference image.
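A sketch of the 8 × 8 block feature described above, assuming the binarized pattern is available as a Boolean mask (True where the pattern is):

```python
import numpy as np

def foreground_block_ratios(pattern_mask, grid=8):
    """Divide the binary pattern mask into a grid x grid table and return the
    fraction of pattern pixels in each block (the complexity feature)."""
    h, w = pattern_mask.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    ratios = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            block = pattern_mask[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            ratios[i, j] = block.mean()    # share of pattern pixels in the block
    return ratios
```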

3.2. Classification

In the proposed framework (Figure 4), when an input image is added into the database by the system administrator, the processed image is classified based on two conditions: the “emotion” and the “complexity”. That is to say, each image in the database contains two of these features. In addition to the “emotion”, the “complexity” allows indexing that is more appropriate to the user. In the proposed method, the “complexity” can also be separated into two options: the pattern complexity and the color complexity.
Figure 16 shows the procedure whereby the "complexity" feature is employed to improve the accuracy of classification. The binary images in Figure 16, (2) of (a–c), are derived from (1); this is one of the two features of the proposed method. In this research, the pattern complexity is graded by thresholds into three levels, "sparse", "middle" and "dense", depending on the complexity of the binary images.
Figure 16a shows an example of the complexity for “two colors” and “sparse”, where the average complexity of the binary image is less than 0.33. The numerical details are shown in Figure 16, (3) of (a). An example of “three colors” and “middle” is shown in Figure 16, (3) of (b). The average complexity of the binary image is less than 0.66 and more than 0.33. Finally, Figure 16c shows an example for “five colors” and “dense”. The average complexity of the binary image is more than 0.66. The numerical details are shown in Figure 16, (3) of (c).
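Using the cut-offs given in the examples above, the grading of pattern complexity reduces to a small rule; the helper below is only an illustrative restatement of those thresholds.

```python
def pattern_complexity_grade(block_ratios):
    """Map the average foreground ratio of the 8 x 8 table to a grade."""
    avg = float(block_ratios.mean())
    if avg < 0.33:
        return "sparse"
    if avg < 0.66:
        return "middle"
    return "dense"
```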
Figure 17 shows the classification processes that employ the two features emotion and complexity, which were acquired from the processed image. Therefore, all images in the database keep both the features of “emotion” and “complexity”.
In the proposed method, the "emotion" feature corresponds to one of the emotions of the predefined color–emotion guide, which offers four different types of color combination. Apart from the single color, the other combinations are illustrated in the upper part of Figure 17a–c: Figure 17a shows an example of two colors with the emotion feminine, Figure 17b shows the emotion lazy with three colors, and Figure 17c contains five colors with the emotion forceful. The lower part of Figure 17a–c shows the other feature, the pattern complexity, labeled "sparse", "middle" and "dense", respectively.

3.3. Indexing

3.3.1. Reference Image

This study provides three indexing methods for the user. Figure 18 shows the first, in which a reference image is used for indexing. The query image, also called the reference image, undergoes feature extraction using the proposed framework, yielding the "emotion" and "complexity" features. The "emotion" feature, derived from the main colors extracted from the reference image, is matched against the predefined color–emotion guide, and the emotion closest to the reference image in the guide is identified.
The search for "emotion" uses a distance measure based on the Euclidean distance to estimate the similarity between the reference image and the predefined color–emotion guide. The distance is computed between the main colors of the reference image and each color combination in the guide with the same number of colors, and the emotion whose combination is closest to the reference image is chosen.
The second feature used to improve the indexing is "complexity", the distribution of the content of an image; in other words, it is the foreground of the image once the background has been removed from the main colors. This feature is an 8 × 8 ratio table (see Figure 16) describing the distribution of the foreground, i.e., the "complexity" of the main colors in an image. In this phase, another distance measure is used for the indexing in Figure 18: the City Block Distance [42] estimates the similarity between the reference image and images with the same emotion in the database. This further improves the accuracy of the indexing, and the retrieved results are closer to the reference image. Figure 19 demonstrates the indexing process using "emotion" and "complexity".
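The two similarity measures can be sketched as follows; the pairing of main colors with guide colors (background first, then by cluster size) and the guide's data structure are assumptions, since the paper describes the measures but not the code.

```python
import numpy as np

def emotion_distance(main_colors, guide_colors):
    """Euclidean distance in Lab between an image's ordered main colors and a
    guide combination with the same number of colors."""
    return float(np.linalg.norm(np.asarray(main_colors) - np.asarray(guide_colors)))

def closest_emotion(main_colors, guide):
    """Pick the emotion whose same-size combination is nearest to the image;
    `guide` is an assumed dict {emotion: list of color combinations}."""
    return min((emotion_distance(main_colors, combo), emotion)
               for emotion, combos in guide.items()
               for combo in combos if len(combo) == len(main_colors))[1]

def complexity_distance(ratios_a, ratios_b):
    """City Block (L1) distance between two 8 x 8 foreground-ratio tables."""
    return float(np.abs(np.asarray(ratios_a) - np.asarray(ratios_b)).sum())
```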

3.3.2. Semantic with Emotion

In Figure 20, when an emotion keyword is chosen by the user, the semantic method with only one feature, the emotion, is used for indexing. The user first chooses the desired emotion, and this feature is then used to search the database. Finally, all images in the database whose emotion matches the input keyword are listed as the output.

3.3.3. Semantic with Emotion and Complexity

Another variant of the semantic method adopts two features: emotion and complexity. The procedure is shown in Figure 21. In this mode, using both emotion and complexity improves the accuracy and better matches the requirements of the user. Two selections are made in two phases: an emotion is chosen in the first phase and the complexity in the second. The complexity is based on the distribution of the foreground in an image, and the proposed method separates it into two options; there are therefore three selections for complexity: the color complexity, the pattern complexity, and the combination of both. The results of indexing using these three selections are shown on the right side of Figure 21.
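Both semantic modes reduce to filtering the classified database on the stored features. The sketch below assumes each database entry is a dict with 'emotion', 'pattern' and 'color' fields, which are illustrative names rather than the authors' schema.

```python
def semantic_index(database, emotion, pattern_grade=None, color_grade=None):
    """Return database images matching the chosen emotion keyword, optionally
    narrowed by pattern complexity ('sparse'/'middle'/'dense') and/or
    color complexity ('simple'/'middle'/'complex')."""
    hits = [img for img in database if img["emotion"] == emotion]
    if pattern_grade is not None:
        hits = [img for img in hits if img["pattern"] == pattern_grade]
    if color_grade is not None:
        hits = [img for img in hits if img["color"] == color_grade]
    return hits
```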

4. Experimentation

The proposed method provides a novel framework that offers users three options for indexing. According to the framework (Figure 4), the three options are "indexing by emotion (semantic)", "indexing by emotion and complexity (semantic)" and "indexing by emotion and complexity (reference image)"; they are described in Section 4.2, Section 4.3 and Section 4.4, respectively. To assess the validity of the proposed method, the classification and indexing system was tested using 1280 wallpaper and textile images. For the semantic operation using emotion, a predefined color–emotion guide was used. First, all images placed in the database must be preprocessed (Section 4.1).

4.1. Feature Extraction for All Images in Database

In Figure 4, the left side of the proposed EBCI framework shows the classification process. The test images were added to the database and classified by the two features, "emotion" and "complexity"; that is, each image in the database carries both features. An example of classification is shown in Figure 22, which shows the same emotion, "soft and elegant". In Figure 22a, all images with the same emotion are divided into three groups based on the degree of pattern complexity: "Sparse", "Middle" and "Dense". The results for the other option, color complexity, are illustrated in Figure 22b: "Simple" (1 or 2 colors), "Middle" (3 colors) and "Complex" (5 colors). The classification results match human visual expectations.

4.2. Indexing Using Emotion via Semantic

Figure 23a–d gives some examples of different emotions indexed from the database using the predefined color–emotion guide. They illustrate four emotions: "soft and elegant", "soothing and lazy", "cordial and feminine" and "silent and quiet". Indexing with the proposed method consistently retrieves good results with highly similar color combinations.

4.3. Indexing Using Emotion and Complexity via Semantic

This option adds another feature: complexity. The complexity in this study has two options, pattern complexity and color complexity. Figure 24a gives an example of several images with the same emotion indexed from the database using color complexity, and Figure 24b demonstrates the adoption of pattern complexity. The two kinds of complexity can also be combined for indexing; the results are shown in Figure 24c, where the images carry both complexity features (pattern and color), and an image classified as "Sparse" and "Simple" is closer to human expectations.

4.4. Indexing Using Emotion and Complexity via Reference Image

When a reference image is used for indexing, it must first be analyzed and its two features extracted so that the similarity between the reference image and the images in the database can be calculated; this process is shown in Figure 18 and Figure 19. The "emotion" feature applies the Euclidean distance to estimate the similarity between the reference image and the images in the database, and the "complexity" feature uses the City Block Distance. An example of indexing with a reference image is shown in Figure 25: Figure 25b shows results with the same "emotion", Figure 25c shows results with the same "emotion" and the same "complexity" of "dense", and Figure 25d shows results with the same "complexity" of "dense" but different "emotions". The good results in Figure 25 are obtained with the proposed method.
Table 1 shows the outcome of a user study in which participants rated the emotions they felt from the images. Eighty people, all ethnic Chinese, were selected to use the system in three kinds of indexing tests: "Reference Image (Emotion and Complexity)", "Semantic (Emotion)" and "Semantic (Emotion and Complexity)". Each test offers three response levels describing the user's feeling: Satisfied, Partially satisfied and Dissatisfied. In each test, every subject retrieved 30 images from the database using the system. A subject was classified as Satisfied when more than 20 of the retrieved images matched the subject's feeling, as Partially satisfied when between 10 and 20 images matched, and as Dissatisfied when fewer than 10 images matched. The outcomes in Table 1 indicate that the proposed method achieves good results; in particular, indexing with "Emotion and Complexity" gives a better outcome. However, since the emotions and messages conveyed by the content of an image vary among people, views differ because of differences in culture, race, gender or age. To obtain a good outcome, both a good method and a suitable color–emotion model are required. Therefore, in the future, we suggest adopting color–emotion models compiled beforehand for different nationalities and cultures, so that the method can be more widely and universally applied.
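The per-subject rating in the user study follows directly from the counts described above; the helper below simply restates that rule.

```python
def satisfaction_level(matching_images):
    """Classify one subject's response out of 30 retrieved images."""
    if matching_images > 20:
        return "Satisfied"
    if matching_images >= 10:
        return "Partially satisfied"
    return "Dissatisfied"
```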
Table 2 shows the correct rate of identification of the foreground and background as a function of the color complexity of the images. For each method, 100 images of each kind of color complexity were chosen arbitrarily. The table compares the proposed method with previous methods, namely RPGB [43], KBC [41] and TMHG [44]. In these previous methods, a dominant color region in an image is represented as a connected fragment of homogeneous color pixels as perceived by human vision. RPGB proposed a technique to index and store images based on dominant color regions, in which features such as region size and location are extracted by considering only the color and spatial relations of the regions detected in the image; its main drawback is that it never retrieves the same objects of varying sizes as similar images. For a smaller object (flowers), the background is the dominant region, as shown in Figure 26a, whereas for bigger objects (flowers) the object itself is dominant, as shown in Figure 26b; even though the semantics of the objects are the same, they are not retrieved as similar images. KBC proposed a method in which dominant color identification based on foreground objects retrieves a greater number of similar images based on the foreground color irrespective of size, but it still suffers from the same problem as RPGB. TMHG adopted a weighted dominant color descriptor for content-based image retrieval; this technique reduces the effect of the image background on the matching decision by giving more weight to the object's colors, but it only partially improves on the previous methods rather than fully distinguishing the foreground and background, especially for wallpaper and textile images. In Figure 26, the proposed method can explicitly identify the same objects (flowers) of varying sizes, even if some differences between the two images exist.

5. Conclusions and Future Work

This system provides a new, interesting and practical experience for the indexing of wallpaper and textiles. It substitutes emotion for color and conveys the user's feeling through a suitable color–emotion model. Since the method uses a predefined color–emotion model, the system allows an evidence-based and objective portrayal of human emotions. The experimental results shown in Figure 22, Figure 23, Figure 24 and Figure 25 reveal the advantages of the proposed system.
The proposed approach, combining the two techniques of ISQ and LBG, is more effective than LBG alone. It has many advantages, including dynamic color combinations and a predefined color–emotion model, which provide an easier and closer link between color and emotion. The TFS and TUS algorithms are another major contribution of the proposed method; they help to determine the background and the foreground more accurately.
In the future, different color–emotion models could be used in this framework to take account of differences in culture, race, gender and age. Although this research only addresses images of wallpaper and textiles, the proposed method may also deal with other kinds of images, such as landscapes and paintings. Moreover, it can aid non-professionals in finding suitable color combinations based on emotion and in imitating professional operations such as interior design, product design, advertising design, image retrieval and other related applications. In addition to color features, other features such as shape and texture could be added in the future.

Acknowledgments

This research was supported in part by the Ministry of Science and Technology, Taiwan, under the Grants MOST 103-2221-E-007-073-MY3 and MOST 104-2221-E-007-071-MY3.

Author Contributions

Yuan-Yuan Su and Hung-Min Sun conceived and designed the experiments; Yuan-Yuan Su performed the experiments; Yuan-Yuan Su and Hung-Min Sun analyzed the data; Yuan-Yuan Su and Hung-Min Sun contributed analysis tools and checked the paper; and Yuan-Yuan Su wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kim, N.Y.; Shin, Y.; Kim, E.Y. Emotion-Based Textile Indexing Using Neural Networks. In Proceedings of the 2007 IEEE International Symposium on Consumer Electronics, Irving, TX, USA, 20–23 June 2007. [Google Scholar]
  2. Kim, E.Y.; Kim, S. Emotion-based Textile Indexing Using Color, Texture. Fuzzy Syst. Knowl. Discov. 2005, 3613, 1077–1080. [Google Scholar]
  3. Yang, C.K.; Peng, L.K. Automatic mood-transferring between color images. IEEE Comput. Graph. Appl. 2008, 28, 52–61. [Google Scholar] [CrossRef] [PubMed]
  4. Wu, F.; Dong, W.; Kong, Y.; Mei, X.; Paul, J.; Zhang, X. Content-Based Color Transfer. Comput. Graph. Forum 2013, 32, 190–203. [Google Scholar] [CrossRef]
  5. Pouli, T.; Reinhard, E. Progressive histogram reshaping for creative color transfer and tone reproduction. Comput. Graph. 2011, 35, 67–80. [Google Scholar] [CrossRef]
  6. Chang, Y.; Saito, S.; Uchikawa, K.; Nakajima, M. Example-based color stylization of images. ACM Trans. Appl. Percept. 2005, 2, 322–345. [Google Scholar] [CrossRef]
  7. Zhang, M.; Zhang, K.; Feng, Q.; Wang, J.; Kong, J.; Lu, Y. A novel image retrieval method based on hybrid information descriptors. J. Vis. Commun. Image Represent. 2014, 25, 1574–1587. [Google Scholar] [CrossRef]
  8. Dellagiacoma, M.; Zontone, P.; Boato, G. Emotion Based Classification of Natural Images. In Proceedings of the DETECT’11, the 2011 International Workshop on DETecting and Exploiting Cultural Diversity on the Social Web, Glasgow, Scotland, UK, 24 October 2011. [Google Scholar]
  9. Kobayashi, S. Color Image Scale; Kodansha International: Tokyo, Japan, 1992. [Google Scholar]
  10. Kobayashi, S. The Aim and Method of the Color Image Scale. Color Res. Appl. 1981, 6, 93–107. [Google Scholar] [CrossRef]
  11. Eisemann, L. Pantone’s Guide to Communicating with Color; Goodreads Inc.: San Francisco, CA, USA, 2000. [Google Scholar]
  12. Kawamoto, N.; Soen, T. Objective Evaluation of Color Design. II. Color Res. Appl. 1993, 18, 260–266. [Google Scholar] [CrossRef]
  13. Um, J.; Eum, K.; Lee, J. A Study of the Emotional Evaluation Models of Color Patterns Based on the Adaptive Fuzzy System and the Neural Network. Color Res. Appl. 2002, 27, 208–216. [Google Scholar] [CrossRef]
  14. Chou, C.-K. Color Scheme Bible Compact Edition; Grandtech Information Co., Ltd.: Taipei, Taiwan, 2011. [Google Scholar]
  15. Psychology of Color in Logo Design. Available online: www.huffingtonpost.com/brian-honigman/psychology-color-design-infographic_b_2516608.html (accessed on 26 March 2017).
  16. Reinhard, E.; Ashikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Comput. Graph. Appl. 2001, 21, 34–41. [Google Scholar] [CrossRef]
  17. Ou, L.C.; Luo, M.R.; Woodcock, A.; Wright, A. A study of colour emotion and colour preference. Part I: Colour emotions for single colours. Color Res. Appl. 2004, 29, 232–240. [Google Scholar] [CrossRef]
  18. Ou, L.C.; Luo, M.R.; Woodcock, A.; Wright, A. A study of colour emotion and colour preference. Part II: Colour emotions for two-colour combinations. Color Res. Appl. 2004, 29, 292–298. [Google Scholar] [CrossRef]
  19. Wang, W.; Yu, Y.; Jiang, S. Image retrieval by emotional semantics. A study of emotional space and feature extraction. In Proceedings of the 2006 IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 8–11 October 2006; Volume 4, pp. 3534–3539. [Google Scholar]
  20. Csurka, G.; Skaff, S.; Marchesotti, L.; Saunders, C. Learning moods and emotions from color combinations. In Proceedings of the Seventh Indian Conference on Computer Vision, Graphics and Image Processing, ICVGIP ’10, Chennai, India, 12–15 December 2010; pp. 298–305. [Google Scholar]
  21. Foley, J.D.; Dam, A.V.; Feiner, S.K.; Hughes, J.F. Computer Graphics: Principles and Practice; Addison-Wesley: Boston, MA, USA, 1990. [Google Scholar]
  22. Su, Y.Y.; Chang, C.C. A New Approach of Color Image Quantization Based on Multi-Dimensional Directory. In Proceedings of the VRAI’ 2002, Virtual Reality and its Application in Industry, Hangzhou, China, 9–12 April 2002; pp. 508–514. [Google Scholar]
  23. Linde, Y.; Buzo, A.; Gray, R.M. An Algorithm for Vector Quantizer Design. IEEE Trans. Commun. 1980, 28, 84–95. [Google Scholar] [CrossRef]
  24. Singha, M.; Hemachandran, K. Content Based Image Retrieval using Color and Texture. Signal Image Process. 2012, 3. [Google Scholar] [CrossRef]
  25. Chaudhari, R.; Patil, A.M. Content Based Image Retrieval Using Color and Shape Features. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 2012, 1. [Google Scholar] [CrossRef]
  26. Datta, R.; Joshi, D.; Li, J.; Wang, J.Z. Image Retrieval: Ideas, Influences, and Trends of the New Age. ACM Comput. Surv. 2008, 40. [Google Scholar] [CrossRef]
  27. Eysenck, H.J. A critical and experimental study of colour preferences. Am. J. Psychol. 1941, 54, 385–394. [Google Scholar] [CrossRef]
  28. Cohn, J. Experimentelle Untersuchunger uber Gefuhlsbetonung Farben, helligkeiten und ihre Combinationen. Philos. Stud. 1894, 10, 562–603. [Google Scholar]
  29. Norman, R.D.; Scott, W.A. Colour and affect: A review and semantic evaluation. J. Gen. Psychol. 1952, 46, 185–233. [Google Scholar] [CrossRef]
  30. Amara. The Complete Guide to Colour Psychology. Available online: www.amara.com/luxpad/inspiration/colour-psychology/ (accessed on 29 February 2016).
  31. Tai, Y.W.; Jia, J.; Tang, C.K. Local color transfer via probabilistic segmentation by expectation-maximization. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 747–754. [Google Scholar]
  32. Sato, T.; Kajiwara, K.; Hoshino, H.; Nakamura, T. Quantitative evaluation and categorizing of human emotion induced by colour. Adv. Colour Sci. Technol. 2000, 3, 53–59. [Google Scholar]
  33. Lee, J.; Cheon, Y.M.; Kim, S.Y.; Park, E.J. Emotional evaluation of color patterns based on rough sets. In Proceedings of the 2007 International Symposium on Information Technology Convergence (ISITC 2007), Jeonju, Korea, 23–24 November 2007; Volume 1, pp. 140–144. [Google Scholar]
  34. Tanaka, S.; Iwadate, Y.; Inokuchi, S. An attractiveness evaluation model based on the physical features of image regions. In Proceedings of the 15th International Conference on Pattern Recognition, ICPR-2000, Barcelona, Spain, 3–7 September 2000. [Google Scholar]
  35. Itten, J. Art of Colour; Van Nostrand Reinhold: New York, NY, USA, 1962. [Google Scholar]
  36. Mao, X.; Chen, B.; Muta, I. Affective property of image and fractal dimension. Chaos Solitons Fractals 2003, 15, 905–910. [Google Scholar] [CrossRef]
  37. Kim, J.; Lee, J.; Choi, D. Designing emotionally evocative homepages: An empirical study of the quantitative relations between design factors and emotional dimensions. Int. J. Hum. Comput. Stud. 2003, 59, 899–940. [Google Scholar] [CrossRef]
  38. CIELAB. CIELab–Color Models-Technical Guides. Available online: dba.med.sc.edu/price/irf/Adobe_tg/models/cielab.html (accessed on 26 February 2017).
  39. Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A K-Means Clustering Algorithm. J. R. Stat. Soc. Ser. C 1979, 28, 100–108. [Google Scholar] [CrossRef]
  40. Gray, R. Vector Quantization. IEEE ASSP Mag. 1984, 1, 4–29. [Google Scholar] [CrossRef]
  41. Krishnan, N.; Banu, M.S.; Christiyana, C.C. Content Based Image Retrieval Using Dominant Color Identification Based on Foreground Objects. In Proceedings of the International Conference on Computational Intelligence and Multimedia Applications, Sivakasi, Tamil Nadu, India, 13–15 December 2007; Volume 3. [Google Scholar]
  42. Zhang, D.; Lu, G. Evaluation of Similarity Measurement for Image Retrieval. In Proceedings of the IEEE International Conference Neural Networks & Signal Processing, Nanjing, China, 14–17 December 2003. [Google Scholar]
  43. Ravishankar, K.C.; Prasad, B.G.; Gupta, S.K.; Biswas, K.K. Dominant color region based indexing for CBIR. In Proceedings of the International Conference on Image Analysis and Processing, Venice, Italy, 27–29 September 1999; pp. 887–892. [Google Scholar]
  44. Talib, A.; Mahmuddin, M.; Husni, H.; George, L.E. Dominant Color-Based Indexing Method for Fast Content-Based Image Retrieval. J. Vis. Commun. Image Represent. 2013, 24, 345–360. [Google Scholar] [CrossRef]
Figure 1. An example of a color–emotion guide.
Figure 2. In the Chou’s color–emotion study [14], the emotion of softness contains two, three, and five color combinations.
Figure 2. In the Chou’s color–emotion study [14], the emotion of softness contains two, three, and five color combinations.
Applsci 07 00691 g002
Figure 3. The schematic diagram for this method.
Figure 4. The proposed framework for EBCI.
Figure 5. The number of color combinations is chosen according to the color complexity of the image.
Figure 6. The CIELab color model.
Figure 7. The process for the dynamic extraction of the main colors.
Figure 8. (a) The 3-D color space is partitioned using the ISQ algorithm; (b) using the ISQ algorithm to identify an initial point, one cube is empty; and (c) using the LBG algorithm to improve the quality further.
Figure 9. The stars are defined as the representative point for the main colors, and the small points represent the pixels of the color space: (a) the initial points are gained using ISQ; and (b) some representative colors are lost during the processing by LBG.
Figure 10. This example shows that the proposed method can easily and accurately identify the background and the foreground.
Figure 11. The flowchart for identifying the background color.
Figure 12. Sorting the remainder of the main colors and identifying the foreground.
Figure 13. Some examples show the possible situations, when the TFS and TUS algorithms are used to find the background color, which is light-blue. The red and the dark-blue represent the possible solutions for the remaining main colors in the images.
Figure 14. Some actual examples for TFS and TUS: (a) and (b) are TFS; (c) and (d) are TUS.
Figure 15. Division into 8 × 8 blocks of equal size.
Figure 16. In (1) of (a–c), TFS and TUS are adopted to determine the background color and the other main colors. In (2) and (3) of (a–c), the feature of complexity is demonstrated using the binary image.
Figure 17. The image classification was based on the features, emotion and complexity.
Figure 18. The reference image is used for indexing.
Figure 19. Adopting two features including the “emotion” and the “complexity”.
Figure 20. Indexing by semantic just utilizing the feature of emotion.
Figure 21. Indexing by semantic utilizing the features of emotion and complexity.
Figure 22. Examples of the results of classification.
Figure 23. Some examples of different emotions that are indexed from the database using the predefined color–emotion guide.
Figure 24. An example whereby the feature of complexity is added to the same emotion.
Figure 25. An example of indexing using a reference image.
Figure 26. Similar images with dominant background and dominant foreground using previous methods: (a) smaller object (flowers); (b) bigger objects (flowers).
Table 1. The effects people feel from the images (values in %).

Effects                Reference Image: Emotion + Complexity    Semantic: Emotion    Semantic: Emotion + Complexity
Satisfied              70.25                                    57.50                71.25
Partially Satisfied    28.50                                    33.75                25.00
Dissatisfied           1.25                                     8.75                 3.75
Table 2. The correct rate of identification of foreground and background.

Color Complexity    RPGB [43]    KBC [41]    TMHG [44]    Proposed Method
Simple              38%          38%         66%          100%
Middle              52%          56%         72%          100%
Complex             68%          70%         82%          92%
