Article

Observing Pictures and Videos of Creative Products: An Eye Tracking Study

Faculty of Science and Technology, Free University of Bozen-Bolzano, 39100 Bolzano, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(4), 1480; https://doi.org/10.3390/app10041480
Submission received: 26 January 2020 / Revised: 16 February 2020 / Accepted: 17 February 2020 / Published: 21 February 2020
(This article belongs to the Special Issue Human-Computer Interaction and 3D Face Analysis)

Abstract

The paper offers insights into people’s exploration of creative products shown on a computer screen within the overall task of capturing artifacts’ original features and functions. In particular, the study presented here analyzes the effects of different forms of representation, i.e., static pictures and videos. While the relevance of changing stimuli’s forms of representation is acknowledged in both engineering design and human-computer interaction, scarce attention has been paid to this issue hitherto when creative products are in play. Six creative products were presented to twenty-eight subjects through either pictures or videos in an Eye-Tracking-supported experiment. The results show that people pay greater attention to original product features and functional elements when products are displayed by means of videos. This aspect is of paramount importance, as original shapes, parts, or characteristics of creative products might be inconsistent with people’s habits and cast doubts about their rationale and utility. In this sense, videos seemingly emphasize said original elements and likely lead to their explanation/resolution. Overall, the outcomes of the study strengthen the need to match appropriate forms of representation with different design stages in light of the needs for design evaluation and user experience testing.

1. Introduction

In the design field, a traditional strategy to reduce the risk of failure is to have designed products evaluated, especially if they have innovative features and functions or if they create and fulfill new user needs. “Evaluations” is here intended in a broad sense and includes all categories of conscious and unconscious reactions people (other than designers) have when faced with a design. The importance of evaluating a product already from the early design phases is stressed by many scholars, e.g., [1,2,3,4]. In particular, Arrighi et al. [5] hypothesize an improvement of products, with consequent saving of resources, if final users are involved from the beginning of the design process. A collaborative relationship between designers and users is therefore crucial for the success of a product, and these scholars developed a tool to collect data in a faster and more flexible way.
Users have been routinely involved in the design process to carry out evaluations, but this has not covered the early design stages regularly—prototypes close to their final design have mainly been dealt with in the past. With the advent of computers, the Internet, e-commerce, and social media, the variety of representation forms of designs has increased, and the number of potential evaluators has grown dramatically. The use of physical prototypes to test the effectiveness of a product has advantages in terms of evaluations’ reliability, but presents many disadvantages that cannot be overlooked, especially in terms of resources and time. Mengoni et al. [6] report that, considering all the digital steps of the setup, the effort required by virtual prototyping should be less than 40% of that required by physical prototyping. Dong and Liu [7] state that physical and virtual models should be combined and integrated for an ideal presentation of a design concept. Users’ impressions and experience of a product are certainly influenced by the level of human–product interaction, which changes significantly if products are represented by visual and non-physical stimuli rather than physical ones [8]. When only virtual prototypes are involved, Chen et al. [9] show that a product is evaluated better if it is placed in the (virtual) context where users can interact with it through multiple senses and have the impression of an experience closer to reality.
Designers supposedly choose products’ forms of representation with a different degree of abstraction depending on the design strategy and on the product development stage. As a result, just a conceptual representation of the product can be leveraged in the very early design stages [10]. Therefore, while the possible forms of product representation are manifold (see Table 1), the following acknowledged design paradigm has not been overcome despite the diffusion of computers and IT. The outputs of early design phases are vague [11,12], the degree to which products are defined is low, and the information necessary to evaluate the goodness of such outputs is missing. At the same time, feedback coming from players intended to evaluate products is unreliable and user experience studies are unfeasible. In such uncertain conditions, early design stages bear the major responsibility for the successful completion of the product development process. Therefore, there is little chance to obtain valuable information when it is needed most.
It follows that it is imperative to find a trade-off between the detail level and the interactivity of design representations, the availability of design information, and the reliability of evaluations, which is the research context of the present study. While the objectives will be better clarified in Section 2, the starting point of the research is to understand the peculiarities of different forms of representation when these are employed for evaluation purposes. As a first step, it is of special interest to understand whether a correlation exists between the abstraction degree of products’ representations and the depth of their related evaluations. Therefore, the authors collected a sample of scientific contributions in which products have been evaluated. Table 1 shows in detail how the contributions have been organized, relating the forms of representation used by scholars (rows) to the kinds of evaluation performed (columns).
The forms of representation have been arranged with an increasing order of design detail starting from “text”, when users were only provided with a description or written details of the object. Next to the text are “images”, which can be further differentiated into sketches, photos, and photorealistic images. Images with text integration have been classified as “text + images”. Other representation forms are “videos”, which are not particularly widespread as a whole, while “Virtual prototypes”, “Virtual Reality” (VR), “Virtual/Mixed Reality” represent products as if they were in their final stages of design, with a high degree of detail but with a lower level of interaction than “physical prototypes” and “end-use products”. Consequently, the latter are mainly involved at the end of the whole design process.
The authors further classified the representation forms into three categories of stimuli: static, dynamic, and physical ones, as apparent from Table 1. The classes are considered self-explanatory and intuitive, so that no clarification of the rationale behind this categorization is given. In particular, in the very early stages of design, it is necessary to use static and dynamic stimuli because physical objects are not available. Likewise, it is supposed that more insightful evaluations and interactive experiences take place as the design process progresses. However, it has emerged that there is no balance between the use of static and dynamic stimuli. It is worth noting that the matching of representation forms and evaluation scopes does not give rise to any strong relations between the two, as the table is not populated by references on the diagonal or in specific quadrants only. In addition, Table 1 clearly shows scholars’ preference for static stimuli over dynamic ones, whatever the evaluation objectives. As aforementioned, images are the most leveraged form of representation; indeed, they are used for each kind of evaluation examined (particularly, to evaluate user experience, attractiveness, and value perception).
As already stated, forms of representation substantially affect user–product interaction and consequently product evaluation [7,8]. However, there is a lack of studies focusing on this aspect; actually, most scholars adopt forms of representation based on availability or convenience, as their focus is steadily on users’ evaluations and feedback rather than on the stimuli leveraged as inputs. Even in contributions that use multiple forms of representation for the same product to compare the outputs of different evaluations [1,2,13], scholars’ aims have not targeted users’ observation strategies during the interaction with stimuli.
It can also be remarked that the study of interactions with stimuli can nowadays benefit from systems and technologies that allow scholars to analyze people’s behavior insightfully. Such systems refer particularly to tools for biometric measures [83], such as Eye-Tracking (ET), and behavioral studies [84], such as facial expression recognition. All these systems, which have made inroads in design and human-computer interaction, are capable of capturing facets of people’s interactions in terms of spontaneous and uncontrolled actions, reactions and physiological changes.

2. Objectives and Originality of the Study

Given the lack of studies in the understanding of people’s interaction with creative products, this paper is intended as a starting point in the study of the visual behavior of potential users when they are administered different forms of representation. In particular, the overall scope is to understand whether there are tangible differences between two or more forms of representation. Images and videos are chosen as stimuli for a first investigation for the reasons that follow.
  • They have different levels of dynamism based on the classification presented in Table 1, i.e., pictures are static, while videos are dynamic stimuli.
  • It is possible to use consistent stimuli for the product sets under analysis; in other words, the possible bias due to the use of different products can be overcome thanks to the widespread availability of pictures and videos depicting the same product.
  • Both forms of representation can be employed interchangeably in ET studies and supported by the same hardware and software. ET is clearly essential to capture data on people’s visual behavior objectively.
An exhaustive description of the experimental applications of ET in design is provided by [83]. When product evaluations are considered, pictures still represent the predominant form of representation in design-related experiments, while the opportunity to employ videos is substantially neglected. Therefore, the use of ET in the study of products displayed through videos is an additional element of originality of the present paper.
In contrast to design and engineering, contributions that use videos as stimuli in ET studies are widespread in medicine, education, and the social sciences. Some examples are given below.
Pusiol et al. [85] studied the possibility of characterizing and diagnosing different developmental disorders using the participants’ gaze patterns. Different types of ET were employed by Kok and Jarodzka [86], who reviewed the pros and cons of using these tools to analyze the appropriateness of videos for learning purposes in the medical field. Another study [87] focused on the frontiers of ET in the study of visual expertise in the medical field. Another example concerning learning processes and skill acquisition is given by [88]; the scholars used a video to assess awareness of high-stress situations in an aircraft mission simulator. Videos with captions are used by [89] to understand participants’ attention allocation in the pre-learning of unknown words of a foreign language. A similar study was carried out by [90], even though infant sensitivity to visual language was taken into account here. In [91], a silent video of a project team meeting was shown to external observers, and it was possible to measure their level of engagement during the observation of the non-verbal interaction of the team members through a remote ET.
These studies overall indicate that videos analyzed through ET are useful to investigate the understanding and the visual behavior of people, especially when they perceive something unfamiliar, unusual, or new.

3. Materials and Methods

In line with the objectives of the paper, the authors carried out an experiment to gather data on people’s visual behavior while they observe creative products presented through pictures and videos. Six different creative products were shown to two groups of participants (Group_A and Group_B, 14 participants each) in the mentioned forms of representation. More precisely, the products that were shown as pictures to Group_A were presented as videos to Group_B and vice versa. In this way, each participant observed 3 products in the form of pictures and 3 products in the form of videos. More in detail, videos and pictures were presented on a 23-inch LCD monitor while a remote ET device (Tobii X2-60) recorded participants’ visual behavior. The ET acquired data on the screen coordinates on which participants focused their attention while carrying out the same task, i.e., observing the pictures/videos with the aim of understanding the products’ original characteristics, advantages, and disadvantages.

3.1. Participants

A sample of 28 adult participants (14 females) was involved in this research. They were recruited during the following initiatives/events, held in the timeframe September–December 2019.
  • The Long Night of Research held at the Free University of Bozen-Bolzano, Italy.
  • A lecture about ET technologies held by the authors in a course of MSc in Mechanical Engineering at the University of Florence, Italy.
  • A workshop with BSc students in Industrial Design at the University of Florence, Italy.
Participation in the experiment was voluntary. Written consent was not collected, as participants were not asked for any sensitive or demographic data, which were considered of minor importance at the present exploratory stage of research. However, given the circumstances in which participants were recruited, a vast majority of them were in the age range 18–30 at the time of the experiment.

3.2. Stimuli

Six different creative products (see Figure 1), in which all the authors recognized original characteristics, have been exploited in the experiment. Creativity was meant as a fundamental feature of stimuli because of the reasons that follow.
  • Creative designs are supposed to attract greater interest than commonplace products and capture users’ attention.
  • A task was designed to keep participants’ attention focused on products and their elements (see Section 3.3) and this required the use of products that are not supposedly everyday objects.
  • As stated in the Introduction section, creative designs are featured by a major need for being evaluated, as they introduce novel elements or functions that deviate from people’s habits.
The creative products were selected based on the availability of videos on YouTube that illustrate these products’ functioning. In addition, the authors selected products that, despite being considered creative and non-commonplace, could be used or benefitted from in everyday situations. This measure is meant to minimize the potential effects of people’s age, experience, and technical background.
The videos were first cut in order to obtain eight-second-long clips and edited to remove text messages. The employed pictures coincide with the first frame of the corresponding video and they are shown in Figure 1. Thanks to this measure, leveraged pictures and videos include a comparable amount of information about the shown products; the major difference is the fact that depicted objects, components, and/or people move in videos.
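As an illustration of this preparation step, the sketch below trims a clip to eight seconds and exports its first frame as the picture stimulus. It is only a minimal example assuming OpenCV is used; the file names, the codec, and the function name are illustrative and do not reflect the authors’ actual editing workflow.

```python
import cv2

def extract_clip_and_first_frame(video_path, clip_path, frame_path, clip_seconds=8.0):
    """Trim a stimulus video to its first clip_seconds and save the
    first frame as the corresponding picture stimulus."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(clip_path, cv2.VideoWriter_fourcc(*"XVID"),
                             fps, (width, height))
    for i in range(int(round(fps * clip_seconds))):
        ok, frame = cap.read()
        if not ok:
            break
        if i == 0:
            cv2.imwrite(frame_path, frame)  # picture stimulus = first frame
        writer.write(frame)
    cap.release()
    writer.release()

# Hypothetical file names for one of the six stimuli.
extract_clip_and_first_frame("OnPot_source.mp4", "OnPot.avi", "OnPot.png")
```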
From the pictures shown in Figure 1, it is possible to notice the original characteristics of the corresponding products; the authors deemed that these characteristics could be understood or inferred, albeit with different levels of difficulty.
The products shown in Figure 1 present the following creative characteristics:
  • Equilibrium: a colander integrated into a bowl. The water is kept inside the bowl and can be removed without disassembling the object, while the washed food is prevented from falling out.
  • Flame: an induction hob on which a flame-shaped LED light projected onto the pot indicates the intensity of the heat, as in gas-powered hobs.
  • Stairs: space-saving foldable stairs integrated into the kitchen furniture.
  • OnPot: a suction cup applied to a lid; it allows the user to open the lid over the pot and, at the same time, to rest it on the edge of the pot.
  • Hood: a kitchen hood integrated into the hob, so that the user can easily place the hob in the middle of the kitchen.
  • Tire: an airless tire that remains usable in case of an accidental puncture.

3.3. Procedure

Two different sequences of alternated pictures and videos (see Figure 2) were arranged to be shown to Group_A and Group_B, respectively. Consequently, if a specific product was depicted as a picture in the first sequence, the corresponding video was shown in the second sequence and vice versa. The stimuli were sorted in increasing order of presumable difficulty of understanding. The choice of making participants observe a mix of pictures and videos, instead of, e.g., pictures only or videos only, was aimed at minimizing the dependence of results on people’s typical eye scan paths, which are known to be largely idiosyncratic, e.g., [92]. This led to the design of an experiment in which the condition related to the form of representation, i.e., picture or video, was counterbalanced between subjects.
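The counterbalancing described above can be summarized by the sketch below, which builds the two sequences so that the form of representation alternates within a sequence and is swapped between groups. The product order and the specific picture/video alternation pattern are assumptions for illustration, not the exact sequences used in the experiment.

```python
# Products in an assumed order of increasing difficulty of understanding.
PRODUCTS = ["Equilibrium", "Flame", "Stairs", "OnPot", "Hood", "Tire"]

def build_sequences(products):
    """Return the two counterbalanced sequences: where Group_A sees the
    picture of a product, Group_B sees its video, and vice versa."""
    seq_a, seq_b = [], []
    for i, product in enumerate(products):
        form_a = "picture" if i % 2 == 0 else "video"
        form_b = "video" if form_a == "picture" else "picture"
        seq_a.append((product, form_a))
        seq_b.append((product, form_b))
    return seq_a, seq_b

sequence_a, sequence_b = build_sequences(PRODUCTS)
# Each participant thus observes 3 products as pictures and 3 as videos.
```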
At the beginning of each test, in order to allow the ET technology to acquire reliable data on the visual behavior of each participant, a calibration process was performed. Then, the authors read participants the following test instructions. “You will see pictures or watch videos showing products for a few seconds. We ask you to observe them in silence. At the end of each display, the screen will turn black. As soon as the screen turns black, describe the product, its advantages/disadvantages and the characteristics that make it unusual. At the end of your explanation, nod, look at the center of the screen and we will move forward. So, to summarize, you have to describe the product, its advantages/disadvantages and the characteristics that make it unusual.” In this way, the authors introduced a measure to make the participants’ observations comparable, as they were assigned the same task. In addition, the presence of a quiz-like task was supposed to motivate participants’ persistence in observing products, and particularly those features relevant for the understanding of creative aspects.
All the pictures were shown for eight seconds, so that the exposure time of videos and pictures was consistent. When each video or picture disappeared, a black screen was shown to allow participants to describe the product they had just seen or watched, for which unlimited time was assigned. The first stimulus shown was a trial picture aimed at verifying the participants’ understanding of the procedure.

4. Data Acquisition and Elaboration

4.1. Areas of Interest and Eye-Tracking Data

As mentioned in the previous sections, the visual behavior of participants was acquired through an ET technology. However, in order to compare the two forms of representation in terms of quantitative data, it is necessary to resort to Areas of Interest (AOIs). The AOIs are portions of the pictures (frames when it comes to videos) on which specific ET data can be collected. In other words, the creation of an AOI enables the collection of specific ET data ascribable to that portion of the picture/frame. In particular, in this paper, the Total Visit Duration (TVD) on the AOIs has been taken as a reference of an AOI’s attraction and devoted attention. The TVD is the amount of time that one spends gazing at a specific AOI—it considers both the time of fixations and saccades located in the same AOI.
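A minimal sketch of how the TVD can be computed from exported gaze events is given below. It assumes rectangular AOIs and a simple list of fixations/saccades with screen coordinates and durations; in the actual experiment, the TVD was computed by the Tobii software on (partly dynamic and polygonal) AOIs.

```python
from dataclasses import dataclass

@dataclass
class GazeEvent:
    """A fixation or saccade exported from the eye tracker."""
    x: float         # screen coordinates of the event (pixels)
    y: float
    duration: float  # duration in seconds

def total_visit_duration(events, aoi):
    """Total Visit Duration: summed duration of the fixations and saccades
    located inside the AOI, given here as a rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    return sum(e.duration for e in events
               if x0 <= e.x <= x1 and y0 <= e.y <= y1)

# Illustrative call with made-up gaze events and a made-up AOI.
events = [GazeEvent(320, 240, 0.25), GazeEvent(700, 500, 0.18)]
print(total_visit_duration(events, aoi=(300, 200, 400, 300)))  # -> 0.25
```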
Since the AOIs are created to map the TVD on specific areas of pictures/frames, these areas include pictures/frames’ characteristics useful for the scope of the study. Clearly, the AOIs are static across the product exposition when the products are shown as pictures. On the other hand, since the spatial position of the specific products’ features could change along videos, dynamic AOIs that follow these features’ changes were created.
More in detail, Figure 3 shows the static AOIs created on the products’ pictures. AOIs with the same name, corresponding to the same products’ features, were created on the products’ videos. Here, the dynamic changes of AOIs are based on the interpolation of polygons across a number of frames in which the authors have adapted the AOIs’ shapes. A full explanation of the interpolation mechanism is available in the user’s manual of the Tobii Pro Studio software used in the present experiment (see “Dynamic AOIs”, p. 79). Figure 4 shows, as an illustrative example, the different polygons depicting AOIs created on the OnPot video in different timeframes, indicated in the bottom right of the corresponding frames, which represent the boundary conditions for the creation of dynamic AOIs. Illustrative dynamic AOIs can be viewed in the video OnPot.avi, provided as supplementary material.
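The principle of dynamic AOIs can be sketched as a linear interpolation of the polygon vertices between hand-adapted keyframes, as below. This is only an assumption-based illustration of the mechanism; the actual interpolation is performed internally by Tobii Pro Studio as documented in its manual.

```python
import numpy as np

def interpolate_aoi(keyframe_times, keyframe_polygons, t):
    """Linearly interpolate an AOI polygon between two hand-adapted keyframes
    so that the AOI follows a moving product feature.
    keyframe_polygons: list of (N, 2) vertex arrays with matching vertex counts."""
    times = np.asarray(keyframe_times, dtype=float)
    i = int(np.clip(np.searchsorted(times, t, side="right") - 1, 0, len(times) - 2))
    t0, t1 = times[i], times[i + 1]
    alpha = float(np.clip((t - t0) / (t1 - t0), 0.0, 1.0))
    p0 = np.asarray(keyframe_polygons[i], dtype=float)
    p1 = np.asarray(keyframe_polygons[i + 1], dtype=float)
    return (1.0 - alpha) * p0 + alpha * p1

# A square AOI drawn at t = 0 s that drifts to the right by t = 4 s.
poly_start = [(100, 100), (200, 100), (200, 200), (100, 200)]
poly_end = [(300, 100), (400, 100), (400, 200), (300, 200)]
print(interpolate_aoi([0.0, 4.0], [poly_start, poly_end], t=2.0))
```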
A list of all the AOIs created, along with the reason for considering them, is given in Table 2. Eventually, 23 different AOIs were created across the six creative products. Clearly, the size of each AOI and its exposure time could have an influence on the TVD. Indeed, due to the dynamic nature of the AOIs created on the videos, it is possible that AOIs change size or disappear for limited video scenes. These sources of variability will be discussed in light of the emerging results.

4.2. Data Elaboration

TVD data are summarized in Table 3. In the table, the form of representation (picture/video) of the AOIs analyzed is presented in the fourth column. It is possible to notice the alternation of pictures and videos for the same AOI. The TVD Sum is the amount of time (in seconds) spent on the specific AOI by the 14 participants. The TVD Average is the average time (in seconds) spent by each participant on the specific AOI, while the TVD SD is the resulting standard deviation.
In order to compare the two forms of representation, the variable TVD Diff% has been created and it is shown in the penultimate column of Table 3. The TVD Diff% is a variable considered within each AOI and it has been calculated as in Equations (1) and (2) for pictures and videos, respectively.
$$\text{TVD Diff\%}_{Picture} = 100 \cdot \frac{\text{TVD Sum}_{Picture} - \text{TVD Sum}_{Video}}{\text{TVD Sum}_{Video}} \qquad (1)$$
$$\text{TVD Diff\%}_{Video} = 100 \cdot \frac{\text{TVD Sum}_{Video} - \text{TVD Sum}_{Picture}}{\text{TVD Sum}_{Picture}} \qquad (2)$$
From the above equations, it can be noticed that the variable TVD Diff% indicates the extent to which the considered form of representation has extended the TVD of the same AOI with respect to the other form. For instance, a high value of TVD Diff% for an AOI referred to a video, e.g., +453% for the AOI Water in the product Equilibrium, means that the element depicted in that AOI received more attention in the presence of a dynamic stimulus (5.53 times more in the illustrative example). On the other hand, as for the same product, the picture as a form of representation was able to drive attention to the AOI Hand for a much longer time (TVD Diff% = +591%). Table 3 includes indications on the significance of the increase of the TVD when pictures or videos are displayed. Clearly, the stars depicting the level of significance of the increases, as a common rule of thumb, can be present in just one of the two rows standing for the same AOI of the same product. For the sake of clarity, significant TVD decreases are not indicated, but they can be inferred for videos (pictures) when significant TVD increases emerge for pictures (videos).
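The sketch below reproduces the TVD Diff% computation of Equations (1) and (2) for one AOI and adds an illustrative significance check. The per-participant values are invented, and the paper does not state which statistical test was used; a Mann-Whitney U test on the two independent groups is shown only as one plausible option.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def tvd_diff_percent(tvd_picture, tvd_video):
    """TVD Diff% as in Equations (1) and (2): how much each form of
    representation extended the summed TVD of the same AOI over the other."""
    sum_p, sum_v = np.sum(tvd_picture), np.sum(tvd_video)
    diff_picture = 100 * (sum_p - sum_v) / sum_v
    diff_video = 100 * (sum_v - sum_p) / sum_p
    return diff_picture, diff_video

# Invented per-participant TVDs (seconds) on one AOI, 14 participants per group.
tvd_picture = np.array([0.2, 0.1, 0.0, 0.3, 0.1, 0.2, 0.0, 0.1,
                        0.2, 0.1, 0.0, 0.2, 0.1, 0.1])
tvd_video = np.array([1.1, 0.8, 0.6, 1.4, 0.9, 1.2, 0.7, 1.0,
                      0.8, 1.1, 0.9, 1.3, 0.6, 0.9])
print(tvd_diff_percent(tvd_picture, tvd_video))
# Hypothetical significance check (the paper does not specify the test used).
print(mannwhitneyu(tvd_picture, tvd_video, alternative="two-sided"))
```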
With the aim of shedding light on the major TVD differences inferable from Table 3 (|TVD Diff%| > 100), a graphical representation of these differences is presented in Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10. In these figures, the distribution frequencies of the TVD on specific AOIs are presented through violin plots. Blue violins refer to pictures, whereas red violins denote videos. The width of each violin represents the proportion of the TVD observations located there. Each of these figures considers a specific product.
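Violin plots of this kind can be reproduced with a short script like the following, assuming per-participant TVD values are available for a given AOI; the data and the AOI name below are invented.

```python
import matplotlib.pyplot as plt

def plot_tvd_violins(tvd_picture, tvd_video, aoi_name):
    """Per-participant TVD distributions for one AOI: a blue violin for the
    picture and a red violin for the video, as in Figures 5-10."""
    fig, ax = plt.subplots()
    parts = ax.violinplot([tvd_picture, tvd_video], showmedians=True)
    for body, color in zip(parts["bodies"], ("tab:blue", "tab:red")):
        body.set_facecolor(color)
    ax.set_xticks([1, 2])
    ax.set_xticklabels(["Picture", "Video"])
    ax.set_ylabel("Total Visit Duration (s)")
    ax.set_title(f"AOI: {aoi_name}")
    plt.show()

# Invented per-participant TVDs (seconds) for an illustrative AOI.
plot_tvd_violins([0.2, 0.1, 0.0, 0.3, 0.1, 0.2, 0.1],
                 [1.1, 0.8, 0.6, 1.4, 0.9, 1.2, 0.7],
                 "Water (Equilibrium)")
```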
When observing Equilibrium through the video, people tend to pay significantly more attention to Fruit (+277%), Hinge-1 (+447%), and Water (+453%), while the Hand of the user tends to go unnoticed (−86%), as can easily be inferred from Figure 5.
A clear shift of attention between Flame and Handle can be perceived in Figure 6. Indeed, people who observed the product Flame through the video tended to focus on the Flame AOI (+253%), while people who observed the same product through the picture tended to focus more on the Handle (+109%)—both differences are statistically significant.
In Figure 7, it is possible to notice that the participants who observed the product Stairs in the form of a picture tended not to pay much attention to Frame-2.
A clear and significant difference between the two forms of representation emerged for the AOI Cap of the product OnPot. Indeed, in Figure 8, it is possible to notice how the video of the OnPot tended to direct the attention to the Cap if compared to the corresponding picture.
As for the product Hood, from Figure 9, it is apparent that the Hood AOI tended to be observed overall less on the Hood’s picture. In addition, the AOI Steam-1 tended to be observed more through the product’s video.
The video of the Tire generally tended to focus the attention on the AOIs Nail-2, Rim, and especially on Tire more than when the corresponding picture was shown.

5. Discussion and Limitations

The paper presents a pioneering study comparing people’s visual behavior when they are administered static stimuli (pictures) and dynamic stimuli (videos) depicting creative products. Beyond its objectives, as already highlighted in Section 2, original elements of the paper include the use of ET in analyzing videos in the design field and, more generally, the possibility to exploit dynamic AOIs provided by some ET software applications. A critical discussion of the main outcomes follows.
Videos, as a form of representation, overall tend to focus participants’ visual attention on the products’ parts that the authors considered more original and helpful to explain the products’ novel elements. It is worth underlining that the function of videos in product representations might be a specific and not generalizable characteristic for product design and evaluation. Indeed, although videos’ capability of driving attention towards moving elements can be intuitively supposed, the effectiveness of dynamic elements in terms of willingly directing people’s attention has been investigated in different fields of research with divergent results, e.g., [93,94].
In contrast, pictures have been shown to lead to a dispersion of attention towards elements of little relevance for the understanding of the products’ functioning and originality, such as objects in the background. In this respect, there is an even more marked difference in the cases of the products Flame and OnPot. Here, indeed, the video led all participants to increase their attention to the AOIs Flame and Cap for the products Flame and OnPot, respectively. These two AOIs have been identified by the authors as the original characteristic of the product (Flame) and the object that undergoes the main function of the product OnPot (Cap). A similar result can be observed for the AOIs Fruit and Water in the product Equilibrium. These AOIs were observed longer in the videos and are representative of the objects that undergo the main function.
Additional evidence emerging from the data analysis is the better capability of videos to focus people’s attention on products’ essential/original characteristics even if these are small-sized. In this respect, the AOIs Nail-2 and Hinge-1 of the products Tire and Equilibrium, respectively, can be taken into consideration. Moreover, as for the product Tire, it is noteworthy that the attention is captured by the AOI Nail-2 even though its exposure time varies. Indeed, this AOI was shown statically for 8 s while, in its dynamic exposure, Nail-2 was shown for about 4 s only. Nevertheless, Nail-2 is observed more in the video than in the corresponding picture.
In the product Hood, it is possible to notice that the video led participants to pay particular attention to Steam-1 compared with the same AOI shown in the picture. This can be due to the fact that the movement of a hardly visible element like steam can be noticed and observed better in a video than in a picture or, at least, that noticing steam in a static picture can be more challenging.
In the videos of Equilibrium and Flame, a lower tendency of the participants to observe the AOIs Hand (Equilibrium) and Handle (Flame) can be noticed. This highlights the videos’ capability to drive participants’ attention towards specific product features rather than towards the user.
The above observations and discussions were based on the data analysis carried out in the experiment described in this paper. It is useful to remark that the data collected were related to participants’ eight-second-long exposure to each stimulus. The analysis of the dynamics of visual behavior could lead to interesting considerations regarding the participants’ attention along the exposure time. Indeed, on the one hand, the quantitative analyses clearly indicate that videos tend to maintain a high concentration of the participants on the characteristic features of the product. On the other hand, the relationship of the time spent on specific AOIs and the understanding of the original characteristics has not been investigated. The use of videos seemingly diminishes mismatches between designers’ intention and observers’ interpretation, and favors products’ understanding, as reported by a study conducted by the authors in parallel to the present one, which partially shares materials and methods [95].
However, it could be possible that long-lasting attention on certain features is not necessarily motivated by the need for product comprehension and evaluation. Consequently, a brief exposure time could lead to different results in terms of comparing different forms of representation. The dynamic analysis of visual behavior could be carried out using the methods proposed in [96], which, on the other hand, underlines the difficulties to be faced when accounting for the time dimension in ET studies. Through the dynamic analysis of visual behavior, it could be possible to study the speed at which videos and pictures tend to drive participants towards focusing on specific AOIs. However, to understand how this affects the understanding and evaluation of the product, future tests, in which the exposure time to the stimuli will be varied, will be carried out.
Beyond the lack of clear indications of dynamics and sequences of observation, which requires additional scrutiny, the study is affected by some limitations. At first, as aforementioned, the duration of the presence of AOIs and their size varies when comparing pictures and videos. This element has been overlooked so far and the study of its potential moderating effect is the object of future planned studies. The outcomes of the study are supposedly affected by the following choices and conditions, whose impact could be beneficially analyzed.
  • The number of leveraged products is limited, and the sample of participants is not representative of any population. Despite the latter, many significant differences between the attention focused by pictures and videos emerged.
  • The choice of products, along with the corresponding videos, was largely arbitrary. The level of creativity and technological sophistication can differ across the chosen products and, although they refer to supposedly common contexts (home, kitchen, means of transportation), they can be characterized by different degrees of familiarity. With respect to the unusualness of the products chosen, the selection can be considered successful, as no participant spontaneously stated that they were overall familiar with the depictions.
  • Pictures were made out of the first frame of corresponding videos, but they did not necessarily coincide with the most explicative frame of the same videos. As the latter criterion could have introduced additional arbitrariness, the former was chosen.
  • Both pictures and videos depicted the products in their final stage of design, as all the displayed products are marketed. The products are shown in their context of use in all stimuli, and they depict real, physical, and non-simulated environments. The backgrounds of pictures and videos have not been checked for consistency in terms of presence of potentially misleading elements. Pictures and videos showing creative products in intermediate design phases could have affected the results.
  • The background of participants is known just in a subset of cases (engineering, industrial design) and its effect is worth taking into account in future studies along with other demographic data.
  • The duration of the exposure to videos and corresponding pictures was arbitrary, although consistent.
  • The task participants had to carry out is not a standard one. Results would have been likely different if participants had been left free to observe products and videos. The relationship between TVD on AOIs and the understanding of products’ original elements could be beneficially analyzed.
  • The sequences were standardized for the sake of convenience, but they could be randomized in future studies.
  • The TVD was here chosen as a measure of attention and interest aroused by AOIs, but other ET variables are common in design studies to represent gaze and observation phenomena, see [83].

6. Conclusions and Outlook

Despite the limitations exposed in Section 5 and the number of potentially affecting conditions, the authors deem that the results are sufficiently strong to outline peculiarities of using videos in design evaluations. In this respect, a clear issue stands in the impossibility of creating videos or dynamic representations until an intermediate phase in the design process has been reached, i.e., a first layout of the new product is available in a CAD system. The large and significant differences in observed product elements confirm that forms of representation are not neutral when it comes to reactions to depictions of designs and creative products in particular. Therefore, within design studies of emotions, user experience, or human-computer interaction, forms of representation are to be chosen accurately, and multiple forms could be beneficially used to achieve reliable evaluators’ feedback.
From a methodological point of view, the approach followed in the present study can be considered an initial benchmark for understanding the different effects of forms of representation on product evaluation. However, while the detected differences regard visual behavior, other aspects can be of interest and worth investigating. In this respect, the evaluation of additional criteria might benefit from different techniques, including participants’ questionnaires, interviews, reports, and surveys, which are often combined with ET data [83]. Still, biometric and behavioral measures other than ET are nowadays available to investigate emotions. In this context, the recalled facial expression recognition systems are clearly an increasingly viable option within design [97,98] and product evaluation [99], while the chance of combining them with ET is under investigation [100].
The integration of the ET-based investigation with other techniques represents a trigger for future work along with the determination of the effects played by the potentially impacting circumstances underlined in Section 5.
Finally, the authors are available to share the materials used in the experiments, the ET data, and some information about participants’ understanding of creative product features.

Supplementary Materials

The following are available online at https://www.mdpi.com/2076-3417/10/4/1480/s1, Video OnPot.avi.

Author Contributions

The authors contributed equally to each part of this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The study is conducted in the frame of the project EYE-TRACK funded by the Free University of Bozen-Bolzano with the call CRC2017.

Acknowledgments

This work was supported by the Open Access Publishing Fund of the Free University of Bozen-Bolzano. The study has benefitted from the equipment of the Mechanical Lab at the Free University of Bozen-Bolzano. The authors are particularly grateful to the 28 volunteer participants who have contributed to the experiment, as well as to Federico Rotini, who has contributed to the organization of the initiatives in which participants were recruited.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Christoforakos, L.; Diefenbach, S. Idealization Effects in UX Evaluation at Early Concept Stages: Challenges of Low-Fidelity Prototyping. In Proceedings of the AHFE 2018 International Conferences on Usability & User Experience and Human Factors and Assistive Technology, in Loews Sapphire Falls Resort at Universal Studios, Orlando, FL, USA, 21–25 July 2018; Ahram, T., Falcão, C., Eds.; Springer: Cham, Switzerland, 2018; Volume 794, pp. 3–14. [Google Scholar] [CrossRef]
  2. Samantak, R.; Choi, Y.M. Employing design representations for user feedback in the product design lifecycle. In Proceedings of the 21st International Conference on Engineering Design (ICED17), Vancouver, BC, Canada, 21–25 August 2017; Maier, A., Škec, S., Eds.; The Design Society: Glasgow, Scotland, 2017; Volume 4, pp. 563–572. [Google Scholar]
  3. Kushniruk, A.; Nøhr, C. Participatory Design, User Involvement and Health IT Evaluation. Stud. Health Technol. Inform. 2016, 222, 139–151. [Google Scholar] [CrossRef] [PubMed]
  4. Tiwari, V.; Kumar, P.; Tandon, P. Product design concept evaluation using rough sets and VIKOR method. Adv. Eng. Inform. 2016, 30, 16–25. [Google Scholar] [CrossRef]
  5. Arrighi, P.A.; Maurya, S.; Mougenot, C. Towards Co-designing with Users: A Mixed Reality Tool for Kansei Engineering. In Proceedings of the IFIP International Conference on Product Lifecycle Management, Doha, Qatar, 19–21 October 2015; Bouras, A., Eynard, B., Foufou, S., Thoben, K.-D., Eds.; Springer International Publishing: Cham, Switzerland, 2015; Volume 467, pp. 751–760. [Google Scholar] [CrossRef] [Green Version]
  6. Mengoni, M.; Peruzzini, M.; Germani, M. Virtual vs. Physical: An Experimental Study to Improve Shape Perception. In Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, San Diego, CA, USA, 30 August–2 September 2009; ASME Digital Collection: New York, NY, USA, 2009; pp. 1495–1506. [Google Scholar] [CrossRef]
  7. Dong, Y.; Liu, W. Research on UX evaluation method of design concept under multi-modal experience scenario in the earlier design stages. Int. J. Interact. Des. Manuf. 2018, 12, 505–515. [Google Scholar] [CrossRef]
  8. Khalaj, J.; Pedgley, O. Comparison of semantic intent and realization in product design: A study on high-end furniture impressions. Int. J. Des. 2014, 8, 79–96. [Google Scholar]
  9. Chen, M.; Mata, I.; Fadel, G. Interpreting and tailoring affordance based design user-centered experiments. Int. J. Des. Creat. Innov. 2019, 80, 46–68. [Google Scholar] [CrossRef]
  10. Gibson, I.; Gao, Z.; Campbell, I. A comparative study of virtual prototyping and physical prototyping. Int. J. Manuf. Technol. Manag. 2004, 6, 503–522. [Google Scholar] [CrossRef]
  11. Bacciotti, D.; Borgianni, Y.; Cascini, G.; Rotini, F. Product Planning techniques: Investigating the differences between research trajectories and industry expectations. Res. Eng. Des. 2016, 27, 367–389. [Google Scholar] [CrossRef] [Green Version]
  12. Borgianni, Y.; Cascini, G.; Rotini, F. Investigating the future of the fuzzy front end: Towards a change of paradigm in the very early design phases? J. Eng. Des. 2018, 29, 644–664. [Google Scholar] [CrossRef]
  13. Diefenbach, S.; Hassenzahl, M.; Eckoldt, K.; Laschke, M. The impact of concept (re)presentation on users’ evaluation and perception. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction, Reykjavik, Iceland, 16 October–10 November 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 631–634. [Google Scholar] [CrossRef]
  14. Du, P.; MacDonald, E.F. Products’ Shared Visual Features Do Not Cancel in Consumer Decisions. J. Mech. Des. 2015, 137, 1–11. [Google Scholar] [CrossRef]
  15. Kim, M. Digital product presentation, information processing, need for cognition and behavioral intent in digital commerce. J. Retail. Consum. Serv. 2019, 50, 362–370. [Google Scholar] [CrossRef]
  16. Koivunen, K.; Kukkonen, S.; Lahtinen, S.; Rantala, H.; Sharmin, S. Towards deeper understanding of how people perceive design in products. In Proceedings of the Computers in Art and Design Education Conference, Malmö, Sweden, 29 June–1 July 2004; Copenhagen Business School, CBS, Denmark and Malmö University, Eriksen, M.A., Malmborg, L., Nielsen, J., Eds.; 2014. [Google Scholar]
  17. Guo, F.; Ding, Y.; Liu, W.; Liu, C.; Zhang, X. Can eye-tracking data be measured to assess product design? Visual attention mechanism should be considered. Int. J. Ind. Ergon. 2016, 53, 229–235. [Google Scholar] [CrossRef]
  18. Yang, X.; He, H.; Wu, Y.; Tang, C.; Chen, H.; Liang, J. User intent perception by gesture and eye tracking. Cogent Eng. 2016, 3, 1–10. [Google Scholar] [CrossRef]
  19. Yang, C.; An, F.; Chen, C.; Zhu, B. The effect of product characteristic familiarity on product recognition. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2017; Volume 231, p. 012016. [Google Scholar] [CrossRef] [Green Version]
  20. Avanzini, C.; Mantelet, F.; Aoussat, A.; Jeanningros, F.; Bouchard, C. Evaluating Perceived Quality through Sensory Evaluation in the Development Process of New Products: A Case Study of Luxury Market. In Proceedings of the 7th International Conference on Kansei Engineering and Emotion Research. KEER 2018, Kuching, Sarawak, Malaysia, 19–22 March 2018; Lokman, A., Yamanaka, T., Lévy, P., Chen, K., Koyama, S., Eds.; Springer: Singapore, 2018; Volume 739, pp. 379–388. [Google Scholar] [CrossRef]
  21. Du, P.; MacDonald, E.F. Eye-Tracking Data Predict Importance of Product Features and Saliency of Size Change. J. Mech. Des. 2014, 136, 081005. [Google Scholar] [CrossRef]
  22. Lou, S.; Feng, Y.; Tian, G.; Lv, Z.; Li, Z.; Tan, J. A Cyber-Physical System for Product Conceptual Design Based on an Intelligent Psycho-Physiological Approach. IEEE Access 2017, 5, 5378–5387. [Google Scholar] [CrossRef]
  23. Kovačević, D.; Brozović, M.; Možina, K. Do prominent warnings make packaging less attractive? Saf. Sci. 2018, 110, 336–343. [Google Scholar] [CrossRef]
  24. Razzaghi, M.; Nouri, M. Communicative affordance of industrial design sketching. In Proceedings of the 12th International Conference on Engineering and Product Design Education: When Design Education and Design Research Meet, Trondheim, Norway, 2–3 September 2010; Boks, W., Ion, W., McMahon, C., Parkinson, B., Eds.; The Design Society: Glasgow, Scotland, 2010; pp. 150–155. [Google Scholar]
  25. Boa, D.R.; Ranscombe, C.; Hicks, B. Determining the similarity of products using pairwise comparisons and eye tracking. In Proceedings of the 20th International Conference on Engineering Design, ICED, Milan, Italy, 27–30 July 2015; Weber, C., Husung, S., Cascini, G., Cantamessa, M., Marjanovic, D., Rotini, F., Eds.; The Design Society: Glasgow, Scotland, 2015; Volume 5, pp. 225–234. [Google Scholar]
  26. Ishak, S.M.; Sivaji, A.; Tzuaan, S.S.; Hussein, H.; Bujang, R. Assessing eye fixation behavior through design evaluation of Lawi Ayam artefact. J. Teknol. 2015, 77. [Google Scholar] [CrossRef] [Green Version]
  27. Rojas, J.-C.; Contero, M.; Bartomeu, N.; Guixeres, J. Using Combined Bipolar Semantic Scales and Eye-Tracking Metrics to Compare Consumer Perception of Real and Virtual Bottles. Packag. Technol. Sci. 2015, 28, 1047–1056. [Google Scholar] [CrossRef]
  28. Hyun, K.H.; Lee, J.-H.; Kim, M. The gap between design intent and user response: Identifying typical and novel car design elements among car brands for evaluating visual significance. J. Intell. Manuf. 2017, 28, 1729–1741. [Google Scholar] [CrossRef]
  29. Kukkonen, S. Exploring eye tracking in design evaluation. In Proceedings of the Joining Forces, University of Art and Design Helsinki, Helsinki, Finland, 22–24 September 2005; pp. 119–126. [Google Scholar]
  30. Wang, H.; Wang, J.; Zhang, N.N.; Sun, G.B.; Huang, H.L.; Zhao, H.B. Automobile Modeling Evaluations Based on Electrophysiology. Adv. Mater. Res. 2010, 118, 454–458. [Google Scholar] [CrossRef]
  31. Aurup, G.M.; Akgunduz, A. Pair-Wise Preference Comparisons Using Alpha-Peak Frequencies. J. Integr. Des. Process Sci. 2012, 16, 3–18. [Google Scholar] [CrossRef]
  32. Boa, D.; Hicks, B.; Nassehi, A. A comparison of product preference and visual behaviour for product representations. In Proceedings of the 19th International Conference on Engineering Design (ICED13), Seoul, Korea, 19–22 August 2013; Lindemann, U., Srinivasan, V., Kim, Y.S., Lee, S.W., Clarkson, J., Cascini, G., Eds.; The Design Society: Glasgow, Scotland, 2013; Volume 7, pp. 487–496. [Google Scholar]
  33. Khushaba, R.N.; Wise, C.; Kodagoda, S.; Louviere, J.; Kahn, B.E.; Townsend, C. Consumer neuroscience: Assessing the brain response to marketing stimuli using electroencephalogram (EEG) and eye tracking. Expert Syst. Appl. 2013, 40, 3803–3812. [Google Scholar] [CrossRef]
  34. Sylcott, B.; Cagan, J.; Tabibnia, G. Understanding Consumer Tradeoffs between Form and Function Through Metaconjoint and Cognitive Neuroscience Analyses. J. Mech. Des. 2013, 135, 101002. [Google Scholar] [CrossRef]
  35. Ueda, K. Neural Mechanisms of Evaluation and Memory of Product. In Proceedings of the ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Buffalo, NY, USA, 17–20 August 2014; ASME: New York, NY, USA, 2014; Volume 1, pp. 1–5. [Google Scholar] [CrossRef]
  36. Yılmaz, B.; Korkmaz, S.; Arslan, D.B.; Güngör, E.; Asyalı, M.H. Like/dislike analysis using EEG: Determination of most discriminative channels and frequencies. Comput. Methods Programs Biomed. 2014, 113, 705–713. [Google Scholar] [CrossRef] [PubMed]
  37. Khalighy, S.; Green, G.; Scheepers, C.; Whittet, C. Quantifying the qualities of aesthetics in product design using eye-tracking technology. Int. J. Ind. Ergon. 2015, 49, 31–43. [Google Scholar] [CrossRef]
  38. Rojas, J.-C.; Contero, M.; Camba, J.D.; Castellanos, M.C.; García-González, E.; Gil-Macián, S. Design Perception: Combining Semantic Priming with Eye Tracking and Event-Related Potential (ERP) Techniques to Identify Salient Product Visual Attributes. In Proceedings of the ASME 2015 International Mechanical Engineering Congress and Exposition, Houston, TX, USA, 13–19 November 2015; ASME: New York, NY, USA, 2015; Volume 11. [Google Scholar] [CrossRef]
  39. Telpaz, A.; Webb, R.; Levy, D.J. Using EEG to Predict Consumers’ Future Choices. J. Mark. Res. 2015, 52, 511–529. [Google Scholar] [CrossRef] [Green Version]
  40. Valencia-Romero, A.; Lugo, J.E. Part-Worth Utilities of Gestalt Principles for Product Esthetics: A Case Study of a Bottle Silhouette. J. Mech. Des. 2016, 138, 1–9. [Google Scholar] [CrossRef]
  41. Laohakangvalvit, T.; Ohkura, M. Relationship Between Physical Attributes of Spoon Designs and Eye Movements Caused by Kawaii Feelings. In Proceedings of the AHFE 2017 International Conference on Affective and Pleasurable Design, The Westin Bonaventure Hotel, Los Angeles, CA, USA, 17–21 July 2017; Chung, W.J., Shin, C.S., Eds.; Springer: Cham, Switzerland, 2017; Volume 585, pp. 245–257. [Google Scholar] [CrossRef]
  42. Li, B.R.; Wang, Y.; Wang, K.S. A novel method for the evaluation of fashion product design based on data mining. Adv. Manuf. 2017, 5, 370–376. [Google Scholar] [CrossRef]
  43. Nagai, Y.; Fukami, T.; Kadomatsu, S.; Tera, A. A Study on Product Display Using Eye-Tracking Systems. In Proceedings of the ICoRD 2017 International Conference on Research into Design, Guwahati, India, 9–17 January 2017; Chakrabarti, A., Chakrabarti, D., Eds.; Springer: Singapore, 2017; Volume 1, pp. 547–555. [Google Scholar] [CrossRef]
  44. Dogan, K.M.; Suzuki, H.; Gunpinar, E. Eye tracking for screening design parameters in adjective-based design of yacht hull. Ocean Eng. 2018, 166, 262–277. [Google Scholar] [CrossRef]
  45. She, J.; MacDonald, E.F. Exploring the Effects of a Product’s Sustainability Triggers on Pro-Environmental Decision-Making. J. Mech. Des. 2018, 140. [Google Scholar] [CrossRef]
  46. Maccioni, L.; Borgianni, Y.; Basso, D. Value Perception of Green Products: An Exploratory Study Combining Conscious Answers and Unconscious Behavioral Aspects. Sustainability 2019, 11, 1226. [Google Scholar] [CrossRef] [Green Version]
  47. Tan, Z.; Zhu, Y.; Zhao, J. Research on User’s Perceptual Preference of Automobile Styling. In Proceedings of the AHFE 2018 International Conference on Applied Human Factors and Ergonomics, Orlando, FL, USA, 21–25 July 2018; Fukuda, S., Ed.; Springer: Cham, Switzerland, 2018; Volume 774, pp. 41–52. [Google Scholar] [CrossRef]
  48. Carbon, C.C.; Hutzler, F.; Minge, M. Innovativeness in design investigated by eye movements and pupillometry. Psychol. Sci. 2006, 48, 173–186. [Google Scholar]
  49. Park, J.; DeLong, M.; Woods, E. Exploring product communication between the designer and the user through eye-tracking technology. Int. J. Fash. Des. Technol. Educ. 2012, 5, 67–78. [Google Scholar] [CrossRef]
  50. Lo, C.H.; Wang, I.J.; Huang, S.H.; Chu, C.H. Evaluating appearance-related product prototypes with various facial characteristics. In Proceedings of the 19th International Conference on Engineering Design (ICED13), Seoul, Korea, 19–22 August 2013; Lindemann, U., Srinivasan, V., Kim, Y.S., Lee, S.W., Clarkson, J., Cascini, G., Eds.; The Design Society: Glasgow, Scotland, 2013; Volume 7, pp. 227–236. [Google Scholar]
  51. Köhler, M.; Falk, B.; Schmitt, R. Applying Eye-Tracking in Kansei Engineering Method for Design Evaluations in Product Development. Int. J. Affect. Eng. 2015, 14, 241–251. [Google Scholar] [CrossRef] [Green Version]
  52. Seshadri, P.; Bi, Y.; Bhatia, J.; Simons, R.; Hartley, J.; Reid, T. Evaluations That Matter: Customer Preferences Using Industry-Based Evaluations and Eye-Gaze Data. In Proceedings of the ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Charlotte, NC, USA, 21–24 August 2016; ASME: New York, NY, USA, 2016; Volume 7. [Google Scholar] [CrossRef] [Green Version]
  53. Li, W.; Wang, L.; Wang, L.; Jing, J. A model based on eye movement data and artificial neutral network for product styling evaluation. In Proceedings of the 24th International Conference on Automation & Computing (ICAC), Newcastle upon Tyne, UK, 6–7 September 2018; IEEE: Piscataway, NJ, USA, 2018; Volume 794, pp. 1–6. [Google Scholar] [CrossRef]
  54. Schmitt, R.; Köhler, M.; Durá, J.V.; Diaz-Pineda, J. Objectifying user attention and emotion evoked by relevant perceived product components. J. Sens. Sens. Syst. 2014, 3, 315–324. [Google Scholar] [CrossRef]
  55. Burlamaqui, L.; Dong, A. Eye gaze experiment into the recognition of intended affordances. In Proceedings of the ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference IDETC/CIE 2017, Cleveland, OH, USA, 6–9 August 2017; ASME: New York, NY, USA, 2017; Volume 7. [Google Scholar] [CrossRef]
  56. Kim, J.; Bouchard, C.; Ryu, H.; Omhover, J.F.; Aoussat, A. Emotion finds a way to users from designers: Assessing product images to convey designer’s emotion. J. Des. Res. 2012, 10, 307–323. [Google Scholar] [CrossRef] [Green Version]
  57. Ho, C.; Lu, Y. Can pupil size be measured to assess design products? Int. J. Ind. Ergon. 2014, 44, 436–441. [Google Scholar] [CrossRef]
  58. Hsu, C.; Fann, S.; Chuang, M. Relationship between eye fixation patterns and Kansei evaluation of 3D chair forms. Displays 2017, 50, 21–34. [Google Scholar] [CrossRef]
  59. Green, A.; Chattaraman, V. Creating an Affective Design Typology for Basketball Shoes Using Kansei Engineering Methods. In Proceedings of the AHFE 2018 Advances in Intelligent Systems and Computing, Orlando, FL, USA, 24–28 July 2018; Advances in Affective and Pleasurable Design. Fukuda, S., Ed.; Springer: Cham, Switzerland, 2018; Volume 774, pp. 355–361. [Google Scholar] [CrossRef]
  60. Yoon, S.Y. Usability in Context: A Framework for Analyzing the Impact of Virtual Reality in Design Evaluation Context. In Proceedings of the 11th International Conference on Computer Aided Architectural Design Research in Asia, Kumamoto, Japan, 30 March–2 April 2006; pp. 371–377. [Google Scholar]
  61. Hsiao, S.-W.; Hsu, C.-F.; Lee, Y.-T. An online affordance evaluation model for product design. Des. Stud. 2012, 33, 126–159. [Google Scholar] [CrossRef]
  62. He, C.; Ji, Z.; Gu, J. Research on the Effect of Mechanical Drawings’ Different Marked Way on Browse and Search Efficiency Based on Eye-Tracking Technology. In Proceedings of the 18th International Conference on International Conference on Man-Machine-Environment System Engineering MMESE, Jinggangshan, China, 21–23 October 2017; Long, S., Dhillon, B.S., Eds.; Springer: Singapore, 2017; Volume 456, pp. 515–523. [Google Scholar] [CrossRef]
  63. Suzianti, A.; Rengkung, S.; Nurtjahyo, B.; Al Rasyid, H. An analysis of cognitive-based design of yogurt product packaging. Int. J. Technol. 2015, 6, 659. [Google Scholar] [CrossRef]
  64. Du, P.; MacDonald, E.F. A Test of the Rapid Formation of Design Cues for Product Body Shapes and Features. J. Mech. Des. 2018, 140, 071102. [Google Scholar] [CrossRef] [Green Version]
  65. Naef, M.; Payne, J. AutoEval mkII—Interaction design for a VR design review system. In Proceedings of the IEEE Symposium on 3D User Interfaces, Charlotte, NC, USA, 10–11 March 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 45–48. [Google Scholar] [CrossRef]
  66. Papagiannidis, S.; See-To, E.; Bourlakis, M. Virtual test-driving: The impact of simulated products on purchase intention. J. Retail. Consum. Serv. 2014, 21, 877–887. [Google Scholar] [CrossRef]
  67. Ergan, S.; Radwan, A.; Zou, Z.; Tseng, H.A.; Han, X. Quantifying Human Experience in Architectural Spaces with Integrated Virtual Reality and Body Sensor Networks. J. Comput. Civ. Eng. 2019, 33, 04018062. [Google Scholar] [CrossRef]
  68. Valencia-Romero, A.; Lugo, J.E. An immersive virtual discrete choice experiment for elicitation of product aesthetics using Gestalt principles. Des. Sci. 2017, 3, 1–24. [Google Scholar] [CrossRef] [Green Version]
  69. Ferrise, F.; Bordegoni, M.; Graziosi, S.A. Method for designing users’ experience with industrial products based on a multimodal environment and mixed prototypes. Comput. Aided Des. Appl. 2013, 461–474. [Google Scholar] [CrossRef]
  70. Graziosi, S.; Ferrise, F.; Bordegoni, M.; Ozbey, O. A method for capturing and translating qualitative user experience into design specifications: The haptic feedback of appliance interfaces. In Proceedings of the 19th International Conference on Engineering Design, ICED, Seoul, Korea, 19–22 August 2013; Lindemann, U., Srinivasan, V., Yong, S.K., Lee, S.W., Clarkson, J., Cascini, G., Eds.; Design Society: Glasgow, Scotland, 2013; pp. 427–436. [Google Scholar]
  71. Normark, C.J.; Gustafsson, A. Design and evaluation of a personalisable user interface in a vehicle context. J. Des. Res. 2014, 12, 308–329. [Google Scholar] [CrossRef]
  72. Mahut, T.; Bouchard, C.; Omhover, J.F.; Favart, C.; Esquivel, D. Interdependency between user experience and interaction: A Kansei design approach. Int. J. Interact. Des. Manuf. 2018, 12, 105–132. [Google Scholar] [CrossRef]
  73. Mahlke, S.; Rösler, D.; Seifert, K.; Krems, J.F.; Thüring, M. Evaluation of Six Night Vision Enhancement Systems: Qualitative and Quantitative Support for Intelligent Image Processing. Hum. Factors 2007, 49, 518–531. [Google Scholar] [CrossRef]
  74. Jeong, G.; Self, J. Mode-of-use Innovation in Interactive Product Development. Arch. Des. Res. 2017, 30, 41–59. [Google Scholar] [CrossRef]
  75. Gaspar, J.; Fontul, M.; Henriques, E.; Silva, A. User satisfaction modeling framework for automotive audio interfaces. Int. J. Ind. Ergon. 2014, 44, 662–674. [Google Scholar] [CrossRef]
  76. Mussgnug, M.; Lohmeyer, Q.; Meboldt, M. Raising designers’ awareness of user experience by mobile eye tracking records. In Proceedings of the 16th International Conference on Engineering and Product Design Education: Design Education and Human Technology Relations, University of Twente, Enschede, The Netherlands, 4–5 September 2014; Bohemia, E., Eger, A., Eggink, W., Kovacevic, A., Parkinson, B., Wits, W., Eds.; The Design Society: Glasgow, Scotland, 2014; pp. 99–104. [Google Scholar]
  77. Mussgnug, M.; Singer, D.; Lohmeyer, Q.; Meboldt, M. Automated interpretation of eye–hand coordination in mobile eye tracking recordings. Künstliche Intell. 2017, 31, 331–337. [Google Scholar] [CrossRef]
  78. Huang, F. Understanding user acceptance of battery swapping service of sustainable transport: An empirical study of a battery swap station for electric scooters, Taiwan. Int. J. Sustain. Transp. 2019, 1–14. [Google Scholar] [CrossRef]
  79. Borgianni, Y.; Maccioni, L.; Basso, D. Exploratory study on the perception of additively manufactured end-use products with specific questionnaires and eye-tracking. Int. J. Interact. Des. Manuf. IJIDeM 2019, 13, 743–759. [Google Scholar] [CrossRef]
  80. Gaspar, J.; Fontul, M.; Henriques, E.; Ribeiro, A.; Silva, A.; Valverde, N. Psychoacoustics of in-car switch buttons: From feelings to engineering parameters. Appl. Acoust. 2016, 110, 280–296. [Google Scholar] [CrossRef]
  81. Lin, F.-H.; Tsai, S.-B.; Lee, Y.-C.; Hsiao, C.-F.; Zhou, J.; Wang, J.; Shang, Z. Empirical research on Kano’s model and customer satisfaction. PLoS ONE 2017, 12, e0183888. [Google Scholar] [CrossRef] [Green Version]
  82. Hurley, R.A.; Galvarino, J.; Thackston, E.; Ouzts, A.; Pham, A. The effect of modifying structure to display product versus graphical representation on packaging. Packag. Technol. Sci. 2013, 26, 453–460. [Google Scholar] [CrossRef]
  83. Borgianni, Y.; Maccioni, L. Review of the use of neurophysiological and biometric measures in experimental design research. Artif. Intell. Eng. Des. Anal. Manuf. AIEDAM 2020, in press. [Google Scholar] [CrossRef] [Green Version]
  84. Nonis, F.; Dagnes, N.; Marcolin, F.; Vezzetti, E. 3D Approaches and Challenges in Facial Expression Recognition Algorithms—A Literature Review. Appl. Sci. 2019, 9, 3904. [Google Scholar] [CrossRef] [Green Version]
  85. Pusiol, G.; Esteva, A.; Hall, S.S.; Frank, M.; Milstein, A.; Fei-Fei, L. Vision-Based Classification of Developmental Disorders Using Eye-Movements. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention MICCAI, Shenzhen, China, 13–17 October 2016; Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W., Eds.; Springer: Cham, Switzerland, 2016; Volume 9901, pp. 317–325. [Google Scholar] [CrossRef]
  86. Kok, E.M.; Jarodzka, H. Before your very eyes: The value and limitations of eye tracking in medical education. Med. Educ. 2017, 51, 114–122. [Google Scholar] [CrossRef]
  87. Fox, S.E.; Faulkner-Jones, B.E. Eye-tracking in the study of visual expertise: Methodology and approaches in medicine. Front. Learn. Res. 2017, 5, 43–54. [Google Scholar] [CrossRef]
  88. Mark, J.; Curtin, A.; Kraft, A.; Sands, T.; Casebeer, W.D.; Ziegler, M.; Ayaz, H. Eye Tracking-Based Workload and Performance Assessment for Skill Acquisition. In Proceedings of the AHFE 2019 International Conference on Neuroergonomics and Cognitive Engineering, Washington, DC, USA, 24–28 July 2019; Ayaz, H., Ed.; Springer: Cham, Switzerland, 2019; Volume 953, pp. 129–141. [Google Scholar] [CrossRef]
  89. Montero Perez, M. Pre-learning vocabulary before viewing captioned video: An eye-tracking study. Lang. Learn. J. 2019, 47, 460–478. [Google Scholar] [CrossRef]
  90. Stone, A.; Bosworth, R.G. Exploring Infant Sensitivity to Visual Language using Eye Tracking and the Preferential Looking Paradigm. J. Vis. Exp. JoVE 2019, 147, 1–9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  91. Gerpott, F.H.; Lehmann-Willenbrock, N.; Silvis, J.D.; Van Vugt, M. In the eye of the beholder? An eye-tracking experiment on emergent leadership in team interactions. Leadersh. Q. 2018, 29, 523–532. [Google Scholar] [CrossRef]
  92. Foulsham, T.; Dewhurst, R.; Nyström, M.; Jarodzka, H.; Johansson, R.; Underwood, G.; Holmqvist, K. Comparing scanpaths during scene encoding and recognition: A multi-dimensional approach. J. Eye Mov. Res. 2012, 5, 1–14. [Google Scholar] [CrossRef]
  93. Petersen, H.; Nielsen, J. The eye of the user: The influence of movement on users’ visual attention. Digit. Creat. 2002, 13, 109–121. [Google Scholar] [CrossRef]
  94. Hernández-Méndez, J.; Muñoz-Leiva, F. What type of online advertising is most effective for eTourism 2.0? An eye tracking study based on the characteristics of tourists. Comput. Hum. Behav. 2015, 50, 618–625. [Google Scholar] [CrossRef]
  95. Berni, A.; Maccioni, L.; Borgianni, Y. An Eye-Tracking supported investigation into the role of forms of representation on design evaluations and affordances of original product features. In Proceedings of the Design Society: DESIGN Conference, Cavtat, Croatia, 18–21 May 2020. [Google Scholar]
  96. Del Fatto, V.; Dignös, A.; Raimato, G.; Maccioni, L.; Borgianni, Y.; Gamper, J. Visual time period analysis: A multimedia analytics application for summarizing and analyzing eye-tracking experiments. Multimed. Tools Appl. 2019, 78, 32779–32804. [Google Scholar] [CrossRef]
  97. Van Os, G.; van Beurden, K. Emogram: Help (student) design researchers understanding user emotions in product design. In Proceedings of the 21st International Conference on Engineering and Product Design Education (E&PDE 2019), University of Strathclyde, Glasgow, UK, 12–13 September 2019. [Google Scholar] [CrossRef]
  98. Tornincasa, S.; Vezzetti, E.; Moos, S.; Violante, M.G.; Marcolin, F.; Dagnes, N.; Ulrich, L.; Fantini Tregnaghi, G. 3D Facial Action Units and Expression Recognition using a Crisp Logic. Comput. Aided Des. Appl. 2019, 16, 256–268. [Google Scholar] [CrossRef]
  99. Nigam, R.; Kumar, N.; Mondal, S. Emotion detection of human face. Int. J. Eng. Adv. Technol. 2019, 9, 5521–5524. [Google Scholar] [CrossRef]
  100. Jaiswal, S.; Virmani, S.; Sethi, V.; De, K.; Roy, P.P. An intelligent recommendation system using gaze and emotion detection. Multimed. Tools Appl. 2019, 78, 14231–14250. [Google Scholar] [CrossRef]
Figure 1. Pictures of the leveraged creative products.
Figure 2. Procedure scheme, where the symbol P (V) means that the product shown is in the form of a picture (video).
Figure 3. Areas of Interest studied for each product.
Figure 4. Boundary frames for the creation of dynamic Areas of Interest (AOIs) for the product OnPot.
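As a complement to Figure 4, the sketch below illustrates the general principle behind dynamic AOIs: the region of interest is defined on a few boundary frames and linearly interpolated in between, so that each gaze sample can be checked against the AOI valid at its timestamp. The code is a minimal, generic illustration with example coordinates; it does not reproduce the eye-tracking software actually used in the experiment.

```python
# Generic sketch of a dynamic Area of Interest (AOI): the AOI rectangle is
# specified at a few boundary frames (keyframes) and linearly interpolated in
# between, so each gaze sample can be tested against the region valid at its
# timestamp. Example values only; this is not the software used in the study.
from bisect import bisect_right

# (time in seconds, (x_min, y_min, x_max, y_max)) at the boundary frames
KEYFRAMES = [
    (0.0, (100.0, 200.0, 300.0, 400.0)),
    (5.0, (150.0, 220.0, 350.0, 420.0)),
    (10.0, (120.0, 210.0, 320.0, 410.0)),
]

def aoi_at(t):
    """Rectangle occupied by the dynamic AOI at time t (linear interpolation)."""
    times = [k[0] for k in KEYFRAMES]
    if t <= times[0]:
        return KEYFRAMES[0][1]
    if t >= times[-1]:
        return KEYFRAMES[-1][1]
    i = bisect_right(times, t) - 1
    (t0, r0), (t1, r1) = KEYFRAMES[i], KEYFRAMES[i + 1]
    w = (t - t0) / (t1 - t0)
    return tuple(a + w * (b - a) for a, b in zip(r0, r1))

def gaze_hits_aoi(t, x, y):
    """True if the gaze sample (x, y) recorded at time t falls inside the AOI."""
    x_min, y_min, x_max, y_max = aoi_at(t)
    return x_min <= x <= x_max and y_min <= y <= y_max

print(gaze_hits_aoi(2.5, 200, 300))  # True: the interpolated AOI covers (200, 300)
```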
Figure 5. Violin plots highlighting the most significant differences in Total Visit Duration (TVD) Diff% between forms of representation for the product Equilibrium.
Figure 6. Violin plots highlighting the most significant differences in TVD Diff% between forms of representation for the product Flame.
Figure 7. Violin plots highlighting the most significant differences in TVD Diff% between forms of representation for the product Stairs.
Figure 8. Violin plots highlighting the most significant differences in TVD Diff% between forms of representation for the product OnPot.
Figure 9. Violin plots highlighting the most significant differences in TVD Diff% between forms of representation for the product Hood.
Figure 10. Violin plots highlighting the most significant differences in TVD Diff% between forms of representation for the product Tire.
Table 1. Organization of literature contributions with respect to representation forms and evaluations.

Stimuli | Form of Representation | References per evaluation aspect (User Experience, Satisfaction, Cognitive Perception, Preferences, Attractiveness, Value Perception, Affordances, Emotions)
Static | Text | [1,13][13][14][1,15]
Static | Images | [1,7,13,16,17,18,19,20][2,16,21,22,23][19,20,24,25,26,27,28][14,18,21,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47][1,15,16,17,18,20,22,23,25,37,40,41,42,44,48,49,50,51,52,53][22,24,38,54][26,42,55][34,47,51,54,56,57,58,59]
Static | Text + Images | [2,7,8,13,60,61][2,8,62][8,13][14,34][8][8] [34]
Dynamic | Video | [7]
Dynamic | Text + Video | [7,13] [13]
Dynamic | Virtual Prototype | [2][2,22] [17,63,64]
Dynamic | Virtual Reality | [60,65,66,67][60,66][27,67,68][68][27,67,68][66] [6,67]
Physical | Augmented Reality/Mixed Reality/Mixed Prototype | [2,69,70,71][2,71][72][69,70] [71][72]
Physical | Prototype | [13,73][73,74][13][74][6,73] [6,74]
Physical | End-use product | [20,75,76,77,78,79][75,78,80,81][20,75,80][45,75,80,82][20,75,79,82][20,75][79][75,78]
Table 2. AOIs created and coded, and reasons behind their presence. IDs correspond to those of Figure 3.

Product | AOI ID | AOI Name | Reason Behind Studying the AOI
Equilibrium | A | Hand | It suggests where/how the user should hold/handle the product
Equilibrium | B | Hinge-1 | Thanks to this feature, the product can perform its original function
Equilibrium | C | Hinge-2 | Thanks to this feature, the product can perform its original function
Equilibrium | D | Fruit | It is the object on which the product’s main function is performed
Equilibrium | E | Water | It is the object that undergoes the product’s main function
Flame | F | Pot | It is the object on which the product’s main function is performed
Flame | G | Flame | Thanks to this feature, the product can perform its original function
Flame | H | Handle | It suggests where/how the user should hold/handle the product
Stairs | I | User | It suggests how the user should use the product
Stairs | J | Frame-1 | Thanks to this feature, the product can perform its original function
Stairs | K | Frame-2 | Thanks to this feature, the product can perform its original function
Stairs | L | Stairs | It is the object that performs the product’s main function
OnPot | M | Hand | It suggests where/how the user should hold/handle the product
OnPot | N | Cap | It is the object on which the product’s main function is performed
OnPot | O | OnPot | It is the object that performs the product’s main (and original) function
Hood | P | Hood | It is the object that performs the product’s main (and original) function
Hood | Q | Steam-1 | It is the object on which the product’s main function is performed
Hood | R | Steam-2 | It is the object on which the product’s main function is performed
Tire | S | Rim | It is a structural part that makes it possible to understand the product better
Tire | T | Tire | It is the object that performs the product’s main (and original) function
Tire | U | Nail-1 | It is an object that makes it possible to understand the product’s original function
Tire | V | Nail-2 | It is an object that makes it possible to understand the product’s original function
Tire | W | Nail-3 | It is an object that makes it possible to understand the product’s original function
Table 3. Summary of results concerning the Total Visit Duration for each Area of Interest present in both pictures and videos of the selected creative products (*** = p-value < 0.001, ** = p-value < 0.01, * = p-value < 0.05).

Product | AOI ID | AOI Name | Form | TVD Sum | TVD Average | TVD SD | TVD Diff% | Significant Increase
Equilibrium | A | Hand | Picture | 8.57 | 0.61 | 0.51 | +591 | **
Equilibrium | A | Hand | Video | 1.24 | 0.09 | 0.23 | −86 |
Equilibrium | B | Hinge-1 | Picture | 3.43 | 0.25 | 0.38 | −82 |
Equilibrium | B | Hinge-1 | Video | 18.77 | 1.34 | 0.92 | +447 | ***
Equilibrium | C | Hinge-2 | Picture | 1.76 | 0.13 | 0.28 | +53 |
Equilibrium | C | Hinge-2 | Video | 1.15 | 0.08 | 0.13 | −35 |
Equilibrium | D | Fruit | Picture | 17.44 | 1.25 | 0.95 | −73 |
Equilibrium | D | Fruit | Video | 65.76 | 4.70 | 2.06 | +277 | ***
Equilibrium | E | Water | Picture | 5.79 | 0.41 | 0.50 | −82 |
Equilibrium | E | Water | Video | 32.00 | 2.29 | 1.14 | +453 | ***
Flame | F | Pot | Picture | 9.57 | 0.68 | 0.90 | +28 |
Flame | F | Pot | Video | 7.45 | 0.53 | 0.59 | −22 |
Flame | G | Flame | Picture | 18.76 | 1.34 | 0.91 | −72 |
Flame | G | Flame | Video | 66.19 | 4.73 | 1.09 | +253 | ***
Flame | H | Handle | Picture | 29.09 | 2.08 | 1.09 | +109 | **
Flame | H | Handle | Video | 13.92 | 0.99 | 0.52 | −52 |
Stairs | I | User | Picture | 14.10 | 1.01 | 0.37 | −26 |
Stairs | I | User | Video | 19.04 | 1.36 | 1.06 | +35 |
Stairs | J | Frame-1 | Picture | 4.32 | 0.31 | 0.42 | +13 |
Stairs | J | Frame-1 | Video | 3.81 | 0.27 | 0.29 | −12 |
Stairs | K | Frame-2 | Picture | 1.61 | 0.12 | 0.11 | +388 | *
Stairs | K | Frame-2 | Video | 0.33 | 0.02 | 0.06 | −80 |
Stairs | L | Stairs | Picture | 31.27 | 2.23 | 1.69 | +13 |
Stairs | L | Stairs | Video | 27.72 | 1.98 | 1.05 | −11 |
OnPot | M | Hand | Picture | 9.06 | 0.65 | 0.63 | −23 |
OnPot | M | Hand | Video | 11.76 | 0.84 | 0.52 | +30 |
OnPot | N | Cap | Picture | 17.58 | 1.26 | 0.44 | −85 |
OnPot | N | Cap | Video | 119.60 | 8.54 | 1.95 | +580 | ***
OnPot | O | OnPot | Picture | 58.13 | 4.15 | 1.23 | +49 | **
OnPot | O | OnPot | Video | 39.13 | 2.80 | 1.15 | −33 |
Hood | P | Hood | Picture | 13.74 | 0.98 | 0.52 | −74 |
Hood | P | Hood | Video | 53.53 | 3.82 | 1.86 | +290 | ***
Hood | Q | Steam-1 | Picture | 13.41 | 0.96 | 0.95 | −59 |
Hood | Q | Steam-1 | Video | 32.81 | 2.34 | 1.2 | +145 | **
Hood | R | Steam-2 | Picture | 4.76 | 0.34 | 0.53 | −15 |
Hood | R | Steam-2 | Video | 5.60 | 0.40 | 0.36 | +18 |
Tire | S | Rim | Picture | 12.94 | 0.92 | 0.54 | −66 |
Tire | S | Rim | Video | 37.86 | 2.70 | 1.16 | +193 | ***
Tire | T | Tire | Picture | 69.03 | 4.93 | 1.24 | −57 |
Tire | T | Tire | Video | 159.29 | 11.38 | 2.51 | +131 | ***
Tire | U | Nail-1 | Picture | 4.02 | 0.29 | 0.32 | −39 |
Tire | U | Nail-1 | Video | 6.58 | 0.47 | 0.45 | +64 |
Tire | V | Nail-2 | Picture | 7.82 | 0.56 | 0.49 | −65 |
Tire | V | Nail-2 | Video | 22.54 | 1.61 | 0.71 | +188 | ***
Tire | W | Nail-3 | Picture | 0.00 | 0.00 | 0.00 | −100 |
Tire | W | Nail-3 | Video | 0.59 | 0.04 | 0.13 | |
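For readers interested in reproducing the TVD Diff% column, the values reported above are consistent with computing, for each AOI, the relative difference between the cumulated TVD obtained with one form of representation and that obtained with the other. The short Python sketch below shows this reading of the column; function and variable names are illustrative and not taken from the study's analysis scripts.

```python
# Illustrative computation of the TVD Diff% column of Table 3 from the
# cumulated Total Visit Duration (TVD Sum) of an AOI in the two forms of
# representation. Names and structure are examples, not the authors' scripts.

def tvd_diff_percent(tvd_this_form, tvd_other_form):
    """Relative difference (%) of the TVD obtained with one form of
    representation with respect to the other; undefined if the reference is 0
    (e.g., AOI Nail-3 of the product Tire, never visited in the picture)."""
    if tvd_other_form == 0:
        return None
    return (tvd_this_form / tvd_other_form - 1) * 100

# Example: AOI Hinge-1 of the product Equilibrium (TVD Sum values from Table 3)
picture, video = 3.43, 18.77
print(round(tvd_diff_percent(picture, video)))  # -82  (Picture row of Table 3)
print(round(tvd_diff_percent(video, picture)))  # 447  (+447 in the Video row)
```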
