Article

Measuring Anticipated and Episodic UX of Tasks in Social Networks

by Luis Martín Sánchez-Adame, José Fidel Urquiza-Yllescas and Sonia Mendoza *
Computer Science Department, CINVESTAV-IPN, Mexico City 07360, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(22), 8199; https://doi.org/10.3390/app10228199
Submission received: 30 October 2020 / Revised: 13 November 2020 / Accepted: 17 November 2020 / Published: 19 November 2020
(This article belongs to the Special Issue User Experience for Advanced Human–Computer Interaction)

Abstract
Today, social networks are essential services that allow people to share different contents and opinions. Beyond participation, the information shared within social networks makes them attractive, but their success must also be accompanied by a positive User eXperience (UX). Social networks must offer useful and well-designed user-tools, i.e., sets of widgets that allow interaction among users. To satisfy this requirement, Episodic User eXperience (EUX) captures the reactions of users after they have interacted with an artifact, whereas Anticipated User eXperience (AUX) lets designers collect users' aspirations, assumptions, and needs in the initial development phase of an artifact. In this work, we collect the UX perceived in both periods to contrast user expectations with the experiences offered by social networks, in order to find elements that could improve the design of user-tools. We arranged a test in which participants (N = 20) designed paper prototypes to solve tasks and then performed the same tasks on online social networks. Both stages were assessed with the help of AttrakDiff, and the results were analyzed through t-tests. Our results suggest that users' expectations of user-tools lean towards pragmatic aspects.

1. Introduction

The popularity of social networks has increased in recent years [1], especially due to the COVID-19 pandemic [2,3]. However, they are neither a new topic nor an unknown one. Social networks have been studied by Computer Science researchers for a long time and from different angles, which is why several definitions can be found in the state-of-the-art [4,5,6,7,8]. Among them, we adopted the one by Boyd and Ellison [9]—“web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system. The nature and nomenclature of these connections may vary from site to site”—because it denotes the main elements of social networks and their interaction.
Sociability and usability are vital factors for any social network. Sociability refers to the contact and exchange of information among users, whereas usability concerns how well the technology enables those exchanges [10,11].
A result of sociability is participation. Therefore, several studies have been carried out to understand what motivates individuals to engage in a social network [12,13,14,15]. We believe that the progress of any social network heavily relies on: (1) the collaboration among its users to create content and make contributions to the community [16], and (2) the interaction of users with businesses, organizations, colleagues, family members, and friends to co-create their production and consumption experience and meet their needs [17,18,19].
While it is true that User eXperience (UX) is a crucial factor for interaction among users on any digital platform, it is not the only aspect to consider. Social networks are a complex phenomenon, so it is pointless to oversimplify them and study them from a single front [20]. For example, Grudin has studied why collaborative work applications fail since the late 1980s [21]. A telling example that there is no formula for success is Google+: despite having elements of good design, it never succeeded and ended up closing [22].
Although UX is not the only aspect that should concern developers, it is essential for making interactions among members of a social network as seamless as possible. The two components of UX are hedonic and pragmatic quality [23,24]. The hedonic components refer to the preferences, convictions, sensations, and conclusions of users that arise from the anticipated or episodic usage of a system, product, or service. The pragmatic components come from the features of the assessed system, such as functionality, interactive behavior, supporting capabilities, usability, and performance [25].
While the core of UX is the current experience of usage, this alone is not enough to cover all the relevant issues that can be studied. UX is a highly dynamic concept, since it changes continuously during interaction with an artifact [26]. People can have very different experiences before, during, and after interacting with a product [27]. Consequently, being able to measure the UX of an artifact at multiple times is a critical design aspect [28].
We can explain the concept of UX over time through four periods. Each period is dynamic and can be viewed as an iterative process within and among those stages [27]:
  • Anticipated UX (AUX): Obtained before the use of an artifact from imagination, expectations, and existing experiences.
  • Momentary UX (MUX): Perceived during the usage period of an artifact.
  • Episodic UX (EUX): Conceived after the use of an artifact through reflections of the experience.
  • Cumulative UX (CUX): Determined over time by the recollection of multiple periods of use.
Periods are essential because user responses may differ among them; e.g., measuring momentary UX can capture a visceral response from the user, whereas if UX is measured some time after the use of an artifact, the user may remember more positive things and suppress the negative ones [29]. In this way, a study that considers more than one period can be more enriching.
While EUX is simply the experience obtained after having used a system, product, or service [30], AUX has to do with the attitudes and experiences that users assume will occur when they envision using an artifact [31]. Thus, the goal of an AUX assessment is to recognize whether a given idea offers the type of UX that developers anticipate for potential users [32]. The value of conducting AUX studies is well established, even though there are not many research works on this subject [27,33,34,35].
This paper continues and expands the research that we have previously published on this topic [36,37]. The objective of the present work is to determine whether there are differences between users' expectations (AUX) and the experiences they find on social networks (EUX), and if so, what those differences are. Identifying these elements of contrast could help improve the design of user-tools. Thus, we propose the following hypothesis for our study:
There is no significant difference in perceived UX between the prototypes imagined by the participants and the actual social networks when performing the same tasks.
To confirm or refute this hypothesis, we propose a method that allows us to assess the AUX and EUX of daily tasks on social networks: sending messages, sharing multimedia, and performing searches. Our participants (N = 20) completed these tasks in two phases. In the first phase, they made a paper prototype with the elements they considered necessary to solve each task; once a prototype was finished, they evaluated it with the AttrakDiff questionnaire [38]. In the second phase, the participants solved the tasks on real social networks and, in the same way, evaluated them with AttrakDiff. Our main finding is that user expectations are mainly composed of pragmatic aspects.
The remainder of this article is organized as follows. First, we present a brief analysis of related works (Section 2). After describing the research methodology that we followed to develop our proposal (Section 3), we explain our assessment method (Section 4). Subsequently, we report all the details of our tests (Section 5), followed by their results (Section 6). After that, we discuss our results, as well as the implications and limitations of our study (Section 7). Finally, we present our conclusions and some proposals for future work (Section 8).

2. Related Work

In this section, we present a brief review of some outstanding works that involve AUX and EUX. We classify them into two groups, as this seems to be the trend in most UX work. The former group comprises researchers who study popular systems in the market and then propose theories (Section 2.1). The latter group comprises those who, after studying theoretical works, use that knowledge to propose changes in practical systems (Section 2.2). We consider our work a hybrid approach that tries to bring together the best of both paths.

2.1. From Practice to Theory

Practice is vital, as it allows collecting people's opinions and reactions. Aladwan et al. [39] designed a framework through the analysis of online reviews and constructed a prototype that describes user anticipations and experiences of instructional fitness applications. The main limitation of this work is the difficulty of unraveling ambiguous user reviews.
Although qualitative evaluations are, in general, complicated to analyze precisely because they lend themselves to ambiguity, they are an indispensable resource when the investigation is about transferring real-world interactions to a virtual environment. Such is the case of Moser et al. [40], who organized workshops for children around the world. Through various types of activities, they managed to gather children's expectations and idealizations regarding games. Although they detailed a way to capture AUX, they made no comparisons, nor did they propose elements for the design of GUIs.
The works of Margetis et al. [41] and Zhang et al. [42] also fall into this area of gathering users' know-how. The former created an augmented reality (AR) system that facilitates reading and writing in books without being invasive to users. Beyond a heuristic evaluation, there is no evidence of an AUX evaluation, only of EUX after testing the prototype. The latter designed a card game that encourages practice for people who are learning a foreign language. Even though their design included an AUX study, there is no contrast with EUX.
User expectations are also gathered when new environments are studied. For example, Kukka et al. [43] investigated the integration of Facebook content into three-dimensional applications. They created design guidelines based on the problems they identified in this kind of environment. Being a preliminary investigation, it did not compare AUX and EUX. Another example is Wurhofer et al. [44], who examined the UX of motorists. Through a study of cumulative UX, they compared expectations against the real experiences of drivers. Although this is a study of UX over time, it does not include GUIs.

2.2. From Theory to Practice

Theory is essential because it identifies and proposes elements that can be used to design and evaluate systems. Such is the case of Magin et al. [45], who described possible factors that cause a negative UX in apps. Through a prototype app, participants' AUX and EUX were measured. They concluded that a lack of usability causes negative emotions. Similarly, Sato et al. [46] reported a series of elements used in multi-agent systems that could be applied in Communities of Practice (CoP). Though the impact that these elements would have on UX can be deduced, the authors did not evaluate UX.
The works gathered here are a sample of how AUX and EUX studies can be applied, as well as of their worth. Although these studies present elements that stand out, particularly in AUX or EUX, none describes which dimension (or dimensions) are most critical for one period or the other. Table 1 summarizes and compares the works analyzed in this section.

3. Research Methodology

As a guide for our research, we used the Design Science Research Methodology (DSRM) process model by Peffers et al. [47]. We selected this methodology because it has been used in works within the same UX study spectrum. For example, Carey et al. [48] used it to develop and validate their interactive evaluation instrument, whose goal was to improve the process for mobile service innovation. Strohmann et al. [49] followed DSRM to create recommendations for the representation and interaction design of virtual in-vehicle assistants. Lastly, Kumar et al. [50] used it to design an app that provides remote students with learning support.
The DSRM iterative process consists of a research entry point and six stages [47]. The initiation point could be problem-centered, objective-centered, design-and-development-centered, or client/content-centered. The six stages of the methodology are:
  • Identify problem and motivate: define the problem, show importance.
  • Define objectives of a solution: what would a better artifact accomplish?
  • Design and development: solution artifact.
  • Demonstration: find a suitable context. Use artifact to solve the problem.
  • Evaluation: observe how effective and efficient the artifact is. Iterate back to design.
  • Communication: scholarly and professional publications.
In our case, we selected objective-centered initiation as the research entry point of DSRM, given that our aim is to help improve the design of user-tools. Regarding the first step, identify problem and motivate, we have already highlighted the role that UX plays in the design of user-tools within social networks. The second step, define objectives of a solution, concerns the construction of the assessment method, whose objective is to compare AUX and EUX to find elements of contrast. The third step, design and development, refers to the specification of the proposed assessment method. The fourth and fifth steps, demonstration and evaluation, are, respectively, the tests we prepared and the outcomes we achieved. The final step, communication, is fulfilled by this article. To refine the proposed AUX and EUX assessment method, succeeding iterations will restart at the design and development step.

4. Assessment Method

As described in Section 3, this section presents stages two and three of DSRM as applied to our proposal, i.e., define objectives of a solution, and design and development.

4.1. Define Objectives of a Solution

Social networks have problems in the two areas that comprise them: technological (the platform that supports them) and social (misinformation problems, lack of motive, and guidance) [51]. User-tools can help to solve the problems in these areas (see Figure 1), which are vital in a successful social network [52,53,54,55].
User-tools are groups of widgets that make up the GUI of a social network and allow users to perform tasks and communicate with each other, e.g., friend lists, newsfeeds, chats, and publishing menus. The granularity of user-tools is dictated by activities, i.e., the specific set of widgets that allows solving a specific activity conforms a user-tool.
As mentioned above, user-tools are the elements that allow interaction among users on a social network, so their design should be a primary concern. Our work therefore focuses on contrasting AUX and EUX, since we hope thereby to identify which dimensions of UX carry the most significant weight in each period. To this end, we introduce a six-step assessment method (see Figure 2), explained in the following subsection.

4.2. Design and Development

Here, we describe each step of our assessment method. To demonstrate how our proposal works, we take the basic case when one person uses a chat to make contact with another person:
  • Set Goals: This step is about the objectives that developers need to achieve, e.g., a chat must allow users to communicate effectively with each other.
  • Identify Tasks: It refers to the steps that the user has to follow to attain the aforementioned objectives, e.g., a user has to recognize the receiver of the message, display the direct-message option or window, compose the message, and finally send it.
  • Identify User-tools: This step involves determining which user-tools are available to accomplish the previously identified tasks, e.g., avatars, user profiles, lists, buttons, commands, and text boxes.
  • Assess AUX: It concerns an AUX evaluation over the prototyped artifact. This stage can be done with various tools, e.g., low-fidelity prototypes [56,57], or techniques such as The Wizard of Oz [58,59]. Nevertheless, the important thing is to stimulate the creativity of participants, so that we can obtain their idealizations and expectations. To know what aspects should be taken into account at this stage, we rely on the bases proposed by Yogasara et al. [31]:
    Intended Use: It is about the practical connotation of each user-tool, e.g., the functioning of a chat from the user’s point of view.
    Positive Anticipated Emotion: It refers to the agreeable feelings that the user expects to undergo as a result of interacting with a user-tool, e.g., satisfaction after sending a message, happiness when the answer comes, or general contentment at not receiving errors or any other type of alert.
    Desired Product Characteristics: As for this aspect, we accommodated the principles suggested by Morville [60] to our case of study. These principles specify that a user-tool must be worthy, functional, helpful, attractive, attainable, honest, and discoverable.
    User Characteristics: It concerns the mental and physical faculties of users, e.g., developing a generic chat does not imply the same endeavor for developing one intended for children or for seniors, since each group has specific needs.
    Experiential Knowledge: We need to know the background of users, because they rely on their experience to gather information, then compare and contrast, e.g., a user might ask whether the new chat is more suitable than the one provided by Facebook.
    Favorable Existing Characteristics: This aspect is about the properties that users have identified in the past as assertive in comparable tools, e.g., a user could think that they enjoy the chat from another platform thanks to the response time, availability, and ease of use.
  • Assess EUX: This step involves conducting an EUX assessment of the developed artifact. For this step, we need at least a mid-fidelity prototype [61,62], i.e., something that participants can already experience on a PC or a mobile device. However, to make the comparison of results achievable, it is vital to evaluate all the aspects taken into account in the AUX assessment, e.g., if the NASA-TLX questionnaire [63] was used in the AUX assessment, it is necessary to reapply it for EUX, taking care to measure similar parts or functionalities in both stages.
  • Compare Results: Once the AUX and EUX assessments have been carried out, the results have to be contrasted so that developers can make decisions about the design of user-tools, placing the idealizations of users side by side with reality and examining whether their propositions were implemented, e.g., comparing the NASA-TLX evaluations of the prototype and of the developed chat (see the sketch after this list).
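As a minimal sketch of this final step, and under illustrative assumptions only (the dimension names, the 1–7 scale, and all scores below are hypothetical), the fragment tabulates AUX and EUX assessments of the same user-tool side by side and computes per-dimension differences. We write it in R, the language we later use for our statistical analyses.

```r
# Minimal sketch of the "Compare Results" step (hypothetical 1-7 scores):
# place the AUX and EUX assessments of the same user-tool side by side
# and compute the per-dimension difference.
comparison <- data.frame(
  dimension = c("Pragmatic", "Identity", "Stimulation", "Attractiveness"),
  aux       = c(5.6, 4.8, 3.4, 5.2),  # expectation scores from the prototype
  eux       = c(3.2, 3.5, 3.5, 3.3)   # experience scores from the real tool
)
comparison$delta <- comparison$eux - comparison$aux  # negative: reality fell short
print(comparison)
```

A negative delta flags a dimension in which the developed tool fell short of what users imagined, which is precisely the kind of contrast the method is meant to surface.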

5. Demonstration and Evaluation

This section represents steps 4 and 5 of the DSRM methodology. It details the Materials (Section 5.1) and Method (Section 5.2) that we used in our tests.

5.1. Materials

To carry out our tests, we used basic materials. For the development of prototypes, we provided stationery such as sheets of paper, pens, pencils, and markers of various colors, whereas for the social network tests, we used a 15-inch laptop with Internet access and Firefox as the web browser. For each social network, we created a new user profile.
An essential factor that can compromise the validity and reliability of a study is improvisation. Choosing the wrong instrument invalidates the results, no matter how rigorous a study's methodology is [64,65]. That is why we weighed the various factors that could affect our tests. Since its original proposal in 2003 [38], AttrakDiff has been used in multiple tests to measure UX based on its pragmatic and hedonic factors [23]. Experts have used this tool in many studies; it has been tested for validity and reliability in different contexts [66,67,68,69,70,71,72,73], it has been translated into various languages [26], and it has been modified to suit the specific needs of particular experiments [74]. In addition, it is simple to answer and does not burden participants [75]. These results led us to choose AttrakDiff as a valid tool for studying UX.
The AttrakDiff full questionnaire is composed of 28 semantic pairs, i.e., pairs of words that make a strong contrast to each other (e.g., good-bad). Through these semantic pairs, the questionnaire measures the following aspects [76]:
  • Pragmatic Quality: It refers to the perceived quality of manipulation, i.e., effectiveness and efficiency of use.
  • Hedonic Quality—Identity: It indicates the user’s self-identification with the artifact.
  • Hedonic Quality—Stimulation: It means the human need for individual development, i.e., improvement of knowledge and skills.
  • Attractiveness: It reports the overall worth of an artifact based on perceived quality.
The hedonic and pragmatic dimensions are independent of each other and contribute equally to the UX evaluation [23]. We used a printed version, in English, of the questionnaire available on the official website of the tool (http://attrakdiff.de/index-en.html). All participants had the same materials at their disposal.
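To make the scoring concrete, the following sketch (in R, the language we used for our analyses) reduces one completed questionnaire to its four dimension scores. It assumes, purely for illustration, that the 28 items arrive grouped contiguously by dimension; the printed questionnaire actually interleaves the semantic pairs, so a real script would first map item positions to dimensions.

```r
# Reduce one completed AttrakDiff questionnaire (28 semantic pairs, each rated
# 1-7) to its four dimension scores. Contiguous grouping of items is an
# illustrative assumption; the printed questionnaire interleaves them.
score_attrakdiff <- function(ratings) {
  stopifnot(length(ratings) == 28, all(ratings >= 1), all(ratings <= 7))
  groups <- list(
    pragmatic_quality   = 1:7,
    hedonic_identity    = 8:14,
    hedonic_stimulation = 15:21,
    attractiveness      = 22:28
  )
  sapply(groups, function(idx) mean(ratings[idx]))  # mean of each 7-item block
}

set.seed(1)
answers <- sample(1:7, 28, replace = TRUE)  # one hypothetical participant
score_attrakdiff(answers)
```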

5.2. Method

Since we study the user-tools of social networks and have one independent variable with two factors, prototypes and social networks, our tests follow a basic design [77]. Moreover, since a single group of participants was exposed to both factors, our tests have a within-group design [77].
Our only dependent variable is UX, but since it is a latent variable and therefore cannot be measured directly [78], we use AttrakDiff, whose four dimensions help us measure the UX perceived by our participants (cf. Section 5.1).
Finally, our control variables are the environment in which we carried out the tests, since all the participants were exposed to the same conditions (e.g., materials, noise and light levels, desk, chair, and room), and the characteristics of our participants (cf. Section 5.2.1). Table 2 summarizes the variables of our tests. The method for conducting our tests has been widely used by various authors in similar contexts [79,80,81,82].
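To fix ideas, the design in Table 2 implies a long-format layout with one row per participant, condition, task, and AttrakDiff dimension; the fragment below illustrates that structure with hypothetical values, not our actual data.

```r
# Hypothetical fragment of the long-format layout implied by the design:
# one row per participant x condition (prototype vs. social network) x task
# x AttrakDiff dimension.
scores <- data.frame(
  participant = c(1, 1, 2, 2),
  condition   = c("prototype", "social_network", "prototype", "social_network"),
  task        = "message",
  dimension   = "pragmatic_quality",
  score       = c(5.7, 3.1, 5.4, 3.4)
)
print(scores)
```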

5.2.1. Participants

We used an opportunistic sample to recruit our participants, all of whom are members of our department. All participants gave their informed consent for inclusion before they participated in the study. In addition, the study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of our department.
Our testing group was composed of 20 participants (five of them female), with an average age of 28.15 years (minimum 20, maximum 38). We decided to limit the age range to 20–40 years in order to prevent our results from being biased by participants with particular needs (e.g., needing oversimplified language and instructions, or larger GUI fonts). Although we know this is a rather small sample, it is within the average for this kind of test [72].
Participants were selected for their familiarity with social networks. We think that people unfamiliar with such platforms would be constrained in performing the assigned tasks, which would invalidate our study. Moreover, we believe that better results are obtained when participants have experience with social networks.

5.2.2. Procedure

We carried out the AUX and EUX assessments of user-tools in a peaceful environment to limit outside sources of noise in our study. Each volunteer participated individually in a testing session conducted by a moderator in situ.
As the first step of our tests, participants filled in a questionnaire about their demographic information and previous contact with social networks. Afterwards, they performed the tasks and assessments.
Each session lasted around 40 min. We ran the tests over 20 days, i.e., one participant per day. All tests were conducted around 10 a.m. to encourage a similar state of mind in each participant.
The results of the aforementioned questionnaire showed that YouTube was the platform most used by our participants (100% usage), Facebook showed moderate use (47%), and Reddit was the least used (2%). Therefore, we decided to use these three platforms to assess EUX.
Recall that our goal is to improve the design of user-tools through the contrast of AUX and EUX. To achieve it, we devised the following three tasks, which represent common activities within social networks. Participants had to complete each one twice during the trial, once for AUX and once for EUX:
  • Message: Transmit a private message to another user.
  • Publication: Share multimedia.
  • Search: Look for somebody or for a certain theme.
To identify the user-tools required to accomplish each task, we considered different approaches, e.g., giving participants user-tools made up of paper cut-outs. Nevertheless, providing a predetermined set of user-tools would bias participants, i.e., we would obtain very similar results from each participant, possibly even identical prototypes, consequently limiting their feedback significantly. Thus, for the AUX assessment step, the best alternative was for each participant to create their own user-tools.
The next two steps of our method are the AUX and EUX assessments of user-tools:
  • Prototype construction: First, we asked participants to imagine taking the role of a Web designer whose aim is to create a novel GUI for a social network. Then, relying on their experience, they had to create three paper prototypes corresponding to the three tasks defined above. Participants had to draw the GUI elements needed to solve the tasks, just as if they were designing a website GUI. In our pilot tests, we obtained prototypes similar to the one depicted in Figure 3a, so we decided to design a canvas to make it easier for the participants to create their prototypes. Figure 3b–d show random samples of prototypes from our actual tests. When they concluded the construction and description of each prototype, participants had to assess it with the AttrakDiff questionnaire. This stage therefore allowed participants to explain their decisions about how they conceived the behavior of the GUI, the rationale behind their designs, and the user-tools required to accomplish each task. In this manner, we assessed the AUX of user-tools.
  • Tasks using online social networks: Once the three prototypes and their assessments were concluded, we asked participants to carry out the same three tasks, but this time using online social networks. Hence, on Reddit, participants sent a private message to another user; on Facebook, they shared multimedia; and on YouTube, they searched for a person or a certain topic. As in the previous stage, after finishing each task, they assessed it with the AttrakDiff questionnaire. In this manner, we assessed the EUX of user-tools.
In this way, and considering that each participant completed six evaluations, we ended up with 120 questionnaires: 60 corresponding to AUX and 60 to EUX.

6. Results

Seven semantic pairs correspond to each dimension of AttrakDiff. Ratings go from one to seven; the higher, the better. Table 3 contains the results from the 120 questionnaires: the means (μ) and standard deviations (σ) of each dimension for the three tasks.
Figure 4 is the graphical representation of the results. In all plots, the X-axis contains the four dimensions of AttrakDiff, and the Y-axis measures their averages. As the legend indicates, light bars are the AUX measurements, while dark bars represent the EUX results for our three tasks: Figure 4a contrasts the results for messages, Figure 4b for publications, and Figure 4c for searches.
In assessing these results, we also examined the reliability scores for the different dimensions. Table 4 shows the Cronbach's alpha values for the AttrakDiff dimensions in each task (α level = 0.05).
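For reference, Cronbach's alpha for one dimension can be computed directly from the item ratings with the standard formula α = k/(k − 1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the summed scale. The sketch below uses made-up ratings and only demonstrates the computation; it does not reproduce Table 4.

```r
# Cronbach's alpha for one AttrakDiff dimension: rows = participants,
# columns = the seven items of that dimension (made-up ratings).
cronbach_alpha <- function(items) {
  k         <- ncol(items)
  item_vars <- apply(items, 2, var)  # variance of each item
  total_var <- var(rowSums(items))   # variance of the summed scale
  (k / (k - 1)) * (1 - sum(item_vars) / total_var)
}

set.seed(2)
ratings <- matrix(sample(1:7, 20 * 7, replace = TRUE), nrow = 20)
cronbach_alpha(ratings)
```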
To contrast the results of the tests, and given that our design is within-groups with an independent variable of two factors, we performed paired-samples t-tests [83]. In this way, we determined whether there are significant differences between the means of each dimension of AttrakDiff in the AUX and EUX tests for each task (see Table 5). All statistical analyses were obtained using the R language.
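The paired comparison for a single dimension of a single task then reduces to one call in R; the vectors below are hypothetical stand-ins for the 20 per-participant dimension means, not our actual measurements.

```r
# Paired-samples t-test contrasting AUX vs. EUX for one dimension of one task.
# aux and eux hold the 20 per-participant dimension means (hypothetical data).
set.seed(3)
aux <- runif(20, min = 4.5, max = 6.5)
eux <- runif(20, min = 3.0, max = 4.0)
t.test(aux, eux, paired = TRUE)  # reports t, df = 19, and the p-value
```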

7. Discussion

Table 3 clearly shows that the paper prototypes were evaluated better than their counterpart on Reddit, and Figure 4a reveals the same. The prototypes for messages were the only ones where the AUX assessment exceeded that of EUX. This is likely because, for most participants, this was their first time using Reddit. It can also be attributed to Reddit offering a negative UX, since it was not so easy for participants to apply their previous experience to a new platform.
Even though participants were free to design their user-tools at their convenience, based on their experience, the real social networks gave them more satisfying experiences. Figure 4b,c show that all the dimensions were indeed superior for the social networks, although it is interesting that the differences, while present, are not large. An intriguing observation is that participants were quite incisive in criticizing their prototypes, i.e., they complained that they had not done a good job because they lacked the experience or knowledge necessary to design a GUI.
In general, we can say that the reliability of the data is good, since most of the dimensions obtained good results (>0.7), as can be seen in Table 4. The result that stands out the most is that of the Hedonic Quality—Identity dimension, which did not reach acceptable reliability in any of the tests. This could mean that AttrakDiff has a weakness in measuring the Identity dimension. Of course, we would need more evidence to verify or refute that assumption.
Table 5 shows which paired-samples t-test results allow us to reject our null hypothesis. The comparison between AUX and EUX for the messages task was significant in the Pragmatic Quality, Identity, and Attractiveness dimensions. For the publications task, only Identity was significant, while for the search task, Pragmatic Quality, Identity, and Attractiveness were significant. In these cases, we can reject the null hypothesis, because there is a significant difference between the UX perceived by the participants for the prototypes and for the social networks. It is interesting to note that the only dimension that was consistently non-significant across all tasks was Stimulation.
According to Aladwan et al. [39], when users of fitness applications were physically stressed by exercise and tried to use said apps to no avail, their stress increased, as their expectations were not met. This is consistent with our findings, since it is likely that, in an altered state of mind, users need to rely on pragmatic elements that are familiar to them. Something similar happens in the tests carried out by Kukka et al. [43], Margetis et al. [41], Wurhofer et al. [44], and Zhang et al. [42], whose participants focused on interactions that they considered safe when they found themselves in an unfamiliar environment.
Magin et al. [45] studied the possible sources of negative emotions in UX (e.g., anger, sadness, and confusion). They determined that a significant part came from instrumental elements, i.e., usability, which agrees with our findings, since users expect practical things, such as a button being active under certain circumstances or a selected item being removable.
The work by Moser et al. [40] is interesting because the expectations they measured came from children. It seems that their imagination was oriented more towards hedonic aspects, mainly self-identification, since they cared that the games reflected their personality and decisions. This is striking because it goes against our findings: perhaps the AUX perceived by children gives more weight to hedonic factors, which could indicate a future path of investigation.

7.1. Implications

An exciting result was that the Stimulation means were not significantly different, which could indicate that participants thought of basic user-tools when making their prototypes and found similarly essential elements in the social networks. We know that drawing more reliable conclusions from this will require more research. However, we could speculate that the experience and imagination of the participants are limited to the essential elements commonly found in all GUIs, i.e., they prefer to play it safe. Users look for security rather than new experiences when testing new GUIs, so Stimulation could become a more decisive factor once they are already familiar with a GUI.
Such behavior could also indicate that user expectations are grounded more in pragmatic aspects than in hedonic ones. This could have significant implications. For example, it would imply that, when creating new GUIs, designers have to pay more attention to including basic user-tools that allow users to complete tasks efficiently, since user expectations would be focused mainly on practical aspects, e.g., users imagine a button and its action, but not how it looks.

7.2. Limitations

The results presented in this work could have been affected by the sampling of our participants. Given that each evaluation took around 40 min, obtaining a random sample would have represented a significant challenge. Our participants did not receive any kind of incentive.
Similarly, the limitations of the within-groups design make it difficult to control the effects of learning and fatigue. We tried to alleviate this by offering a comfortable and relaxed environment for the participants and by reiterating that they were helping us to evaluate the systems, not being evaluated themselves [77].

8. Conclusions and Future Work

UX evaluation is always valuable, regardless of the nature or purpose of the evaluated artifact. In this paper, we proposed a study that compares the AUX and EUX of user-tools through daily tasks on social networks. Our tests revealed that our participants built their expectations with pragmatic criteria, i.e., hedonic and attractiveness aspects were secondary when they were building their prototypes.
Our research contributes to further increasing the understanding of UX, how perceived experiences are measured, and which factors are most relevant at a certain point in an evaluation or development. As we already explained in the discussion (Section 7), our results quantitatively confirmed that AUX seems to be mainly composed of pragmatic aspects. The development of this idea could lead to improving existing evaluation methods and the creation of new ones.
As future work, we intend to replicate our tests, but this time with children. As the work by Moser et al. [40] suggests, children may build prototypes with hedonic aspects in mind, i.e., we would expect results opposite to what we found. We also consider it essential to use other questionnaires besides AttrakDiff, which would help validate our conclusions quantitatively. While in this work we focused on social networks, our assessment method can be used in multiple areas. To prove this, we will use it to assess a chatbot that supports the teaching-learning process in middle schools.

Author Contributions

Conceptualization, L.M.S.-A. and S.M.; methodology, L.M.S.-A., S.M., and J.F.U.-Y.; formal analysis, L.M.S.-A.; supervision, S.M. and J.F.U.-Y.; validation, S.M. and J.F.U.-Y.; writing—original draft preparation, L.M.S.-A.; writing—review and editing, S.M. and J.F.U.-Y.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the “Fondo SEP-CINVESTAV de Apoyo a la Investigación” (Call 2018), project number 120, titled “Desarrollo de un chatbot inteligente para asistir el proceso de enseñanza/aprendizaje en temas educativos y tecnológicos”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Carta, S.; Podda, A.S.; Recupero, D.R.; Saia, R.; Usai, G. Popularity Prediction of Instagram Posts. Information 2020, 11, 453. [Google Scholar] [CrossRef]
  2. Wiederhold, B.K. Social Media Use During Social Distancing. Cyberpsychol. Behav. Soc. Netw. 2020, 23, 275–276. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Király, O.; Potenza, M.N.; Stein, D.J.; King, D.L.; Hodgins, D.C.; Saunders, J.B.; Griffiths, M.D.; Gjoneska, B.; Billieux, J.; Brand, M.; et al. Preventing problematic internet use during the COVID-19 pandemic: Consensus guidance. Compr. Psychiatry 2020, 100, 152–180. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, L.S.; Chang, P.C. Identifying crucial website quality factors of virtual communities. In Proceedings of the International MultiConference of Engineers and Computer Scientists, Hong Kong, China, 17–19 March 2010; Volume 1, pp. 17–19. [Google Scholar]
  5. El Morr, C.; Eftychiou, L. Evaluation Frameworks for Health Virtual Communities. In The Digitization of Healthcare: New Challenges and Opportunities; Menvielle, L., Audrain-Pontevia, A.F., Menvielle, W., Eds.; Palgrave Macmillan UK: London, UK, 2017; pp. 99–118. [Google Scholar]
  6. Lee, F.S.; Vogel, D.; Limayem, M. Virtual community informatics: A review and research agenda. JITTA J. Inf. Technol. Theory Appl. 2003, 5, 47. [Google Scholar]
  7. Preece, J.; Abras, C.; Maloney-Krichmar, D. Designing and Evaluating Online Communities: Research Speaks to Emerging Practice. Int. J. Web Based Commun. 2004, 1, 2–18. [Google Scholar] [CrossRef]
  8. Wang, Y.; Li, Y. Proactive Engagement of Opinion Leaders and Organization Advocates on Social Networking Sites. Int. J. Strateg. Commun. 2016, 10, 115–132. [Google Scholar] [CrossRef]
  9. Boyd, D.M.; Ellison, N.B. Social Network Sites: Definition, History, and Scholarship. J. Comput. Mediat. Commun. 2007, 13, 210–230. [Google Scholar] [CrossRef] [Green Version]
  10. Chen, V.H.H.; Duh, H.B.L. Investigating User Experience of Online Communities: The Influence of Community Type. In Proceedings of the 2009 International Conference on Computational Science and Engineering, Vancouver, BC, Canada, 29–31 August 2009; Volume 4, pp. 509–514. [Google Scholar]
  11. Preece, J. Online Communities: Designing Usability and Supporting Sociability; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2000. [Google Scholar]
  12. Jacobsen, L.F.; Tudoran, A.A.; Lähteenmäki, L. Consumers’ motivation to interact in virtual food communities—The importance of self-presentation and learning. Food Qual. Prefer. 2017, 62, 8–16. [Google Scholar] [CrossRef]
  13. Nov, O.; Ye, C. Why Do People Tag?: Motivations for Photo Tagging. Commun. ACM 2010, 53, 128–131. [Google Scholar] [CrossRef]
  14. Tella, A.; Babatunde, B.J. Determinants of Continuance Intention of Facebook Usage Among Library and Information Science Female Undergraduates in Selected Nigerian Universities. Int. J. E-Adopt. (IJEA) 2017, 9, 59–76. [Google Scholar] [CrossRef] [Green Version]
  15. Zhou, T. Understanding online community user participation: A social influence perspective. Internet Res. 2011, 21, 67–81. [Google Scholar] [CrossRef] [Green Version]
  16. Lamprecht, J.; Siemon, D.; Robra-Bissantz, S. Cooperation Isn’t Just About Doing the Same Thing—Using Personality for a Cooperation-Recommender-System in Online Social Networks; Collaboration and Technology; Yuizono, T., Ogata, H., Hoppe, U., Vassileva, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 131–138. [Google Scholar]
  17. Fragidis, G.; Ignatiadis, I.; Wills, C. Value Co-creation and Customer-Driven Innovation in Social Networking Systems; Exploring Services Science; Morin, J.H., Ralyté, J., Snene, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 254–258. [Google Scholar]
  18. Mai, H.T.X.; Olsen, S.O. Consumer participation in virtual communities: The role of personal values and personality. J. Mark. Commun. 2015, 21, 144–164. [Google Scholar] [CrossRef]
  19. McCormick, T.J. A success-Oriented Framework to Enable Co-Created E-Services; The George Washington University: Washington, DC, USA, 2010. [Google Scholar]
  20. Ling, K.; Beenen, G.; Ludford, P.; Wang, X.; Chang, K.; Li, X.; Cosley, D.; Frankowski, D.; Terveen, L.; Rashid, A.M.; et al. Using Social Psychology to Motivate Contributions to Online Communities. J. Comput. Mediat. Commun. 2005, 10. [Google Scholar] [CrossRef]
  21. Grudin, J. Why CSCW Applications Fail: Problems in the Design and Evaluation of Organizational Interfaces. In Proceedings of the 1988 ACM Conference on Computer-supported Cooperative Work, CSCW ’88, Portland, OR, USA, 26–28 September 1988; pp. 85–93. [Google Scholar] [CrossRef]
  22. Talin. Why Google+ Failed. 2019. Available online: https://onezero.medium.com/why-google-failed-4b9db05b973b (accessed on 14 October 2019).
  23. Hassenzahl, M. The hedonic/pragmatic model of user experience. Towards UX Manif. 2007, 10, 10–14. [Google Scholar]
  24. Hassenzahl, M.; Platz, A.; Burmester, M.; Lehner, K. Hedonic and Ergonomic Quality Aspects Determine a Software’s Appeal. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’00, The Hague, The Netherlands, 1–6 April 2000; pp. 201–208. [Google Scholar]
  25. ISO. Ergonomics of Human-System Interaction-Part 210: Human-Centred Design for Interactive Systems; Technical Report; International Organization for Standardization: Geneva, Switzerland, 2010. [Google Scholar]
  26. Lallemand, C.; Gronier, G.; Koenig, V. User experience: A concept without consensus? Exploring practitioners’ perspectives through an international survey. Comput. Hum. Behav. 2015, 43, 35–48. [Google Scholar] [CrossRef]
  27. Roto, V.; Law, E.L.C.; Vermeeren, A.; Hoonhout, J. 10373 Abstracts Collection—Demarcating User eXperience. In Proceedings of the Dagstuhl Seminar on Demarcating User Experience; Hoonhout, J., Law, E.L.C., Roto, V., Vermeeren, A., Eds.; Schloss Dagstuhl—Leibniz-Zentrum fuer Informatik, Germany: Dagstuhl, Germany, 2011; Number 10373 in Dagstuhl Seminar Proceedings. [Google Scholar]
  28. Karapanos, E.; Zimmerman, J.; Forlizzi, J.; Martens, J.B. Measuring the dynamics of remembered experience over time. Modelling user experience—An agenda for research and practice. Interact. Comput. 2010, 22, 328–335. [Google Scholar] [CrossRef]
  29. Kujala, S.; Roto, V.; Väänänen-Vainio-Mattila, K.; Karapanos, E.; Sinnelä, A. UX Curve: A method for evaluating long-term user experience. Feminism and HCI: New Perspectives. Interact. Comput. 2011, 23, 473–483. [Google Scholar] [CrossRef]
  30. Winckler, M.; Bernhaupt, R.; Bach, C. Identification of UX dimensions for incident reporting systems with mobile applications in urban contexts: A longitudinal study. Cogn. Technol. Work 2016, 18, 673–694. [Google Scholar] [CrossRef] [Green Version]
  31. Yogasara, T.; Popovic, V.; Kraal, B.J.; Chamorro-Koc, M. General characteristics of anticipated user experience (AUX) with interactive products. In Proceedings of the IASDR2011: The 4th World Conference on Design Research: Diversity and Unity, Delft, The Netherlands, 31 October–4 November 2011; pp. 1–11. [Google Scholar]
  32. Stone, D.; Jarrett, C.; Woodroffe, M.; Minocha, S. User Interface Design and Evaluation; Morgan Kaufmann Series in Interactive Technologies; Morgan Kaufman: San Francisco, CA, USA, 2005. [Google Scholar]
  33. Bargas-Avila, J.A.; Hornbæk, K. Old Wine in New Bottles or Novel Challenges: A Critical Analysis of Empirical Studies of User Experience. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; pp. 2689–2698. [Google Scholar]
  34. Karapanos, E.; Zimmerman, J.; Forlizzi, J.; Martens, J.B. User Experience over Time: An Initial Framework. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’09, Boston, MA, USA, 4–9 April 2009; pp. 729–738. [Google Scholar]
  35. Vermeeren, A.P.O.S.; Law, E.L.C.; Roto, V.; Obrist, M.; Hoonhout, J.; Väänänen-Vainio-Mattila, K. User Experience Evaluation Methods: Current State and Development Needs. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, NordiCHI ’10, Reykjavik, Iceland, 16–20 October 2010; pp. 521–530. [Google Scholar]
  36. Sánchez-Adame, L.M.; Mendoza, S.; González-Beltrán, B.A.; Rodríguez, J.; Meneses Viveros, A. AUX and UX Evaluation of User Tools in Social Networks. In Proceedings of the 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI), Santiago, Chile, 3–6 December 2018; pp. 104–111. [Google Scholar] [CrossRef]
  37. Sánchez-Adame, L.M.; Mendoza, S.; González-Beltrán, B.A.; Rodríguez, J.; Viveros, A.M. UX Evaluation Over Time: User Tools in Social Networks. In Proceedings of the 2018 15th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 5–7 September 2018; pp. 1–6. [Google Scholar] [CrossRef]
  38. Hassenzahl, M.; Burmester, M.; Koller, F. AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. In Mensch & Computer; Springer: Berlin/Heidelberg, Germany, 2003; pp. 187–196. [Google Scholar]
  39. Aladwan, A.; Kelly, R.M.; Baker, S.; Velloso, E. A Tale of Two Perspectives: A Conceptual Framework of User Expectations and Experiences of Instructional Fitness Apps. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, Glasgow, Scotland, UK, 4–9 May 2019; pp. 394:1–394:15. [Google Scholar] [CrossRef] [Green Version]
  40. Moser, C.; Chisik, Y.; Tscheligi, M. Around the World in 8 Workshops: Investigating Anticipated Player Experiences of Children. In Proceedings of the First ACM SIGCHI Annual Symposium on Computer-human Interaction in Play, CHI PLAY ’14, Toronto, ON, Canada, 18–22 October 2014; pp. 207–216. [Google Scholar]
  41. Margetis, G.; Zabulis, X.; Koutlemanis, P.; Antona, M.; Stephanidis, C. Augmented interaction with physical books in an Ambient Intelligence learning environment. Multimed. Tools Appl. 2013, 67, 473–495. [Google Scholar] [CrossRef]
  42. Zhang, E.; Culbertson, G.; Shen, S.; Jung, M. Utilizing Narrative Grounding to Design Storytelling Games for Creative Foreign Language Production. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Montreal, QC, Canada, 21–26 April 2018; pp. 197:1–197:11. [Google Scholar]
  43. Kukka, H.; Pakanen, M.; Badri, M.; Ojala, T. Immersive Street-level Social Media in the 3D Virtual City: Anticipated User Experience and Conceptual Development. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW ’17, Portland, OR, USA, 25 February–1 March 2017; pp. 2422–2435. [Google Scholar]
  44. Wurhofer, D.; Krischkowsky, A.; Obrist, M.; Karapanos, E.; Niforatos, E.; Tscheligi, M. Everyday Commuting: Prediction, Actual Experience and Recall of Anger and Frustration in the Car. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI ’15, Nottingham, UK, 1–3 September 2015; pp. 233–240. [Google Scholar]
  45. Magin, D.P.; Maier, A.; Hess, S. Measuring Negative User Experience. In Design, User Experience, and Usability: Users and Interactions; Marcus, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2015; pp. 95–106. [Google Scholar]
  46. Sato, G.Y.; de Azevedo, H.J.S.; Barthès, J.P.A. Agent and multi-agent applications to support distributed communities of practice: A short review. Auton. Agents Multi-Agent Syst. 2012, 25, 87–129. [Google Scholar] [CrossRef]
  47. Peffers, K.; Tuunanen, T.; Rothenberger, M.; Chatterjee, S. A Design Science Research Methodology for Information Systems Research. J. Manag. Inf. Syst. 2007, 24, 45–77. [Google Scholar] [CrossRef]
  48. Carey, K.; Helfert, M. An Interactive Assessment Instrument to Improve the Process for Mobile Service Application Innovation. In HCI in Business; Fui-Hoon Nah, F., Tan, C.H., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 244–255. [Google Scholar]
  49. Strohmann, T.; Höper, L.; Robra-Bissantz, S. Design Guidelines for Creating a Convincing User Experience with Virtual In-vehicle Assistants. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, HI, USA, 8–11 January 2019; pp. 4813–4822. [Google Scholar]
  50. Kumar, B.A.; Chand, S. Mobile App to Support Teaching in Distance Mode at Fiji National University: Design and Evaluation. Int. J. Virtual Pers. Learn. Environ. (IJVPLE) 2018, 8, 25–37. [Google Scholar] [CrossRef] [Green Version]
  51. Koh, J.; Kim, Y.G.; Butler, B.; Bock, G.W. Encouraging Participation in Virtual Communities. Commun. ACM 2007, 50, 68–73. [Google Scholar] [CrossRef]
  52. Apostolou, B.; Bélanger, F.; Schaupp, L.C. Online communities: Satisfaction and continued use intention. Inf. Res. 2017, 22, 774. [Google Scholar]
  53. Hummel, J.; Lechner, U. Social profiles of virtual communities. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 10 January 2002; pp. 2245–2254. [Google Scholar]
  54. Iriberri, A.; Leroy, G. A Life-cycle Perspective on Online Community Success. ACM Comput. Surv. 2009, 41, 11:1–11:29. [Google Scholar] [CrossRef]
  55. Preece, J. Sociability and usability in online communities: Determining and measuring success. Behav. Inf. Technol. 2001, 20, 347–356. [Google Scholar] [CrossRef]
  56. Virzi, R.A. What can you Learn from a Low-Fidelity Prototype? Proc. Hum. Factors Soc. Annu. Meet. 1989, 33, 224–228. [Google Scholar] [CrossRef]
  57. Walker, M.; Takayama, L.; Landay, J.A. High-Fidelity or Low-Fidelity, Paper or Computer? Choosing Attributes when Testing Web Prototypes. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2002, 46, 661–665. [Google Scholar] [CrossRef]
  58. Maulsby, D.; Greenberg, S.; Mander, R. Prototyping an Intelligent Agent Through Wizard of Oz. In Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems, CHI’93, Amsterdam, The Netherlands, 24–29 April 1993; pp. 277–284. [Google Scholar] [CrossRef] [Green Version]
  59. Davis, R.C.; Saponas, T.S.; Shilman, M.; Landay, J.A. SketchWizard: Wizard of Oz Prototyping of Pen-based User Interfaces. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, UIST ’07, Newport, RI, USA, 7–10 October 2007; pp. 119–128. [Google Scholar] [CrossRef]
  60. Morville, P. Experience Design Unplugged. In ACM SIGGRAPH 2005 Web Program; ACM: New York, NY, USA, 2005. [Google Scholar]
  61. Coyette, A.; Kieffer, S.; Vanderdonckt, J. Multi-fidelity Prototyping of User Interfaces. Human-Computer Interaction—INTERACT 2007; Baranauskas, C., Palanque, P., Abascal, J., Barbosa, S.D.J., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 150–164. [Google Scholar]
  62. D-LABS. Medium-Fidelity-Prototyping. 2019. Available online: https://www.d-labs.com/en/services-and-methods/medium-fidelity-prototyping.html (accessed on 14 October 2019).
  63. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
  64. Hernández-Sampieri, R.; Torres, C.P.M. Metodología de la Investigación; McGraw-Hill Interamericana: Ciudad de México, México, 2018; Volume 4. [Google Scholar]
  65. Kothari, C.R. Research Methodology: Methods and Techniques; New Age International: New Delhi, India, 2004. [Google Scholar]
  66. Isleifsdottir, J.; Larusdottir, M. Measuring the User Experience of a Task Oriented Software. In Proceedings of the international Workshop on Meaningful Measures: Valid Useful User Experience Measurement, Reykjavik, Iceland, 18 June 2008; Volume 8, pp. 97–101. [Google Scholar]
  67. Takahashi, L.; Nebe, K. Observed Differences between Lab and Online Tests Using the AttrakDiff Semantic Differential Scale. J. Usability Stud. 2019, 14, 65–75. [Google Scholar]
  68. Hassenzahl, M.; Monk, A. The Inference of Perceived Usability From Beauty. Hum. Comput. Interact. 2010, 25, 235–260. [Google Scholar] [CrossRef]
  69. Braun, P. Attrakdiff, I feel so I am ? Measuring affects tested by digital sensors. In Digital Klee Esquisses Pédagogiques. Enquête sur le futur de la forme. Présent Composé (Rennes); Les Presses du Réel (Dijon): Dijon, France, 2020; pp. 140–154. [Google Scholar]
  70. Ribeiro, I.M.; Providência, B. Quality Perception with Attrakdiff Method: A Study in Higher Education. In Advances in Design and Digital Communication; Martins, N., Brandão, D., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 222–233. [Google Scholar]
  71. Klaassen, R.; op den Akker, R.; Lavrysen, T.; van Wissen, S. User preferences for multi-device context-aware feedback in a digital coaching system. J. Multimodal User Interfaces 2013, 7, 247–267. [Google Scholar] [CrossRef] [Green Version]
  72. Díaz-Oreiro, I.; López, G.; Quesada, L.; Guerrero, L.A. Standardized Questionnaires for User Experience Evaluation: A Systematic Literature Review. Proceedings 2019, 31, 1014. [Google Scholar] [CrossRef] [Green Version]
  73. Lallemand, C.; Koenig, V. Measuring the Contextual Dimension of User Experience: Development of the User Experience Context Scale (UXCS). In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, NordiCHI ’20, Tallinn, Estonia, 25–29 October 2020. [Google Scholar] [CrossRef]
  74. Isomursu, P.; Virkkula, M.; Niemelä, K.; Juntunen, J.; Kumpuoja, J. Modified AttrakDiff in UX Evaluation of a Mobile Prototype. In Proceedings of the International Conference on Advanced Visual Interfaces, AVI ’20, Salerno, Italy, 28 September–2 October 2020. [Google Scholar] [CrossRef]
  75. Walsh, T.; Varsaluoma, J.; Kujala, S.; Nurkka, P.; Petrie, H.; Power, C. Axe UX: Exploring Long-term User Experience with iScale and AttrakDiff. In Proceedings of the 18th International Academic MindTrek Conference: Media Business, Management, Content & Services, AcademicMindTrek ’14, Tampere, Finland, 4–6 November 2014; pp. 32–39. [Google Scholar]
  76. Hu, J.; Le, D.; Funk, M.; Wang, F.; Rauterberg, M. Attractiveness of an Interactive Public Art Installation; Distributed, Ambient, and Pervasive Interactions; Streitz, N., Stephanidis, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 430–438. [Google Scholar]
  77. Lazar, J.; Feng, J.H.; Hochheiser, H. Chapter 3—Experimental design. In Research Methods in Human Computer Interaction, 2nd ed.; Lazar, J., Feng, J.H., Hochheiser, H., Eds.; Morgan Kaufmann: Boston, MA, USA, 2017; pp. 45–69. [Google Scholar] [CrossRef]
  78. Sauro, J. Measuring the Quality of the Website User Experience; University of Denver: Denver, CO, USA, 2016. [Google Scholar]
  79. Bevan, N.; Liu, Z.; Barnes, C.; Hassenzahl, M.; Wei, W. Comparison of Kansei Engineering and AttrakDiff to Evaluate Kitchen Products. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’16, San Jose, CA, USA, 7–12 May 2016; pp. 2999–3005. [Google Scholar] [CrossRef]
  80. Merz, B.; Tuch, A.N.; Opwis, K. Perceived User Experience of Animated Transitions in Mobile User Interfaces. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’16, San Jose, CA, USA, 7–12 May 2016; pp. 3152–3158. [Google Scholar] [CrossRef]
  81. Aula, A.; Khan, R.M.; Guan, Z. How Does Search Behavior Change as Search Becomes More Difficult? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, Atlanta, GA, USA, 10–15 April 2010; pp. 35–44. [Google Scholar] [CrossRef]
  82. Chin, J.; Fu, W.T. Interactive Effects of Age and Interface Differences on Search Strategies and Performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, Atlanta, GA, USA, 10–15 April 2010; pp. 403–412. [Google Scholar] [CrossRef] [Green Version]
  83. Lazar, J.; Feng, J.H.; Hochheiser, H. Chapter 4—Statistical analysis. In Research Methods in Human Computer Interaction, 2nd ed.; Lazar, J., Feng, J.H., Hochheiser, H., Eds.; Morgan Kaufmann: Boston, MA, USA, 2017; pp. 71–104. [Google Scholar] [CrossRef]
Figure 1. Design elements and influence of user-tools.
Figure 2. Steps of the Anticipated User eXperience (AUX) and Episodic UX (EUX) assessment methods.
Figure 3. Samples of prototypes from the pilot tests (a) and from the actual tests (b–d).
Figure 4. AttrakDiff results for messages (a), publications (b), and searches (c).
Table 1. Synthesis of related works.

| Work | Highlights | AUX and EUX Evaluation | Limitations | Context |
|------|------------|------------------------|-------------|---------|
| [39] | Framework for user anticipations | Online reviews | Ambiguous user reviews | Mobile apps |
| [40] | Envisioned gameplay ideas | Workshops | Lack of generalizations | Games |
| [41] | AR system for books | Questionnaires | Absence of end-user evaluations | Augmented reality |
| [42] | Card game to practice a foreign language | Paper prototypes | No results over time | Language learning |
| [43] | Social network user-tools in 3D applications | Paper prototypes | No contrast between AUX and EUX | 3D applications |
| [44] | Drivers’ UX over time | Interviews | Evaluation involves many resources | Driving UX |
| [45] | Aspects that cause deficient UX | Questionnaires | Preliminary study | Mobile apps |
| [46] | Elements of multi-agent systems for CoP | None | UX analyses are not presented | CoP |
Table 2. Variables of our study.

| Independent Variable | Dependent Variable | Control Variables |
|----------------------|--------------------|-------------------|
| User-tools (prototypes and social networks) | UX (Pragmatic Quality, Hedonic Quality—Identity, Hedonic Quality—Stimulation, Attractiveness) | Ambient and participants |
Table 3. AttrakDiff dimensions results.

| Task | Assessment | Statistic | Pragmatic Quality | Identity | Stimulation | Attractiveness |
|------|------------|-----------|-------------------|----------|-------------|----------------|
| Message | AUX | μ | 5.65 | 4.75 | 3.42 | 5.17 |
| Message | AUX | σ | 0.29 | 0.95 | 0.53 | 0.29 |
| Message | EUX | μ | 3.22 | 3.50 | 3.52 | 3.30 |
| Message | EUX | σ | 0.49 | 0.44 | 0.51 | 0.31 |
| Publication | AUX | μ | 5.57 | 4.77 | 3.70 | 5.10 |
| Publication | AUX | σ | 0.44 | 0.69 | 0.45 | 0.48 |
| Publication | EUX | μ | 5.97 | 5.35 | 3.99 | 5.74 |
| Publication | EUX | σ | 0.23 | 1.08 | 1.19 | 0.29 |
| Search | AUX | μ | 5.51 | 4.67 | 2.98 | 4.96 |
| Search | AUX | σ | 0.54 | 1.02 | 0.57 | 0.52 |
| Search | EUX | μ | 6.23 | 5.42 | 3.75 | 6.02 |
| Search | EUX | σ | 0.45 | 1.05 | 1.31 | 0.26 |
Table 4. AttrakDiff dimensions reliability analysis (Cronbach’s alpha values).

| Dimension | Message AUX (0.82) | Message EUX (0.87) | Publication AUX (0.83) | Publication EUX (0.83) | Search AUX (0.78) | Search EUX (0.83) |
|-----------|--------------------|--------------------|------------------------|------------------------|-------------------|-------------------|
| Pragmatic Quality | 0.79 | 0.87 | 0.80 | 0.62 | 0.83 | 0.70 |
| Identity | 0.56 | 0.65 | 0.53 | 0.67 | 0.62 | 0.57 |
| Stimulation | 0.92 | 0.83 | 0.94 | 0.76 | 0.86 | 0.77 |
| Attractiveness | 0.81 | 0.93 | 0.80 | 0.86 | 0.76 | 0.93 |
Table 5. p values for paired-samples t-tests (comparisons between AUX and EUX in each dimension).

| Dimension | Message | Publication | Search |
|-----------|---------|-------------|--------|
| Pragmatic Quality | 2.79 × 10⁻⁶ * | 0.18 | 0.01 * |
| Identity | 3.08 × 10⁻⁵ * | 0.05 * | 0.003 * |
| Stimulation | 0.82 | 0.53 | 0.08 |
| Attractiveness | 6.53 × 10⁻⁶ * | 0.06 | 0.0005 * |

* p ≤ 0.05 (significant).