Article

A Comparative Study of the User Interaction Behavior and Experience in a Home-Oriented Multi-User Interface (MUI) During Family Collaborative Cooking

Graduate School of Science and Engineering, Chiba University, Chiba 263-8522, Japan
* Author to whom correspondence should be addressed.
Future Internet 2024, 16(12), 478; https://doi.org/10.3390/fi16120478
Submission received: 16 October 2024 / Revised: 17 December 2024 / Accepted: 18 December 2024 / Published: 20 December 2024
(This article belongs to the Special Issue Advances and Perspectives in Human-Computer Interaction—2nd Edition)

Abstract

This study sought to determine whether specialized multi-user interfaces are necessary for scenarios involving multiple users, and to guide the design of multi-user human–computer interaction, by identifying differences in interaction behavior and user experience between a conventional one-user interface (OUI) recipe and a multi-user interface (MUI) recipe in the context of family collaborative cooking. To this end, the study employed a before-and-after comparison approach in which families cooked with both interfaces. Afterward, the adult users submitted self-assessments of their experiences with the OUI and MUI. The evaluation tools comprised a user experience questionnaire with a seven-point Likert scale covering visual confirmation, content, operation, and satisfaction. Post-experiment interviews were also conducted with family members. The MUI proved more effective for visual confirmation, with the “layout” assuming a role analogous to that of “text” in facilitating visual confirmation. Moreover, operating the MUI was found to be somewhat enjoyable. Nevertheless, no significant disparities were observed between the OUI group and the MUI group concerning content readability and most operational aspects. Furthermore, the users rated their satisfaction with the MUI as superior to that with the OUI, citing its fun, convenience, and clear appearance. Our findings demonstrate that designing a dedicated MUI is both valuable and essential.


1. Introduction

The concepts of multi-user interfaces (where multiple users concurrently employ the same user interface) and multi-user interactions have existed and evolved over a long period. Their conceptual origins can be traced back to operating systems such as Unix, which permit multiple users to access a computer simultaneously. The concept was later expanded into “single-display groupware”: applications that co-located users operate collaboratively on a computer with a single shared display and multiple input devices [1].
Over time, the concept was extended to interactive touch devices and achieved widespread adoption. Previous studies predominantly investigated the technical capabilities of multi-user devices and interfaces, covering aspects such as performance (smoothness and animation effects) on larger screens [2], interaction techniques [3], and interaction modes [4]. This paved the way for MUIs to move from the laboratory to real-world applications, and they began to appear in commercial settings such as large-scale advertising [5] and exhibition information islands for tourists [6].
Equally important is the strong performance of multi-user interfaces in multi-person collaboration and in supporting group activities: interactive blackboards/whiteboards enhance the atmosphere of teaching and learning in education [2,7], new channels of communication and information transfer connect staff and tourists in the tourism industry [6,8], and tabletop displays facilitate medical discussions between doctors and patients with hearing impairment [9].
Overall, previous research on multi-user interfaces has concentrated on large desktop screens and on facilitating interactions between users who are unfamiliar with each other or not physically close to each other.
In recent years, however, researchers have increasingly focused on target groups such as families, couples, and different generations. With the growing popularity of handheld touchscreen devices (such as iPads, Microsoft Surfaces, and smart tablets), the need for multi-user interaction is becoming more common in intimate family environments. Connell et al. revealed that parents are willing to share controllable media technologies with their young children [10]. Sheehan et al. showed that parents and children engage in high-quality interactions during joint participation in coding game applications (apps) [11]. Beyond parents and children playing and learning together, multi-user interaction can also help older family members learn: Xu and Liu found that interface design engages family members and plays an important role in increasing motivation and effectiveness in the intergenerational learning process [12].
In addition, Zeng et al. studied the collective management of smart home system interfaces by multiple family members from the perspective of security and privacy [13], and Paay et al. verified that family members sharing recipes and cooking together also promotes physical interaction [14]. However, most of these studies still rely on traditional single-user interfaces.
This study therefore explores the need to create multi-user interfaces on medium-sized screens in a home multi-user environment. To investigate this, family members participated as multi-user participants in a “collaborative cooking” experiment, using an iPad, a popular medium-sized screen in the home, as the device case and a recipe application as the multi-user test interface in the home context.
Recipe interfaces were selected as this study’s focus because the prevailing recipe interfaces for domestic settings are tailored to single users, whereas Chinese families traditionally cook together and share meals. Because family cooking, such as making dumplings, is inherently a multi-threaded collaborative task, we created multi-user recipe interfaces for collaborative cooking based on traditional recipe apps, shifting from a task allocation mode to a user interface allocation mode.
This study focuses on families consisting of two parents and one child. This structure is relatively simple yet fundamental, satisfies the multi-user requirement, and features a natural age gap between parents and children.
By determining the differences in the interaction behavior and user experience when using the conventional one-user interface (OUI) and the multi-user interface (MUI) in the context of family activities, we explore the advantages and disadvantages of MUIs in terms of user experience. Our aim is to highlight the need for and the feasibility of crafting MUIs that are customized for multi-user family scenarios and provide insights for the future design of MUIs.
The paper is structured as follows: Section 2 describes in detail the experimental setup and the design of the questionnaire, including all the parameters and methods used to extract the results. Section 3 presents the evaluation results with necessary explanations. Finally, Section 4 summarizes the conclusions we draw from this study and its limitations.

2. Method

2.1. Participants

This study adopted a before-and-after comparison approach.
The study included 33 participants (22 adults and 11 children) from 11 families in Eastern China, with each family unit comprising a child and a pair of parents (the participants’ age information is shown in Table 1). All the participants took part in both the pre-test (utilizing the OUI and MUI) and post-test (user experience surveys) phases.
Written consent was obtained from the adult participants, and the parents agreed to allow their children to participate in the experiment.

2.2. Environment, Devices, and Interfaces

The interface content was based on a traditional dumpling recipe (see Figure 1), and the interface framework was structured according to the standard layout of existing recipe app interfaces, excluding commercial modules such as advertisements. This framework included the following components, as described in Figure 2: (1) a fixed area for textual descriptions of ingredients and tools; (2) a sliding area containing sequential numbers, images (GIFs), and textual descriptions of each step, which represented the operational steps; and (3) several functional buttons, such as “Back” and “Next”.
Using this content and framework, we developed both an OUI recipe demonstration and an MUI recipe demonstration.
OUI Recipe Interface: As depicted in Figure 3, the interface comprises an ingredient list area and a cooking step area. The ingredient list area remains stationary and features solely textual content, distinguished by three colors corresponding to the text of the three steps. The cooking step area permits sliding and touch interactions and comprises images, dynamic images (GIFs), and text.
MUI Recipe Interface: As shown in Figure 4, the cooking steps exploit the structure of the dumpling-making process, assigning the dumpling-wrapping and dumpling-filling steps to different users so that the family completes the production and cooking of the dumplings together. The recipe, cooking steps, and interaction elements are organized into three columns, one per family member. Each user zone consists of an upper ingredient list area and a lower cooking step area. The ingredient list area is static and features only textual content, differentiated by three colors corresponding to the text of the three steps. The cooking step area is intended for interaction through sliding and touch and comprises images, dynamic images (GIFs), and text.
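For readers who think in code, the shift from task allocation to user interface allocation can be summarized with a small data model. The sketch below is illustrative only: the class names, field names, colors, and step lists are hypothetical and do not come from the prototype’s implementation.

```python
from dataclasses import dataclass

@dataclass
class UserZone:
    """One user's region: a fixed ingredient list above a slidable step area."""
    ingredients: list[str]   # static text, color-coded per step
    steps: list[str]         # slidable cards (number + GIF + text in the prototype)

@dataclass
class RecipeInterface:
    zones: list[UserZone]                   # OUI: one zone; MUI: one per family member
    buttons: tuple = ("Back", "Next-step")  # shared navigation controls

# OUI: every user shares a single zone holding the whole recipe.
oui = RecipeInterface(zones=[UserZone(["flour", "pork", "cabbage"],
                                      ["make filling", "wrap dumplings", "boil"])])

# MUI: the same task split into three parallel zones (user interface allocation).
mui = RecipeInterface(zones=[
    UserZone(["pork", "cabbage"], ["chop", "season", "mix filling"]),
    UserZone(["flour", "water"], ["knead dough", "roll wrappers"]),
    UserZone(["wrappers", "filling"], ["fold dumplings", "boil together"]),
])
print(len(oui.zones), len(mui.zones))  # 1 vs. 3 concurrent user zones
```

The essential change is the cardinality of `zones`: the OUI concentrates all content in one region that users must share, whereas the MUI gives each family member a parallel column with its own fixed ingredient area and slidable step area.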

2.3. Tasks

The experiments took place in the homes of the participants and utilized an 11-inch iPad Pro as the display device, coupled with an adjustable stand. The participants engaged in two separate experiments, with a maximum interval of one week between them. The first experiment used the traditional OUI, while the second used the new MUI. In both experiments, the participants were required to employ the respective interfaces and follow the provided steps to cook. The behaviors of both the adult and child participants were observed (see Figure 5); to protect the participants’ privacy, their faces in the figure are covered with a gray pattern. Following each evaluation session, the adult users individually reported their user experiences with the OUI and MUI via the surveys presented in Section 2.4, and brief interviews were conducted with the children. The questionnaire was administered immediately after each experiment and took about 5–10 min to complete; the interview followed the questionnaire and also took about 5–10 min.

2.4. User Experience Evaluation Model

Prior research has established comprehensive models for the evaluation of user experiences. In 2010, Yeh introduced a user experience model based on three key elements: ease, effectiveness, and enjoyment (3E) [15]. In 2013, Harrison et al. proposed the PACMAD user experience model, which assesses the user experience across seven dimensions: memorability, learnability, cognitive load, effectiveness, efficiency, errors, and satisfaction [16]. In 2015, Shawgi and Noureldien presented a usability measurement model that measures the usability of user interfaces in terms of accessibility, navigability, understandability, learnability, operability, and attractiveness [17]. In this study, elements from these models were consolidated into a new user experience evaluation system centered on four dimensions: visual confirmation, contents, the operation process, and satisfaction (see Figure 6).
These four dimensions were further subdivided into 18 indicators, with additional questions assessing aspects specific to the MUI recipe. The user experience evaluation questionnaires employed a seven-point Likert scale (where 1 indicates “strongly disagree or very rare” and 7 indicates “strongly agree or very frequent”). This produced the user experience scale for the comparative experiment displayed in Table 2.
The data obtained from the OUI and MUI questionnaires were analyzed using IBM SPSS Statistics 29.0.1.0.
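The analysis itself was carried out in SPSS; for readers who prefer a scriptable equivalent, a paired Wilcoxon signed-rank test on one questionnaire item can be reproduced with SciPy as sketched below (assuming SciPy ≥ 1.9 for the `method` argument). The rating arrays are placeholders, not the study’s data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder 7-point Likert responses for one item (e.g., EQ1),
# one paired value per adult participant (n = 22 in the study).
oui = np.array([5, 6, 4, 7, 5, 6, 6, 5, 7, 4, 6, 5, 6, 7, 5, 6, 4, 7, 6, 5, 6, 7])
mui = np.array([6, 7, 6, 7, 6, 6, 7, 6, 7, 5, 7, 6, 6, 7, 6, 7, 5, 7, 7, 6, 6, 7])

# Wilcoxon signed-rank test on the paired OUI/MUI ratings; SPSS reports an
# asymptotic two-tailed significance, so the normal approximation is requested.
stat, p = wilcoxon(oui, mui, zero_method="wilcox", correction=False,
                   alternative="two-sided", method="approx")
print(f"W = {stat:.3f}, p = {p:.3f}")  # p < 0.05 -> significant OUI/MUI difference
```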

3. Results and Discussion

3.1. Visual Confirmation

The Wilcoxon signed-rank test in SPSS 29.0.1.0, which is suitable for small samples such as ours, was used to compare the overall distributions of the paired data obtained from the OUI and MUI questionnaires. The results indicate that the significance level of EQ1 (easy to differentiate the three ingredient list areas) was 0.028 and that of EQ5 (easy to differentiate the three cooking step areas) was 0.014, as listed in Table 3.
As shown in Table 4, for EQ1, the significance level (p < 0.05) reveals that the ingredient list area (the fixed area) of the MUI was easier to distinguish than that of the OUI in this case.
For EQ5, the statistical analysis (p < 0.05) and the mean values suggest that the MUI performed better than the OUI in distinguishing the cooking step area (the sliding area) in this specific scenario.

Three Elements in Visual Confirmation

Based on the comparative analysis of EQ1 and EQ5, we further explored the related questions to determine the influences of the “color”, “layout”, and “content” on the visual confirmation in the OUI and MUI scenarios.
From a data standpoint, the average scores for “content” were 5.73, 5.82, 5.27, and 6.05, as depicted in Figure 7. These data illustrate that the “content” presented a clear advantage in both the OUI and MUI scenarios, irrespective of the particular area.
The Kruskal–Wallis test results obtained via SPSS are shown in Table 5. They showed that the OUI and MUI had different outcomes when the users interacted with the fixed areas in the interfaces (i.e., the ingredient list area). Within the OUI, the significance level between “color” and “layout” was 1.000, while the significance level between “color” and “content” was 0.000 (p < 0.001) and that between “layout” and “content” was 0.003 (p < 0.01). In contrast, no significance was found among the three elements in the MUI.
Our data show that, in the ingredient list area, the “color” had the least impact in the OUI, while the “color” and “layout” played a role in the MUI (the lack of difference implies that their functions were similar).
However, when the users interacted with the sliding areas (i.e., the cooking step area) in the interfaces, the K–W test results, listed in Table 5, showed similar outcomes for the OUI and MUI. In the OUI, the significance level between “color” and “layout” was 1.000, that between “color” and “content” was 0.000 (p < 0.001), and that between “layout” and “content” was 0.001 (p < 0.01). In the MUI, the significance level between “color” and “layout” was 1.000, whereas that between “color” and “content” was 0.006 (p < 0.01) and that between “layout” and “content” was 0.016 (p < 0.05).
Because the cooking step area involved frequent interaction, requiring the users to slide the page repeatedly, “content” remained the primary choice for visual confirmation in both the OUI and MUI. In the MUI, the influence of the “content” and “color” remained consistent, even though the “color” and “layout” provided slightly better support to the users than in the OUI.
As observed, the test users were accustomed to performing visual confirmation primarily through the “content” throughout the entire interaction, while the “color” and “layout” served as secondary supporting elements.
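The SPSS pairwise comparisons reported above correspond to a Kruskal–Wallis omnibus test followed by Dunn-style pairwise z-tests with Bonferroni adjustment. To make that procedure explicit, a minimal re-implementation is sketched below; the ratings are placeholders, and the tie correction that SPSS applies is omitted for brevity.

```python
import itertools
import numpy as np
from scipy.stats import rankdata, norm, kruskal

# Placeholder 7-point ratings of how much each element aided visual confirmation.
groups = {
    "color":   np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4]),
    "layout":  np.array([4, 3, 4, 5, 4, 3, 4, 5, 3, 4, 4, 3]),
    "content": np.array([6, 7, 6, 5, 7, 6, 6, 7, 5, 6, 7, 6]),
}

h, p_omnibus = kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h:.3f}, p = {p_omnibus:.4f}")

# Dunn's pairwise z-tests on the pooled ranks, Bonferroni-adjusted.
pooled = np.concatenate(list(groups.values()))
ranks = rankdata(pooled)
n_total, k = len(pooled), len(groups)
mean_rank, start = {}, 0
for name, vals in groups.items():
    mean_rank[name] = ranks[start:start + len(vals)].mean()
    start += len(vals)

n_pairs = k * (k - 1) // 2
for a, b in itertools.combinations(groups, 2):
    se = np.sqrt(n_total * (n_total + 1) / 12
                 * (1 / len(groups[a]) + 1 / len(groups[b])))
    z = (mean_rank[a] - mean_rank[b]) / se
    p_adj = min(1.0, 2 * norm.sf(abs(z)) * n_pairs)   # Bonferroni correction
    print(f"{a}-{b}: z = {z:.3f}, adj. p = {p_adj:.3f}")
```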

3.2. Content

Utilizing the Wilcoxon method in SPSS, we obtained four significance values (Sig.) of 0.786, 0.858, 0.252, and 0.218, all exceeding 0.05 (see Table 6). Furthermore, as presented in Table 7, the means closely approached or exceeded 6.
Unlike the OUI, the MUI displays different users’ content on a single interface page, which required a redesign of the UI and could, in principle, lead to misinterpretation or confusion among the multiple content streams. Nevertheless, our analysis of the questionnaires reveals that the readability of the content, including text, images, and GIFs, was not significantly affected in this scenario.
This indicates that the variation between the two interfaces had no effect on the readability of the information or on the users’ reading experience.

3.3. Operation Process

Concerning the user experience in the operation process, we conducted a comparative evaluation across eight aspects, as can be seen from EQ13 to EQ20 in Table 2.
As shown in Figure 8, the results of the SPSS analysis indicate that all significance levels were p > 0.05, except for that of EQ19 (operating efficiency with the Next-step button), which exhibited a significance level of p < 0.05.
Based on the results of EQ13 and EQ14, the users’ familiarity with tap and swipe operations remained unaffected by the MUI. Furthermore, the findings for EQ15, EQ16, and EQ17 suggest that the users could easily locate the “Back” and “Next-step” buttons and effectively control the step flow in both the OUI and MUI. Concerning EQ18 and EQ20, the users’ efficiency in utilizing navigation components such as the “Back” button and the vertical swipe operation was not influenced by the MUI.
However, the significance value of EQ19 was below 0.05, indicating a significant difference in the use of the “Next-step” button. The poorer experience with the MUI suggests that commonly used basic elements, such as the “Next-step” button, should be fixed in the interface rather than appearing only after all users have completed their tasks, and that buttons should not keep moving as the cooking step page is swiped.

3.4. Satisfaction/Total Value

Using the Wilcoxon method in SPSS, the four Sig. values were found to be 0.142, 0.327, 0.002, and 0.256, as shown in Table 8.
The results for EQ21 do not suggest that the users perceived the design of the MUI to be worse than that of the single-user interface; it still offered convenience and a clear appearance. Based on the results for EQ22, the users perceived the content of both user interfaces (UIs) to be equally useful, with no notable distinction; the MUI did not compromise the content’s usefulness. Based on the results for EQ23, the users considered operating the MUI to be more interesting than operating the OUI. It is noteworthy that almost all users were utilizing an MUI recipe app for the first time, which created a sense of novelty. Finally, the outcomes for EQ24 demonstrate no observable difference in the sense of accomplishment between the two interfaces; the users reported that operating the MUI did not present difficulties or excessive challenges.

3.5. Results and Discussion of Child Users

In addition, when collating the video records, we identified several characteristics of the child users, described below.
Firstly, the children participating in the test were very unfamiliar with using a recipe to prepare dumplings. This unfamiliarity extended to both the use of the recipe app and the actual process of preparing dumplings, both of which were first-time experiences (as shown in Table 9). The children struggled to understand the functions of the different areas of the interface, resulting in almost no use of or reliance on visual confirmation. This echoes the earlier findings of Druin [18] and Soni et al. [19] that texts and abstract symbols in an interface impose a cognitive load on children.
Secondly, the language and behavioral instructions of their parents significantly influenced the children’s engagement with the content. Given their limited text-reading abilities, the children primarily reviewed the pictures and GIFs, imitating them under guidance and demonstrations from their parents. A user interface that is not highly visual is unattractive to children [18]. Hence, even though the multi-user interface included text sections specifically designed for children, these features were largely ineffective and were essentially ignored by the parents.
The children’s initial interactions with the screens were predominantly characterized by tentative exploration. As the experiment progressed, their behavior varied: at times, the children actively navigated the interface while seeking parental input, and, at other times, they seemed compelled to obtain parental consent before engaging with the screen. The children’s interactions with the devices often occurred in response to requests from their parents. For example, in five families, parents whose hands were unclean asked their children to assist by swiping the screen to the next page. Moreover, when parents could not view the screen closely, the older children would read the required recipe text aloud to their parents, another indirect interaction with the interface in addition to the substitute operation. The children displayed no reluctance regarding these repetitive and simple tasks, showing eagerness and happiness to collaborate with their parents. Similarly, Sheehan et al. observed in their research that parents and children have high-quality interactions when using apps together [11].
Finally, the child users demonstrated considerable joy during the activity of preparing dumplings with their parents; the task of interface manipulation was not distinctly separated from the actual cooking process. The children’s enthusiasm was effectively encouraged by their parents, as exemplified by mother A, an elementary school teacher, who employed classroom-style interactive methods with her child. This supports Sheehan et al.’s earlier finding that parents show a higher proportion of questioning and spatial dialogue than children [11], and it also illustrates parents’ enthusiasm in guiding children’s human–computer interaction [10].
We would like to thank the families who participated in the test. We are aware that the sample of 11 families is small, limiting the generalizability of our findings, and we hope to test more families and recruit more diverse group types in the future to validate our results and discussion more rigorously.
Beyond the co-cooking scenario, we also expect that MUIs, with their fun, flexibility, and inclusiveness, can provide design inspiration for enriching the UX and UI of apps in other home scenarios. In education, for example, MUIs could enable learners of different ages to engage with content tailored to their individual learning styles; in entertainment, MUIs could enhance the fun of family sharing by supporting multiplayer interactive games and collaborative viewing experiences.
We also recognize that our research faces broader challenges and that other dimensions of UI/UX in multi-user scenarios, such as navigation, task allocation, and completion efficiency, have not yet been explored in sufficient depth; we intend to pursue these as directions for future research.

4. Conclusions

For adult users, in comparison to an OUI, an MUI alters the user’s method of visual confirmation: it introduces an additional step of locating their own area, where color and layout can play significant roles in enhancing the efficiency and convenience of visual confirmation. The alterations in the MUI do not affect the readability of the content. Regarding the operational process, employing the same navigation elements in the MUI results in no divergence from the OUI; nevertheless, when users utilize the novel button controls, they incur additional interaction and learning costs. In general, the innovative layout design and interface contribute to users’ gratification and enjoyment, with no significant difficulties reported when utilizing the MUI in contrast to the OUI.
For young users, distinguishing between the various functions on the user interface presents a challenge. Children typically rely on their parents’ actions to access information from the interface, which results in a lack of specific visual confirmation. Children display a pronounced preference for images and GIFs while encountering difficulties in reading text [18]; parents often read and repeat the text content to aid their children. Parents engage their children in sliding through the pages, enhancing the children’s sense of involvement while assisting the parents [11]. Children are well acquainted with swipe and tap gestures, requiring no formal instruction. Children do not exhibit notable interest in the differences between the two interface versions, and workflows designed exclusively for children in the MUI demonstrate limited effectiveness. Children’s interactive behavior is predominantly influenced by their parents [10], and the attitudes of parents substantially impact their children’s satisfaction. One limitation of this study must be noted here: the children in our sample had not completed primary education and had limited reading ability, which is not the case for all children.
We recommend that, in scenarios where there is a high demand for visual effects and content recognition in multi-user collaborations, it would be beneficial to provide the users with an MUI. Additionally, an MUI can enhance the enjoyment of collaboration and is suitable for educational and entertainment applications. Given the growing use of large screens, foldable screens, and multi-screen setups, along with users’ increasing expectations for efficient interactions and user experiences, we can expect wider acceptance of and a greater demand for MUIs and interactions in the near future.
The findings from the user experience survey related to OUIs and MUIs provide valuable insights for designers and service providers with respect to the strengths and limitations of integrating MUIs into interfaces linked to family activities.
This study included only a straightforward comparison between the two interfaces, and the comparison items were limited; many aspects remain to be explored in the next stage of the research. The family types and age groups were not sufficiently diverse to represent a wide range of families. We look forward to enriching our multi-user interface design and user experience research in the future with larger sample sizes, more diverse test users, more specific behavioral observations, and longer testing periods.

Author Contributions

Conceptualization, M.Z. and M.W.; methodology, M.Z., M.L. and M.W.; experiments and data curation, M.Z.; software and formal analysis, M.Z. and M.L.; writing—original draft preparation, M.Z.; writing—review and editing, K.O. and M.W.; visualization, M.Z. and K.O.; supervision, M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets used are publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Stewart, J.; Bederson, B.B.; Druin, A. Single Display Groupware: A Model for Co-Present Collaboration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’99), Pittsburgh, PA, USA, 15–20 May 1999; ACM Press: New York, NY, USA, 1999; pp. 286–293.
2. Isabwe, G.M.N.; Schulz, R.; Reichert, F.; Konnestad, M. Using Multi-Touch Multi-User Interactive Walls for Collaborative Active Learning. In Proceedings of HCI International 2019—Late Breaking Posters, Orlando, FL, USA, 26–31 July 2019; Stephanidis, C., Antona, M., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 192–199.
3. Dietz, P.; Leigh, D. DiamondTouch: A Multi-User Touch Technology. In Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, Orlando, FL, USA, 11–14 November 2001; ACM: New York, NY, USA, 2001; pp. 219–226.
4. Esenther, A.; Wittenburg, K. Multi-User Multi-Touch Games on DiamondTouch with the DTFlash Toolkit. In Proceedings of Intelligent Technologies for Interactive Entertainment, Madonna di Campiglio, Italy, 30 November–2 December 2005; Maybury, M., Stock, O., Wahlster, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 315–319.
5. Peltonen, P.; Kurvinen, E.; Salovaara, A.; Jacucci, G.; Ilmonen, T.; Evans, J.; Oulasvirta, A.; Saarikko, P. It’s Mine, Don’t Touch! Interactions at a Large Multi-Touch Display in a City Centre. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; ACM: New York, NY, USA, 2008; pp. 1285–1294.
6. Patsoule, E. Interactions around a Multi-Touch Tabletop: A Rapid Ethnographic Study in a Museum. In Design, User Experience, and Usability: User Experience Design Practice; Marcus, A., Ed.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2014; Volume 8520, pp. 434–445. ISBN 978-3-319-07637-9.
7. Dillenbourg, P.; Evans, M. Interactive Tabletops in Education. Comput. Support. Learn. 2011, 6, 491–514.
8. Creed, C.; Sivell, J.; Sear, J. Multi-Touch Tables for Exploring Heritage Content in Public Spaces. In Visual Heritage in the Digital Age; Ch’ng, E., Gaffney, V., Chapman, H., Eds.; Springer Series on Cultural Computing; Springer: London, UK, 2013; pp. 67–90. ISBN 978-1-4471-5534-8.
9. Piper, A.M.; Hollan, J.D. Supporting Medical Conversations between Deaf and Hearing Individuals with Tabletop Displays. In Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work, San Diego, CA, USA, 8–12 November 2008; ACM: New York, NY, USA, 2008; pp. 147–156.
10. Connell, S.L.; Lauricella, A.R.; Wartella, E. Parental Co-Use of Media Technology with Their Young Children in the USA. J. Child. Media 2015, 9, 5–21.
11. Sheehan, K.J.; Pila, S.; Lauricella, A.R.; Wartella, E.A. Parent-Child Interaction and Children’s Learning from a Coding Application. Comput. Educ. 2019, 140, 103601.
12. Xu, W.; Liu, X. Gamified Design for the Intergenerational Learning: A Preliminary Experiment on the Use of Smartphones by the Elderly. In Proceedings of Human Aspects of IT for the Aged Population: Acceptance, Communication and Participation, Las Vegas, NV, USA, 15–20 July 2018; Zhou, J., Salvendy, G., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 571–580.
13. Zeng, E.; Roesner, F. Understanding and Improving Security and Privacy in Multi-User Smart Homes: A Design Exploration and In-Home User Study. In Proceedings of the 28th USENIX Security Symposium (USENIX Security 19), Santa Clara, CA, USA, 14–16 August 2019; pp. 159–176.
14. Paay, J.; Kjeldskov, J.; Skov, M.B. Connecting in the Kitchen: An Empirical Study of Physical Interactions While Cooking Together at Home. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 276–287.
15. Yeh, C.J. The Principles of Interaction Design in the Post-Digital Age; ARTIST-MAGAZINE: Taipei, Taiwan, 2010; ISBN 978-986-282-001-8.
16. Harrison, R.; Flood, D.; Duce, D. Usability of Mobile Applications: Literature Review and Rationale for a New Usability Model. J. Interact. Sci. 2013, 1, 1.
17. Shawgi, E.; Noureldien, N.A. Usability Measurement Model (UMM): A New Model for Measuring Websites Usability. Int. J. Inf. Sci. 2015, 5, 5–13.
18. Druin, A.; Bederson, B.B.; Hourcade, J.P.; Sherman, L.; Revelle, G.; Platner, M.; Weng, S. Designing a Digital Library for Young Children: An Intergenerational Partnership. In The Craft of Information Visualization; Elsevier: Amsterdam, The Netherlands, 2003; pp. 178–185. ISBN 978-1-55860-915-0.
19. Soni, N.; Aloba, A.; Morga, K.S.; Wisniewski, P.J.; Anthony, L. A Framework of Touchscreen Interaction Design Recommendations for Children (TIDRC): Characterizing the Gap between Research Evidence and Design Practice. In Proceedings of the 18th ACM International Conference on Interaction Design and Children, Boise, ID, USA, 12–15 June 2019; ACM: New York, NY, USA, 2019; pp. 419–431.
Figure 1. The dumpling-making process.
Figure 2. Framework of interface prototype.
Figure 3. OUI prototype of recipe app. The left image is the English translation, and the right image is the Chinese version interface used by the testers.
Figure 4. MUI prototype of recipe app. The left image is the English translation, and the right image is the Chinese version interface used by the testers.
Figure 5. Participants undertaking the test (the faces of the testers shown in the figure are shielded).
Figure 6. User experience evaluation model.
Figure 7. Mean values of three elements in different interface areas.
Figure 8. Test results of operation process/EQ13–EQ20.
Table 1. Age information of participants.

Group    N    Mean Age   Std. Dev.
Kid      11   6.64       0.67
Mother   11   33.73      3.44
Father   11   35.55      5.13
Table 2. User experience evaluation questionnaire for adults.

Framework                  No.    Description
Visual Confirmation        EQ1    Easy to differentiate the three ingredient list areas.
                           EQ2    How much do you use content to distinguish the ingredients list area?
                           EQ3    How much do you use color to distinguish the ingredients list area?
                           EQ4    How much do you use layout to distinguish the ingredients list area?
                           EQ5    Easy to differentiate the three cooking step areas.
                           EQ6    How much do you use content to distinguish the cooking step area?
                           EQ7    How much do you use color to distinguish the cooking step area?
                           EQ8    How much do you use layout to distinguish the cooking step area?
Contents                   EQ9    Easy to read the texts on the ingredients list.
                           EQ10   Easy to read the texts of the cooking step unit.
                           EQ11   Easy to see the pictures of the cooking step unit.
                           EQ12   Easy to watch the GIFs of the cooking step unit.
Operation Process          EQ13   Familiar with the tap operations.
                           EQ14   Familiar with the vertical (up–down) swipe operations.
                           EQ15   Easy to find the Back button.
                           EQ16   Easy to find the Next-step button.
                           EQ17   Easy to control the step flow.
                           EQ18   Efficiently use the Back button.
                           EQ19   Efficiently touch on next-part operations of the cooking step.
                           EQ20   Efficient vertical (up–down) swipe operations of the cooking step.
Satisfaction/Total Value   EQ21   I’m satisfied with the recipe graphic design.
                           EQ22   I think the recipe content is useful.
                           EQ23   I think the operation is interesting.
                           EQ24   I can feel the sense of achievement in the operation.
Table 3. Descriptive statistics of EQ1 and EQ5 for visual confirmation.

Item        N    Mean   Std. Deviation   Min.   Max.   Median
EQ1   OUI   22   5.68   1.427            3      7      6.00
      MUI   22   6.50   0.740            5      7      7.00
EQ5   OUI   22   5.32   1.912            1      7      6.00
      MUI   22   6.45   0.800            5      7      7.00
Table 4. Test statistics a for EQ1 and EQ5 (visual confirmation).

                         EQ1 OUI–MUI   EQ5 OUI–MUI
Z                        −2.198 b      −2.448 b
Asymp. Sig. (2-tailed)   0.028         0.014

a Wilcoxon signed-rank test. b Based on negative ranks.
Table 5. Pairwise comparisons of three elements in visual confirmation.

Area                          Sample 1–Sample 2   Test Statistic   Std. Test Statistic   Adj. Sig. a
Ingredient List Area on OUI   Color–Layout        −4.386           −0.768                1.000
                              Color–Content       23.091           4.041                 0.000
                              Layout–Content      18.705           3.274                 0.003
Ingredient List Area on MUI   Color–Layout        −4.705           −0.828                1.000
                              Color–Content       12.818           2.255                 0.072
                              Layout–Content      8.114            1.427                 0.461
Cooking Step Area on OUI      Color–Layout        −4.523           −0.791                1.000
                              Color–Content       24.591           4.299                 0.000
                              Layout–Content      20.068           3.508                 0.001
Cooking Step Area on MUI      Color–Layout        −1.636           −0.291                1.000
                              Color–Content       17.318           3.081                 0.006
                              Layout–Content      15.682           2.790                 0.016

Each row tests the null hypothesis that the Sample 1 and Sample 2 distributions are the same. Asymptotic significances (2-sided tests) are displayed. The significance level is 0.05. a Adjusted significance: significance values are adjusted by the Bonferroni correction for multiple tests.
Table 6. Test statistics a for content/EQ9–EQ12.

                         EQ9 OUI–MUI   EQ10 OUI–MUI   EQ11 OUI–MUI   EQ12 OUI–MUI
Z                        −0.272 b      −0.179 b       −1.145 b       −1.231 b
Asymp. Sig. (2-tailed)   0.786         0.858          0.252          0.218

a Wilcoxon signed-rank test. b Based on negative ranks.
Table 7. Descriptive statistics for content/EQ9–EQ12.

Item         N    Mean   SD      Min.   Max.   Median
EQ9    OUI   22   5.77   1.631   1      7      6.00
       MUI   22   6.00   1.234   3      7      6.50
EQ10   OUI   22   6.05   1.676   2      7      7.00
       MUI   22   6.23   1.232   2      7      7.00
EQ11   OUI   22   6.23   0.973   4      7      6.50
       MUI   22   6.55   0.671   5      7      7.00
EQ12   OUI   22   6.32   0.945   4      7      7.00
       MUI   22   6.59   0.796   4      7      7.00

EQ9: Easy to read the texts of the ingredients list. EQ10: Easy to read the texts of the cooking step unit. EQ11: Easy to see the pictures of the cooking step unit. EQ12: Easy to watch the GIF of the cooking step unit.
Table 8. Test statistics a for satisfaction/EQ21–EQ24.

                         EQ21 OUI–MUI   EQ22 OUI–MUI   EQ23 OUI–MUI   EQ24 OUI–MUI
Z                        −1.467 b       −0.979 b       −3.087 b       −1.137 b
Asymp. Sig. (2-tailed)   0.142          0.327          0.002          0.256

a Wilcoxon signed-rank test. b Based on negative ranks.
Table 9. Child users’ performance during interaction.

                                                                    Family
Item                                                                A      B     C     D      E     F      G     H     I       J      K
1. Child user’s experience in making dumplings                      No     No    No    No     No    No     No    No    No      No     No
2. Child user’s experience in using a recipe app                    No     No    No    No     No    No     No    No    No      No     No
3. Literacy abilities of child users                                x × 7 children; △ × 4 children
4. Frequency of autonomous screen touches (recipes), OUI/MUI        9/10   10/9  9/15  16/16  5/10  13/13  3/2   3/9   11/12   9/11   6/7
   — Number of direct touches (child user), OUI/MUI                 5/6    4/6   7/10  11/8   3/6   11/10  2/2   2/5   8/6     7/9    5/4
   — Number of touches after permission or request, OUI/MUI         4/4    6/3   2/5   5/8    2/4   2/3    1/0   1/4   3/6     2/2    1/3

x: They have not yet entered formal primary education. △: Between grades 1–3 in primary school.
