Article

An Integrated Application of Motion Sensing and Eye Movement Tracking Techniques in Perceiving User Behaviors in a Large Display Interaction

1 School of Humanity, Art and Digital Media, Hangzhou Dianzi University, Hangzhou 310018, China
2 Department of Computer Science and Systems, Stockholm University, 10691 Stockholm, Sweden
* Author to whom correspondence should be addressed.
Machines 2023, 11(1), 73; https://doi.org/10.3390/machines11010073
Submission received: 28 October 2022 / Revised: 27 December 2022 / Accepted: 3 January 2023 / Published: 6 January 2023
(This article belongs to the Special Issue Smart Machines: Applications and Advances in Human Motion Analysis)

Abstract: In public use of a large display, it is common for multiple users to individually carry out their respective tasks on a shared device. Previous studies have categorized such activity as independent interaction that involves little group engagement. However, by investigating how users approach, participate in, and interact with large displays, we found that parallel use is affected by group factors such as group size and between-user relationship. To gain a thorough understanding of individual and group behaviors, as well as parallel interaction task performance, an information-searching experiment was conducted on a 70-inch display, in which a mobile eye movement tracking headset and a motion sensing RGB-depth sensor were applied simultaneously. The results showed that (1) a larger group size had a negative influence on users' concentration on the task, perceived usability, and user experience; (2) a close relationship between users contributed to occasional collaborations, which were found to improve the users' task completion time efficiency and their satisfaction with the large display experience. This study shows that an integrated application of eye movement tracking and motion sensing can capture individual and group behaviors simultaneously, and is thus a valid and reliable scheme for monitoring public activities that can be widely applied in public large display systems.

1. Introduction

Benefiting from rapid advances in display technologies and a great drop in manufacturing cost, high-resolution large displays have become increasingly prevalent in different domains [1,2,3]. Various applications have demonstrated the benefits and potential of large displays, including—but not limited to—enhanced interactivity and user engagement in domestic games [4], improved collaborative productivity in offices [5,6], and increased attraction in public advertising [7]. In particular, capitalizing on their support for parallel and collaborative interactions, large displays appear attractive for information interaction in public sites, such as flight information displays in an airport [8].
Looking at previous large display-related studies, the majority focus on usability and user experience in either single-user or collaborative group interactions [7,9]. In contrast, parallel use of large displays, with multiple users simultaneously conducting their respective tasks, is often neglected due to insufficient understanding of its characteristics and effects. As a result, parallel use of large displays, which is a significant form of interaction in public spaces, is traditionally regarded as a complementary body of collaborative interaction, as described in [10].
The significance of studying parallel use of large displays, particularly with different group forms in public spaces, is two-fold. First, parallel interaction with large displays is a common activity in public spaces; advertising displays, shopping windows, and stock market screens are typical scenarios [9]. The emergence of ubiquitous computing and ambient displays has envisioned how users may interact with contextual displays in parallel in the near future [11]. Therefore, drawing insights into parallel use of large displays is of great significance not only for designing present large display-based applications but also for understanding future ambient display system interactions. Second, group-related factors such as users' standing positions, group size, and between-user relationship have previously been studied in collaborative interaction performance [12]. However, they have rarely been considered or systematically investigated in users' parallel use of large displays in public spaces. For example, few studies, if any, have examined how varying group sizes affect individual users' cognition and perception in front of a public large display. In previous studies, the evaluation of both individual and collaborative interactions was generally limited to explicit task performance and observed activities. More implicit measures, such as users' visual attention to and enthusiasm for display contents and subjective user experience, were rarely evaluated [13]. Compared to explicit outcomes, these implicit measures are more difficult to measure, especially when more complicated influencing factors such as dynamic group size and between-user relationship are involved; gaining a deep understanding of user behaviors and performance in front of a public large display thus becomes quite challenging.
Apart from physical movements and gestures, which are the most representative explicit features reflecting human behavioral intentions, head rotations and eyesight navigation are less obvious but still explicit behavioral features reflecting users' task engagement and attention in activities [9]. When both explicit and implicit features are measured, user interaction and group activities can be understood better. In this study, an RGB-depth motion sensing camera and a mobile eye tracking sensor were applied to detect explicit body movements and implicit visual behaviors, respectively. In addition, an empirical experiment was designed and conducted to examine whether and how group size and between-user relationship influence user engagement and task performance in parallel interaction with a large display in a simulated public situation. The results are extensively discussed with the purpose of drawing general conclusions for designing large public display user interfaces and accompanying sensor systems to support activity monitoring.

2. Related Work

2.1. Large Display-Based Interaction

In this paper, we refer to large displays as display devices whose screen real estate noticeably exceeds that of traditional desktop monitors. This concept does not impose a specific limit on physical screen size; it mostly indicates displays that are beyond traditional desktop use [13]. Therefore, tiled monitors, projection displays, and large flat screens all fall into this scope. In this study, a 70-inch flat screen was used as the large display.
Large displays, whether in private, semipublic, or public spaces, are capable of providing productivity and cognitive benefits [5,9]. A large display application was found to improve task efficiency in information searching and navigation [6]. A large display was also proven beneficial for 3D navigation in a virtual reality application [14]. A large display provides a wide field of view and high user immersion, which is thought to be advantageous for perceiving and processing on-screen information efficiently [9]. Moreover, a large display allows multiple users to simultaneously carry out their respective tasks, either collaboratively or individually [15].
Besides these benefits, large displays also have several deficiencies in general applications, including—but not limited to—losing track of the cursor, difficulty in distal access, and window management [9]. These issues are particularly distinct when many users are using a large display simultaneously. In such cases, users engaged in parallel interaction with the large display differ widely in motor ability, cognition level, computer operating skill, and relevant technological knowledge. These diversities are even more complicated when large display-based systems and applications are deployed in crowded sites such as hospitals and shopping malls, where not only ordinary users but also users with special requirements (e.g., elderly users, patients in wheelchairs, and preschool children with limited cognitive ability) are engaged in the large display interaction [16].
Parallel use of large displays seems an effective way to handle user diversity, because it naturally supports individual users' interaction across various user profiles [16]. Unlike collaborative interaction on large displays, where the operations of multiple users are often synchronous and their purposes consistent, requiring specific operation strategies to cope with crowding and multitasking conflicts [5,15], parallel interaction involves concurrent tasks within independent processes, which makes tasks more convenient to conduct. On one hand, users do not have to wait for their turn and can join the interactive task at any time; on the other hand, when they withdraw from a task, they are exempt from obligations toward others. In previous studies such as [15] and [17], large display-based parallel interaction and group activities were evaluated through observational surveys, and the findings were limited to user movements and explicit actions such as approaching and leaving, touch input on the display, and user chatting. However, users' visual attention, perceived physical and mental loads, and subjective experience and satisfaction remain unknown. From this perspective, a deeper, multiple-sensor-supported investigation of group activities and user experience in parallel interaction with a public large display is required.

2.2. Parallel Use of Public Large Displays

Parallel use of public large displays has two-fold implications in this paper. First, it refers to simultaneous interaction in a parallel manner, for example, people simultaneously searching flight information screens in airports. Second, it refers to users' respective tasks that are conducted independently within one group, e.g., one user browsing flight schedule details while another seeks the clock time on the display. In either case, users perform individual tasks without interfering with others. Another example is a bus station, where people crowd in front of a station information display and search for their respective target information. In this situation, users have little communication with each other, although they may share the same task goals.
Unlike collaborative use of large displays, which only occurs when multiple users share the same task(s) [18], parallel use is a more common situation. Users in parallel conduct individual tasks and interact with large displays respectively, without concern for others' tasks [17]. In other words, everyone has equal access to the large display content. As pointed out by Marshall et al. [19], single-user input means turn taking, but a parallel-use mode enables independent and simultaneous inputs by different users, thus avoiding access negotiations. For this reason, parallel use is spontaneously compatible with public interaction in various situations.
Oversized large displays, especially ultralarge displays wider than 5 m, naturally support parallel interaction by several users [4]. Beyond this, the interaction techniques widely applied in today's large display systems can also support parallel interaction well [20], but some aspects are still not fully considered, which harms the user experience in actual applications. First, technical novelty is excessively emphasized, causing the misunderstanding that richer interactivity equates to better usability. However, when ill-suited interaction techniques are applied in large display systems, these techniques in fact become obstacles for users [21]. In particular, when a large display is used by multiple users in parallel mode, current interaction techniques can hardly benefit all users at the same time. For example, in the Proxemic Interaction technique [22] and the Ambient Display technique [23], user-tracking and user-to-display proximity-detecting techniques are applied to generate responsive interfaces in real time based on the user's position, spatial distance from the display, face orientation, and so on. When a single user is engaged, this responsive interface technique provides good interactivity and functional assistance; but in parallel interaction, responsive interface changes for some users inevitably distract others.
Second, since collaborative interaction is often regarded as a group activity and parallel interaction as individual behavior, more complex factors such as group size and the social connection (or relationship) between users, and their potential influences, are often neglected in previous research [24]. For example, Jakobsen and Hornbæk [25] and Shoemaker et al. [26] evaluated collaborative interaction between two acquaintances, but not the case in which the two participants were strangers. In another study, Jakobsen et al. [27] evaluated single-user interaction without considering larger groups. In order to gain a more comprehensive understanding of large display applications in parallel-use mode, and on this basis to draw implications and strategies for designing more user-friendly interactive systems, a deeper evaluation of user interaction and activities in front of a large display is required.

2.3. Sensor-Supported Interaction Evaluation on Large Displays

In public sites such as shopping malls and train waiting rooms, user interactions and group activities in front of large display-based information systems are more complicated, involving more environmental and social factors than in private sites [5]. In public interactions with a large display, user attention is the primary concern, since many objects compete for users' attention in open environments [28]. A field study on noticing the interactivity of a shopping window indicated that initial attention and visibility play a key role in public interactions [16]. To address the distraction issue, earlier research proposed quite a few technical solutions, including curiosity objects [29], animated physical motions, and shadow images [30]. However, these are not perfectly compatible with information-intensive interactions, in which users are driven spontaneously by individual information needs. In this regard, more understanding is required of how environmental and social factors (e.g., parallel-user quantity and between-user relationship) affect user interaction performance.
Since large display applications often serve various purposes, large display-based interactions in public sites are complex in terms of both functional design and user evaluation. For example, when large displays are used as advertising boards, they are often designed to attract as much visual attention from passers-by as possible, with little regard for long-term user experience. When large displays are designed for gaming and interactive entertainment, high interactivity and user immersion are most required; but when large displays serve as train or flight schedule information boards, system usability and information search efficiency become more critical [31]. Because there are no applicable user models for predicting user performance in front of public large displays, most current large display-based interactive systems are designed ad hoc without reference to any guidelines [31].
Similar dilemmas arise in the use evaluation of large displays. Alt et al. [31] suggested combining different methods in large display evaluations, including observational and descriptive studies, rational studies, and empirical experiments. Researchers such as Peltonen et al. [15] suggested being aware of the lighting conditions and their influence on interaction evaluation in outdoor sites. Other researchers, such as Williamson and Hansen [32] and Hansen et al. [33], focused on users' performative behaviors in public sites and proved their close correlation with the users' perceived experience. It was also found that the naturalness of a user's eyesight and performed actions can be detected to measure user engagement objectively.
To detect and analyze users' visual attention and behaviors more precisely, sensor-supported methods are better choices [34]. According to a recent survey, noncontact motion sensing cameras such as the Microsoft Kinect™ and Intel RealSense™ have been (though not universally) used to evaluate users' performative behaviors. For example, in Faity et al.'s work [35], a Kinect camera was used to recognize and track the user's arm movements, and the arm's kinematic ability was quantified on this basis, which was proven to be reliable and effective. In a more recent study by Lou et al. [36], a motion sensing camera, the ASUS Xtion PRO™, was adopted to evaluate freehand interaction performance on a large display. Eye trackers are visual detection and tracking sensors that are widely used for evaluating users' visual behaviors in response to stimuli [37]. In an earlier study by Lee et al. [38], an eye movement tracking sensor was exploited to realize a gaze-controlled television input interface. The eye movement tracking sensor was also explored to model users' eye gaze behaviors and, on this basis, to realize user authentication in public use of large displays [39]. A more recent work by Wang et al. [40] used an eye movement tracking sensor to track and predict a driver's visual behaviors, proposing a dual-sensor-based tracking technique. Despite these respective applications, motion sensing cameras and eye movement tracking sensors have seldom been used simultaneously in human–machine interaction evaluations.
From noticing the display to interacting with the display content or participating in group interactive tasks, large display interaction is thought to be a stepwise process followed by users [15]. Initially, users are either driven by specific objectives, such as searching for target information, or attracted by the interactivity or animated effects, and show interest in interacting with the large display. This phase is regarded as "approaching". Then, the user notices the surrounding environment and parallel users and, based on this, judges how to participate in the large display interaction, either in parallel use irrespective of others or by engaging in collaborative activities. We call this phase "participating-in". Finally, in the "interaction" phase, the user engages in the interactive task (e.g., searching for target information on the large display interface) and achieves specific aims. To make the analysis clearer and more systematic, these three phases are studied separately.

3. Methods

In this study, one purpose is to explore the application of motion sensing and eye movement tracking techniques and their benefits in evaluating explicit and implicit user behaviors in interaction with large displays. Another, more important purpose is to gain a deeper understanding of user performance and group activities in interaction with public large displays, in particular in parallel interaction mode, and of possible influencing factors such as group size and the relationship between users. Since motion sensing and eye movement tracking sensors were used and user performance data were captured and recorded in real time, this study was conducted in a simulated public site where both the public environment and the interaction tasks closely simulated real settings.

3.1. Participants

A total of 390 volunteer participants (206 males, 184 females) were recruited, with ages ranging from 19 to 58 years (mean = 34.6, SD = 5.03, median = 32). The participants were undergraduate, master, and Ph.D. students and staff from a local university, majoring in applied mathematics, automation, computer science, mechanical engineering, and digital media art. All of them had normal or corrected-to-normal vision and no physical impairments. Six group forms were designed in this study, each with a different composition in terms of group size and between-user relationship:
(1) The 1st form was the smallest group, having only one participant, set to reflect the performance of single-user application;
(2) The 2nd form was a paired group of 2 unacquainted participants, set to reflect the simple case of two users' parallel use of the large display;
(3) The 3rd form was a middle-sized group of 4 unacquainted participants, set to simulate a small group's parallel use of the large display;
(4) The 4th form was a larger group of 8 participants, half of whom were acquainted participants with a close social connection to each other, e.g., couples, classmates, family members, or cooperative partners. This group form was set to simulate the most common composition of parallel interaction in actual contexts: some users are strangers, while some are acquainted friends or family members;
(5) The 5th form also had 8 participants, but all were unacquainted members who did not know each other. This group form was set to simulate a loosely coupled large group's parallel use of the large display;
(6) The 6th form was the largest group, with 16 participants, including both acquainted and unacquainted members; the number of acquainted participants ranged from 2 to 8 in different groups. This group form was set to simulate a crowd in front of a specific information display.
According to our field observation in a local airport, there were hardly ever more than 16 parallel users interacting with a single display simultaneously. In our perspective, a group of 16 users can thus be regarded as the upper limit of a crowded group size. The above six group forms cover the majority of situations in group use of public large displays. Of course, there are more complicated group compositions with more users engaged in using two or more displays simultaneously, but these can be treated as combinations of two or more group forms. Note that each form above had 10 groups; thus, a total of 10 groups/form × 6 forms = 60 groups were involved in the study. Table 1 provides detailed information about all 60 groups in terms of group forms and compositions.
All participants gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Hangzhou Dianzi University (approval No. HDU2022AUG10, HDU2022SEP15). The study adopted a between-group design to comparatively evaluate the influences of group size and between-user relationship on parallel interaction with a large display. In other words, all 390 participants were separated into 60 groups, and each participant joined only one group, thus eliminating the learning effect from one group to another. In each group, one participant was randomly selected to wear the eye movement tracking headset throughout the whole task procedure. Figure 1 shows an example of group composition in the 6 group forms.

3.2. Apparatus

A 1600 mm × 900 mm flat screen (resolution: 1920 × 1080) was used as the large display device, connected to a workstation (Windows 7, 64 GB memory, and a 4.0 GHz Intel 64-core processor). The workstation ran the experimental interface program, which was shown on the large display. The interface layout and the detailed information were derived from a real flight-schedule-searching website. From a large number of schedule information items, 72 items were selected. Each item consisted of 6 attributes shown in respective columns: air company, flight number, origin and destination, departure time, arrival time, and status. These 72 items of flight information were distributed over 9 pages, each showing 8 items. The 9 pages were displayed cyclically in a fixed order, with each page remaining for 60 s and a page-sliding transition between pages.
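To make the paging behavior concrete, the following is a minimal sketch of the cyclic page rotation described above; the data structures and the render stub are hypothetical placeholders, not the system's actual implementation.

```python
import itertools
import time

PAGE_SIZE = 8        # 8 flight items per page
DWELL_SECONDS = 60   # each page remains on screen for 60 s

# Hypothetical stand-ins for the 72 flight records used in the experiment.
flights = [f"flight-record-{i:02d}" for i in range(72)]
pages = [flights[i:i + PAGE_SIZE] for i in range(0, len(flights), PAGE_SIZE)]  # 9 pages

def render(page):
    # Placeholder for the page-sliding animation on the large display.
    print("\n".join(page))

# Cycle through the 9 pages in a fixed order, indefinitely.
for index in itertools.cycle(range(len(pages))):
    render(pages[index])
    time.sleep(DWELL_SECONDS)
```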
To build a highly realistic simulation of an airport waiting scenario, the experiment was conducted in a wide multimedia laboratory with an area of 120 square meters, where a one-way observation mirror was installed in the wall and an array of eight high-definition cameras was mounted around the laboratory room to monitor and record the experimental procedure from multiple views. In addition to the 390 participants, 60 students from a different major at another university were invited to visit the laboratory; they did not participate in the experiment directly but were guided to talk and walk freely beside the participants, or to observe the participants completing the experimental task without disturbing them, much like pedestrians in an airport waiting hall. The same 60 students were involved in the experimental sessions of all groups. A wide space was provided in front of the large display to ensure that 16 participants could interact with it simultaneously. A motion sensing camera, the ASUS Xtion PRO™, was mounted at the center of the large display's top margin, as shown in Figure 2. It served two functions: (1) to perceive all participants within its detectable region, detect their distances to the display, and track their movements in real time; and (2) to record the entire process of the study. Note that a small number of participants stood at the edge of or outside the visible range of the motion sensing camera, so their positions were difficult to track accurately. To overcome this weakness, the array of eight RGB cameras mounted around the laboratory room, described above, recorded the whole experimental procedure from multiple views. Given the recorded videos from diverse views, participants outside the detectable range of the motion sensing camera could be positioned, and their distance to the large display could be approximately estimated. In addition, a mobile eye movement tracking headset from Pupil Labs™ was provided for tracking and recording one participant's eyesight path data, as shown in Figure 2. The headset was connected to the workstation through an 8.0 m long data cable, allowing the wearer to move freely without restriction. It detected and tracked the participant's pupil movement, which was converted into a visual concentration path in real time, and recorded the entire eyesight moving process.

3.3. Procedures

A list of information-searching tasks was printed on a mobile-phone-sized sheet, which was then provided to the participant. This was designed for two reasons. First, it simulates the user's usual behavior in front of an information display board in an airport: holding a boarding pass in hand, browsing and searching for the target information on the board, and shifting visual attention between the sheet and the large display interface. Second, the searching tasks listed on each sheet were 10 questions randomly selected from a question database (see Appendix A Table A1). Note that all questions were specifically designed and their difficulties counterbalanced through a preliminary test; questions that were too difficult or too simple were either adjusted or excluded from the database. By this means, task difficulty was controlled and user performance in different group forms could be compared more accurately.
At the beginning of the study, each participant received a random task sheet consisting of 10 questions; the participant was required to find the correct answer to each question on the large display interface and report it orally to the experimenter. Prior to the formal tasks, the participants were given a brief introduction to the research purpose and task requirements by the experimenter; they were then given sufficient time to practice and become familiar with the large display information system, thus eliminating the learning effect on task performance across repeated trials.
The 60 groups were guided to complete the required task in order, from the 1st group to the 60th group. To collect a sufficient amount of behavioral observation and task performance data, and at the same time to eliminate the influence of occasional factors on the findings, each group was requested to repeat the task 5 times at intervals of 24 h. The eye movement tracking headset recorded videos synchronously from both the participant's pupils and the real world. At the same time, the motion sensing camera recorded the real-time process of the task and stored it in video format. The participants were required to inform the experimenter as soon as they had completed all questions; the overall task completion time of each participant was recorded by the experimenter. After all participants in the group had completed the tasks, they were required to complete two poststudy scales: (1) the system usability scale (SUS) and (2) the user experience scale (UES). Detailed information about the two scales is presented in the following section. After completing the two scales, participants were encouraged to give comments on the task procedure and the large display information system.

3.4. Evaluation Metrics

This study was conducted to investigate user behavior, task performance, and group activities in parallel use of public large displays. In this process, the potential influences of group size (i.e., the number of parallel users) and the social connection of users (i.e., the relationship between users) on large display use were measured; at the same time, the practical benefits of the motion sensing and eye movement tracking sensors were evaluated. The task evaluation metrics include (1) task completion time efficiency; (2) user concentration; (3) perceived system usability; and (4) overall user experience.
(1) Task completion time efficiency: the task completion time refers to the interval from when the task interface was initiated to when the participant reported having completed all tasks (i.e., found all required information for the 10 questions). The task completion time of each participant was recorded, and a mean completion time was calculated for each group.
(2) User concentration: this measures how intently the participants engaged in the information-searching task. In this study, user concentration particularly refers to the group's overall concentration on the task; a high concentration means all participants are deeply engaged. It is a broad consensus that user-to-object proximity reflects the user's attention to, or concentration on, the object [22]. Generally, close proximity indicates deep concentration on the object, whereas a larger distance indicates reduced concern. On this basis, researchers developed the 'proxemic interface' [22] and 'ambient display' [23] techniques, in which user-to-display proximity is detected and the user interface is generated responsively. In the present study, the participant's spatial distance to the center of the large display was detected and treated as a quantitative metric of user concentration. However, considering that group size, or the number of parallel users, also potentially influences a participant's standing position in front of the large display, the user-to-display distance alone is not a sufficiently accurate measure of concentration. To obtain a more objective result, the participants' eye movement tracking data were also collected to check their visual attention to the large display.
(3) Perceived system usability: this provides an overall evaluation of the large display information-searching system in terms of convenience, ease of use, functional satisfaction, use efficiency, and so on. It was measured through a system usability scale (SUS) questionnaire derived from Brooke's version [41]. The scale consists of 10 statements: 5 are stated positively, e.g., "the system interface is easy to use", while the other 5 are stated negatively, e.g., "the system is cumbersome to use". Each statement was rated on a 5-point Likert scale from "absolutely disagree" to "absolutely agree". The 10 statements in the SUS questionnaire are presented in Section 4.3.
(4) Overall user experience: a user experience scale (UES) derived from the AttrakDiff scale [42] was adopted to measure the participants' perceived user experience of the large display system and the entire task procedure. The UES contains 28 bipolar items, each rated on a 7-step scale from "−3" to "+3". The 28 items are categorized into 4 dimensions: (1) pragmatic quality; (2) hedonic quality (stimulation); (3) hedonic quality (identity); and (4) attractiveness; see Section 4.3.

4. Analyses and Results

All participants completed the given tasks successfully, and their poststudy comments were generally positive, with few criticisms of the study simulation and the task design. For example, 24 participants mentioned that "the flight information on the display interface was presented clearly; it was even clearer than the real information system in the airport"; another 11 participants commented that "the task and the situation are highly immersive, and the task difficulty is appropriate"; and three couples in the largest 16-participant group form reported that "the task is highly similar to the real context in the airport and can therefore be naturally accustomed to by users".
A total of 390 participants in 60 groups engaged in the experiment, and each group repeated the experimental task 5 times; thus, a total of 3900 scale records (1950 SUS records, 1950 UES records) were collected after the experiment. There were 300 videos (1 video record per group × 60 groups × 5 repeats) recorded by the motion sensing camera, covering the entire task processes of all participants, and 300 eye movement records (in video format) collected through the eye movement tracking headset. To eliminate fatigue and learning effects in multiple-user groups, different participants were assigned to wear the eye movement tracking headset in two neighboring task repeats. The procedural videos recorded by the motion sensing camera were used to check and verify the participants' dynamics throughout the task process, including physical movements in the space, spatial distances from the display, and proximity between participants. The eye movement tracking data were used to check the participants' visual attention throughout the entire task process. Repeated-measures ANOVA with post hoc Tukey tests and T-test pairwise comparisons were used for the parametric tests, since all parametric data were confirmed to be normally distributed; the Kruskal–Wallis test with post hoc Bonferroni pairwise comparisons was used for the nonparametric tests. All reported results were significant at the level of p < 0.001.
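For readers wishing to reproduce this kind of analysis, the following is a minimal sketch of the statistical pipeline in Python; the DataFrame columns and file name are hypothetical placeholders, and pairwise Mann–Whitney U tests with a Bonferroni-corrected alpha stand in here for the post hoc comparisons.

```python
# Assumes long-format data with hypothetical columns:
# participant, group_form, repeat, value (one measurement per row).
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("group_measurements.csv")  # hypothetical file

# Repeated-measures ANOVA over the five task repeats (parametric data).
rm_result = AnovaRM(df, depvar="value", subject="participant",
                    within=["repeat"], aggregate_func="mean").fit()
print(rm_result)

# Kruskal-Wallis test across group forms (nonparametric data, e.g., scale items).
samples = [g["value"].to_numpy() for _, g in df.groupby("group_form")]
h_stat, p_value = stats.kruskal(*samples)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons with a Bonferroni-corrected alpha.
pairs = list(combinations(range(len(samples)), 2))
alpha = 0.05 / len(pairs)
for i, j in pairs:
    u_stat, p = stats.mannwhitneyu(samples[i], samples[j])
    print(f"forms {i} vs {j}: p = {p:.4f}, significant = {p < alpha}")
```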
As interpreted in the preceding section, the entire process of large display interaction can be divided into three phases: first, the approaching phase, when the user notices the large display and steps closer; second, the participating-in phase, when the user observes others and decides how to participate in the large display interaction, in either collaborative or parallel mode; and third, the interacting phase, when the user completes specific tasks through the large display system. User performance and group activities in the three phases are analyzed separately below.

4.1. How Participants Approach the Large Display

Approaching is the beginning phase of large display interaction, in which the participant notices the large display and steps toward it. According to our observations, this phase lasted approximately 150 s, from the moment the participants noticed the displayed information and approached it to the moment they settled at a convenient position. In this phase, there were two causes of a participant approaching the large display: first, a specific objective drove the participant to use the large display; second, the participant was attracted by the content or interactivity of the large display and showed casual interest in the display system. We went through all recorded videos and observed how participants approached the large display. In this process, the group size and the relationship between participants were found to have obvious effects on group behaviors, and interesting behavioral patterns were also observed.
Hypothesis 1: Participants in a larger group stood at a larger interaction distance in parallel use of the display.
Hypothesis 2: A strong relationship between participants led the group members to stand closer to the large display.
All participants were observed to have a short pause, without physical movements, at the beginning of the task. Two types of pause were generally observed throughout the study procedure: one occurred when the participant started the task and looked at the task sheet for the first time, and the other occurred when the participant browsed and searched for the target information on the large display. The former pause often led to another pause when the participant moved their eyes from the task sheet to the large display interface. In this way, participants kept a relatively constant position throughout the task process.
As interpreted in Section 3.2, the motion sensing camera was configured to recognize and track all participants in front of the large display in real time. In this process, the participants' spatial distances from the large display (user-to-display depth) were detected and recorded at an interval of 3 s. According to a preliminary test, an interval longer than 3 s might omit some transient changes in the participants' positions (e.g., stepping toward the display but quickly stepping back), while an interval shorter than 3 s could capture incidental changes due to body leans and posture adjustments, causing errors in distance detection. Therefore, an interval of 3 s was an appropriate choice. We compared the participants' spatial distances from the large display in the six group forms (see Table 2) and found that group size had a significant influence on the interaction distance at which group members stood (F(4, 196) = 69.38, p < 0.001, ηp² = 0.54), with a larger group size generating a larger interaction distance. We suspect that the participants' subjective intention of keeping personal space affected their choice of interaction distances and standing positions, which was verified in the participants' post-explanations while reviewing the experimental process through the video records. Five participants also reported that the large display information system was unfamiliar to them on first use; thus, in a large group, it was a better choice to stand behind the other participants and observe or learn what others were doing. In addition, between-user relationship was also found to have a significant effect on the group's overall distance from the large display (two-tailed T-test, t(199) = −30.71, p < 0.001). Given the same number of participants, the closer relationship among participants in the fourth group form resulted in a significantly smaller interaction distance than in the fifth group form. We suspect that the strong relationship between participants caused a honeypot phenomenon [16], in which multiple participants concentrated on the same target simultaneously and stayed at a close distance to it. Given the statistical data in separate task repeats, changes in interaction distance over the task repeat order were graphed and compared across group forms, as shown in Figure 3. From the first task repeat to the fifth, the three smaller-sized group forms showed little change in participants' average interaction distance (ANOVA, F(4, 596) = 2.03, p = 0.176), but the three larger-sized group forms showed significant reductions in interaction distance (F(4, 596) = 124.09, p < 0.001). Taking the fifth group form as an example, the average interaction distance in the first task repeat was 1.8 times that in the last task repeat. We speculate that a sense of competition and pressure drove the participants in crowded groups to approach the large display gradually, aiming at better task completion performance.
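The distance sampling itself is straightforward; the following is a minimal sketch, assuming the motion sensing camera streams per-user 3D torso coordinates (in meters, camera frame) every 3 s. The frame format and names are hypothetical.

```python
import numpy as np

SAMPLE_INTERVAL_S = 3.0          # sampling interval chosen in the study
DISPLAY_CENTER = np.zeros(3)     # camera sits at the display's top-center

def user_display_distances(frames):
    """frames: iterable of {user_id: (x, y, z)} dicts, one dict per 3 s sample.
    Returns {user_id: [user-to-display distance per sample, in meters]}."""
    distances = {}
    for frame in frames:
        for uid, xyz in frame.items():
            d = float(np.linalg.norm(np.asarray(xyz, dtype=float) - DISPLAY_CENTER))
            distances.setdefault(uid, []).append(d)
    return distances
```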
Between-participant interpersonal proximity, calculated from the three-dimensional coordinates detected by the motion sensing camera, was also compared among groups from the second to the sixth group form, as shown in Table 3. A statistical analysis proved that group size had a significant influence on interpersonal proximity (F(3, 147) = 823.56, p < 0.001, ηp² = 0.81); more specifically, a larger group size resulted in a smaller interpersonal proximity among participants. Given an equal group size, acquainted participants in the fourth group form had a significantly closer interpersonal proximity than unacquainted ones in the fifth group form (two-tailed T-test, t(199) = −19.07, p < 0.001).
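A minimal sketch of the proximity computation, reusing the hypothetical per-frame coordinate format from the sketch above:

```python
from itertools import combinations

import numpy as np

def interpersonal_proximity(frame):
    """frame: {user_id: (x, y, z)} for one 3 s sample.
    Returns the mean pairwise Euclidean distance between all users (meters)."""
    points = {uid: np.asarray(xyz, dtype=float) for uid, xyz in frame.items()}
    pair_dists = [np.linalg.norm(points[a] - points[b])
                  for a, b in combinations(points, 2)]
    return float(np.mean(pair_dists)) if pair_dists else float("nan")
```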
Given the dynamic data throughout the study procedure, the changing process of interpersonal proximity was revealed and compared between the fourth and fifth group forms, as shown in Figure 4. The fourth group form showed a larger decrease in interpersonal proximity in the approaching phase. Taking one group of the fourth form and one group of the fifth form as examples, from the beginning of the task to approximately 90 s, the eight participants in the fifth-form group reached and sustained a relatively constant interpersonal proximity of approximately 0.27 m, whereas the participants in the fourth-form group reached and sustained a closer proximity: the four acquainted participants in this group had an interpersonal proximity of approximately 0.15 m, while the other four unacquainted participants had an interpersonal proximity of approximately 0.2 m.
Hypothesis 3: A larger group size generated more frequent position changes among group members, and a close relationship resulted in fewer position adjustments.
In our observations, we found that participants stayed at a relatively constant position when concentrating on the task; we also confirmed the correlation between position change frequency and participant concentration through the eye movement and fixation data. Thus, a higher frequency of position changes indicates a lower concentration on the task. To compare the participants' position change frequencies in different groups, we went through the videos again and counted each participant's position changes. A position change was recorded whenever a participant was observed to move their feet and shift position by more than 0.5 m. This 0.5 m threshold was defined based on an empirical test: a larger threshold omits some genuine but slight movements, while a smaller one counts occasional but unnecessary changes. Table 4 presents the statistical results for position change frequency. Group size had a significant effect on position change frequency (F(4, 196) = 19.93, p < 0.001, ηp² = 0.60); a larger group size caused a higher position change frequency, which in turn indicated a lower concentration on the task. Comparing position change frequencies between acquainted and unacquainted participants in the fourth and fifth group forms, we found that acquainted participants had a significantly lower position change frequency than unacquainted ones (two-tailed T-test, t(49) = −4.02, p < 0.001). Between acquainted and unacquainted participants in the sixth group form, the difference became more obvious: in the 10 groups of the sixth form, the 42 acquainted participants had a mean position change frequency of 11.32 times per task round, while the other 118 unacquainted participants had a mean frequency of 23.94 times per task round; the difference was significant (two-tailed T-test, t(49) = −11.07, p < 0.001).
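A minimal sketch of this counting rule, assuming one participant's floor positions sampled in temporal order (the data format is hypothetical; the 0.5 m threshold follows the text):

```python
import numpy as np

CHANGE_THRESHOLD_M = 0.5   # empirically chosen threshold from the text

def count_position_changes(positions):
    """positions: sequence of (x, z) floor coordinates in meters, in time order.
    Counts a change whenever the participant moves > 0.5 m from the last
    settled position, which then becomes the new reference."""
    anchor = np.asarray(positions[0], dtype=float)
    changes = 0
    for p in positions[1:]:
        p = np.asarray(p, dtype=float)
        if np.linalg.norm(p - anchor) > CHANGE_THRESHOLD_M:
            changes += 1
            anchor = p
    return changes

# Example: two genuine moves; small jitters are ignored.
print(count_position_changes([(0, 0), (0.1, 0), (0.7, 0), (0.75, 0.05), (1.4, 0)]))  # -> 2
```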

4.2. How Participants Participate in the Large Display Interaction

In the participating-in phase, the participant observed the surrounding environment and parallel users and, based on this, judged how to participate in the large display use: either using it in parallel for individual objectives without concern for others, or joining collaborative tasks with others. This phase reflected the behavioral patterns and strategies participants adopted to take part in parallel interaction.
Hypothesis 4: Parallel use of the large display gave rise to occasional collaborations, particularly among acquainted participants, and these occasional collaborations were found to improve task performance, such as task completion time efficiency.
In parallel use of the large display information system, several types of collaboration were captured, not only in acquainted groups but also among unacquainted participants. These collaborations included chatting about the task and the large display system, watching and comparing task sheets, and passing sheets from one participant to another. Among the acquainted participants in the 10 groups of the fourth form, a mean of 3.5 occasional collaborations was captured over the entire task process, all lasting longer than 10 s. However, among the eight unacquainted participants in the 10 groups of the fifth form, only a mean of 0.75 occasional collaborations was identified, none lasting longer than 5 s. Among these, one interesting moment was observed in a group of the fourth form, in which one unacquainted participant compared his task sheet with another participant's and asked to share task information while the four acquainted participants were discussing the information display system. Besides occasional collaborations, other group activities were also noticed. For example, participants in the largest groups, of the sixth form, were frequently observed to synchronously adjust their positions when one participant moved in front of the large display.
To evaluate task performance and explore its influencing factors, the task completion times of all participants in the 60 groups were compared, as shown in Table 5. The results showed that group size had no significant effect on task completion time (F(4, 196) = 0.059, p = 0.981, ηp² = 0.02). Given an equal group size in the fourth and fifth group forms, acquainted participants had a significantly shorter task completion time than unacquainted participants (two-tailed T-test, t(49) = −4.31, p < 0.001). In the 10 groups of the sixth form, the difference in task completion time efficiency between acquainted and unacquainted participants became more obvious: the acquainted participants spent an average of 211.7 s completing the required task over the five task repeats, far shorter than the unacquainted participants (Munacquainted = 299.4 s; t(49) = −6.25, p < 0.001). To measure the effect of relationship on task performance, the experimenter went through the recorded videos and extracted all fragments in which participants looked at the large display. It was found that the 82 acquainted participants in the fourth and sixth group forms spent a significantly shorter time (Macquainted = 99.6 s) on the information-searching task than the other 158 unacquainted participants (Munacquainted = 172.9 s).
Hypothesis 5: Participants standing at a larger distance from the large display showed less enthusiasm for participating in the task than participants standing closer to the display, and higher participating enthusiasm resulted in higher task completion time efficiency.
It was observed that in groups with four or more parallel participants, those who stood farther from the large display were more easily disturbed by other environmental stimuli than those standing close to the display, and showed less enthusiasm for the task. For example, in one group of the third form, two female participants stood far away from the other participants (spatial distance ≥ 2.0 m); one was not even fully within the camera's detectable region and was observed casually operating her mobile phone during the task. In one group of the fourth form, the four acquainted participants were observed to casually cooperate and communicate, but the unacquainted participants showed less engagement in collaboration and stood farther away. Such findings indicate that participants standing close to the large display and participants standing at a larger distance showed obvious differences in task engagement and task completion time efficiency, but they do not demonstrate that a group with a closer overall interaction distance completed the task more efficiently than a group standing farther from the display. In fact, as shown in Table 5, the aforementioned analysis proved that group size had no obvious influence on a group's overall task completion time efficiency. Based on our observation and analysis of group behaviors, we speculate that differences in participants' enthusiasm across groups were the main cause of the task efficiency differences at different interaction distances, while producing little difference in overall efficiency across different-sized groups. More specifically, in small groups with four or fewer participants, participants' standing distances and their enthusiasm for the experimental task were relatively equal, but in larger groups, where more participants completed the task simultaneously, a portion of them were motivated to occupy a closer interaction distance and complete the task more rapidly than the others; this portion of participants showed higher enthusiasm and motivation to take part in the task than participants in smaller groups did, and thereby improved their own task completion time efficiency. From this perspective, although participants in a crowded group stood at a larger average interaction distance than those in a smaller group, some of them achieved a higher task completion efficiency than the others, which ultimately kept the crowded group's overall task completion efficiency comparable to that of smaller groups.
Given the participants' task completion times and their standing distances from the large display throughout the entire task process, a general correlation between interaction distance and task completion time was identified, as shown in Figure 5. Two general trends were identified in the graph: (1) at close distances, from 0.7 m to approximately 1.2 m, the participant's task completion time appeared negatively correlated with interaction distance; (2) from 1.2 m outward, however, task completion time was generally positively correlated with interaction distance. These two trends were drawn by a sixth-order polynomial fit, which was proven to have an acceptable goodness of fit (r² = 0.755); it does not show that task completion time efficiency has a precisely positive or negative relationship with interaction distance, nor does it indicate an optimal interaction distance at which task completion time efficiency is highest. According to our analysis of group behaviors, as interpreted previously, some participants in crowded groups were more motivated to achieve a better performance than the others; these participants were observed to occupy closer and more central positions toward the large display, and their task completion efficiency was indeed found to outperform that of the others. We suppose that the statistical data of these more motivated participants were the main reason for the relatively shorter task completion times at interaction distances closer than 1.2 m. Beyond this point, as interaction distance increased, task completion time efficiency showed a decreasing trend.
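The fit itself can be reproduced with a few lines of NumPy; the following is a minimal sketch with synthetic placeholder data (the study's real distance and time measurements are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical placeholder data: interaction distance (m) vs. completion time (s),
# shaped roughly like the U-trend described in the text.
distance_m = rng.uniform(0.7, 3.0, size=200)
time_s = 150 + 80 * (distance_m - 1.2) ** 2 + rng.normal(0, 20, size=200)

coeffs = np.polyfit(distance_m, time_s, deg=6)   # sixth-order polynomial fit
predicted = np.polyval(coeffs, distance_m)

# Coefficient of determination (the text reports r^2 = 0.755 for the real data).
ss_res = np.sum((time_s - predicted) ** 2)
ss_tot = np.sum((time_s - time_s.mean()) ** 2)
print(f"r^2 = {1 - ss_res / ss_tot:.3f}")
```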

4.3. How Participants Interact with the Large Display

In the interacting phase, the participants' behavioral patterns in large display use and their perceived usability and user experience of the information display system were the main considerations. Perceived usability was evaluated with the SUS measurement, while perceived user experience was evaluated with the UES measurement. Participants' visual behaviors and task concentration were evaluated based on the eye movement tracking data.
Hypothesis 6: A larger group size had a negative influence on the perceived usability and user experience of the information display system.
Hypothesis 7: A strong relationship among parallel participants had a promoting effect on both perceived usability and user experience.
Based on the collected SUS data, users' perceived usability of the information display system was measured first. Table 6 shows the statistical results of the SUS measurement. As interpreted in the preceding section, the SUS adopted in this study was derived from the classical system usability scale proposed by Brooke [41]. As shown in Table 6, it consists of 10 statements: 5 positively stated (P1 to P5) and 5 negatively stated (N1 to N5). Each statement is rated on a 5-point Likert scale from 1 to 5 according to the level of agreement, where 1 represents "absolutely disagree" and 5 represents "absolutely agree". Given the ratings on the 10 statements, an overall SUS score (abbreviated as S) is calculated as follows:
S = \left( \sum_{i=1}^{5} (P_i - 1) + \sum_{j=1}^{5} (5 - N_j) \right) \times 2.5
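As a sanity check on the formula, the following is a minimal scoring sketch (the example ratings are hypothetical):

```python
def sus_score(positive, negative):
    """positive: ratings P1..P5, negative: ratings N1..N5, each in 1..5.
    Implements S = (sum(P_i - 1) + sum(5 - N_j)) * 2.5, yielding 0..100."""
    assert len(positive) == len(negative) == 5
    return (sum(p - 1 for p in positive) + sum(5 - n for n in negative)) * 2.5

# Example: every positive item rated 4, every negative item rated 2.
print(sus_score([4] * 5, [2] * 5))   # -> 75.0
```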
Therefore, a higher S represents a higher evaluation of system usability. Results showed that the overall evaluation scores in the six group forms were all higher than 80. In the 10 groups of the sixth form, the acquainted participants also had a positive evaluation score of 81.25, while the unacquainted participants had a comparatively lower score of 75.18. In general, all participants were more or less satisfied with the large display system in the study. However, differences were also observed among group forms; all differences reported below were significant in the nonparametric test (Kruskal–Wallis test with post hoc Dunn test). First, a larger group size resulted in lower satisfaction with perceived usability, as seen in the evaluation items P2, P4, P5, N2, N3, and N5. Moreover, the acquainted participants in both the fourth and sixth group forms had higher SUS scores than the unacquainted participants in these forms, indicating that a close relationship benefited users' perceived usability in public use of the large display system. This difference was particularly distinct for evaluation item P4, "Common users can learn to use this system very quickly". Acquainted participants strongly agreed that the large display system was easy enough for common users to learn, but unacquainted participants did not. We speculate that collaborations among acquaintances made them more confident and familiar with the system, which promoted their agreement with this statement. In the groups of the sixth form, there was also a large distinction between acquainted and unacquainted participants for evaluation item N3, "The system is cumbersome to use". More than a third of the unacquainted participants gave a score of 3.0 or above, agreeing that the information system was cumbersome to use, whereas the acquainted participants all rated it 1.0. We speculate that crowding was the main reason users perceived the system as cumbersome in parallel use of a public large display, but the collaborations between acquaintances eliminated this effect.
Based on the UES data, users’ perceived experience of the information display system was also measured. The UES measurement was derived from the AttrakDiff scale [42], which consists of 28 pairs of bipolar statements, such as “complicated vs. simple”; each pair is rated on a 7-step scale from “−3” to “+3”. The 28 pairs are divided into four groups, which correspond to four qualities, respectively (a minimal scoring sketch is given after the list below):
(1) “Pragmatic quality” (PQ): measures whether the system functions and the interface display are appropriate for achieving the task;
(2) “Hedonic quality–identity” (HQ-I): measures whether the system and interface are comfortable and satisfying to use;
(3) “Hedonic quality–stimulation” (HQ-S): measures whether using the system is beneficial for developing personal skills and knowledge;
(4) “Attractiveness” (ATT): measures the overall appeal of and subjective preference for the system and its interfaces.
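A minimal scoring sketch under these definitions is shown below. The item names and the item-to-quality mapping are illustrative assumptions (the full scale has 28 pairs, only a subset is listed here); each rating is the signed step on the −3 to +3 scale.

```python
# Illustrative subset of the 28 bipolar AttrakDiff items, mapped to the
# four qualities; ratings are signed steps on the -3..+3 scale.
QUALITY_ITEMS = {
    "PQ":   ["confusing_vs_clearly_structured", "impractical_vs_practical",
             "complicated_vs_simple"],
    "HQ-I": ["isolating_vs_connective", "unprofessional_vs_professional"],
    "HQ-S": ["conventional_vs_novel", "conservative_vs_innovative"],
    "ATT":  ["unpleasant_vs_pleasant", "unattractive_vs_attractive"],
}

def quality_scores(ratings):
    """Mean rating per quality group for one participant."""
    return {q: sum(ratings[i] for i in items) / len(items)
            for q, items in QUALITY_ITEMS.items()}

sample = {  # one participant's ratings (hypothetical values)
    "confusing_vs_clearly_structured": 2, "impractical_vs_practical": 1,
    "complicated_vs_simple": 2, "isolating_vs_connective": 0,
    "unprofessional_vs_professional": 1, "conventional_vs_novel": -1,
    "conservative_vs_innovative": 0, "unpleasant_vs_pleasant": 2,
    "unattractive_vs_attractive": 1,
}
print(quality_scores(sample))  # e.g., {'PQ': 1.67, 'HQ-I': 0.5, ...}
```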
Figure 6 provides a statistical graph of the UES results. In the PQ measurement (Figure 6a, “user experience PQ results”), the acquainted participants in the fourth and sixth group forms perceived the interface as leaning toward the “human” characteristic, while the unacquainted participants rated it as “technical”. In the “confusing vs. clearly structured”, “impractical vs. practical”, and “complicated vs. simple” items, participants from large groups gave more negative evaluations than those from small groups (χ²(2) = 29.24, p < 0.001), and acquainted participants gave more positive evaluations than unacquainted participants (χ²(2) = 11.76, p < 0.001). In the HQ-I measurement (Figure 6b, “user experience HQ-I results”), acquainted participants evaluated almost all qualities more positively than the unacquainted participants (χ²(2) = 42.22, p < 0.001). In the HQ-S measurement (Figure 6c, “user experience HQ-S results”), the three smaller group forms (the first, second, and third) gave lower evaluations of the “novel”, “innovative”, and “inventive” experiences than the three larger group forms (χ²(2) = 13.91, p < 0.001), indicating that the number of parallel users had a promoting influence on the evaluation of the HQ-S quality; the more parallel users, the more likely participants were to consider the information display system challenging and innovative and to expect more benefits from using it. Finally, in the ATT measurement (Figure 6d, “user experience ATT results”), participants in the three large group forms did not consider the system “likable”, “attractive”, or “pleasant”, while those in smaller groups evaluated these experiences more positively (χ²(2) = 9.98, p < 0.001). In addition, acquainted participants were more satisfied with the system’s attractiveness than the unacquainted participants (χ²(2) = 36.40, p < 0.001).
To sum up, both group size and interpersonal relationship had significant influences on perceived experience quality. In general, participants from larger groups gave lower evaluations of the perceived user experience and were less satisfied with the practical functions and benefits that the large display system provided. Furthermore, group participants with a close relationship evaluated the perceived user experience of the large display system more positively than unacquainted participants.
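As a sketch of how the nonparametric group comparisons reported above can be computed, the snippet below applies SciPy’s Kruskal–Wallis test to per-participant scores grouped by form. The sample values are hypothetical, and the post hoc Dunn test would in practice be run with a dedicated package such as scikit-posthocs.

```python
from scipy import stats

# Hypothetical per-participant scores for three of the six group forms
# (the study compared all six forms plus the acquainted/unacquainted
# splits of the 4th and 6th forms).
form_1 = [95.0, 92.5, 90.0, 92.5, 95.0]
form_3 = [87.5, 85.0, 90.0, 87.5, 85.0]
form_6 = [77.5, 72.5, 80.0, 75.0, 72.5]

h, p = stats.kruskal(form_1, form_3, form_6)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise post hoc comparisons (Dunn test) can then be obtained with,
# e.g., scikit_posthocs.posthoc_dunn([form_1, form_3, form_6]).
```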
Hypothesis 8: 
A large group size has a negative impact on the participant’s concentration on the information search task.
Hypothesis 9: 
A strong relationship among parallel participants has a negative impact on visual attention, but little influence on task completion time efficiency.
The eye movement tracking data consisted of the participant’s gaze fixation positions sampled at 500 ms intervals. From these data, fixations outside the large display area were identified. The results showed that participants in the three small group forms spent 93.8%, 92.0%, and 89.2% of the time, respectively, fixating on the large display interface, while participants in the three larger group forms spent 64.8%, 76.3%, and 62.5%, respectively, indicating that group size had a negative influence on the participant’s concentration on the task (F(5, 20) = 16.43, p < 0.001, ηp² = 0.49). In addition, the acquainted participants in the fourth and sixth group forms were found to shift their gaze away from the display content more frequently than the other participants (time percentage of eye fixation on the display: Macquainted = 53.5%, Munacquainted = 66.8%; t-test, t(49) = −9.28, p < 0.001), although they achieved a higher task completion time efficiency than the unacquainted ones. Users’ visual behaviors, e.g., eye fixations and eye movement speeds, showed distinct differences across different-sized groups. Figure 7 visualizes the eye fixation trajectories of two participants, from groups of the sixth and the second forms, respectively, over a short interval of 10 s; each blue or red dot represents one fixation on the information interface, a larger dot represents a longer fixation duration, and the numbers inside the dots represent the order of the fixations. The graph shows an obvious distinction in fixation amount and duration between the two participants: the participant from the larger group (Figure 7a) showed more rapid eye movements and a larger number of eye fixations on the interface than the participant from the smaller group (Figure 7b), indicating that a larger group size had a negative influence on visual concentration on the interface.
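The on-display fixation percentages above follow from a simple classification of each gaze sample against the display bounds. Below is a minimal sketch, assuming samples arrive as (x, y) positions in the display coordinate system with None for samples the tracker could not resolve; the resolution values are illustrative, not the study’s apparatus specification.

```python
DISPLAY_W, DISPLAY_H = 1920, 1080  # interface resolution (assumed)

def on_display_ratio(samples):
    """Fraction of valid gaze samples that fall inside the display area."""
    valid = [s for s in samples if s is not None]
    if not valid:
        return 0.0
    inside = sum(1 for (x, y) in valid
                 if 0 <= x < DISPLAY_W and 0 <= y < DISPLAY_H)
    return inside / len(valid)

# Four valid samples, three inside the display bounds -> 75.0%.
samples = [(300, 400), (1500, 900), (-120, 510), None, (800, 200)]
print(f"{on_display_ratio(samples):.1%}")
```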
In addition, a collection of eye movement heat maps was generated and compared, as shown in Figure 8. A heat map is a visualization technique widely used in analyzing visual attention and task concentration. It reveals which information is most frequently attended to on a surface, and is thus a valid way of evaluating general information user interfaces. In this study, the dwell time of visual fixations across the whole interface area was measured and visualized. The color in the heat map reflects the fixation frequency throughout the task: blue or green indicates that an interface area was rarely gazed at, while red indicates that it was gazed at more frequently. The results showed that participants in the three smaller group forms generated more focused fixations on the interface. In particular, the information in the third, fourth, and fifth columns was the most frequently focused on, as shown in Figure 8a–c. We speculate that the information displayed in these three columns was more complex than the rest, which is why eye fixations at these positions were more pronounced. In contrast, the eye fixations of participants in the three larger group forms were more scattered, high-frequency fixations were less often observed, and quite a large proportion of fixations fell on the upper center of the display margin where the motion sensing camera was mounted, as shown in Figure 8d–f. The eye movement heat maps not only revealed what information the participants focused on, but also reflected the degree of concentration of their visual attention.
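The following sketch shows how such a dwell-time heat map can be accumulated from fixation data, assuming each fixation is recorded as (x, y, duration in ms) in display coordinates; the grid and display dimensions are illustrative. Rendering the normalized grid with a blue-to-red color map (e.g., matplotlib’s imshow) reproduces the visualization style described above.

```python
import numpy as np

GRID_COLS, GRID_ROWS = 32, 18      # coarse analysis grid (assumed)
DISPLAY_W, DISPLAY_H = 1920, 1080  # interface resolution (assumed)

def dwell_heatmap(fixations):
    """Accumulate fixation durations into a grid, normalized to [0, 1]."""
    grid = np.zeros((GRID_ROWS, GRID_COLS))
    for x, y, dur in fixations:
        col = min(int(x / DISPLAY_W * GRID_COLS), GRID_COLS - 1)
        row = min(int(y / DISPLAY_H * GRID_ROWS), GRID_ROWS - 1)
        grid[row, col] += dur
    return grid / grid.max() if grid.max() > 0 else grid

fixes = [(960, 540, 420), (980, 520, 310), (200, 900, 150)]
heat = dwell_heatmap(fixes)  # values near 1.0 map to the "red" areas
```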

5. Discussion and Future Work

The findings of this study are more extensive than those of previous research: they not only verify earlier findings but also extend them toward a deeper understanding of the parallel use of large displays in a public site and of users’ behavioral patterns. On the one hand, this study confirmed the practical advantages of large displays in a wide range of applications. A large display provides a wide interactive space and an immersive environment, which not only benefit parallel use by multiple participants but also support collaboration among team members well. This is generally consistent with the finding of Ardito et al. [13] in their survey on state-of-the-art large display use. On the other hand, this study contributes a new and comprehensive understanding of the effects of user group size and between-user relationship on the parallel use of large displays. In previous research, parallel use of large displays was often treated as an individual behavior [24], and group-relevant factors such as group size and between-user relationship were rarely considered. In this study, the practical influences of these two factors are verified.
Group size and between-user relationship were chosen as research targets for two reasons. First, group size was rarely considered in previous research; where it was, only two-user situations were examined, as in [25,26]. More complex group configurations, e.g., a group consisting of both acquainted and unacquainted members, were seldom considered. Other work, such as [14], focused on the age factor and proved its practical influence on group behaviors; since studies on the age effect are already abundant, age was not considered in the present study. The group size factor and its influences on group behaviors and individual task performance, however, have been neglected. Second, earlier studies on between-user relationship were one-sided: either cooperative relationships between collaborators [26] or weak relationships among unacquainted users [23] were surveyed. In parallel use of public large displays, however, strong and weak relationships can be concurrent. How acquainted and unacquainted users use the same large display, and whether there are interaction effects between different users, were still open questions.
More specifically, this study obtained several findings that have not been reported in previous research. First, group size was found to be closely related to the interaction distance that group users kept from the display: the more parallel users, the larger the interaction distance. In a wide space with a limited number of parallel users, the interaction distance actually reflected the user’s concentration on the task; a closer interaction distance indicated deeper concentration on the information-searching task. In a more crowded group, with eight or more participants in the present experiment, it was also shown that the user’s intention of maintaining personal space, i.e., a socially safe distance, was a vital reason for staying relatively far from the display. Furthermore, a strong relationship within a group was proven to have a promoting effect on the parallel users’ concentration on the task, which in turn increased the users’ overall efficiency in completing the required tasks; a close relationship led to more collaborations among acquainted users, which also stimulated unacquainted users to participate in the collaborative work. From these findings, strategies and guidelines can be summarized for designing public large display-based applications, including, but not limited to, public advertising boards, digital bulletin boards, and information consulting boards in public sites.
The first guideline concerns the group size effect and its implications for designing large display-based interactive systems. When a large display is used as an interactive terminal in a public site, its expected use frequency and typical user group size should be taken into consideration, and its user interface design and interaction modes should be determined accordingly. If the large display is used in a crowded situation, where more than five users often use the system simultaneously, it should be designed to support parallel interaction, e.g., allowing some users to watch and search target information from a distance while others interact with the user interface at close proximity. If the large display is instead used less intensively, with fewer than three parallel users in most cases, its user interface should be designed to support close and collaborative interaction modes.

The second guideline concerns the influence of the social connection between users and its guidance for designing large display systems. Multiple motion sensing and behavior-recognizing sensors can be employed in large displays to realize user-adapted interaction techniques: for example, when two users are detected to have occasional interactions with each other, the large display system can provide a collaborative interaction mode, whereas when two users are detected to be using the large display independently, it can provide a parallel-use mode. The social connection between users was proven to affect the users’ engagement and enthusiasm for the task, their task performance, and their participating behaviors. On the one hand, a close relationship between users was found to improve the group’s efficiency in completing specific tasks; in this experiment, some acquainted participants achieved quite satisfying task completion efficiency even though they showed low concentration and relatively scattered eye fixations on the large display interface. These acquainted participants were also found to engage in occasional collaborations more frequently, which improved their engagement and subjective satisfaction with the task. This suggests interaction mechanisms and design strategies for developing large display-supported collaborative systems in public and semipublic situations; for example, gaming and sharing stimuli can be applied to help users with special requirements, e.g., disabled users or those facing language barriers, participate in the system use more smoothly. On the other hand, users with a weak social connection showed few collaborative behaviors and used the large display in a parallel mode, completing their tasks individually without relying on others. In this parallel-use mode, user performance in completing the task was closely related to the position the user stood at; users standing farther from the large display were more easily disturbed by surrounding activities, and their attention was more easily diverted from the large display user interface. This also implies strategies and guidelines for improving the usability and ergonomic quality of public large display systems.
For example, user detection and responsive user interface techniques can be exploited in the large display system; when users are detected to remain at an excessive distance, additional stimuli can be presented on the display to motivate them to approach, or the user interface layout can be adjusted and the information complexity reduced to facilitate interaction at a large distance.
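A minimal sketch of this kind of distance-responsive logic is given below, assuming the motion sensing camera reports each detected user’s distance to the display in millimetres; the thresholds and mode names are illustrative assumptions, not values from the study.

```python
CLOSE_MM, FAR_MM = 1500, 2500  # illustrative proximity thresholds

def ui_mode(user_distances_mm):
    """Pick an interface strategy from the detected user distances."""
    if not user_distances_mm:
        return "attract"            # idle: show attention-drawing stimuli
    nearest = min(user_distances_mm)
    if nearest > FAR_MM:
        return "simplified-remote"  # enlarge items, reduce information density
    if nearest < CLOSE_MM:
        return "direct-interaction" # full interface for close-proximity use
    return "mixed"                  # support watching and interacting in parallel

print(ui_mode([3100, 2700]))  # -> simplified-remote
```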
As a typical example of applying sensing technology to human motion detection and behavior recognition, this study used an RGB-depth camera and a mobile eye movement tracking headset to recognize explicit and implicit behaviors, respectively. RGB-depth cameras, such as the Microsoft Kinect or the ASUS Xtion PRO (the latter of which was used in this study), have been widely exploited to track and evaluate human limb motor functions and behaviors in previous research [35], but have seldom been applied to detecting and recognizing individual and group activities. In this regard, this study is a good example of applying motion sensing technology to human activity analysis. The eye movement tracking sensor, which has mostly been used in closed experimental settings such as private offices and driving cabins, was used in a simulated public situation in this study and was found to be valid and beneficial for discovering implicit behavioral patterns that are often missed in observational studies. From this perspective, eye movement tracking sensors are an effective means of understanding human behavioral patterns and underlying intentions. To sum up, the integrated application of motion sensing and eye movement tracking sensors provides a deeper and more comprehensive understanding of users’ behaviors and activities in parallel use of a public large display. Such an integrated application of explicit and implicit motion sensing devices can also be widely used in ergonomics research in office and domestic environments.
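As a sketch of the integration itself, the snippet below aligns the two recordings by matching each eye tracking sample to the nearest body-tracking frame by timestamp, so that implicit (eye) and explicit (body) behaviors can be analyzed jointly; the stream format is an assumption for illustration, not the study’s actual data format.

```python
import bisect

def align(gaze_stream, body_stream):
    """Pair each gaze sample with the body frame nearest in time.
    Both streams are lists of (timestamp_ms, payload), sorted by time."""
    body_ts = [t for t, _ in body_stream]
    pairs = []
    for t, gaze in gaze_stream:
        i = bisect.bisect_left(body_ts, t)
        # choose the closer of the two neighboring body frames
        candidates = [j for j in (i - 1, i) if 0 <= j < len(body_stream)]
        j = min(candidates, key=lambda k: abs(body_ts[k] - t))
        pairs.append((t, gaze, body_stream[j][1]))
    return pairs

gaze = [(0, "fix-A"), (500, "fix-B")]                     # 500 ms sampling
body = [(33, "pose-1"), (467, "pose-2"), (533, "pose-3")] # ~30 fps frames
print(align(gaze, body))  # [(0, 'fix-A', 'pose-1'), (500, 'fix-B', 'pose-2')]
```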
The sample size is a limitation of this study. First, in each group, only one participant wore the eye movement tracking headset, because the host computer could not manage two or more eye movement tracking devices streaming high-quality video simultaneously; thus, only one set of eye movement tracking data was collected per group. Second, for an empirical study with quantitative analyses, 390 participants is not a large enough sample to guarantee fully generalizable results. To make the results more reliable, we adopted multiple qualitative and quantitative evaluation methods to reach an objective conclusion. For example, in the task completion time evaluation, participants in small groups produced shorter task completion times than those in larger groups; in the eye fixation evaluation, participants from small groups spent more time fixating on the display than those in larger groups; and in the observational evaluation based on the recorded videos, participants from small groups stood closer to the display and showed higher concentration on the display content than those in larger groups. Based on these results, it can be concluded that a larger group size has a negative influence on the participant’s concentration in large display use. In addition, to collect a larger amount of experimental data and improve the effectiveness of the quantitative analyses, all groups were required to repeat the experimental task 5 times.
It must also be acknowledged that this is not an exhaustive study of all relevant influential factors; contextual factors such as lighting conditions and environmental sound also potentially influence user behaviors and task performance on large displays, but were not considered in the present study. Two lines of research can be pursued in future work. First, a contextual study in a more complex public site is planned, in which more participants and more dynamic environmental factors (e.g., outdoor lighting brightness and noise level) will be considered. Second, more complex interaction tasks on large displays will be simulated, and users’ task performance and individual and group activities will be evaluated.

6. Conclusions

This study investigated how multiple users conducted individual tasks and participated in group activities in parallel use of a public large display. In particular, it investigated the three phases of the entire process of parallel use: (1) approaching the large display, (2) participating with others, and (3) interacting with the large display. The results showed that group-relevant factors, i.e., group size and the relationship between parallel users, had practical impacts on task performance and behaviors throughout the three phases. In the approaching phase, group size was proven to influence the users’ interaction distance, which in turn affected their time efficiency in completing their respective tasks; a larger group size was also found to cause more frequent position changes among group members. In the participating phase, a close relationship between users was found to incur more collaborations, which had a beneficial effect on concentration and efficiency in completing the required tasks. In the interacting phase, participants from larger groups were less satisfied with the large display use, in terms of both perceived system usability and user experience quality; participants with a close relationship evaluated the perceived usability and user experience quality more highly than unacquainted participants. All the above findings were drawn from the combined data of a motion sensing camera and a mobile eye movement tracking sensor, indicating that an integrated application of body and eye movement sensing technologies is a valid and feasible means of understanding individual and group activities in various situations.

Author Contributions

Conceptualization, X.L. and L.F.; methodology, X.L., L.F. and X.S.; software, L.F., X.S., M.M. and Y.Z.; validation, X.L., L.F., X.S., M.M., P.H. and Y.D.; formal analysis, X.L., M.M., Y.Z. and Y.D.; investigation, X.L. and M.M.; resources, Y.D.; data curation, L.F. and X.L.; writing—original draft preparation, X.L. and L.F.; writing—review and editing, X.L., Y.Z. and L.F.; visualization, X.L. and M.M.; supervision, P.H.; project administration, X.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC), grant number 61902097; PRC Industry-University Collaborative Education Program, grant Number CES/Kingfar202209RYJG15; and the Natural Science Foundation of Zhejiang Province, grant number LQ19F020010.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Hangzhou Dianzi University (10 August 2022, 15 September 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the responsible editor(s) and reviewers for their insightful comments.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1. An overall list of questions provided in the task sheet.

Serial No. | Question Statement
Q1 | What is the departure time of the flight ‘CA147’?
Q2 | What is the estimated arrival time of the flight ‘AK542’?
Q3 | What is the earliest flight from Hangzhou to Chiengmai in this month?
Q4 | How long is the flight duration of the ‘D5816’?
Q5 | What is the status of the flight ‘3K832’ on 17th August?
Q6 | What is the origin of the flight ‘FD497’?
Q7 | What is the operating company of the flight ‘CA1711’?
Q8 | What is the destination of the flight ‘DRA321’?
Q9 | What is the status of the flight ‘CA672’ at the end of August?
Q10 | What is the estimated arrival time of the first flight to Bali Island on 1st September?
Q11 | What is the last flight to Bali Island in this month?
Q12 | How long is the flight duration from Hangzhou to Bali Island on 1st September?
Q13 | How many flights from the AIR China company operate on 18th August?
Q14 | What is the earliest departure time of the flight to Dubai in the next month?
Q15 | What is the operating company of the flight ‘4AS35’?
Q16 | What is the status of the last flight from the Spring company today?
Q17 | What is the status of the flight ‘FD497’ on 8th September?
Q18 | What is the departure time of the earliest flight to Cambodia?
Q19 | How long is the flight duration of the ‘D7303’ to Colombo?
Q20 | What is the latest flight to Chiengmai at the end of this month?
Q21 | How many flights operate from Hangzhou to Phuket?
Q22 | What is the estimated arrival time of the flight ‘AKP781’ to Beijing?
Q23 | What is the earliest flight No. from Hangzhou to Tokyo in the following days?
Q24 | How many hours will the flight ‘FD497’ fly from Hangzhou to Chiengmai?
Q25 | What is the departure time of the first flight on 18th August?
Q26 | Which air company does the flight No. ‘FD497’ belong to?
Q27 | Does the flight No. ‘3K832’ take off before the noon of the day?
Q28 | What is the name of the air company that operates the flight No. ‘CA1711’?
Q29 | What is the fastest flight No. from Hangzhou to Colombo?
Q30 | How long will it take from Hangzhou to Chiengmai on a flight from the Thai Air company?

References

1. Reipschlager, P.; Flemisch, T.; Dachselt, R. Personal Augmented Reality for Information Visualization on Large Interactive Displays. IEEE Trans. Vis. Comput. Graph. 2021, 27, 1182–1192.
2. Vetter, J. Tangible Signals—Prototyping Interactive Physical Sound Displays. In Proceedings of the 15th International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’21), Salzburg, Austria, 14–17 February 2021; pp. 1–6.
3. Finke, M.; Tang, A.; Leung, R.; Blackstock, M. Lessons learned: Game design for large public displays. In Proceedings of the 3rd International Conference on Digital Interactive Media in Entertainment and Arts, Athens, Greece, 10–12 September 2008; pp. 26–33.
4. Ardito, C.; Lanzilotti, R.; Costabile, M.F.; Desolda, G. Integrating Traditional Learning and Games on Large Displays: An Experimental Study. J. Educ. Technol. Soc. 2013, 16, 44–56.
5. Chen, L.; Liang, H.-N.; Wang, J.; Qu, Y.; Yue, Y. On the Use of Large Interactive Displays to Support Collaborative Engagement and Visual Exploratory Tasks. Sensors 2021, 21, 8403.
6. Lischke, L.; Mayer, S.; Wolf, K.; Henze, N.; Schmidt, A. Screen arrangements and interaction areas for large display work places. In Proceedings of the 5th ACM International Symposium on Pervasive Displays, Oulu, Finland, 20–26 June 2016; pp. 228–234.
7. Rui, N.M.; Santos, P.A.; Correia, N. Using Personalisation to improve User Experience in Public Display Systems with Mobile Interaction. In Proceedings of the 17th International Conference on Advances in Mobile Computing & Multimedia, Munich, Germany, 2–4 December 2019; pp. 3–12.
8. Coutrix, C.; Kai, K.; Kurvinen, E.; Jacucci, G.; Mäkelä, R. FizzyVis: Designing for playful information browsing on a multi-touch public display. In Proceedings of the 2011 Conference on Designing Pleasurable Products and Interfaces, Milano, Italy, 22–25 June 2011; pp. 1–8.
9. Veriscimo, E.D.; Junior, J.; Digiampietri, L.A. Evaluating User Experience in 3D Interaction: A Systematic Review. In Proceedings of the XVI Brazilian Symposium on Information Systems, São Bernardo do Campo, Brazil, 3–6 November 2020; pp. 1–8.
10. Mateescu, M.; Pimmer, C.; Zahn, C.; Klinkhammer, D.; Reiterer, H. Collaboration on large interactive displays: A systematic review. Hum. Comput. Interact. 2019, 36, 243–277.
11. Wehbe, R.R.; Dickson, T.; Kuzminykh, A.; Nacke, L.E.; Lank, E. Personal Space in Play: Physical and Digital Boundaries in Large-Display Cooperative and Competitive Games. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–14.
12. Ghare, M.; Pafla, M.; Wong, C.; Wallace, J.R.; Scott, S.D. Increasing Passersby Engagement with Public Large Interactive Displays: A Study of Proxemics and Conation. In Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, Tokyo, Japan, 25–28 November 2018; pp. 19–32.
13. Ardito, C.; Buono, P.; Costabile, M.F.; Desolda, G. Interaction with Large Displays: A Survey. ACM Comput. Surv. 2015, 47, 1–38.
14. Tan, D.; Czerwinski, M.; Robertson, G. Women go with the (optical) flow. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Ft. Lauderdale, FL, USA, 5–10 April 2003; pp. 209–215.
15. Peltonen, P.; Kurvinen, E.; Salovaara, A.; Jacucci, G.; Saarikko, P. It’s Mine, Don’t Touch!: Interactions at a large multi-touch display in a city centre. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; pp. 1285–1294.
16. Müller, J.; Walter, R.; Bailly, G.; Nischt, M.; Alt, F. Looking glass: A field study on noticing interactivity of a shop window. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 297–306.
17. Nacenta, M.A.; Jakobsen, M.R.; Dautriche, R.; Hinrichs, U.; Carpendale, S. The LunchTable: A multi-user, multi-display system for information sharing in casual group interactions. In Proceedings of the 2012 International Symposium on Pervasive Displays, Porto, Portugal, 4–5 June 2012; pp. 1–6.
18. Dostal, J.; Hinrichs, U.; Kristensson, P.O.; Quigley, A. SpiderEyes: Designing attention- and proximity-aware collaborative interfaces for wall-sized displays. In Proceedings of the 19th International Conference on Intelligent User Interfaces, Haifa, Israel, 24–27 February 2014; pp. 143–152.
19. Marshall, P.; Hornecker, E.; Morris, R.; Dalton, N.S.; Rogers, Y. When the fingers do the talking: A study of group participation with varying constraints to a tabletop interface. In Proceedings of the 3rd IEEE International Workshop on Horizontal Interactive Human Computer Systems, Amsterdam, The Netherlands, 1–3 October 2008; pp. 33–40.
20. Sakakibara, Y.; Matsuda, Y.; Komuro, T.; Ogawa, K. Simultaneous interaction with a large display by many users. In Proceedings of the 8th ACM International Symposium on Pervasive Displays, Palermo, Italy, 12–14 June 2019; pp. 1–2.
21. Norman, D.A.; Nielsen, J. Gestural interfaces: A step backward in usability. Interactions 2010, 17, 46–49.
22. Greenberg, S.; Marquardt, N.; Ballendat, T.; Diaz-Marino, R.; Wang, M. Proxemic interactions: The new ubicomp? Interactions 2011, 18, 42–50.
23. Raudanjoki, Z.; Genç, A.; Hurtig, K.; Häkkilä, J. ShadowSparrow: An Ambient Display for Information Visualization and Notification. In Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia, Essen, Germany, 22–25 November 2020; pp. 351–353.
24. Paul, S.A.; Morris, M.R. Sensemaking in Collaborative Web Search. Hum. Comput. Interact. 2011, 26, 72–122.
25. Jakobsen, M.R.; Hornbæk, K. Proximity and physical navigation in collaborative work with a multi-touch wall-display. In Proceedings of the CHI ’12 Extended Abstracts on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 2519–2524.
26. Shoemaker, G.; Tsukitani, T.; Kitamura, Y.; Booth, K.S. Body-centric interaction techniques for very large wall displays. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, Reykjavik, Iceland, 16–20 October 2010; pp. 463–472.
27. Jakobsen, M.R.; Haile, Y.S.; Knudsen, S.; Hornbæk, K. Information Visualization and Proxemics: Design Opportunities and Empirical Findings. IEEE Trans. Vis. Comput. Graph. 2013, 19, 2386–2395.
28. Huang, E.M.; Koster, A.; Borchers, J. Overcoming Assumptions and Uncovering Practices: When Does the Public Really Look at Public Displays? In Proceedings of the International Conference on Pervasive Computing, Sydney, Australia, 19–22 May 2008; pp. 228–243.
29. Houben, S.; Weichel, C. Overcoming interaction blindness through curiosity objects. In Proceedings of the CHI ’13 Extended Abstracts on Human Factors in Computing Systems, Paris, France, 27 April 2013; pp. 1539–1544.
30. Ju, W.; Sirkin, D. Animate Objects: How Physical Motion Encourages Public Interaction. In Proceedings of the 5th International Conference on Persuasive Technology, Copenhagen, Denmark, 7–10 June 2010; pp. 40–51.
31. Alt, F.; Schneegaß, S.; Schmidt, A.; Müller, J.; Memarovic, N. How to evaluate public displays. In Proceedings of the 2012 International Symposium on Pervasive Displays, Porto, Portugal, 4–5 June 2012; pp. 1–6.
32. Williamson, J.R.; Hansen, L.K. Designing performative interactions in public spaces. In Proceedings of the ACM Conference on Designing Interactive Systems, Newcastle, UK, 11–15 June 2012; pp. 791–792.
33. Hansen, L.K.; Rico, J.; Jacucci, G.; Brewster, S.A.; Ashbrook, D. Performative interaction in public space. In Proceedings of the CHI ’11 Extended Abstracts on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 49–52.
34. Lou, X.; Fu, L.; Yan, L.; Li, X.; Hansen, P. Distance Effects on Visual Searching and Visually-Guided Free Hand Interaction on Large Displays. Int. J. Ind. Ergon. 2022, 90, 103318.
35. Faity, G.; Mottet, D.; Froger, J. Validity and Reliability of Kinect v2 for Quantifying Upper Body Kinematics during Seated Reaching. Sensors 2022, 22, 2735.
36. Lou, X.; Chen, Z.; Hansen, P.; Peng, R. Asymmetric Free-Hand Interaction on a Large Display and Inspirations for Designing Natural User Interfaces. Symmetry 2022, 14, 928.
37. Shehu, I.S.; Wang, Y.; Athuman, A.M.; Fu, X. Remote Eye Gaze Tracking Research: A Comparative Evaluation on Past and Recent Progress. Electronics 2021, 10, 3165.
38. Lee, H.C.; Lee, W.O.; Cho, C.W.; Gwon, S.Y.; Park, K.R.; Lee, H.; Cha, J. Remote Gaze Tracking System on a Large Display. Sensors 2013, 13, 13439–13463.
39. Bhatti, O.S.; Barz, M.; Sonntag, D. EyeLogin—Calibration-free Authentication Method for Public Displays Using Eye Gaze. In Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA ’21), Virtual Event, 25–27 May 2021; pp. 1–7.
40. Wang, Y.; Ding, X.; Yuan, G.; Fu, X. Dual-Cameras-Based Driver’s Eye Gaze Tracking System with Non-Linear Gaze Point Refinement. Sensors 2022, 22, 2326.
41. Brooke, J. SUS: A Quick and Dirty Usability Scale. In Usability Evaluation in Industry; CRC Press: Boca Raton, FL, USA, 1996; Volume 189, pp. 4–7.
42. Hassenzahl, M.; Burmester, M.; Koller, F. AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. Mensch Comput. 2003, 57, 187–196.
Figure 1. An example of the composition of each group form: acquainted participants are outlined in red; in each group, one participant was selected to wear the eye movement tracking headset.
Figure 2. Experimental apparatus and the study scene.
Figure 3. Interaction distance changes in 5 task repeats across 6 different group forms.
Figure 4. An example of the interpersonal proximity changes throughout the task in the 4th and 5th group forms.
Figure 5. Correlation between the participants’ interaction distance and the task completion time.
Figure 6. Statistical result of the UES measurement on 6 group forms.
Figure 7. A comparative visualization of eye fixation trajectories in different-sized groups: (a) the eye fixation trajectory in a large group of the 6th form; (b) the eye fixation trajectory in a small group of the 2nd form.
Figure 8. Eye movement heat maps in 6 group forms.
Table 1. Overall information about the composition of 60 experimental groups.

Group Form | 1st Form | 2nd Form | 3rd Form | 4th Form | 5th Form | 6th Form
Group Amount | 10 Groups | 10 Groups | 10 Groups | 10 Groups | 10 Groups | 10 Groups
Group Size | N = 1 | N = 2 | N = 4 | N = 8 | N = 8 | N = 16

Group Composition (gender, age):
Group 1 | Male, 22 | 2 Males, 20 + 21 | 2 Males + 2 Females, 20, 20, 28, 35 | (acquainted) 4 Males, all aged 22; (unacquainted) 1 Male + 3 Females, 22 × 2, 26, 30 | 5 Males + 3 Females, 20 × 2, 21 × 3, 24, 27, 30 | 10 Males + 6 Females, 19, 20 × 2, 21 × 4, 24 × 2, 26, 27, 30, 33 × 2, 37, 41
Group 2 | Female, 21 | Male + Female, 21 + 21 | 3 Males + 1 Female, 22 × 4 | (acquainted) 1 Male + 3 Females, 23 × 2, 23, 44; (unacquainted) 2 Males + 2 Females, 20, 23, 30, 58 | 8 Males, all aged 27 | 7 Males + 9 Females, 22 × 5, 26 × 3, 30 × 2, 32 × 2, 34, 36, 40, 46
Group 3 | Male, 23 | 2 Females, 21 + 21 | 2 Males + 2 Females, 20 × 2, 27, 40 | (acquainted) 2 Males + 2 Females, all aged 23; (unacquainted) 3 Males + 1 Female, 30 × 2, 33, 37 | 4 Males + 4 Females, 25 × 4, 27, 30, 43, 55 | 8 Males + 8 Females, 22 × 6, 23 × 6, 26 × 4
Group 4 | Female, 23 | Male + Female, 22 + 24 | 1 Male + 3 Females, 24, 28, 30, 44 | (acquainted) 1 Male + 3 Females, 24 × 3, 40; (unacquainted) 4 Males, 20, 24, 25, 41 | 3 Males + 5 Females, 26 × 3, 27 × 2, 33, 37, 45 | 10 Males + 6 Females, 21 × 4, 23 × 4, 25 × 4, 28 × 4
Group 5 | Male, 23 | 2 Females, 19 + 26 | 4 Females, 30 × 2, 36, 46 | (acquainted) 3 Males + 1 Female, 30 × 2, 33, 35; (unacquainted) 2 Males + 2 Females, 27 × 2, 30, 32 | 8 Males, 27 × 2, 30 × 3, 32 × 2, 40 | 4 Males + 12 Females, 24 × 4, 25 × 3, 27 × 3, 29 × 3, 33 × 2, 38
Group 6 | Male, 25 | 2 Males, 23 + 30 | 2 Males + 2 Females, all aged 24 | (acquainted) 2 Males + 2 Females, 25 × 3, 35; (unacquainted) 3 Males + 1 Female, 23, 24, 25, 30 | 5 Males + 3 Females, 30 × 4, 37, 40, 41, 47 | 7 Males + 9 Females, 20 × 2, 21 × 4, 22 × 3, 27 × 2, 30 × 2, 34 × 2, 40
Group 7 | Female, 30 | 2 Females, 27 + 33 | 3 Males + 1 Female, 30 × 3, 35 | (acquainted) 2 Males + 1 Female, 24 × 2, 30, 35; (unacquainted) 2 Males + 2 Females, 23 × 3, 42 | 4 Males + 4 Females, 25 × 4, 27 × 2, 33, 35 | 9 Males + 7 Females, 19 × 2, 21 × 3, 22 × 3, 25 × 2, 27 × 2, 29 × 2, 33, 35
Group 8 | Male, 32 | 2 Males, 30 + 33 | 2 Males + 2 Females, 22 × 2, 33, 50 | (acquainted) 3 Males + 1 Female, all aged 24; (unacquainted) 4 Males, 26 × 2, 29, 30 | 4 Males + 4 Females, 30 × 3, 32 × 3, 41 | 6 Males + 10 Females, 23 × 3, 24 × 5, 25 × 3, 26 × 2, 35 × 2, 38
Group 9 | Male, 40 | Male + Female, 38 + 46 | 1 Male + 3 Females, 23, 30, 37, 42 | (acquainted) 2 Males + 2 Females, all aged 23; (unacquainted) 2 Males + 2 Females, 20 × 2, 26, 49 | 6 Males + 2 Females, 27 × 2, 30 × 2, 31 × 2, 33, 38 | 8 Males + 8 Females, 30 × 5, 32 × 6, 37 × 2, 40, 42, 47
Group 10 | Female, 47 | 2 Males, 40 + 52 | 2 Males + 2 Females, 35 × 2, 37, 39 | (acquainted) 2 Males + 2 Females, 23 × 3, 30; (unacquainted) 4 Males, all aged 25 | 5 Males + 3 Females, 23 × 3, 25 × 3, 30, 34 | 9 Males + 7 Females, 20 × 2, 25 × 4, 26 × 3, 30 × 2, 31 × 2, 32 × 2, 36
Table 2. Group size effect on the participants’ spatial distance toward the large display.

Group Form | Sample Size | Mean Distance (mm) | Std. Dev.
1st form | 50 | 637.52 | /
2nd form | 100 | 729.76 | 189.26
3rd form | 200 | 1211.05 | 327.38
4th form | 200 (acquainted) + 200 (unacquainted) | 1506.27 | 296.53
5th form | 400 | 2000.12 | 388.18
6th form | 210 (acquainted) + 590 (unacquainted) | 2388.29 | 511.17
Between-form difference: F(4, 196) = 69.38, p < 0.001.
Table 3. A statistical result of the interpersonal proximity in group forms 2 to 6.

Group Form | Sample Size | Mean Proximity (mm) | Std. Dev.
2nd form | 100 | 862.79 | 82.20
3rd form | 200 | 589.63 | 61.93
4th form | 200 (acquainted) + 200 (unacquainted) | 269.74 | 34.01
5th form | 400 | 340.05 | 54.43
6th form | 210 (acquainted) + 590 (unacquainted) | 149.86 | 29.28
Between-form difference: F(3, 147) = 823.56, p < 0.001.
Table 4. Participants’ position change frequency difference across 6 group forms (mean position change frequencies of each participant in the group).

Group Form | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Group 6 | Group 7 | Group 8 | Group 9 | Group 10 | Mean | Std. Dev.
1st form | 5.20 | 4.40 | 4.20 | 4.00 | 4.60 | 3.80 | 5.00 | 4.80 | 4.80 | 4.40 | 4.52 | 0.68
2nd form | 7.60 | 7.20 | 8.00 | 7.80 | 7.40 | 8.40 | 8.20 | 8.00 | 7.40 | 7.20 | 7.72 | 0.86
3rd form | 12.20 | 11.40 | 11.20 | 10.60 | 11.60 | 12.40 | 12.00 | 11.80 | 11.40 | 10.80 | 11.54 | 1.16
4th form (acq.) | 8.80 | 8.60 | 7.60 | 9.60 | 10.40 | 10.00 | 9.80 | 9.60 | 10.40 | 9.80 | 11.80 | 2.59
4th form (unacq.) | 12.00 | 13.00 | 14.80 | 14.60 | 15.20 | 13.60 | 13.40 | 15.60 | 14.40 | 14.80 | – | –
5th form | 14.80 | 21.80 | 19.80 | 17.00 | 18.60 | 20.40 | 17.40 | 15.20 | 16.00 | 16.40 | 17.74 | 2.34
6th form (acq.) | 11.00 | 13.40 | 10.80 | 7.80 | 9.40 | 11.20 | 12.00 | 13.20 | 12.80 | 11.60 | 17.63 | 6.81
6th form (unacq.) | 24.00 | 22.00 | 25.60 | 27.80 | 20.40 | 19.60 | 26.00 | 24.80 | 24.40 | 24.80 | – | –
For the 4th and 6th forms, Mean and Std. Dev. cover the acquainted and unacquainted rows together. Between-form difference: F(4, 196) = 19.93, p < 0.001.
Table 5. Task completion time in different groups (participants’ mean task completion time in different group forms).

Group Form | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Group 6 | Group 7 | Group 8 | Group 9 | Group 10 | Mean | Std. Dev.
1st form | 260.0 | 253.8 | 264.4 | 260.4 | 279.8 | 267.2 | 256.4 | 265.0 | 262.8 | 254.2 | 262.40 | 7.63
2nd form | 282.8 | 256.6 | 270.4 | 263.2 | 260.8 | 269.4 | 268.8 | 272.0 | 270.4 | 268.8 | 268.32 | 7.09
3rd form | 293.8 | 242.6 | 277.4 | 251.7 | 273.4 | 270.4 | 270.6 | 274.4 | 282.2 | 268.4 | 270.49 | 11.16
4th form (acq.) | 222.6 | 243.6 | 222.5 | 217.0 | 204.4 | 211.2 | 209.8 | 225.2 | 210.4 | 224.6 | 250.98 | 33.98
4th form (unacq.) | 276.8 | 282.4 | 279.2 | 288.4 | 298.8 | 279.6 | 271.4 | 283.8 | 288.2 | 279.6 | – | –
5th form | 266.4 | 260.5 | 276.5 | 268.8 | 282.2 | 266.8 | 285.5 | 276.6 | 270.8 | 262.0 | 271.61 | 9.66
6th form (acq.) | 225.4 | 215.6 | 209.8 | 186.8 | 200.5 | 220.4 | 213.8 | 204.6 | 226.0 | 214.4 | 255.55 | 47.00
6th form (unacq.) | 325.5 | 294.5 | 290.6 | 274.8 | 290.2 | 311.4 | 295.5 | 304.6 | 286.2 | 320.4 | – | –
For the 4th and 6th forms, Mean and Std. Dev. cover the acquainted and unacquainted rows together. Acquainted vs. unacquainted t-tests: 4th form, t(49) = −4.31, p < 0.001; 6th form, t(49) = −6.25, p < 0.001.
Table 6. SUS evaluation result of the 6 group forms (mean rating scores; 5 positive and 5 negative statements).

System Usability Relevant Statement | 1st Form | 2nd Form | 3rd Form | 4th Form (acq.) | 4th Form (unacq.) | 5th Form | 6th Form (acq.) | 6th Form (unacq.)
P1. I would like to use this system frequently. | 5.00 | 4.50 | 4.00 | 4.50 | 4.00 | 3.88 | 4.00 | 3.57
P2. The system interface is easy to use. | 4.00 | 3.50 | 3.50 | 3.25 | 3.75 | 3.13 | 3.50 | 3.00
P3. Functions in the system are well designed. | 5.00 | 5.00 | 4.75 | 4.75 | 4.25 | 4.25 | 3.50 | 3.79
P4. Common users can learn to use this system very quickly. | 4.00 | 4.50 | 4.25 | 5.00 | 3.50 | 3.63 | 4.00 | 3.71
P5. I am confident to use this system. | 5.00 | 4.00 | 4.25 | 4.75 | 4.00 | 3.88 | 3.50 | 3.50
N1. This system is unnecessarily complex. | 1.00 | 1.00 | 1.00 | 1.25 | 1.50 | 1.50 | 1.00 | 1.29
N2. I need a technical person to help me while using this system. | 1.00 | 1.00 | 1.00 | 1.25 | 1.00 | 1.13 | 1.50 | 1.57
N3. The system is cumbersome to use. | 1.00 | 1.00 | 1.25 | 0.75 | 1.50 | 1.13 | 1.00 | 2.14
N4. There are too many inconsistent functions in the system. | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.25 | 1.25
N5. I need to learn a lot before I can operate this system. | 2.00 | 1.50 | 1.50 | 1.25 | 1.50 | 1.50 | 1.25 | 1.25
Overall usability rating score | 92.50 | 90.00 | 87.50 | 91.88 | 82.50 | 81.25 | 81.25 | 75.18