Article

Complex Hand Interaction Authoring Tool for User Selective Media

Electronics and Telecommunications Research Institute, 218 Gajeong-ro, Yuseong-gu, Daejeon 34129, Korea
* Author to whom correspondence should be addressed.
Electronics 2022, 11(18), 2854; https://doi.org/10.3390/electronics11182854
Submission received: 7 July 2022 / Revised: 22 August 2022 / Accepted: 3 September 2022 / Published: 9 September 2022
(This article belongs to the Special Issue Real-Time Visual Information Processing in Human-Computer Interface)

Abstract

Nowadays, with the advancement of the Internet and personal mobile devices, interactive media are prevailing, in which viewers make their own decisions on the story of the media through their interactions. The interactions available to the user are usually pre-programmed by a programmer, so they are limited to what can be programmed in advance. In this paper, by contrast, we propose an interactive media authoring tool that can compose diverse two-hand interactions from several one-hand interaction components. The aim is to provide content creators with a tool to produce multiple hand motions so that they can design a variety of user interactions, stimulating the interest of content viewers and increasing their sense of immersion. Using the proposed system, the content creator gains greater freedom to create more diverse and complex interactions than pre-programmed ones. The system is composed of a complex motion editor that combines one-hand motions into complex two-hand motions, a touchless sensor that senses the hand motion, and a metadata manager that handles the metadata specifying the settings of the interactive functions. To our knowledge, the proposed system is the first web-based authoring tool that can author complex two-hand motions from single hand motions and that can also control a touchless motion control device.

1. Introduction

Realistic media technology that makes it possible to experience or appreciate specific scenes in the media based on user interaction is being developed. A variety of realistic media contents are being produced based on hand motions to combine the virtual space with the real space [1,2,3,4]. Users can change or create plots and experience reactions to their interactions while watching movies. Existing interactive media production platforms provide tools to create active interactions between media and users based on a variety of multimedia content (video, photos, text) and enable the editing of story branches. In addition, they offer video segment editing, an intuitive interface that allows creators to easily compose scenes, and embed codes and APIs for inserting interactive video into web pages [5,6,7,8,9].
However, until now, there has been no web-based authoring platform that can create interactions between the media and its users based on hand motions, especially complex hand motions. The hand can be an effective tool in an authoring platform for creating various interactions between the media and its viewers. Hand motion can be recognized by a variety of sensors, from wearable to non-contact ones [10,11,12].
In this paper, we propose a web-based authoring platform that can create scene-specific interactions with complex hand motions. For example, with the proposed method it becomes easy to create an interaction that provides a life-like driving experience when watching a driving scene. This kind of interaction was formerly not easy to create with programmable authoring platforms. In other words, we propose an interactive complex hand interaction authoring tool that can compose diverse two-hand interactions from several one-hand interaction components. Many other scene-specific interactions can also be created easily with our complex hand motion editor.
We summarize the main contributions of the proposed method as follows:
  • To our knowledge, the proposed system is the first to author complex two-hand motions from single hand motions. Even though some works [12,13,14] propose authoring tools for human interaction, these systems do not compose complex interactions from single ones;
  • The authoring tool is web-based and can also control a touchless sensor, so that no wearable device is needed. There are only a few web-based authoring tools, and they do not use motion control devices [5,6,7,8,9];
  • The proposed platform can create very complex hand motions out of motion sensors that can only recognize rather simple hand motions;
  • The authoring tool provides content creators with the ability to create interactive media very easily, without programming;
  • This tool can contribute to creating a new interactive platform, similar to YouTube, that is specialized in interactive content.
The user hand interaction proposed in this paper can be applied to various applications, e.g., in the field of NUI (Natural User Interface), which is nowadays actively studied. The paper is organized as follows: in Section 2, we introduce the configuration of the complex hand interaction (ComplexHI) authoring tool; in particular, the interaction metadata for editing, creating, and playing complex hand interactions, and the Interactive Media Player (IMP) for operating them. In Section 3, we describe the types and recognition algorithms of single hand interactions and the generation of complex two-hand interactions from single ones. In Section 4, we show the environment in which the complex hand interaction operates, present the results of the user evaluation, and mention future research challenges for the expansion of the complex hand interaction. Section 5 compares the proposed tool with existing interactive content authoring platforms, and Section 6 concludes the paper.

2. Complex Hand Interaction Authoring Tool

Figure 1 shows the overall function diagram of the proposed complex hand interaction (ComplexHI) authoring tool. The ComplexHI authoring tool provides functions with which to create complex motions by combining several single motions. For instance, assume that we want to design a driving motion and a stopping motion. The driving motion can be seen as a two-hand motion composed of two single hand motions in which both hands are clenched (grabbing) and move in opposite directions to each other. The stopping motion, in turn, can be designed as the opening of both grabbing hands.
Figure 2 explains how the ComplexHI authoring tool is linked to the scenario editor and how a complex motion is generated in conjunction with it. A scene in the scenario editor is represented by a node. When the creator clicks on a specific scene node, the ComplexHI authoring tool starts and a screen appears in which the creator can author an interaction. The creator configures the user's device and edits the story by editing the complex hand interactions. The edited interaction information is stored in the form of metadata, which are parsed when the IMP is played.
As shown in Figure 3, the ComplexHI consists of a metadata manager that generates and manages the metadata for complex motion behavior, a complex motion manager for user hand motion recognition, and an interactive media player for recognizing user interactions and viewing the content. When the creator works in the composite motion editor, the interaction metadata and the scenario data required to create the composite motion are generated; this happens automatically when the creator sets up the motion flow in the GUI. The generated metadata are, in turn, automatically parsed when the user watches the media and go through a motion recognition process in conjunction with the complex motion manager when the corresponding motion interaction occurs. The Interactive Media Player (complex motion player) is equipped with a parser for the metadata produced by the composite motion editor, so it interprets the composite motion metadata and, when a specific user interaction occurs, recognizes it in conjunction with the motion functions according to the metadata information.
Figure 4 shows the operation flow chart of the complex hand interaction authoring tool and its GUI. The video file to which the complex hand interactions are to be applied is loaded, and a list of single hand interactions is placed in the motion flow editor to produce complex hand interactions. A single hand interaction is expressed as a single node, and a complex motion is created by connecting multiple nodes. After arranging the single motions, one can set the properties of each motion and preview it. In addition, to confirm that the motions composed of multiple nodes are actually applied to the video, they can be checked with a preview function. Furthermore, the Motion Flow Editor allows one to freely edit and create complex motions on a node-by-node basis.
The ComplexHI is operated in the following sequence (a sketch of a possible internal representation of the resulting workflow is given after this list):
  • The image to which the complex hand interaction is applied is loaded from the scenario editor;
  • Move to the Motion Workflow area to create the complex hand interaction;
  • Create a new one- or two-hand interaction in the Motion Workflow area. Each interaction is represented by a single node in the Motion Workflow area;
  • Set the properties of the hand interaction. That is, set the properties of the interaction mode, the occurrence time, the delay time, and the overlay text;
  • Repeat step 4 for other hand interactions. Every new hand interaction will appear as a new node in the Motion Workflow area;
  • After the completion of the generation of the complex hand interaction, save the Motion Work Flow and check the hand interaction information;
  • Right-click on the Motion Work Flow to check the operation of the complex hand interaction as a video with the preview function;
  • Finally, using a preview, the complex hand interaction operation is checked with the video file;
  • Hereafter, the generated interaction can be further modified in the Motion Workflow area at any time.
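As a rough illustration of steps 3–7, the sketch below shows one possible internal representation of such a node-based workflow. It is a minimal JavaScript example with hypothetical field names; the actual data structure used by the ComplexHI tool is not published here.

```javascript
// Minimal sketch of a node-based motion workflow (hypothetical field names).
// Each node wraps one single hand interaction plus its editable properties;
// edges connect nodes into a complex two-hand interaction.
const motionWorkflow = {
  videoPath: "test.mp4",
  nodes: [
    { id: 1, interaction: "GRAB", mode: "multiple", startTime: 0.0, delayTime: 5, overlayText: "Grab the wheel" },
    { id: 2, interaction: "PUSH", mode: "multiple", startTime: 0.0, delayTime: 5, overlayText: "Push to drive" }
  ],
  edges: [{ from: 1, to: 2 }],          // the node order defines the composite motion
  complexInteraction: "drive"           // name of the resulting two-hand interaction
};

// Serializing the workflow yields interaction metadata of the kind the player consumes.
const metadataJson = JSON.stringify(motionWorkflow, null, 2);
console.log(metadataJson);
```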

2.1. Interaction Metadata

To specify settings for interactive functions in general media files without any special equipment or encoding, separate metadata are required. In this paper, for compatibility with the interactive media authoring tool provided as a web service, we generate the metadata in JavaScript Object Notation (JSON), an object format derived from JavaScript. Using the user interaction metadata, new interactive functions of any kind can be added. The user interaction metadata include setting information such as the type of device to which the user interaction is applied, the interaction mode, the interaction overlay text, the interaction start time, the interaction waiting time, image resource information, the single hand interaction mode, the complex hand interaction mode, etc. The metadata are stored in the form defined in Table 1, and users experience the interactive media according to the user motion, motion recognition, and waiting time information recorded in these data.
For example, the metadata for a driving interaction become: {"mode":"hand", "device":"leap", "video_path":"test.mp4", "start_time":"0.00", "delay_time":5, "player_mode":"360VR", "data":"360VR.mp4", "single hand interaction mode":["GRAB","PUSH"], "complex hand interaction mode":"drive"}. The value of the "mode" command is set to "hand" because the interactive motion is a hand motion. Likewise, "player_mode" is set to "360VR", which means that a 360 VR video will be played during the driving interaction. The single hand interactions are "GRAB" and "PUSH", which are composed into the complex hand interaction mode "drive". The value of "data" is the path to the 360 VR video that will be played during the driving interaction.
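The following JavaScript sketch shows how metadata of this form could be parsed and acted upon on the player side. The dispatch logic and console output are illustrative assumptions, not the actual IMP implementation.

```javascript
// Illustrative parsing of the driving-interaction metadata (field names follow Table 1).
const metadata = JSON.parse(`{
  "mode": "hand", "device": "leap", "video_path": "test.mp4",
  "start_time": "0.00", "delay_time": 5,
  "player_mode": "360VR", "data": "360VR.mp4",
  "single hand interaction mode": ["GRAB", "PUSH"],
  "complex hand interaction mode": "drive"
}`);

function configurePlayer(meta) {
  // Hypothetical dispatch: choose the device listener and player mode from the metadata.
  if (meta.device === "leap") {
    console.log("Attach Leap Motion listener");
  }
  if (meta["complex hand interaction mode"]) {
    console.log(`Register complex interaction '${meta["complex hand interaction mode"]}'`,
                "built from", meta["single hand interaction mode"].join(" + "));
  }
  console.log(`At ${meta.start_time}s switch to the ${meta.player_mode} player for ${meta.data},`,
              `waiting up to ${meta.delay_time}s for the user's motion.`);
}

configurePlayer(metadata);
```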

2.2. Interactive Media Player (IMP)

The ComplexHI is equipped with an Interactive Media Player (IMP). The IMP is a video control player that reads the user-generated interaction metadata and operates according to the user's interaction at the time the interaction occurs. For example, Figure 5 shows the case of a driving scene in which the video interacts with the user through complex hand interactions. When an interaction occurs during video playback, the video itself turns into a 360 VR video that can respond to complex hand interactions; in fact, it is the video player that changes into a 360 VR video player. After the user has interacted with the video for a predefined time, that is, when the user's interaction operation time ends, the regular video player is restored instead of the interactive 360 VR player.
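A minimal sketch of this player hand-off is shown below. The player objects are simple stand-ins, and the timing logic is an assumption based on the description above; the real IMP additionally waits for the recognized hand motion before resuming.

```javascript
// Sketch of the player hand-off described above; the player objects are stand-ins,
// not the actual IMP API.
const videoPlayer = { pause: () => console.log("regular video paused"),
                      play:  () => console.log("regular video resumed") };
const vr360Player = { load:  (src) => console.log(`360 VR player loads ${src}`),
                      stop:  () => console.log("360 VR player stopped") };

function runInteraction(meta) {
  videoPlayer.pause();                 // regular playback stops when the interaction starts
  vr360Player.load(meta.data);         // play the 360 VR clip referenced in the metadata
  setTimeout(() => {                   // after the interaction window, restore the regular player
    vr360Player.stop();
    videoPlayer.play();
  }, meta.delay_time * 1000);
}

runInteraction({ data: "360VR.mp4", delay_time: 5 });
```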

3. Composing Complex Hand Interactions

A complex hand interaction can be created by simply composing a set of single hand motions. To compose a complex hand interaction, however, some basic single hand motions first have to be defined and implemented. To this end, we developed recognition processes for a variety of single user hand motions. In particular, for touchless user interaction we used a Leap Motion sensor, which is suitable for low-cost hand tracking. Table 2 briefly illustrates various single hand interaction processes, which are used to create complex motions by connecting the single motions in a continuous flow.
The basic single hand interactions are defined by algorithms that have to be programmed. The single hand interactions that can be recognized by Algorithm 1 from the data sensed by the Leap Motion are GRAB, SWIPE, TOUCH, PUNCH, and PUSH. The interaction recognition time is the waiting time specified in the interaction metadata. The motion data, the motion latency, and the motion count are passed as parameters to each recognition function; they are used to match the motion name in the conditional statements within this function and to call the corresponding motion function. The motion waiting time is measured by recording the interaction start time just before entering the function. When the function is called, the interactive media player is paused, and when the user motion corresponding to the specified interaction is recognized, the paused player resumes with the next video. In the case of GRAB, the number of fists is limited to one using grabFlag, and in the cases of TOUCH, PUSH, and PUNCH, the motions are recognized separately depending on whether the movement of the hand along a specific axis falls within a specific value range.
A single hand interaction can be used either in a one-hand interaction or in a two-hand interaction; which of the two applies is configured by setting the mode to 'single' or 'multiple'. When a single hand interaction is used in a two-hand interaction, it can be combined with any other single hand interaction in the Motion Workflow area of the authoring tool. The single hand interactions described in Table 2 recognize the user hand motions according to Algorithm 1. Figure 6 shows the cases in which the hand motions are recognized for each single hand interaction. An illustrative code sketch of such a recognition check is given after Algorithm 1.
Algorithm 1 GRAB, TOUCH, PUSH, PUNCH single hand interaction.
procedure Motion(GRAB, PUSH, TOUCH, PUNCH)                  ▹ The motion function
    r ← motion mode
    while LeapmotionEndtime − LeapmotionStarttime ≤ Leapmotion_Delaytime do
        PausePlayer                                         ▹ Motion standby
        if r = GRAB then
            if GrabCount = UserGrabMotionCount then
                GrabFlag ← TRUE
                GrabInteraction ← SUCCESS
            end if
        else if r = TOUCH then
            if touchFlag = FALSE AND Hand.finger = TRUE then
                for Hand.finger ← 2 ... 5 do
                    if Hand.move.zaxis ≥ 100 then
                        touchFlag ← TRUE
                        TOUCHInteraction ← SUCCESS
                    end if
                end for
            end if
        else if r = PUSH OR r = PUNCH then
            if HAND ≠ Grab then                             ▹ open hand: PUSH
                if Hand.move.zaxis ≥ 200 then
                    pushFlag ← TRUE
                    PUSHInteraction ← SUCCESS
                end if
            end if
            if HAND = Grab then                             ▹ clenched fist: PUNCH
                if Hand.move.zaxis ≥ 200 then
                    punchFlag ← TRUE
                    PUNCHInteraction ← SUCCESS
                end if
            end if
        end if
    end while
    return StartPlayer                                      ▹ Single hand interaction finish
end procedure
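For illustration, the following JavaScript sketch shows how a GRAB check in the spirit of Algorithm 1 could be written against the leapjs library. The thresholds, callback structure, and timeout handling are assumptions for this sketch, not the authors' published code.

```javascript
// Rough GRAB-recognition sketch with leapjs (assumed frame/hand API): the fist is
// detected via grabStrength, counted once per clench, and the interaction succeeds
// when the configured grab count is reached within the waiting time.
const Leap = require('leapjs');

function watchGrab(requiredGrabs, delaySeconds, onSuccess) {
  let grabCount = 0;
  let grabFlag = false;                     // prevents counting one clench repeatedly
  const startTime = Date.now();

  const controller = Leap.loop((frame) => {
    if ((Date.now() - startTime) / 1000 > delaySeconds) {
      controller.disconnect();              // waiting time exceeded: give up
      return;
    }
    frame.hands.forEach((hand) => {
      if (hand.grabStrength > 0.9 && !grabFlag) {          // hand just closed into a fist
        grabFlag = true;
        grabCount += 1;
      } else if (hand.grabStrength < 0.3) {                // hand opened again
        grabFlag = false;
      }
    });
    if (grabCount >= requiredGrabs) {
      controller.disconnect();
      onSuccess();                          // GRAB interaction SUCCESS: resume the player
    }
  });
}

watchGrab(1, 5, () => console.log('GRAB interaction recognized'));
```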
Table 3 shows examples of how single motions from Table 2 are collected to create complex hand interactions. A complex hand interaction may be generated, such as driving, appreciating a panorama image by hand motion, or clapping. The driving motion, for example, is a composite motion created by combining the GRAB and PUSH single motions. In this paper, we composed a driving interaction using the authoring tool as an example. When a driving interaction occurs in conjunction with a 360 VR video, the user uses both hands to experience driving, so it becomes a complex hand interaction. The complex hand interactions used in the driving scenario are composed of the single hand interactions described in Table 2. When the user clenches both hands into fists, the video is played, and when both hands are opened, the video stops. To steer to the left or right, both hands hold fists and the direction is switched by changing the relative vertical height of the two fists. This operation can be confirmed in the experimental result images.
The operation of a complex hand interaction starts by reading the hand interaction metadata defined by the authoring tool. When the actual video is played, the metadata first indicate whether the interaction is a complex hand interaction or a single hand interaction. The hand interaction recognition process during video playback is described in Algorithm 2. First, it is recognized whether the user's action is a one-hand or a two-hand action. After the information of both hands is recognized, the StartComplexHandInteraction() function for complex hand interaction recognition operates. The ComplexHandInteractionCheck() function receives the interaction metadata, the hand type, and the frame delivered by the Leap Motion sensor as inputs. It checks the Leap Motion information, the hand interaction information in the interaction metadata, and the hand information for each frame.
A complex hand interaction can be created in the scenario editor by selecting the complex hand interaction authoring tool menu. Algorithm 2 is a general loop showing how an edited complex hand interaction is opened and initialized in the player software. Figure 7 shows the operation of complex hand interactions, namely the Driving, Panorama, and CLAP interactions. For example, the Driving interaction uses two hands and adjusts the left and right directions based on the difference in height between the two hands. The actual operation results of the Driving interaction are shown on the test result screen in the experimental results section; an illustrative code sketch of such a two-hand check is given after Algorithm 2.
Algorithm 2 Setting for complex hand interaction.
procedure Motion2(complexhandinteraction)                   ▹ The complex hand interaction function
    r ← motion mode
    while LeapmotionEndtime − LeapmotionStarttime ≤ Leapmotion_Delaytime do
        PausePlayer                                         ▹ Motion standby
        if OneHand or TwoHand then
            if TwoHand then
                complexhandinteraction ←
                    complexhandinteraction_Check(Motion_Data, Hand_Type, LeapFrame)
                if complexhandinteraction == Motion_Data then
                    complexhandinteraction_Flag ← TRUE
                    complexhandinteraction ← SUCCESS
                end if
            end if
            complexhandinteraction                          ▹ Complex hand interaction processing
        end if
    end while
    return StartPlayer                                      ▹ Complex hand interaction finish
end procedure
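As announced above, the sketch below illustrates how a two-hand driving check of this kind could look in JavaScript. The thresholds and the left/right mapping are illustrative assumptions derived from the description of the Driving interaction, not the published implementation.

```javascript
// Sketch of a driving check (illustrative thresholds): both hands must be clenched,
// and the vertical difference between the two palms decides the steering direction.
function checkDrivingInteraction(hands) {
  if (hands.length !== 2) return 'none';                  // driving is a two-hand interaction

  // Sort the two hands by horizontal position so "left" and "right" are well defined.
  const [left, right] = hands[0].palmPosition[0] < hands[1].palmPosition[0]
    ? [hands[0], hands[1]] : [hands[1], hands[0]];

  const bothGrabbing = left.grabStrength > 0.9 && right.grabStrength > 0.9;
  if (!bothGrabbing) return 'stop';                       // open hands stop the video

  const heightDiff = left.palmPosition[1] - right.palmPosition[1];  // y axis: palm height
  if (heightDiff > 40) return 'turn-right';               // left fist raised (illustrative mapping)
  if (heightDiff < -40) return 'turn-left';               // right fist raised (illustrative mapping)
  return 'drive-straight';
}

// Example with mocked Leap Motion hand data (palmPosition = [x, y, z] in mm).
const mockHands = [
  { palmPosition: [-100, 220, 0], grabStrength: 1.0 },
  { palmPosition: [ 100, 160, 0], grabStrength: 1.0 }
];
console.log(checkDrivingInteraction(mockHands));          // -> 'turn-right'
```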

4. Experimental Results

The complex hand interaction tool has been developed to work with scenario editors. As shown in Figure 8, it starts to work when one selects the complex hand interaction authoring tool menu in the scenario editor. The single motions are then applied to the frames in the video by the motion workflow editor. In the motion workflow, one can see the resulting screen in which the single motions are connected to each other to create the complex motion. Each single motion may have different attributes for operation time, waiting time, and overlay text. Each single motion corresponds to a function that checks the hand motion to be used by the user through the operation window of the single motion. In the development environment of the complex motion editor, the user hand recognition algorithm was developed in JavaScript, and PHP and a MySQL database were used to store the motion information. The web service is operated and served by an Apache web server.
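As a hedged illustration of how the editor could hand its metadata to such a PHP/MySQL backend, the sketch below posts a workflow as JSON over HTTP; the endpoint name save_metadata.php and the response shape are hypothetical.

```javascript
// Sketch (not the actual service API): persist a workflow's interaction metadata
// to a hypothetical PHP endpoint that writes it into the MySQL database.
async function saveMetadata(metadata) {
  const response = await fetch('/save_metadata.php', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(metadata)
  });
  if (!response.ok) {
    throw new Error(`Saving metadata failed: ${response.status}`);
  }
  return response.json();                 // e.g., the database id of the stored workflow
}

saveMetadata({ mode: 'hand', device: 'leap', 'complex hand interaction mode': 'drive' })
  .then((result) => console.log('stored', result))
  .catch((err) => console.error(err));
```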
Figure 9 shows the result of the operation of the Driving interaction, a complex hand interaction. The Driving interaction is composed of start, stop, right, and left operations. If a user experiences a Driving interaction while watching a video, the IMP converts the video player into a 360 VR video player. The user can then experience the Driving interaction using the start, stop, right, and left operations.
Figure 10 shows a kiosk product to which the complex motion developed with the proposed ComplexHI is applied. It also shows a screen where the user experiences a Driving and a Panorama interaction while watching a 360 VR video played on the kiosk. The 360 VR video was taken beforehand with a 360-degree camera while driving a car through the scene. By providing the user with a non-contact interaction according to the complex motion, the user can enjoy the interactive content without any screen touch or sensor contact.
We performed two user tests on the well-known PSSUQ [15] scale: one for users on the editor side and the other for end users. The PSSUQ consists of 19 post-test survey prompts that feed three crucial metrics (System Quality, Information Quality, Interface Quality) to rate the usefulness of a product, as shown in Table 4. Table 5 shows the reliability analysis of the PSSUQ. We used all 19 items of the survey and surveyed seven participants who used both the authoring tool to write the interactive media content and the end product to which the complex hand interaction was applied. As shown in Table 6, the average PSSUQ scores for the authoring tool and the content experience are 2.64 and 3.50, respectively, on a scale from 1 to 7, where a smaller score indicates higher satisfaction. Interestingly, the usage of the authoring tool shows a smaller PSSUQ score than the usage of the end product. This is due to the fact that, for the end product, the gestures have to be learned by the end user, which is not so easy a task. Nevertheless, the authoring tool shows a relatively low PSSUQ average value, which supports the usefulness of the proposed authoring tool system.
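For reference, the reported averages follow directly from the per-user scores in Table 6:
(2.05 + 2.15 + 2.26 + 2.47 + 3.57 + 3.31 + 2.68) / 7 = 18.49 / 7 ≈ 2.64 (authoring tool)
(3.05 + 2.94 + 3.11 + 4.57 + 4.42 + 3.31 + 3.15) / 7 = 24.55 / 7 ≈ 3.5 (content experience)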

5. Comparison with Other Interactive Content Authoring Tools

In this section, we compare the proposed content authoring tool with the existing interactive media authoring platforms listed in Table 7. Existing interactive media authoring platforms provide interaction editing, video upload and editing, and scenario editing functions that enable scenario branching in a web environment. RaptMedia [6] provides the ability to branch scenarios by utilizing videos, photos, and text. Eko [7] provides story branching, object tracking, and user interaction for product sales. Racontr [8] is a cloud-based real-time interactive media authoring tool that provides VR functions. Meanwhile, WireWax [9] automatically detects faces while a video is uploaded, providing a service with which users can easily label them; it also simply inserts a link into an object so that viewers can visit the corresponding website and purchase products.
The proposed authoring tool contains all the features provided by the existing interactive media authoring platforms. In addition, the proposed system can also compose diverse two-hand interactions with a hand motion control device. In other words, by providing a hand interaction function using the user's device, it is possible to create scene-specific hand interactions with composite motions according to the scenario.

6. Conclusions

These days, with the advance of digital broadcasting, the media environment provides users with an opportunity to enjoy differentiated content in a more active fashion through user–media interactions based on computer technology. In this paper, we first proposed an interactive media player that can recognize the user's motion and control the video in a web service environment without installing any specific program. We then proposed and developed an interactive media authoring tool with which immersive interactive media can be produced and enjoyed. The authoring tool is able to create complex two-hand interactions from single one-hand actions, as it can compose and edit the hand motions through mere hand interactions with the authoring system. To our knowledge, this is the first authoring system proposed to author complex two-hand motions from single hand motions, as most authoring tools create interactions by programming. The complex hand interaction authoring tool makes it possible to apply complex hand motions to diverse content without the need for programming. With the proposed system, it is possible to create any kind of custom or composite motion. It is expected that, in the future, the proposed system will allow individual creators to produce their own composite motions and gain competitiveness through differentiated composite motion production. Furthermore, it is anticipated that the proposed system can be applied to areas such as tourism and education, as the authoring tool is useful for easily producing entertaining and informative interactive educational and advertising content. To develop complex hand interactions more effectively in the future, we will study object control algorithms driven by user interaction combined with object control technology. In addition, research will be conducted to automatically recognize user devices in conjunction with multiple contactless devices.

Author Contributions

Conceptualization, B.D.S. and H.C.; methodology, B.D.S. and S.-H.K.; software, B.D.S.; validation, B.D.S.; formal analysis, B.D.S. and H.C.; investigation, B.D.S.; resources, S.-H.K.; writing—original draft preparation, B.D.S.; writing—review and editing, B.D.S. and H.C.; visualization, S.-H.K.; supervision, B.D.S.; project administration, S.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government [22ZH1200, The research of the basic media-contents technologies].

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Peter, M.; Claudia, C. A Privacy Framework for Games Interactive Media. IEEE Games Entertain. Media Conf. (GEM) 2018, 11, 31–66. [Google Scholar]
  2. Guleryuz, O.G.; Kaeser-Chen, C. Fast Lifting for 3D Hand Pose Estimation in AR/VR Applications. In Proceedings of the 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018. [Google Scholar]
  3. Nooruddin, N.; Dembani, R.; Maitlo, N. HGR: Hand-Gesture-Recognition Based Text Input Method for AR/VR Wearable Devices. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020. [Google Scholar]
  4. Zhou, Y.; Habermann, M.; Xu, W.; Habibie, I.; Theobalt, C.; Xu, F. Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  5. Netflix (Interactive Content). Available online: http://www.netflix.com (accessed on 1 July 2022).
  6. RaptMedia (Interactive Video Platform). Available online: http://www.raptmedia.com (accessed on 1 July 2022).
  7. Eko (Interactive Video Platform). Available online: https://www.eko.com/ (accessed on 1 July 2022).
  8. Racontr (Interactive Media Platform). Available online: https://www.racontr.com/ (accessed on 1 July 2022).
  9. Wirewax (Interactive Video Platform). Available online: http://www.wirewax.com/ (accessed on 1 July 2022).
  10. Lu, Y.; Gao, B.; Long, J.; Weng, J. Hand motion with eyes-free interaction for authentication in virtual reality. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 22–26 March 2020; pp. 715–716. [Google Scholar]
  11. Nizam, S.; Abidin, R.; Hashim, N.; Lam, M.; Arshad, H.; Majid, N. A review of multimodal interaction technique in augmented reality environment. Int. J. Adv. Sci. Eng. Inf. Technol. 2018, 8, 1460–1469. [Google Scholar] [CrossRef]
  12. Ababsa, F.; He, J.; Chardonnet, J.-R. Combining hololens and leap-motion for free hand-based 3D interaction in MR environments. In AVR 2020; De Paolis, L.T., Bourdot, P., Eds.; LNCS; Springer: Cham, Switzerland, 2020; Volume 12242, pp. 315–327. [Google Scholar]
  13. Bachmann, D.; Weichert, F.; Rinkenauer, G. Review of three-dimensional human-computer interaction with focus on the leap motion controller. Sensors 2018, 18, 2194. [Google Scholar] [CrossRef] [PubMed]
  14. Yoo, M.; Na, Y.; Song, H.; Kim, G.; Yun, J.; Kim, S.; Moon, C.; Jo, K. Motion estimation and hand gesture recognition-based human–UAV interaction approach in real time. Sensors 2022, 22, 2513. [Google Scholar] [CrossRef] [PubMed]
  15. Lewis, J.R. Psychometric Evaluation of the PSSUQ Using Data from Five Years of Usability Studies. Int. J. Hum.-Comput. Interact. 2002, 14, 463–488. [Google Scholar]
Figure 1. Function diagram of the proposed complex hand interaction (ComplexHI) authoring tool.
Figure 2. Complex hand interaction authoring tool configuration using scenario editor information.
Figure 3. The structure of the ComplexHI.
Figure 4. The operation flow chart of the ComplexHI and the GUI.
Figure 5. The operation process of the IMP.
Figure 6. Examples of hand gestures.
Figure 7. Example of complex hand interactions.
Figure 8. Experimental result: complex hand interaction authoring tool execution screen.
Figure 9. Experimental result: a screen where a user experiences a complex hand interaction (Driving).
Figure 10. Experimental result: a screen where a user experiences a complex hand interaction on a KIOSK.
Table 1. Specification of Interaction Metadata.

No. | Command | Value (Data Type) | Description
1 | mode | hand interaction, etc. (string) | Interaction mode
2 | overlay | text (string) | Information of overlay text
3 | device | leap (string) | Set of user's device
4 | videopath | path (string) | Video file path
5 | starttime | min, sec (int) | Set of interaction start time
6 | delaytime | sec (int) | Set of interaction delay time
7 | playermode | video/panorama/360/mp3 (string) | Interactive media player mode
8 | data | video/panorama/360/mp3 file path (string) | User's uploaded file
9 | single hand interaction mode | Push, Punch, Grab, Touch | Set of single hand interaction mode
10 | complex hand interaction mode | drive, panorama, clap | Set of complex hand interaction mode
Table 2. Some examples of single hand interaction specifications.

Name | Hand Interaction Specification | Single Hand Interaction Process
GRAB | The motion of holding things with one hand | The parameters take over the latency and the number of times to grab. GrabFlag prevents re-calls so that the number of clenched fists increases only once. The motion function terminates if the specified number of grabs has been performed or if the waiting time is exceeded.
SWIPE | Swinging motion with the palm of one hand | The parameter takes over the timeout. The swipeInitFlag verifies that the hand has reached the start position of the action. The motion function exits when the hand is moved from side to side or when the waiting time is exceeded.
TOUCH | Selecting an object in a particular location with one finger | The parameter takes over the timeout. The state in which only the index finger is extended is the state in which the action starts. The motion function terminates if the hand moves a specified distance from the start position or exceeds the waiting time.
PUSH | Pushing the palm forward | The parameter takes over the timeout. The motion function exits if the open hand has been moved by a specified distance or if the waiting time is exceeded.
PUNCH | Clenching the fist and pushing forward | The parameter takes over the timeout. The motion function exits when the clenched fist moves a specified distance or when the waiting time is exceeded.
Table 3. Example of complex hand interaction specification.

Complex Hand Interaction (Two Hands) | Hand Interaction Specification | Single Hand Interactions (One Hand)
Driving | Start with a GRAB single motion on both hands and perform a stop motion when both hands are extended. Performs a left-right turn using the up-and-down phase difference of the two-hand fist GRAB motion. | GRAB, PUSH
Panorama Motion | Play panoramic images by applying SWIPE's left-right motion to the one-handed GRAB motion. | GRAB, SWIPE
CLAP | Stretch both hands and deform the direction of the PUSH movement to the left and right, not front and back. | PUSH, PUSH
Table 4. The 19 items in PSSUQ [15].

No. | Post-Study System Usability Questionnaire Item
1 | Overall, I am satisfied with how easy it is to use this system
2 | It was simple to use this system
3 | I could effectively complete the tasks and scenarios using this system
4 | I was able to complete the tasks and scenarios quickly using this system
5 | I was able to efficiently complete the tasks and scenarios using this system
6 | I felt comfortable using this system
7 | It was easy to learn to use this system
8 | I believe I could become productive quickly using this system
9 | The system gave error messages that clearly told me how to fix problems
10 | Whenever I made a mistake using the system, I could recover easily and quickly
11 | The information (such as on-line help, on-screen messages, and other documentation) provided with this system was clear
12 | It was easy to find the information I needed
13 | The information provided for the system was easy to understand
14 | The information was effective in helping me complete the tasks and scenarios
15 | The organization of information on the system screens was clear
16 | The interface of this system was pleasant
17 | I liked using the interface of this system
18 | This system has all the functions and capabilities I expect it to have
19 | Overall, I am satisfied with this system
Table 5. The reliability analysis of PSSUQ.

PSSUQ Classification | Questionnaire Items
System Quality | 1–8
Information Quality | 9–15
Interface Quality | 16–18
Total | 1–19
Table 6. Result of the user test based on the PSSUQ scale.

User | Authoring Tool (ComplexHI) | Experience User
User01 | 2.05 | 3.05
User02 | 2.15 | 2.94
User03 | 2.26 | 3.11
User04 | 2.47 | 4.57
User05 | 3.57 | 4.42
User06 | 3.31 | 3.31
User07 | 2.68 | 3.15
Average | 2.64 | 3.50
Table 7. Comparison of different interactive contents authoring tools.

Company | Service Environment | Scenario Editor | Interaction Authoring | Gesture (Hand Interaction)
RaptMedia | O | O | O | X
Eko | O | O | O | X
Racontr | O | O | O | X
WireWax | O | O | O | X
ETRI (ComplexHI) | O | O | O | O