Bodily Expression Support for Creative Dance Education by Grasping-Type Musical Interface with Embedded Motion and Grasp Sensors

Dance has been made mandatory as one of the physical education courses in Japan because it can cultivate capacities for expression and communication. Among the several types of dance education, creative dance contributes especially to the cultivation of these capacities. However, creative dance requires a certain level of specific skills, as well as creativity, and these prerequisites cannot be presupposed in beginner-level dancers without experience. We propose a novel supporting device for dance beginners that encourages creative dance performance by continuously generating musical sounds in real time in accordance with their bodily movements, using motion and grasp sensors embedded for this purpose. Experiments to evaluate the effectiveness of the device were conducted with ten beginner-level dancers. Using the proposed device, the subjects demonstrated enhanced creative dance movements with greater variety, evaluated in terms of the Laban dance movement description. Also, using the device, they performed with better accuracy and repeatability in a task where they traced an imagined circular trajectory by hand. The proposed interface is thus effective in terms of creative dance activity and accuracy of motion generation for beginner-level dancers.


Introduction
Dance is a bodily action intended to convey to others the imagination conceived by the dancer. In Japan, dance has been a compulsory subject in elementary schools since 2011, junior high schools since 2012, and high schools since 2013, following the 2008 revisions to the curriculum guidelines by the Ministry of Education, Culture, Sports, Science and Technology (MEXT). Dance in compulsory education comprises Creative Dance, Folk Dance, and Hip Hop as a modern dance [1]. In particular, the improvisational aspects of creative dance are expected to contribute to the cultivation of expressive ability, creativity, and imagination. Communication skills are also expected to be cultivated by sharing imagination with others through bodily expression and improvised dance actions. Dance education is useful not only for students in compulsory education but also for college students and adults with no experience of compulsory dance education; for the latter, it can provide an opportunity to acquire basic social capabilities, such as creativity, faculty, and the ability to grasp a situation, by performing creative dance [2]. Motion and grasping force sensed by the proposed interface are presented back to the user in real time as variations in musical sound. The sound encourages the user to generate motion continuously and accurately, following the imagined trajectory, and helps the user conceive the imagery for the next motions.
In our pilot study of this experiment with fewer subjects, a positive result was obtained [12]. In [12], the order effect was not considered, and the evaluation depended simply on the average and standard deviation of the accelerometer data. In contrast, in this paper, the results are evaluated according to the Laban movement standard based on the abovementioned standpoints, and the order effect is dealt with by reversing the order of the conditions for half of a larger number of subjects. Moreover, a second experiment, which investigates motion accuracy to evaluate the accuracy of the physical representation of a motion image under the support of the device, is newly presented here. Both the real-time creation of a motion image and the physical representation of the created image are essential targets of support in creative dance. Together, these two experiments establish the effectiveness of the interface. Also, for both experiments, statistical validation of the results is added. These results demonstrate the effectiveness of the proposed interface at the level of realistic, broad application to beginner-level dancers.
The remainder of this paper is organized as follows: Section 2 describes related research. Section 3 presents the proposed bodily expression support interface that includes the hardware design, control methods, and performance styles. Section 4 provides the experiment settings. Section 5 presents the experiment results and discussion that confirms the usability of the proposed interface and explores the effect of the proposed grasping interface on the creation of a musical sound-space. Section 6 presents the conclusions.

Related Works
Since dance courses became compulsory, supporting systems for dance education have gained importance. Among the studies supporting dance improvement, Sato et al. [13,14] focus on arm movements in street dance to analyze the movements of novice and expert dancers. These studies evaluate dance from the perspective of motion analysis, whereas conventional approaches evaluate movement through sensibilities such as the aesthetic and artistic qualities of a dance performance. They focus on observing the differences between the movements of novices and experts and evaluating the characteristics of expert dancers; however, this line of research has not considered improving the dance performance of beginner-level dancers.
There have been other studies that evaluate motion using motion capture systems with virtual reality technologies [15][16][17][18]. These studies are useful for training motions whose trajectories are already determined, such as ballet. It would be possible to use them to improve the accuracy of creative dance; however, these systems do not encourage creative performance by beginners. Tadenuma et al. [19] also employed virtual reality technology to analyze dancer movements using Kansei information processing technology. They developed a system for transmitting to others, through visual images, the intent of a dancer estimated from the physical features in a video. However, because the line of sight must face the display system, free dance expression is restricted. Moreover, the dance space is limited because the motion capture system requires dedicated sensing areas.
There is also research that proposes a new dance technique through lighting representation (an LED lighting system) [20]. This study evaluates the effects of lighting representation on physical expression; however, it also targets experts rather than novices. Unlike display systems, this approach employs wearable LEDs. Because there are many parameters that determine the light pattern, it is necessary to choreograph the lighting according to the motions in advance; thus, it does not support real-time dance operation.
There have also been proposals that use portable devices such as haptic interfaces and mobile phones. This line of research is particularly active in musical applications [21][22][23]. These studies introduce various sensing techniques to detect human motion, and the measured body movements are mapped to music or sound. Such performance systems liberate a performer from the usual physical limitations and provide different capabilities for music creation. Unlike methods that confirm a dance by visualization, this is called sonification technology and has been used by professional dancers and artists [24,25]. Many of these devices are wearable; however, wearable devices are not suitable for supporting dance beginners in an environment with multiple people because their installation tends to be complicated. Therefore, an interface that is portable and does not restrict the dance field is necessary. Mobile phones, including iPhones, have become useful devices because they offer multiple sensors that can detect human actions in a conveniently sized package. Many software applications have been developed for music production and editing [26,27]. However, such applications focus on developing a musical interface to create novel music. These instruments require both hands and are therefore not suitable for facilitating free-dance performance.
Thus, for beginners' dance support, it is important that the system can address multiple people via a non-wearable approach, measure the movement of the dancer without limiting the dance space, and function in real time. In this research, we propose a method to provide musical feedback using a grasping-type portable interface.

Design
The design of our musical interface, TwinkleBall [28,29], is presented in Figure 2. The main body of the proposed interface consists of a rubber ball, a Bluetooth wireless communication module, a photodiode, a three-axis accelerometer, LEDs, a peripheral interface controller (PIC), and a battery (9.0 V). All the electronic devices are enclosed in the rubber ball, which is translucent and hollow. The measurement range of the acceleration sensor is ±3.6 g. The peak wavelength of the photosensor is 560 nm. The Bluetooth module, photosensor, three-axis sensor, PIC, and battery are placed on an electronic circuit board inside the core, which is affixed to the rubber ball using rubber sheets. To convert the analogue signals of the sensors into digital signals, 10-bit A/D conversion is used. The sampling rate of the A/D converter is 550 Hz. The energy autonomy of the device is 1 h. The LEDs are placed on the interior surface of the rubber ball. The specifications of the rubber ball are as follows: diameter, 152 mm; mass, 260 g; material, polyvinyl chloride (PVC). Figure 2 indicates that performers and audiences can easily see TwinkleBall, which shines by virtue of its LEDs and translucent material, even if the performance is staged under low-light conditions. As indicated in Figure 3, the signals output from the photosensor and accelerometer are digitized and sent to an external computer via the Bluetooth wireless communication module. The communication interval is set at 35 ms.

Sound Generation Mechanism
To apply the proposed interface to a dance performance, it is important for the generated sounds to represent bodily motions. In this study, we design the sound application such that the grasping motion controls the note, and the moving motion with the interface controls the volume and tempo. We use MIDI sounds as output. In particular, when the shape of the rubber ball changes because of the user's grasping force, the distance d between the internal photosensor and the LEDs varies, as illustrated in Figure 4. Because the illumination intensity is inversely proportional to the distance d, changes in the grasping force produce different output signals from the photosensor. This output signal is sent to the computer via the Bluetooth module, and the note is then tuned based on its value. Because we use MIDI sounds, the range of the note is seven bits; the 10-bit digital signal from the photosensor is normalized to 7 bits before the MIDI output is calculated. Let P_min and P_max be the illumination intensities at the maximum and minimum distance d, respectively, and let p be the measured illumination intensity. Then the note n is calculated as follows:

n = n_a + n_range · (p − P_min) / (P_max − P_min)    (1)
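The note mapping above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: the function and parameter names are ours, and clamping out-of-range sensor readings is our assumption.

```python
def photosensor_to_note(p, p_min, p_max, n_a=60, n_range=24):
    """Map a photosensor reading to a MIDI note in the range n_a..n_a+n_range.

    p      -- current illumination intensity (10-bit ADC value)
    p_min  -- intensity at the maximum photosensor-LED distance (relaxed grasp)
    p_max  -- intensity at the minimum distance (full grasp)
    """
    # Clamp to the calibrated range so the note stays within the two octaves.
    p = max(p_min, min(p, p_max))
    return n_a + round(n_range * (p - p_min) / (p_max - p_min))
```

With the paper's settings (n_a = 60, n_range = 24), a relaxed grasp maps to middle C (MIDI note 60) and a full grasp to note 84.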
where n_a is the reference value of the note. As the range of MIDI notes is 0 to 127, n_a is set to 60, which is C4 (middle C) in scientific pitch notation, with a frequency of 261.6 Hz. n_range is set to 24, so we can control two octaves, from note 60 to note 84.

The proposed interface can also change the sound volume. The measurement value from the acceleration sensor changes when the dancers move the grasping interface through their motions. The accelerometer measures acceleration with respect to the x, y, and z-axes. For noise reduction, we applied a smoothing filter based on a moving-window average of 10 samples. The acceleration values are sent to the computer via the Bluetooth module. The computer calculates the three-dimensional (3D) acceleration vector length L from these values, and the volume is determined by this length as indicated in Figure 5a. The sound volume range is also seven bits, which corresponds to the resolution of the MIDI velocity; therefore, the calculated vector length is normalized to this range. The volume does not depend on the direction of movement because we simply use the vector length L. Volume control is divided into two cases, namely, static and dynamic. In the dynamic case, where dancers move the interface through their motion, the volume is calculated linearly. In the static case, where dancers do not move yet grasp the interface, the volume depends on the gradient angle of the interface. Although the interface shape is a sphere, it is divided into top and bottom hemispheres, and the volume is determined as indicated in Figure 5b. In this paper, v is set to 60, which is almost half the range of the MIDI velocity, determined by exploration in preliminary experiments. Figure 5 illustrates the case of a 45-degree angle, which corresponds to a volume of 0.75v. The vector length L is calculated as follows:

L = √((x − x_i)² + (y − y_i)² + (z − z_i)²)    (2)

where x, y, and z are the sensed acceleration values, and x_i, y_i, and z_i are initial offsets of the sensor values, measured before the user dances, with the ball interface in a stationary state and the z-axis aligned with the vertical. L_z (Equation (3)) is used for the static case. Finally, the volume is calculated by Equation (4), where C is a threshold value to deal with sensing noise and g represents gravity. By Equation (4), the range of the volume is 0 to 60 in the static case and 60 to 127 in the dynamic case, as the range of the MIDI velocity is 0 to 127.

The proposed interface can also control the tempo to realize a larger variety of expressions. However, all the sensing data from the interface are already used to control the note and volume; therefore, we employ the time-sequence data of the vector length L to change the tempo. We calculate the average value of L as follows:

k = (1/n) Σ_{j = i − n + 1}^{i} L_j    (5)

where k is the average value of L and i is the communication index. In this paper, n is set to eight because the interval of communication between the interface and the laptop is 35 ms. The tempo is then calculated by a step function (Equation (6)), which raises the tempo to 200 when k exceeds a threshold value T; in this paper, T is set to 2v. The tempo is not changed in the static case; it is changed through human motion in the dynamic case.

Figure 6 displays scenes of a dance performance where the proposed interface is used. Figure 6a depicts the dancer changing the note by varying the grasping force with a single hand or both hands. Figure 6b depicts the dancer varying the volume and tempo by moving in a large motion such as waving. The proposed system is not sensitive to intricate motions; however, the strength of the dance movement influences the volume and tempo control. Therefore, the system responds to different dance motions.
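The volume and tempo control can be sketched as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: the noise threshold C, the full-scale constant L_MAX used to normalize the dynamic volume, and the 120 bpm default tempo are our assumptions; the static branch only follows the paper's 45-degree example (volume 0.75v).

```python
import math

V = 60        # boundary between static (0..60) and dynamic (60..127) volume
C = 1.0       # noise threshold on L, in m/s^2 (our assumption)
L_MAX = 36.0  # assumed full-scale dynamic acceleration for normalization

def vector_length(x, y, z, xi, yi, zi):
    """3D acceleration vector length L (Equation (2))."""
    return math.sqrt((x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2)

def volume(L, tilt_deg):
    """Static volume from the tilt angle, dynamic volume from L.

    The static branch reproduces the paper's example (45 degrees -> 0.75v);
    the dynamic normalization by L_MAX is our assumption.
    """
    if L <= C:                                # static: grasped but not moving
        return round(V * (180.0 - tilt_deg) / 180.0)
    frac = min((L - C) / (L_MAX - C), 1.0)    # dynamic: scale L into 60..127
    return round(V + (127 - V) * frac)

def tempo(recent_L, T=2 * V):
    """Step-function tempo from the moving average k of L (Equation (5))."""
    k = sum(recent_L) / len(recent_L)         # n = 8 samples in the paper
    return 200 if k > T else 120              # 120 bpm default is our assumption
```

With eight samples at the 35 ms communication interval, the moving average k spans roughly 0.28 s of motion.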

Experiments
In this section, we describe the experiments performed using TwinkleBall to confirm the validity of the proposed approach. We performed two experiments, focusing on the improvement of creative activity and on movement accuracy, respectively. Ten male and female subjects (university students and researchers) participated. None of the subjects had previous dance education. All the subjects provided written informed consent to participate in these experiments.

Objective
We performed a dance improvisation experiment using TwinkleBall with beginners as the subjects; for all of them, this was the first time they had performed an improvised dance. The objective of this experiment was to confirm that TwinkleBall can support the expression of beginner-level dancers in creative dance. We evaluated the effectiveness of the sound generated by TwinkleBall by comparing the motion data between the "sound" and "mute" conditions. In the "sound" condition, users held TwinkleBall in their hand while dancing, and TwinkleBall generated sound that represented the motion. In the "mute" condition, TwinkleBall was muted and did not generate sound.

Setting
The dancing space for the experiment was set at 2.5 m × 2.5 m. The procedure for the experiment was as follows:
Step 1: Explanation. First, we explained the specifications of TwinkleBall. MIDI defines 128 possible program sounds as musical instrument voices. Each subject selected a program number for the output sound to be used with his/her motions.
Step 2: Creative dance theme. Each subject considered a theme for the creative dance.
Step 3: Dance performance. The subject danced twice: (1) the subject grasped TwinkleBall without sound (i.e., a muted TwinkleBall); (2) the subject grasped TwinkleBall with sound. Each dance was observed for one minute. The theme (from Step 2) chosen by each subject was the same in both dances.
To reduce order effects, we conducted experiments with two groups of five people. Half of the subjects received (1) first followed by condition (2); the other half received condition (2) first followed by condition (1).
Step 4: Completing questionnaires. Finally, the subjects answered the questionnaire orally for a qualitative evaluation. Each question was scored on a scale from five (positive) to one (negative).

Evaluation Methods
To investigate the experimental data, we calculated the evaluation values for both the "sound" and "mute" conditions. We employed Laban Movement Analysis (LMA) [10,11] to evaluate the overall dance movements, including the force of the bodily motions and the changes in the bodily motions, and we calculated the standard deviation of the grasping motion to evaluate the hand action, which is a fine movement expression.
The LMA system was developed for describing, interpreting, and documenting a variety of human movements involving the entire body. It is a useful method for evaluating motion quantitatively [23–30]. In this paper, we used Weight Effort for the strength of the dance and Time Effort for its briskness.
Weight Effort denotes the strength of the bodily motion of creative dance. We calculate the force per unit time during the dancing experiment, W_e, as follows:

W_e = (1/t) ∫₀ᵗ m·a dτ    (7)

where m is the mass, a is the same as L from Equation (2), and t is the time duration of the experiment. In this experiment, m is constant (i.e., m = 1) and a is collected from the acceleration sensor.

Time Effort denotes the briskness of the change in bodily motions. This is an index to evaluate the characteristics of sudden movement in dance, which corresponds to jerk. We calculate the jerk per unit time, W_t, as follows:

W_t = (1/t) ∫₀ᵗ |da/dτ| dτ    (8)

For the evaluation of the fine grasping motion, we measured the lighting intensity, d_l, and calculated its standard deviation, W_g, as follows:

W_g = √((1/n) Σ_l (d_l − d_ave)²)    (9)

where n is the element count and d_ave is the moving average value of d_l per unit time. Based on the above three evaluation methods, we evaluated the overall motion of the dance and the fine grasping motion.
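In discrete time, the three measures above might be computed as follows. This is an illustrative sketch: the sampled-sum approximation of the per-unit-time integrals and all function names are ours, not taken from the paper.

```python
import math

def weight_effort(a, dt):
    """Weight Effort W_e: time-averaged force with unit mass (m = 1)."""
    t = len(a) * dt                       # total duration of the trial
    return sum(x * dt for x in a) / t     # discrete form of (1/t) * integral of m*a

def time_effort(a, dt):
    """Time Effort W_t: time-averaged absolute jerk (change of acceleration)."""
    t = (len(a) - 1) * dt
    jerk = [abs(a2 - a1) / dt for a1, a2 in zip(a, a[1:])]
    return sum(j * dt for j in jerk) / t

def grasp_effort(d_l, d_ave):
    """Grasping evaluation W_g: standard deviation of the light intensity d_l
    around its moving average d_ave."""
    n = len(d_l)
    return math.sqrt(sum((d - m) ** 2 for d, m in zip(d_l, d_ave)) / n)
```

Here `a` is the smoothed acceleration magnitude sampled at the 35 ms communication interval, and `d_l`/`d_ave` are the photosensor samples and their moving average.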

Objective
The objective of the second experiment was to confirm that the subjects were moving their body according to their imagined motion. In this experiment, the movement task was to perform an unconstrained circular hand motion at a constant speed while maintaining the acceleration amplitude at one of three designated constant values (low, 10 m/s²; middle, 14 m/s²; high, 18 m/s²). The effectiveness of the sound generated by TwinkleBall was evaluated by comparing the motion data between the "sound" and "mute" conditions.

Setting
The experimental setup is illustrated in Figure 7. The procedure for the experiment was as follows:
Step 1: Adjustment to the target speed. The subjects performed circular hand motions and adjusted them until achieving the target acceleration (low, 10 m/s²; middle, 14 m/s²; high, 18 m/s²).
Step 2: Maintaining the target speed. After confirming that the target acceleration was reached, the experimenter asked the subjects to keep the hand motion constant for five seconds. The subjects performed the hand motion twice for each of the three accelerations: (1) the subject grasped TwinkleBall without sound (i.e., a muted TwinkleBall), and (2) the subject grasped TwinkleBall with sound. To reduce order effects, we conducted the experiments with two groups of five people: half of the subjects performed (1) first followed by condition (2); the other half performed condition (2) first followed by condition (1).
Steps 1 and 2 were repeated for each of the three accelerations.


Evaluation Method
In this experiment, the movement task was to perform uniform circular hand motion. To achieve a uniform circular motion, the centripetal acceleration measured by the acceleration sensor in the interface must be constant. Therefore, we calculated the standard deviation of the centripetal acceleration in the three acceleration conditions (low, middle, high). Table 1 lists the themes chosen arbitrarily by each subject in Step 2 of the experiment procedure. Because we explained to the subjects that they could perform creative dance freely in Step 1, they drew on their feelings in the moment, or the imaginings that came to mind, and the themes therefore varied widely. The experimental results for (a) Weight Effort, W_e, (b) Time Effort, W_t, and (c) Grasping evaluation, W_g, are displayed in Figure 8. The paired t-test (two-sided) was employed to verify the effectiveness of the proposed interface, at the 5% significance level. The t statistic was computed from

t = \frac{\bar{d}}{s_d / \sqrt{n}}

where \bar{d} is the mean of the paired differences, s_d is their sample standard deviation, and n is the number of subjects. From the results shown in Figure 8a,b, we can observe that the forces of dance increased and changes in the dance movements became more frequent when using TwinkleBall with sound. As an evaluation of the overall body motion, this confirms the effectiveness of the proposed interface. Additionally, the results shown in Figure 8c indicate that both "sound" conditions scored higher than the "mute" conditions: using TwinkleBall with sound makes it possible to generate fine grasping motion corresponding to the sound change. Figure 8d shows the questionnaire results. Questions No. 1 and 2 asked the performers for a subjective evaluation of how well their performance matched the themes. For Question No. 1, the evaluation was negative under the "mute" condition. Conversely, for Question No. 2, the score indicates that when the subjects used TwinkleBall with sound, the sound encouraged them to express their imagination through bodily motion.
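The paired two-sided t-test used for the verification can be sketched as follows. The per-subject scores are invented placeholder numbers, and 2.776 is the standard two-sided 5% critical value for four degrees of freedom:

```python
import math

def paired_t(x, y):
    """Paired t statistic and degrees of freedom: t = mean(d) / (s_d / sqrt(n))."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n), n - 1

# Hypothetical per-subject scores under "sound" and "mute" (n = 5 per group):
sound = [1.1, 1.4, 1.0, 1.3, 1.5]
mute = [0.8, 0.9, 0.8, 0.9, 0.9]
t, df = paired_t(sound, mute)
T_CRIT_5PCT_DF4 = 2.776  # two-sided 5% critical value, df = 4
print(f"t({df}) = {t:.3f}, significant: {abs(t) > T_CRIT_5PCT_DF4}")
```

With five subjects per ordering group, the test has df = 4, which matches the t(4) values reported for the movement accuracy experiment below.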
The result for Question No. 3 is positive, with a rating higher than the average score of 3. For further improvement of the rating, the delay originating from the communication interval (35 ms) is a topic of future refinement. Tempo depends on the parameter v, as shown in Equation (6); the value of v was fixed throughout these experiments. Since the magnitude of acceleration is affected by the length of the upper arm, tuning v for each individual is a consideration for future improvement. Question No. 4 scored a positive evaluation, confirming that sounds can support the continuous generation of successive motions. Question No. 5 concerns the sense of restraint while using TwinkleBall. Because its evaluation is negative, the implication is that the perception of restraint imposed by the interface was not significant.

Creative Activity Experiment
Although the subjects in this experiment chose a variety of themes and the dance motions depended on theme and personality (Figure 9), we quantitatively observed through this experiment that the force of dance, the variation of dance movements, and the grasping movements all increased when dance beginners used TwinkleBall. Moreover, we qualitatively confirmed the subjective effectiveness of TwinkleBall in supporting improvisational creative dance from the questionnaire results. Thus, it can be concluded that TwinkleBall makes it possible to assist beginner-level dancers in performing creative dance.

Movement Accuracy Experiment
The results for the standard deviation of the centripetal acceleration at each target speed are presented in Figure 10. We compared the values under the "sound" and "mute" conditions. Additionally, Figure 11 shows an example of the time sequence of the magnitude of the accelerometer data. The deviation of acceleration is larger in the "mute" condition than in the "sound" condition at each of the target speeds.
Figure 10. Results for the average of the standard deviation of the detected centripetal acceleration at each target speed: (a) condition "mute" followed by condition "sound"; (b) condition "sound" followed by condition "mute".
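Why the standard deviation of the centripetal acceleration is a sensible accuracy measure can be checked numerically: for ideal uniform circular motion, |a| = rω² is constant, so any deviation from zero reflects non-uniformity of the motion. The radius and sampling values below are illustrative, not taken from the paper:

```python
import math

r = 0.5                       # m, hypothetical radius
omega = math.sqrt(10.0 / r)   # rad/s, chosen so r * omega^2 = 10 m/s^2 (low target)
dt = 1e-3
xs = [r * math.cos(omega * i * dt) for i in range(5001)]
ys = [r * math.sin(omega * i * dt) for i in range(5001)]

def accel_mag(p, q, i):
    """Central-difference second derivative -> acceleration magnitude at sample i."""
    ax = (p[i - 1] - 2 * p[i] + p[i + 1]) / dt ** 2
    ay = (q[i - 1] - 2 * q[i] + q[i + 1]) / dt ** 2
    return math.hypot(ax, ay)

mags = [accel_mag(xs, ys, i) for i in range(1, 5000)]
mean = sum(mags) / len(mags)
std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
print(f"mean |a| = {mean:.3f} m/s^2, std = {std:.2e}")
```

For the noise-free circle the mean magnitude recovers the 10 m/s² target and the standard deviation is essentially zero; any real subject's wobble shows up directly as a larger standard deviation.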

The paired t-test (two-sided) was employed for the verification, at the 5% significance level. From the t-test in the case of condition "mute" followed by condition "sound" in Figure 10a, we determined low speed: t(4) = 1.201, p = 0.296; middle speed: t(4) = 2.785, p = 0.049; and high speed: t(4) = 5.587, p = 0.005. From the t-test in the case of condition "sound" followed by condition "mute" in Figure 10b, we determined low speed: t(4) = −2.130, p = 0.100; middle speed: t(4) = −2.835, p = 0.047; and high speed: t(4) = −3.272, p = 0.031. At low speed, no significant difference was observed in either case; however, significant differences can be observed at the middle and high speeds.
Figure 11. Time sequence results of the magnitude of the accelerations for each target speed in condition "mute" followed by condition "sound": (a) condition "mute"; (b) condition "sound".
In both cases, with and without sound from TwinkleBall, the standard deviation of the centripetal acceleration was greater at the greater speed. However, this increase of motion deviation with target speed was milder in the "sound" condition than in the "mute" condition, as indicated by the significant differences in the standard deviation at the middle and high speeds. This can be explained by the different control strategies of human movement. For slower motions, feedback control is dominant: sensory feedback from visual, proprioceptive, and vestibular sensors continuously corrects the trajectory. Meanwhile, feedforward control is used to generate faster, ballistic motions, for which feedback control is too slow to be fully incorporated. Because the sound generated by the proposed device represented acceleration through rhythmic tempo, it could contribute to stabilizing acceleration in the repeated ballistic feedforward force generation. Thus, TwinkleBall supports both slow and high-speed motion and realizes a variety of expression for dance beginners.
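The feedback-versus-feedforward argument can be illustrated with a toy simulation (not the paper's model): a proportional controller acting on delayed sensory error tracks a slow target well but lags badly behind a fast one. The gain, delay, and frequencies below are arbitrary illustrative values:

```python
import math

def rms_tracking_error(freq_hz, delay_s=0.15, k=8.0, dt=0.005, t_end=4.0):
    """Track sin(2*pi*f*t) with proportional feedback on a delayed error signal."""
    n_delay = int(delay_s / dt)
    x = 0.0
    history = [0.0] * (n_delay + 1)  # buffer of past sensed errors
    errs = []
    for i in range(int(t_end / dt)):
        target = math.sin(2 * math.pi * freq_hz * i * dt)
        history.append(target - x)    # error as sensed now...
        delayed_err = history.pop(0)  # ...but acted on only after the delay
        x += k * delayed_err * dt     # proportional correction
        errs.append((target - x) ** 2)
    return math.sqrt(sum(errs) / len(errs))

slow, fast = rms_tracking_error(0.5), rms_tracking_error(3.0)
print(f"RMS error slow target: {slow:.3f}, fast target: {fast:.3f}")
```

The sharp degradation at the higher frequency mirrors why fast ballistic movements must rely on feedforward commands, and why a rhythmic auditory cue that stabilizes the feedforward plan can help precisely in the middle- and high-speed conditions.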

Conclusions
This paper described the effectiveness of the proposed hand-held grasping-type musical interface, called TwinkleBall, in supporting beginner-level dancers as they generate physical expression in creative dance. With TwinkleBall, beginners are presented with sounds generated in real-time according to their imagined dance performance. Through experiments, we confirmed that these sounds can continuously support successive movement generation during a dance performance. We evaluated performance in terms of the Laban movement description, the fine expression of grasping motion, and the accuracy and repeatability of a task in which subjects produced an imagined circular trajectory by hand. From the quantitative measurements and qualitative questionnaires of the creative dance performance, we compared TwinkleBall with and without sound. We confirmed that TwinkleBall with sound presentation can increase the force of bodily motion, the variation of expression, and fine grasping expression, assist creativity, and help dance beginners represent an imagined performance accurately.
The device may also be a new tool in brain science research by providing a new task condition that has not been possible without it. Edagawa and Kawasaki [31] measured and analyzed EEG data during rhythmic finger tapping tasks to investigate the brain circuits related to auditory-motor rhythm