Decomposing Juggling Skill into Sequencing, Prediction, and Accuracy: A Computational Model with Low-Gravity VR Training
Abstract
1. Introduction
- Building on our previous VR study, we report what is, to our knowledge, the first multimodal evaluation system for a 10-day juggling training program, combining computer vision (ball and hand kinematics), motion capture (3D wrist and shoulder trajectories), and biosignals (EEG and EMG), all synchronized on a common time base.
- We develop a fully automatic analysis pipeline that replaces manual video scoring and extracts trial duration, number of successful catches, and ball trajectories at scale from the multimodal recordings.
- Using these extracted measures, we define three components—Sequencing, Prediction, and Accuracy—and connect them in a computational model that jointly explains day-level juggling performance.
- Leveraging the longitudinal design with repeated measurements in novices, we quantify how these three components change across days and demonstrate that they follow distinct learning rates during early acquisition.
2. Materials and Methods
2.1. Participants
2.2. Experimental Design
- Session 1:
- Real-world three-ball juggling (pre).
- Session 2:
- VR three-ball juggling.
- Session 3:
- Visual-occlusion juggling with shutter glasses.
- Session 4:
- Real-world three-ball juggling (post).
2.3. Sensors and Data Collection
2.3.1. Overview of Sensors
- Optical motion capture (used for P, A): An OptiTrack (NaturalPoint Inc., Corvallis, OR, USA) system recorded 3D positions of four reflective markers placed on both shoulders and both wrists at 200 Hz.
- RGB-D camera (used for S, P, A): A front-facing Intel RealSense D455 (Intel Corporation, Santa Clara, CA, USA) recorded color video and depth at 60 Hz during real-world juggling and the occlusion task, capturing ball trajectories and hand movements (librealsense2 v2.49.0).
- Shutter-glasses trigger (used for P): signals indicating the opening and closing of liquid-crystal shutter glasses (PLATO Visual Occlusion Spectacles; Translucent Technologies Inc., Toronto, ON, Canada [25]) were recorded as an LSL stream and aligned with motion-capture and video data.
- VR system logs: Positions of virtual hands and balls in the visuo-haptic VR environment.
- EEG: A 64-channel EEG (ActiveTwo; BioSemi B.V., Amsterdam, The Netherlands; 2048 Hz) on selected days.
- EMG: Wireless EMG (Trigno; Delsys Inc., Natick, MA, USA; 12 sensors) from upper-limb muscles on selected days.
- fMRI: Functional MRI scans before and after the entire training experiment.
2.3.2. Real-World Three-Ball Juggling
2.3.3. Visual-Occlusion Task
2.4. Construction of Sequencing, Prediction, and Accuracy Indices
2.4.1. Sequencing (S) from Real-World Three-Ball Juggling
- Video trimming and trial timing. The RGB video from Sessions 1 and 4 was segmented into individual trials using Python. Mediapipe Pose (v0.10.18) [26] was used to track both hands, and OpenCV (v4.12.0)-based blob detection [27] identified candidate ball positions. Based on the relative positions and movements of the hands and balls, the program automatically determined when each juggling trial began and ended (Figure 3a).
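The trial-trimming rule described above can be sketched as a threshold test on already-extracted landmark heights. This is an illustrative stand-in for the actual pipeline (which operates directly on Mediapipe and OpenCV detections); the threshold values, frame rate, and minimum trial length below are assumptions, not the study's parameters:

```python
import numpy as np

def segment_trials(wrist_y, shoulder_y, knee_y, fps=60, min_frames=30):
    """Return (start_s, end_s) windows in which the wrists stay between the
    knee and shoulder lines (image y grows downward), i.e. active juggling."""
    wrist_y = np.asarray(wrist_y, dtype=float)
    active = (wrist_y < knee_y) & (wrist_y > shoulder_y)
    trials, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i                       # a candidate trial begins
        elif not is_active and start is not None:
            if i - start >= min_frames:     # keep only sufficiently long runs
                trials.append((start / fps, i / fps))
            start = None
    if start is not None and len(active) - start >= min_frames:
        trials.append((start / fps, len(active) / fps))
    return trials
```

For example, a 4 s bout of hands held between the two lines inside a 6 s clip at 10 fps yields the single window (1.0, 5.0).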
- Frame-wise ball and hand detection. In the second stage, the video within each trimmed juggling trial was processed frame by frame. For each frame, the pipeline recorded the detected ball candidates and the hand positions estimated by OpenCV [27] and Mediapipe [26]. These frame-wise detections were stored in a compact columnar format (Parquet) and served as the input for the tracking stage (Figure 3b).
- Ball tracking and event labeling. Using the frame-wise detections, the third stage reconstructed three continuous ball trajectories for each trial. Throws were labeled when a ball left the vicinity of a hand, and catches were labeled when a ball entered and stayed near the opposite hand. By connecting detections across frames, the algorithm followed each ball over time and produced smooth trajectories with associated throw and catch events (Figure 3c).
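The frame-to-frame association step can be approximated by a greedy nearest-neighbour linker. The real tracker is more elaborate (it also uses hand proximity to label throws and catches), so treat this as a minimal sketch with an assumed `max_jump` distance gate:

```python
import numpy as np

def link_tracks(frames, max_jump=80.0):
    """frames: list of per-frame detection lists [(x, y), ...].
    Each detection is appended to the nearest existing track if it lies
    within max_jump pixels of that track's last position; otherwise it
    seeds a new track."""
    tracks = []                             # track = [(frame_idx, (x, y)), ...]
    for t, detections in enumerate(frames):
        unused = list(detections)
        for track in tracks:
            if not unused:
                break
            last = np.array(track[-1][1], dtype=float)
            dists = [np.linalg.norm(last - np.array(p, dtype=float)) for p in unused]
            j = int(np.argmin(dists))
            if dists[j] <= max_jump:
                track.append((t, unused.pop(j)))
        for p in unused:                    # unmatched detections start new tracks
            tracks.append([(t, p)])
    return tracks
```

With three balls in flight, this yields three continuous tracks as long as inter-frame displacements stay below the gate; the gate also prevents identity swaps when a ball disappears briefly.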
- Phase estimation and Sequencing score. In the final stage, we used the reconstructed trajectories to compute phases and derive a Sequencing score (Figure 3d). For each ball $i$, the smoothed and mean-subtracted vertical position $y_i(t)$ was converted to an instantaneous phase using the analytic signal from the Hilbert transform [28],

$$\phi_i(t) = \arg\!\big(y_i(t) + j\,\mathcal{H}[y_i(t)]\big),$$

where $\mathcal{H}[\cdot]$ denotes the Hilbert transform. For windows containing three consecutive apexes from three different balls, we quantified how closely the three phases maintained the ideal $120^\circ$ spacing. Let $\phi_i(t)$ be the phase of ball $i$ in degrees and $T_w$ the usable time samples within window $w$. The window-wise Sequencing score was defined as

$$S_w = 1 - \frac{1}{3\,|T_w|} \sum_{t \in T_w} \sum_{(i,j)} \frac{\Delta\big(\phi_i(t) - \phi_j(t)\big)}{180^\circ},$$

where $\Delta(\cdot)$ is the circular difference (modulo $360^\circ$) between the pairwise phase gap and the ideal $120^\circ$ spacing, and the inner sum runs over the three ordered ball pairs. Thus, $S_w$ takes values in $[0, 1]$, with $S_w = 1$ indicating perfectly maintained phase offsets.

Figure 3. Overview of the processing pipeline for constructing Sequencing, Prediction, and Accuracy. (a) Trimming the RGB video into proper trials. The red horizontal line is a shoulder threshold and the blue line a knee threshold; the estimated positions of the LED glow balls are marked with green circles. (b) Detection of ball and hand positions in each frame: balls are labeled with green circles, hands with red circles. (c) A graphical example of ball and hand tracks reconstructed by the tracker. The x-axis is time (s) and the y-axis height (pixels). The blue, orange, and green lines are the trajectories of the three balls, while the pink and light-blue lines show the vertical movements of the two hands. Sequencing is computed from the phases of these reconstructed ball trajectories. (d) Phase estimation in the Phasor stage. The top panel shows the vertical trajectories of the three balls, with apex times used to define windows; the middle panel shows the instantaneous phase of each ball (degrees); and the bottom panel shows pairwise phase differences compared with the ideal 120° spacing. (e) Example of success and failure in the visual-occlusion task used to compute Prediction (P). (f) Illustration of the probability ellipsoid used to compute Accuracy (A). The x-axis is the lateral distance (m) from the center of the participant's body and the y-axis the front–back distance (m) [29]. Each black dot connected by a blue line marks a first-catch position with the right hand (in this example), and each black dot connected by a red line a second-catch position with the left hand. Pink ovals are the 95% probability ellipsoids of the second-catch positions on each day. Most participants show a decrease in ellipsoid volume over the training period (left → right).

To quantify sequencing ability for each day, we pooled all valid window scores $S_w$ obtained from trials yielding at least one valid window across the analyzed sessions (Sessions 1 and 4). The day-level Sequencing index (S) was computed as the arithmetic mean of these pooled window values.
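The phase and window-score computation can be sketched as follows, assuming SciPy is available. Function names are ours, and the apex-based windowing and smoothing of the actual pipeline are omitted:

```python
import numpy as np
from scipy.signal import hilbert

def ball_phases(heights):
    """Instantaneous phase (degrees, in [0, 360)) of each mean-subtracted
    height signal, via the analytic signal from the Hilbert transform."""
    y = np.asarray(heights, dtype=float)
    y = y - y.mean(axis=1, keepdims=True)
    return np.degrees(np.angle(hilbert(y, axis=1))) % 360.0

def sequencing_score(phases):
    """1 minus the mean circular deviation of the pairwise phase gaps from
    the ideal 120-degree spacing, normalised by 180 degrees, so that 1
    corresponds to perfectly maintained cascade timing."""
    devs = []
    for (i, j) in [(0, 1), (1, 2), (2, 0)]:
        gap = (phases[i] - phases[j]) % 360.0
        # circular distance of the gap from 120 degrees, in [0, 180]
        dev = np.abs(((gap - 120.0 + 180.0) % 360.0) - 180.0)
        devs.append(dev)
    return 1.0 - float(np.mean(devs)) / 180.0
```

For three ideal sinusoids offset by 120° the score approaches 1 (edge samples should be trimmed to avoid Hilbert end effects); for three in-phase signals it falls to about 1/3.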
2.4.2. Prediction (P) and Accuracy (A) from the Visual-Occlusion Task
- Trial segmentation and labeling. We used Lab Streaming Layer (LSL) XDF files containing video timestamps, motion-capture markers for both wrists and shoulders, and shutter timing signals. In MATLAB (R2021b), continuous recordings were segmented into individual trials based on changes in hand velocity and acceleration, and each trial was saved as a separate .mat file. For each trial, we annotated both the timing and success of the catches for each hand (including the second catch). Trials with shutter malfunction, severe marker loss, or unclear outcomes were excluded from subsequent analyses.
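A minimal Python stand-in for the MATLAB segmentation step might look as follows. The simple speed rule and threshold are assumptions; the original procedure also uses acceleration and per-trial annotation:

```python
import numpy as np

def movement_bounds(pos, fs, vel_thresh):
    """Find the first and last sample at which wrist speed exceeds a
    threshold, as a crude proxy for the velocity-based trial cut.
    pos: (n_samples, 3) marker positions in metres; fs: sampling rate (Hz)."""
    pos = np.asarray(pos, dtype=float)
    vel = np.gradient(pos, axis=0) * fs          # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    moving = np.flatnonzero(speed > vel_thresh)
    if moving.size == 0:
        return None                              # no movement detected
    return int(moving[0]), int(moving[-1])
```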
- Prediction (P): Catch success in the visual-occlusion task. Prediction (P) summarizes how reliably both hands complete the catches during the two-ball visual-occlusion task (Figure 3e). For each participant $p$ and day $d$, let $N^{\text{succ}}_{p,d}$ denote the number of valid trials in Session 3 in which both hands successfully completed the second catch, and let $N^{\text{valid}}_{p,d}$ be the total number of valid trials on that day. The day-level Prediction index was defined as

$$P_{p,d} = \frac{N^{\text{succ}}_{p,d}}{N^{\text{valid}}_{p,d}},$$

with higher values indicating more reliable catch performance in the visual-occlusion task.
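The index is a simple proportion over valid trials. A sketch, with a hypothetical outcome-label encoding of our own:

```python
def prediction_index(trial_outcomes):
    """P = fraction of valid occlusion trials in which both hands completed
    the second catch. trial_outcomes: 'success', 'fail', or 'excluded'
    (excluded trials are removed before computing the proportion)."""
    valid = [o for o in trial_outcomes if o != "excluded"]
    if not valid:
        return float("nan")                 # no valid trials on this day
    return sum(o == "success" for o in valid) / len(valid)
```

For instance, two successes among three valid trials (one trial excluded) give P = 2/3.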
- Accuracy (A): Spatial dispersion of catch locations. Accuracy (A) quantifies how tightly the second-catch positions cluster in 3D space (Figure 3f). This two-ball shutter-glasses paradigm was chosen because it yields many controlled catch samples and allows us to quantify hand-centered spatial precision under reduced visual feedback, which is harder to measure reliably in the short and unstable three-ball cascades of early learning. For each participant $p$ and day $d$, we expressed the 3D position of each second catch in a hand-centered coordinate system defined by the corresponding wrist marker. In Session 3 on Days 1, 5, and 10, two-ball juggling was performed in four blocks; for each valid block $b$ and each hand $h$, all second-catch positions were pooled to form a 3D point cloud, and a 95% probability ellipsoid was fitted to this cloud, yielding an ellipsoid volume $V_{p,d,b,h}$. Day-level Accuracy was then defined as the geometric mean of these volumes,

$$A_{p,d} = \Big( \prod_{b,h} V_{p,d,b,h} \Big)^{1/n_{p,d}},$$

where $n_{p,d}$ is the number of valid block–hand combinations, so that smaller values of $A_{p,d}$ indicate tighter clustering and, therefore, higher spatial Accuracy of the second catches. Blocks in which the ellipsoid fit was unstable (e.g., nearly singular covariance) were excluded before computing the geometric mean.
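Under a Gaussian assumption, the 95% probability ellipsoid volume follows from the sample covariance, scaled by the chi-square 0.95 quantile for 3 degrees of freedom (≈7.815). The sketch below also mirrors the exclusion of near-singular fits and the geometric-mean aggregation; the function names and the singularity cutoff are ours:

```python
import numpy as np

def ellipsoid_volume_95(points):
    """Volume of the 95% probability ellipsoid of a 3D point cloud:
    V = (4/3) * pi * chi2_95^{3/2} * sqrt(det(Sigma)),
    where Sigma is the sample covariance of the points."""
    cov = np.cov(np.asarray(points, dtype=float).T)
    det = np.linalg.det(cov)
    if det <= 1e-12:                 # nearly singular fit -> exclude block
        return None
    chi2_95 = 7.815                  # chi-square 0.95 quantile, 3 dof
    return 4.0 / 3.0 * np.pi * chi2_95 ** 1.5 * np.sqrt(det)

def accuracy_index(volumes):
    """Geometric mean of the valid block-wise ellipsoid volumes
    (smaller values = tighter clustering = higher Accuracy)."""
    v = np.array([x for x in volumes if x is not None], dtype=float)
    return float(np.exp(np.log(v).mean()))
```

A point cloud lying in a plane produces a singular covariance and is excluded, matching the unstable-fit rule in the text.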
- Aggregation and data merging. Day-level indices P, A, and S were summarized for Days 1, 5, and 10 and merged into a single dataset with participant and group information for the subsequent statistical analyses.
2.5. Statistical Modeling of Performance
2.5.1. Data Preprocessing
2.5.2. Generalized Linear Models
2.5.3. Model Evaluation and Comparison
3. Results
3.1. Skill Metrics over Days
3.2. Results of Computational Modeling
3.3. Sensitivity Analyses
4. Discussion
- H1. The three indices showed different distributions and behaviors, supporting the idea that early juggling skill can be separated into Sequencing, Prediction, and Accuracy (Figure 4).
- H2. Day-level juggling performance can be explained by the three components together. All components had positive effects on juggling proficiency, and Sequencing and Accuracy showed the strongest independent contributions in the model (Table 2).
- H3. The sub-skills did not increase at the same rate across days, showing lead–lag development. In line with our expectation, Sequencing rose steadily and remained higher in the Low-g group, suggesting that slow-tempo low-gravity VR training mainly supported early Sequencing gains (Figure 5).
- Beyond these hypotheses, we observed large individual differences in learning trajectories, and a few very high-performance points were not well captured by the model (Figure 6).
4.1. Sub-Skill Contributions to Real-World Juggling Performance
4.2. Learning-Stage Dependence and Asymmetric Development
4.3. Inter-Individual Variability and High-Performance Tail
4.4. Computational Model and Goodness-of-Fit
4.5. Limitations
4.6. Future Work
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| VR | Virtual reality |
| GLM | Generalized linear model |
| DTI | Diffusion tensor imaging |
| VBM | Voxel-based morphometry |
| PCA | Principal component analysis |
| FDA | Functional data analysis |
| N-T-C | Noise reduction, tolerance exploitation, and covariation |
| EEG | Electroencephalography |
| EMG | Electromyography |
| S | Sequencing (index) |
| P | Prediction (index) |
| A | Accuracy (index) |
| fMRI | Functional magnetic resonance imaging |
| | Performance (juggling performance) |
| LSL | Lab streaming layer |
| LED | Light-emitting diode |
| MAD | Median absolute deviation |
| AIC | Akaike information criterion |
| R² | Coefficient of determination |
| RMSE | Root mean squared error |
| MAE | Mean absolute error |
| LOSO-CV | Leave-one-subject-out cross-validation |
| VIF | Variance inflation factor |
| IQR | Interquartile range |
| CI | Confidence interval |
| SE | Standard error |
| LogLik | Log-likelihood |
References
- Willingham, D.B. A neuropsychological theory of motor skill learning. Psychol. Rev. 1998, 105, 558–584.
- Dayan, E.; Cohen, L.G. Neuroplasticity Subserving Motor Skill Learning. Neuron 2011, 72, 443–454.
- Fitts, P.M.; Posner, M.I. Human Performance; Brooks/Cole: Belmont, CA, USA, 1967.
- Newell, K.M. Coordination, Control and Skill. Adv. Psychol. 1985, 27, 295–317.
- Bernstein, N.A. The Co-ordination and Regulation of Movements; Pergamon Press: Oxford, UK, 1967.
- Todorov, E.; Jordan, M.I. Optimal feedback control as a theory of motor coordination. Nat. Neurosci. 2002, 5, 1226–1235.
- Wolpert, D.M.; Diedrichsen, J.; Flanagan, J.R. Principles of sensorimotor learning. Nat. Rev. Neurosci. 2011, 12, 739–751.
- Bebko, J.M.; Demark, J.L.; Osborne, P.A.; Majumder, S.; Ricciuti, C.J.; Rhee, T. Acquisition and Automatization of a Complex Task: An Examination of Three-Ball Cascade Juggling. J. Mot. Behav. 2003, 35, 109–118.
- Zago, M.; Pacifici, I.; Lovecchio, N.; Galli, M.; Federolf, P.A.; Sforza, C. Multi-segmental movement patterns reflect juggling complexity and skill level. Hum. Mov. Sci. 2017, 54, 144–153.
- Scholz, J.; Klein, M.C.; Behrens, T.E.; Johansen-Berg, H. Training induces changes in white-matter architecture. Nat. Neurosci. 2009, 12, 1370–1371.
- Draganski, B.; Gaser, C.; Busch, V.; Schuierer, G.; Bogdahn, U.; May, A. Changes in grey matter induced by training. Nature 2004, 427, 311–312.
- Huys, R.; Beek, P.J. The coupling between point-of-gaze and ball movements in three-ball cascade juggling: The effects of expertise, pattern and tempo. J. Sports Sci. 2002, 20, 171–186.
- van Santvoord, A.; Beek, P.J. Spatiotemporal variability in cascade juggling. Acta Psychol. 1996, 91, 131–151.
- Post, A.; Daffertshofer, A.; Beek, P.J. Principal components in three-ball cascade juggling. Biol. Cybern. 2000, 82, 143–152.
- Müller, H.; Sternad, D. Decomposition of Variability in the Execution of Goal-Oriented Tasks: Three Components of Skill Improvement. J. Exp. Psychol. Hum. Percept. Perform. 2004, 30, 212–233.
- Ramsay, J.O.; Gribble, P.L.; Kurtek, S. Analysis of juggling data: Landmark and continuous registration of juggling trajectories. Electron. J. Stat. 2014, 8, 1835–1841.
- Yamamoto, K.; Tsutsui, S.; Yamamoto, Y. Constrained paths based on the Farey sequence in learning to juggle. Hum. Mov. Sci. 2015, 44, 102–110.
- Yamamoto, K.; Shinya, M.; Kudo, K. Asymmetric Adaptability to Temporal Constraints among Coordination Patterns Differentiated at Early Stages of Learning in Juggling. Front. Psychol. 2018, 9, 807.
- Cao, Y.; Zhang, Z. Enhanced Contour Tracking: A Time-Varying Internal Model Principle-Based Approach. IEEE/ASME Trans. Mechatron. 2025, 30, 3188–3196.
- Khan, D.; Alonazi, M.; Abdelhaq, M.; Al Mudawi, N.; Algarni, A.; Jalal, A.; Liu, H. Robust human locomotion and localization activity recognition over multisensory. Front. Physiol. 2024, 15, 1344887.
- Kou, J.; Wang, Y.; Chen, Z.; Shi, Y.; Guo, Q.; Xu, M. Flexible assistance strategy of lower limb rehabilitation exoskeleton based on admittance model. Sci. China Technol. Sci. 2024, 67, 823–834.
- Scott, S.H. Optimal feedback control and the neural basis of volitional motor control. Nat. Rev. Neurosci. 2004, 5, 532–546.
- Cho, W.; Kobayashi, M.; Kambara, H.; Tanaka, H.; Kagawa, T.; Sato, M.; Kim, H.; Miyakoshi, M.; Makeig, S.; Iversen, J.; et al. Enhancing Juggling Proficiency Through Slow-Tempo Virtual Reality Training. In Proceedings of the 2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Saint Malo, France, 8–12 March 2025; pp. 904–910.
- Kothe, C.; Shirazi, S.Y.; Stenner, T.; Medine, D.; Boulay, C.; Grivich, M.I.; Artoni, F.; Mullen, T.; Delorme, A.; Makeig, S. The lab streaming layer for synchronized multimodal recording. Imaging Neurosci. 2025, 3, IMAG.a.136.
- Milgram, P. A spectacle-mounted liquid-crystal tachistoscope. Behav. Res. Methods Instrum. Comput. 1987, 19, 449–456.
- Lugaresi, C.; Tang, J.; Nash, H.; McClanahan, C.; Uboweja, E.; Hays, M.; Zhang, F.; Chang, C.L.; Yong, M.G.; Lee, J.; et al. MediaPipe: A Framework for Building Perception Pipelines. arXiv 2019, arXiv:1906.08172.
- Bradski, G. The OpenCV Library. Dr. Dobb's J. Softw. Tools 2000, 25, 120–125.
- Gabor, D. Theory of communication. Part 1: The analysis of information. J. Inst. Electr. Eng. Part III Radio Commun. Eng. 1946, 93, 429–441.
- Wu, G.; Cavanagh, P.R. ISB recommendations for standardization in the reporting of kinematic data. J. Biomech. 1995, 28, 1257–1261.
- Rousseeuw, P.J.; Croux, C. Alternatives to the Median Absolute Deviation. J. Am. Stat. Assoc. 1993, 88, 1273–1283.
- Šimkovic, M.; Träuble, B. Robustness of statistical methods when measure is affected by ceiling and/or floor effect. PLoS ONE 2019, 14, e0220889.
- Nelder, J.A.; Wedderburn, R.W. Generalized Linear Models. J. R. Stat. Soc. Ser. A 1972, 135, 370–384.
- McCullagh, P. Generalized Linear Models, 2nd ed.; Routledge: Abingdon-on-Thames, UK, 1989.
- Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–723.
- Cox, D.R. Analysis of Binary Data, 2nd ed.; Routledge: Abingdon-on-Thames, UK, 1989.
- Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82.
- Esterman, M.; Tamber-Rosenau, B.J.; Chiu, Y.C.; Yantis, S. Avoiding non-independence in fMRI data analysis: Leave one subject out. NeuroImage 2010, 50, 572–576.
- Pearson, K. Mathematical Contributions to the Theory of Evolution. III. Regression, Heredity, and Panmixia. Philos. Trans. R. Soc. A 1896, 187, 253–318.
- Neter, J.; Kutner, M.H.; Nachtsheim, C.J.; Wasserman, W. Applied Linear Statistical Models, 4th ed.; Irwin: Burr Ridge, IL, USA, 1996.
- Dunn, P.K.; Smyth, G.K. Series evaluation of Tweedie exponential dispersion model densities. Stat. Comput. 2005, 15, 267–280.





| Day | Group | Performance (catches) | S (Sequencing) | P (Prediction) | A (Accuracy) |
|---|---|---|---|---|---|
| 1 | Low-g | 3.5 [3, 4.5] | −0.615 [−1.37, 0.228] | −1.02 [−1.53, −0.69] | −0.87 [−1.66, −0.616] |
| 1 | Normal-g | 3 [2.1, 3] | −1.08 [−1.88, 0.185] | −1.13 [−1.41, −0.659] | −0.346 [−0.877, 0.0361] |
| 5 | Low-g | 9.35 [5.5, 11.9] | 0.346 [−1.27, 0.852] | 0.0778 [−0.432, 0.374] | −0.0247 [−0.839, 0.104] |
| 5 | Normal-g | 5.5 [4, 12.3] | −0.203 [−1.09, 0.173] | 0.178 [−0.556, 0.766] | 0.254 [−0.303, 0.797] |
| 10 | Low-g | 17.8 [11.6, 25.3] | 0.535 [−0.0257, 0.75] | 0.364 [0.164, 0.892] | 0.328 [−0.178, 0.54] |
| 10 | Normal-g | 9.75 [6.1, 18.5] | 0.0981 [−0.438, 0.354] | 0.395 [0.133, 0.983] | 1 [0.175, 1.23] |
| Predictor | β | SE | z | p | 95% CI | VIF |
|---|---|---|---|---|---|---|
| Intercept | 2.5563 | 0.1276 | 20.043 | *** | [2.306, 2.806] | – |
| S | 0.4073 | 0.1766 | 2.307 | * | [0.061, 0.753] | 1.503 |
| P | 0.1669 | 0.1838 | 0.908 | 0.364 | [−0.193, 0.527] | 1.182 |
| A | 0.5279 | 0.1036 | 5.092 | *** | [0.325, 0.731] | 1.695 |
| Model | LogLik | AIC | Pseudo-R² (CS) | R² / RMSE / MAE | LOSO R² / RMSE / MAE |
|---|---|---|---|---|---|
| Gamma–Log | −188.63 | 385.26 | 0.701 | 0.427 / 25.77 / 11.40 | 0.270 / 29.08 / 12.49 |
| Normal–Identity | −272.71 | 553.43 | 0.301 | 0.277 / 28.95 / 18.04 | 0.039 / 33.36 / 20.74 |
| Day | Gamma–Log R² | Gamma–Log RMSE | Normal–Identity R² | Normal–Identity RMSE |
|---|---|---|---|---|
| Day 1 | 0.000 | 5.17 | 0.000 | 15.10 |
| Day 5 | 0.168 | 17.90 | 0.035 | 19.28 |
| Day 10 | 0.375 | 39.36 | 0.265 | 42.69 |
| Model | β_S | β_P | β_A | Key Interaction | AIC | R²/RMSE |
|---|---|---|---|---|---|---|
| Gamma–Log (base) | 0.41 * | 0.17 | 0.53 *** | – | 385.26 | 0.43/25.77 |
| Base + full dataset (imputed) | 0.44 * | 0.17 | 0.53 *** | – | 396.99 | 0.42/25.39 |
| Base + skill interactions | 0.54 ** | 0.06 | 0.55 *** | | 382.96 | 0.65/20.21 |
| Base + skill interactions + Group + Day | 0.39 † | 0.01 | 0.50 *** | * | 382.25 | 0.71/18.37 |
| Base + interactions + Group + Day + subject | 0.22 * | | | * ** | 303.31 | 0.91/10.18 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Cho, W.; Kobayashi, M.; Kambara, H.; Tanaka, H.; Kagawa, T.; Sato, M.; Kim, H.; Miyakoshi, M.; Makeig, S.; Iversen, J.R.; et al. Decomposing Juggling Skill into Sequencing, Prediction, and Accuracy: A Computational Model with Low-Gravity VR Training. Sensors 2026, 26, 294. https://doi.org/10.3390/s26010294

