Proceeding Paper

Technologies to Aid Public Understanding in Running Performance †

Centre for Sports Engineering Research, Sheffield Hallam University, Sheffield S10 2LX, UK
* Author to whom correspondence should be addressed.
Presented at the 13th conference of the International Sports Engineering Association, Online, 22–26 June 2020.
Proceedings 2020, 49(1), 26; https://doi.org/10.3390/proceedings2020049026
Published: 15 June 2020

Abstract

Measurement technologies and visualisation techniques are changing the way public audiences engage with televised coverage of sport. However, the adoption of measurement technologies for broadcast coverage of running, to engage audiences and improve public understanding of performance, has been limited. This might reflect the measurement challenges of athletic competition environments: athlete-worn measurement devices can be impractical, and video-based analyses typically require well-defined input videos (e.g., requiring camera calibration). Recently, single-camera, calibration-independent video processing has advanced practical analyses of running performance in sports environments. This paper presents (1) the application of a method to quantify temporal running parameters using broadcast footage of 100 m sprint and 1-mile endurance running, (2) the application of human posture detection to quantify spatial running parameters using hand-held action camera footage and (3) examples of co-developed data visualisations, aimed at improving public engagement and understanding of running performance.

1. Introduction

Measurement technologies and visualisation techniques are changing the way public audiences engage with televised coverage of sport. In sports such as football, cricket and tennis, ball-tracking technologies are used to officiate matches as well as to provide engaging commentary and performance analysis material. However, the adoption of measurement technologies in broadcast coverage of running, to engage audiences and improve public understanding of athletic performance, has been limited. This might reflect measurement challenges in athletic competition. For example, athlete-worn measurement devices are often impractical. Further, video-based analyses typically require well-defined input videos to derive meaningful information; video-based systems often require extensive camera setup and calibration, limiting their use in competitive sport environments. This has limited the translation of applied sport science into material that aids public understanding of running performance.
Gait analysis has many applications, ranging from clinical assessment to athletic performance analysis. Running velocity is the product of step length and step frequency; to improve running performance, one or both must be improved [1]. For athletes, the ability to measure and monitor these key performance metrics would aid training and performance. For public audiences, the ability to understand and engage with these metrics would aid public engagement with, and understanding of, athletic performance. Non-invasive analysis systems have been developed to automatically measure key performance metrics with minimal intrusion to performance [1]. The Gait Analyser system at Sheffield Hallam University City Athletics Stadium (SHUCAS) automatically measures spatio-temporal sprint running parameters (e.g., step length, step frequency and running velocity) in situ (outdoors), without applying markers or sensors to the athlete or running track. Further, results are fed back to Wi-Fi-enabled devices ~2–3 s after capture, allowing flexible and near-immediate analyses of athletic performance (e.g., Figure 1).
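As a minimal worked example of the velocity relationship above, the following sketch (MATLAB, using illustrative rather than measured values) computes running velocity from step length and step frequency:

```matlab
% Running velocity is the product of step length and step frequency [1].
% Values are illustrative of elite sprinting, not measured data.
stepLength    = 2.4;                          % metres per step (assumed)
stepFrequency = 4.5;                          % steps per second (Hz, assumed)

velocity = stepLength * stepFrequency;        % metres per second
fprintf('Running velocity: %.1f m/s\n', velocity);   % 10.8 m/s
```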
Whilst such tools are useful for assessing an athlete’s sprint strategy, spatial measures require calibration, which is not always possible. Calibrating intrinsic and extrinsic camera parameters can be time-consuming or impracticable in sports environments. Further, camera perspective, which determines the physical resolution of individual pixels in calibrated space, often constrains camera filming location and thus the feasibility of analyses. Camera calibration, which allows spatial gait parameter measurement, can therefore limit the flexibility of gait analysis. However, temporal analyses of running, which do not require camera calibration, can provide informative performance metrics, such as step frequency and step count. Further, body posture detection algorithms [2], which self-scale posture information and are thus camera-agnostic, represent a flexible approach to capturing spatial-temporal metrics, such as step length and joint angle information. Both represent flexible approaches for analysing broadcast footage of competitive sport performances, and as such an opportunity to aid public engagement and understanding. This paper presents (1) a method to quantify temporal running parameters using broadcast footage of 100 m sprint and 1-mile endurance running, (2) an application of a posture detection algorithm to quantify spatial-temporal running parameters using a hand-held action camera and (3) examples of co-developed visualisations to improve engagement and public understanding of running performance.

2. Materials and Methods

All procedures were approved by the Research Ethics Committee of the Faculty of Health and Wellbeing, Sheffield Hallam University.

2.1. 100 m Sprint: Temporal Analysis

Overhead camera footage (PAL encoded at 25 Hz) of the 2015 World Championships men’s 100 m final in Beijing (in which Usain Bolt won gold in a time of 9.79 s) was obtained from BBC Sport (British Broadcasting Corporation, London, UK). Due to the unique camera perspective and the nature of a finals race (i.e., the slowest time was 10.06 s), all athletes remained in the field of view. Therefore, this analysis considers step frequency and step count for all competing athletes (n = 9) throughout the entire 100 m race. An open-source tracker [3] was manually initialised for each athlete (bounding box enclosing the athlete’s head) and executed sequentially. Horizontal coordinate tracking data for each athlete were then extracted and decomposed using a Daubechies 10 wavelet (level-three decomposition). Using the derived wavelet coefficients, a single-branch, 1D signal for each athlete was reconstructed using the same wavelet and decomposition level. Finally, peak detection was performed for step signal minima and maxima (foot contacts corresponding to lateral extremes of horizontal athlete motion). Figure 2a presents extracted athlete tracking data (horizontal direction) for all athletes and Figure 2b presents corresponding wavelet-based step signals (Usain Bolt plotted in solid green; maxima and minima represent foot contacts).
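A minimal sketch of this pipeline is given below (MATLAB, Wavelet and Signal Processing Toolboxes), assuming horizontal tracking coordinates have already been extracted with the KCF tracker [3]. Reconstruction of the level-3 detail branch is an assumption; the paper specifies only the wavelet (Daubechies 10) and the decomposition level (three).

```matlab
% Sketch: wavelet-based step signal and foot-contact detection (Section 2.1).
% x: vector of horizontal athlete-head coordinates (pixels) from a KCF
% tracker [3], sampled at 25 Hz (PAL broadcast footage).

% Level-three decomposition using a Daubechies 10 wavelet.
[c, l] = wavedec(x, 3, 'db10');

% Single-branch reconstruction of the step signal (level-3 detail branch
% assumed here; this isolates step oscillation from gross athlete motion).
stepSignal = wrcoef('d', c, l, 'db10', 3);

% Peak detection: maxima and minima correspond to foot contacts (lateral
% extremes of horizontal athlete motion).
[~, maxLocs] = findpeaks(stepSignal);
[~, minLocs] = findpeaks(-stepSignal);
contactFrames = sort([maxLocs(:); minLocs(:)]);

% Step count and instantaneous step frequency at 25 Hz.
stepCount = numel(contactFrames);
stepFreq  = 25 ./ diff(contactFrames);        % Hz
```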

2.2. 1-Mile Endurance: Temporal Analysis

The New Balance 5th Avenue Mile is a mile-long race held in New York City. Broadcast race footage of the 2017 and 2018 women’s events was obtained from New Balance (New Balance Athletics, Boston, MA, USA). Footage from both races (NTSC encoded at 30 Hz) comprised stationary cameras (e.g., panning and zooming) and moving cameras (e.g., motorcycle-mounted). A single athlete was selected for analysis as she won both events, running predominantly at the front of the pack and thus minimising camera–athlete occlusion. Image sequence windows suitable for analysis were identified and athlete tracking (as in Section 2.1) applied. Subsequently, wavelet decomposition (as in Section 2.1) was applied to vertical coordinate tracking data; peak detection was performed for step signal minima (foot contacts corresponding to vertical oscillation minima), and windowed analyses were aligned to race time. Figure 3a presents an example of a windowed (~12–20 s into the race) wavelet-based step signal, and Figure 3b presents corresponding step time intervals. The application of a discretised smoothing spline (based on generalised cross-validation) [4] across captured and missing (e.g., camera view change, athlete occlusion) step interval data allowed step frequency (e.g., red line; Figure 3b) and step count to be estimated for the entire 1-mile race.
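The spline stage can be sketched with Garcia’s open-source smoothn function, which accompanies reference [4] and treats NaN entries as missing data. The race-time grid, gap threshold and resampling below are assumptions for illustration.

```matlab
% Sketch: continuous step-frequency estimate across analysis windows with
% missing data (Section 2.2). Requires smoothn (Garcia [4], File Exchange).
% contactTimes: foot-contact times (s) pooled from all analysis windows.

stepIntervals = diff(contactTimes);
valid = stepIntervals < 1;                   % drop cross-window gaps (assumed)
midTimes = contactTimes(1:end-1) + stepIntervals / 2;

% Place instantaneous step frequency (1/interval) on a regular race-time
% grid; NaN marks missing data (camera view change, occlusion etc.).
raceTime = 0 : 0.1 : 300;                    % assumed race-time grid (s)
rawFreq  = nan(size(raceTime));
idx = interp1(raceTime, 1:numel(raceTime), midTimes(valid), 'nearest');
rawFreq(idx) = 1 ./ stepIntervals(valid);

% Discretised smoothing spline (GCV-based) across captured and missing data.
stepFreqCurve = smoothn(rawFreq);

% Step count estimate: integrate the step-frequency curve over race time.
stepCount = sum(stepFreqCurve(1:end-1) .* diff(raceTime));
```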

2.3. Training Setting: Spatial-Temporal Analysis

Four male participants (age = 41.3 ± 9.3 years; stature = 1.85 ± 0.05 m; mass = 76.8 ± 10.2 kg) were recruited and written informed consent was obtained. Participants were asked to run three trials at a self-selected pace along the final 10 m of the 100 m straight at SHUCAS. All trials were filmed with a hand-held action camera (near-sagittal plane view) capturing at 60 Hz (Hero7 Black, GoPro, USA). Action camera images were processed using the LCR-Net++ 3D pose detection method [2] on a Microsoft Azure virtual machine. Output body location data, comprising self-scaled 3D world coordinates of identified joints, were processed using MATLAB (2019b, MathWorks, USA). The scalar product of thigh and shank vectors (defined by the 3D coordinates of corresponding hip, knee and ankle joints) was used to estimate knee angle (e.g., Figure 4c). Foot lift and foot strike events, based on sagittal plane ankle trajectories, were used to define step frequency and step length (the product of mean swing foot velocity and corresponding step time). Figure 4 presents a sample camera image with identified joints highlighted (Figure 4a), 3D self-scaled world coordinates (Figure 4b) and estimated left and right knee joint angles (Figure 4c).
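A minimal sketch of the knee-angle computation follows, using illustrative self-scaled coordinates standing in for LCR-Net++ [2] output:

```matlab
% Sketch: knee joint angle from the scalar product of thigh and shank
% vectors (Section 2.3). Coordinates are illustrative self-scaled world
% values (m), standing in for LCR-Net++ [2] hip, knee and ankle output.
hip   = [0.00, 0.90, 0.00];
knee  = [0.05, 0.50, 0.10];
ankle = [0.00, 0.10, 0.00];

thigh = hip - knee;                               % knee -> hip vector
shank = ankle - knee;                             % knee -> ankle vector
cosTheta  = dot(thigh, shank) / (norm(thigh) * norm(shank));
kneeAngle = acosd(min(max(cosTheta, -1), 1));     % degrees, clamped for safety
fprintf('Knee angle: %.1f deg\n', kneeAngle);
```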

2.4. Data Assessment and Data Visualisation

For temporal analyses, the time of identified foot contacts and derived step frequency (the quotient of one second and the absolute time difference between consecutive foot contacts) were assessed against manually identified instants of perceived mid-stance. To assess the reliability of manual identification, foot contact instants (n = 22) were identified on five repeat occasions for the broadcast footage (25 Hz) used in Section 2.1. Standard error of the mean was <0.01 s; the maximum time difference for repeat identified foot contacts was 0.04 s. For the 1-mile endurance race (Section 2.2), this assessment was performed on the 2017 footage only. For spatial-temporal analyses, step length and step frequency data were compared to corresponding reference data captured concurrently by the Gait Analyser system [1] (operating at 50 Hz). For all step frequency and step length measures, agreement was assessed using Bland and Altman 95% limits of agreement (LOA). In the case of heteroscedasticity (i.e., r² > 0.1), ratio LOA (dimensionless) were reported. Further, root-mean-square error (RMSE) was calculated. For 100 m sprint performance footage, step count and step frequency information were rendered in the broadcast video using athlete tracking data. For 2018 footage of the Women’s New Balance 5th Avenue Mile, a feedback ‘information centre’ was developed with the New Balance Advanced Concepts team. The purpose of the ‘information centre’ was to present relevant and easily interpretable running performance data alongside the original broadcast video. For training setting videos, recruited participants were invited to review captured video and data derived from their training runs. This session was also used to allow participants to express which feedback information was pertinent to them, as well as to co-design individualised feedback visualisations.
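The agreement statistics can be sketched as follows (MATLAB); the antilog formulation of the ratio LOA is an assumption, as the exact calculation is not stated in the paper.

```matlab
function stats = agreementStats(measured, reference)
% Sketch: Bland-Altman 95% limits of agreement (LOA), heteroscedasticity
% check, ratio LOA and RMSE (Section 2.4), for paired vectors (e.g., step
% frequency estimates against Gait Analyser reference data [1]).
diffs = measured(:) - reference(:);
means = (measured(:) + reference(:)) / 2;

stats.bias = mean(diffs);
stats.loa  = stats.bias + [-1, 1] * 1.96 * std(diffs);  % absolute 95% LOA

c = corrcoef(means, diffs);        % heteroscedasticity check: if
stats.r2 = c(1, 2)^2;              % r^2 > 0.1, report ratio LOA instead

logRatio = log(measured(:) ./ reference(:));            % ratio scale
stats.meanRatio   = exp(mean(logRatio));                % e.g., 1.02 ...
stats.ratioFactor = exp(1.96 * std(logRatio));          % ... (multiply/divide by 1.10)

stats.rmse = sqrt(mean(diffs.^2));
end
```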

3. Results

Table 1 presents step identification rate, LOA and RMSE for step frequency and step length. Temporal analysis identified 100% and 97.6% of steps in 100 m sprint and 1-mile endurance running, respectively. Step frequency estimates for the 100 m sprint and 1-mile endurance analyses were heteroscedastic: 95% of ratios lay within 44.2% and 7.5% of the mean ratio, respectively. RMSE for 100 m sprint and 1-mile endurance step frequency estimates was 0.12 and 0.17 Hz, respectively. Spatial-temporal analysis identified 100% of steps in the training setting. Step frequency and step length estimates were heteroscedastic: 95% of ratios lay within 15.0% and 27.4% of the mean ratio, respectively. RMSE for training setting step frequency and step length was 0.31 Hz and 0.85 m, respectively.

4. Discussion

The flexibility of video-based gait analysis, an important tool for public engagement with and understanding of athletic performance, can be limited by camera calibration. However, novel temporal (Section 2.1 and Section 2.2) and posture detection [2] analyses represent flexible approaches for improving public understanding. The video footage used in the current study presents challenging environments for analysis, encompassing fixed panning and zooming cameras as well as moving cameras that experience extreme vibration (e.g., motorcycle-mounted filming; Video S1). Regardless, the 100 m sprint and training setting analyses identified 100% of steps. For the 1-mile endurance race analyses, 97.6% of steps were identified; step detection discrepancies reflect windowed, rather than continuous, analyses, whereby step signal minima detection occasionally missed steps at window limits. Continuous tracking would resolve this; however, view changes are common in endurance race footage and represent a potential limitation. All step frequency estimates exhibited heteroscedasticity, with larger step frequencies (i.e., shorter step time intervals) exhibiting larger errors. This highlights the temporal resolution of current broadcast video as insufficient for accurate determination of mid-stance and reflects a need for higher frame rate broadcast video in sport. Finally, training setting step length estimate residuals exhibited a strong positive correlation (i.e., heteroscedasticity; Table 1). This strong correlation might reflect the self-scaling nature of posture detection algorithms [2]. Participant stature was automatically scaled in the current analyses. Whilst not directly indicative of stature, the mean distance between head and ankle locations was 1.47 ± 0.01 m, indicating that stature was underestimated. Step length estimates will also be influenced by their method of calculation (i.e., Section 2.3); however, scaling limitations likely explain the heteroscedasticity observed in step length estimates. Figure 5 presents a visualisation for the 2015 World Championships men’s 100 m final; step frequency data (Figure 5b) highlight Bolt’s and Gatlin’s performances (solid green and red, respectively). Temporal analysis highlighted that Bolt took fewer steps (41 vs. 43) at a lower average step frequency (4.5 Hz vs. 4.8 Hz), whilst an interesting ‘in-phase’ period observed in Bolt’s and Gatlin’s step frequencies (~4–5 s; Figure 5b) might reflect competitive interaction between athletes.
Figure 6a presents the New Balance 5th Avenue Mile visualisation, with the co-developed ‘information centre’ displaying step count and step frequency information. Live, step-by-step updates of step count and frequency data provide viewers with objective and engaging performance information that visually corresponds to the race. Figure 6b presents a training setting video, illustrating feedback elements. The ‘step-circle’, indicative of idealised lower-limb motion, together with the swing foot trajectory (e.g., blue shading), highlights technique aspects pertinent to economical running. Further, body alignment (e.g., the red circle is the mid-hip) relative to the head and point-of-support vector identifies body lean, another important aspect of technique. Finally, whilst knee joint angle data (Figure 4c) identify gait phases, their accuracy was not addressed. The apparent suppression of angle magnitudes might be systematic, and thus potentially useful in training; however, this should be investigated.

5. Conclusions

This paper demonstrates the utility of video technologies to aid public engagement and understanding in running. The presented technologies are flexible in their application and derive useful information; visualisations can be tailored to specific audiences to aid public engagement in sport.

Supplementary Materials

Video S1: ‘Temporal running analyses using broadcast video’, available online at: www.youtube.com/playlist?list=PLU0PmTP_JoKd9sEo4s1kc_rhKzr4p_KkG.

Acknowledgments

The authors would like to thank Patrick Streeter (Senior Advanced Concepts Engineer, New Balance Athletics Inc.) for assisting co-development of performance visualisations (Section 2.4 and Figure 6a).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dunn, M.; Kelley, J. Non-invasive, spatio-temporal gait analysis for sprint running using a single camera. Procedia Eng. 2015, 112, 510–515.
  2. Rogez, G.; Weinzaepfel, P.; Schmid, C. LCR-Net++: Multi-person 2D and 3D pose detection in natural images. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 1146–1161.
  3. Henriques, J.; Caseiro, R.; Martins, P.; Batista, J. High-speed tracking with kernelized correlation filters. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 583–596.
  4. Garcia, D. Robust smoothing of gridded data in one and higher dimensions with missing values. Comput. Stat. Data Anal. 2010, 54, 1167–1178.
Figure 1. Example of sprint performance feedback provided by Gait Analyser (adapted from Dunn and Kelley [1]). Relationship between step rate, step length and sprint velocity indicated by circle size and contours. Circle colour corresponds to each athlete; best performances are indicated by a star.
Figure 2. (a) Athlete tracking data (n = 9); (b) Wavelet-based step signals extracted for analysis (Usain Bolt plotted in solid green; foot contacts are highlighted by red circles).
Figure 3. (a) Windowed (~12–20 s into race) wavelet-based step signal (foot contacts are red circles); (b) corresponding step time intervals (coloured circles) and filtered step frequency estimate (red line).
Figure 4. (a) Sagittal plane image and identified 2D locations; (b) frontal view of 3D world coordinates for self-scaled locations; (c) knee joint angle estimates for left (green) and right (blue) legs.
Figure 5. (a) Broadcast footage with step count and frequency information. (b) Step frequency for all athletes (Bolt and Gatlin highlighted by solid green and red lines respectively) aligned to race time.
Figure 6. (a) Broadcast footage of the Women’s 2018 New Balance 5th Avenue Mile with co-designed New Balance Athletics ‘information centre’, presenting Jenny Simpson’s current step count and step frequency. (b) Training setting footage, with co-designed, spatial-temporal feedback visualisations.
Table 1. Identification rate, LOA and RMSE for step frequency and step length estimates.
Environment          Identified Steps    Absolute LOA    r²      Ratio LOA        RMSE

Step Frequency (Hz)
100 m sprint         400/400 (100%)      −0.12 ± 1.74    0.24    0.98 (×/÷1.43)   0.12
1-mile endurance     827/847 (97.6%)     0.06 ± 0.32     0.12    1.02 (×/÷1.10)   0.17
Training setting     56/56 (100%)        0.09 ± 0.59     0.47    1.03 (×/÷1.21)   0.31

Step Length (m)
Training setting     56/56 (100%)        0.79 ± 0.59     0.91    1.81 (×/÷1.42)   0.85
