Level of Agreement, Reliability, and Minimal Detectable Change of the Musclelab™ Laser Speed Device on Force–Velocity–Power Sprint Profiles in Division II Collegiate Athletes

This study examined the level of agreement (Pearson product-moment correlation [rP]), within- and between-day reliability (intraclass correlation coefficient [ICC]), and minimal detectable change of the Musclelab™ Laser Speed (MLS) device for sprint times and force–velocity–power profiles in Division II collegiate athletes. Twenty-two athletes (soccer = 17, basketball = 2, volleyball = 3; 20.1 ± 1.5 y; 1.71 ± 0.11 m; 70.7 ± 12.5 kg) performed three 30-m sprints on two separate occasions (seven days apart). Six time splits (5, 10, 15, 20, 25, and 30 m), horizontal force (HZT F0; N·kg−1), peak velocity (VMAX; m·s−1), horizontal power (HZT P0; W·kg−1), and the force–velocity slope (SFV; N·s·m−1·kg−1) were measured. Sprint data from the MLS were compared to those from the previously validated MySprint (MySp) app to assess the level of agreement. The MLS showed good to excellent within- and between-day reliability (ICC = 0.69–0.98 and 0.77–0.98, respectively). Despite a low level of agreement for HZT F0 (rP = 0.44), the MLS had moderate to excellent agreement across the remaining nine variables (rP = 0.68–0.98). Bland–Altman plots displayed significant proportional bias for VMAX (mean difference = 0.31 m·s−1, MLS < MySp). Overall, the MLS agrees with the MySp app and is a reliable device for assessing sprint times, VMAX, HZT P0, and SFV. Proportional bias should be considered for VMAX when comparing the MLS to the MySp app.


Introduction
One focus of sprint research is the interpretation of the force-velocity-power (FVP) profile for guiding individualized program development [1]. The common FVP profile consists of the theoretical horizontal force output per body mass (HZT F0; N·kg−1), theoretical maximal running velocity (VMAX; m·s−1), maximal mechanical horizontal power output per body mass (HZT P0; W·kg−1), the slope of the force-velocity curve (SFV), and the ratio of decreasing force with increasing velocity (DRF). Customizing training to the athlete's specific FVP profile may allow coaches to develop optimal individual training interventions [1][2][3]. The benefits of FVP profiling include addressing variability in playing level [4,5], sport position [6], and age [7], as well as the ability to take into account external variables, such as time of season [8]. Team sports, such as rugby [3,9], soccer [4,7], ice hockey [10], and netball [11], as well as individual athletic activities, such as ballet [12] and gymnastics [13], have utilized this approach.
An important concern with FVP profiling is the ability to obtain reliable data for within- and between-day testing. Inconsistencies have been reported in jumping [14] and short (<10 m) sprinting performance [15], which causes concern about the efficacy

Participants
Twenty-two Division II university athletes (20.1 ± 1.5 y; 1.71 ± 0.11 m; 70.7 ± 12.5 kg) volunteered to participate in this study. The sample included 16 females and 6 males (13 female soccer, 4 male soccer, 2 male basketball, 3 female volleyball). An a priori power analysis was conducted using G*Power 3.1 software [40] for a one-tailed Pearson product-moment correlation between the two devices (MLS vs. MySp), with α = 0.05, an H1 correlation coefficient (ρ) of 0.81, and an H0 correlation coefficient of 0.5; the sample size of 22 participants produced a power of 0.81. A post hoc power analysis, using the sample size of 22 and the observed Pearson product-moment correlation of 0.87, determined the actual power of the study to be 0.95.
Inclusion criteria were (1) being 18 to 24 years old, (2) being an active member of their respective sports team, and (3) being free of any physical limitations, defined as having no lower or upper body musculoskeletal injuries that affected maximum sprinting ability. The exclusion criterion was a musculoskeletal upper or lower body injury that affected sprinting. Before enrollment, all participants completed the Physical Activity Readiness Questionnaire and a medical history questionnaire.

Procedures
Participants reported to the research facility on two days for sprint testing, with each visit separated by seven days. Prior to maximal sprint testing, participants were weighed on a digital scale (Taylor Precision Products, Oak Brook, IL, USA) with full clothing and shoes, and their body mass in kilograms was entered into the MLS and the MySp app. Next, participants performed a standardized 15 min warm-up, consisting of 5 min of jogging, 5 min of lower limb dynamic stretching, and 5 min of progressive 30-m sprints at 50%, 70%, and 90% effort. After the warm-up, participants performed three 30-m sprints at maximal effort, with 5 min of rest between sprints. The 30-m sprint distance has been used in previous research [15,27,36] and is the distance setting for the MySp app. All participants started in a two-point stance, with no false step, and were instructed to start the sprint at any time. All participants were familiar with sprint testing, and all warm-ups and sprinting sessions were supervised by a certified strength and conditioning specialist (CSCS-NSCA). Each sprint was assessed simultaneously by the MLS and the MySp app.

Musclelab Laser Speed
Measurements of instantaneous split times and FVP data were recorded using the MLS (Musclelab™ 6000 ML6LDU02 Laser Speed device, Ergotest Innovations, Stathelle, Norway), sampling at 2.5 KHz (Figure 1). Raw data were analyzed using the Musclelab software (version 10.213.98.5188), which measures continuous velocity (Vh(t)) and distance to create an individual FVP profile. A mathematical model is fitted to the recorded velocity/time data by calculating the time constant, "tau" (τ). Thus, the velocity/time can be retrieved from the model as

Vh(t) = VMAX · (1 − e^(−t/τ)),

where VMAX is the observed maximal velocity. A mathematical derivation allows calculation of the horizontal acceleration/time:

ah(t) = (VMAX/τ) · e^(−t/τ).

The horizontal force/time can then be calculated as

Fh(t) = m · ah(t) + Fair,

where m is the sprinter's body mass and Fair is the force caused by wind drag. Fh(t) is then expressed as a function of Vh(t), and a linear fit is applied, giving the form Fh = A·Vh + B, also known as the F/V profile, where A and B are the constants of the linear fit. The MLS is designed to operate in a typical indoor environment, such as a sports hall or gymnasium, on a flat surface of 29-60 m. The MLS was placed 3 m behind the starting line on a tripod at a height of 0.91 m and oriented to the participant's lower back, proximal to his or her center of mass; this distance behind the start line is similar to previous methodology [25]. The MLS has a standard aiming scope (Strike Red® Dot 1 × 30) with a pointer beam (605 nm) precision width of 1-5 mm and a range of 75 m. The MLS measures continuous distance and the FVP profile during sprinting. To ensure that all participants performed the sprint in a straight line, the investigators created a lane width of 0.66 m, which is considerably narrower than a standard track lane (1.07-1.22 m, depending on the level of competition). Sprint performance measurements were available immediately after each sprint. The head strength and conditioning coach was the sole operator for all MLS sprint trials.
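The modeling pipeline described above can be illustrated numerically. The sketch below uses illustrative values (not the study's data) for VMAX, τ, and body mass, evaluates the mono-exponential model Vh(t) = VMAX(1 − e^(−t/τ)) and its derivative, and fits the linear F/V relationship to recover the profile variables; the air-drag term Fair is omitted for simplicity.

```python
import numpy as np

# Illustrative values (not from the study)
V_MAX = 8.0   # observed maximal velocity (m/s)
TAU = 1.1     # time constant of the mono-exponential model (s)
MASS = 70.0   # body mass (kg)

t = np.linspace(0.1, 4.0, 200)            # time points during the sprint (s)
v = V_MAX * (1 - np.exp(-t / TAU))        # modeled velocity/time
a = (V_MAX / TAU) * np.exp(-t / TAU)      # its derivative: acceleration/time
F = MASS * a                              # net horizontal force (drag omitted)

# Linear fit of force per kg against velocity: F/m = A*v + B
A, B = np.polyfit(v, F / MASS, 1)
HZT_F0 = B                  # theoretical force at zero velocity (N/kg)
V0 = -B / A                 # theoretical velocity at zero force (m/s)
HZT_P0 = HZT_F0 * V0 / 4    # apex of the parabolic power-velocity curve (W/kg)
S_FV = A                    # slope of the F-V relationship (negative)
```

Because the model implies Fh/m = VMAX/τ − v/τ exactly, the fit recovers V0 = VMAX and HZT F0 = VMAX/τ; with real laser data the fit absorbs measurement noise instead.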

MySp App
The MySp app videos were filmed with an iPad (7th generation; iOS 14.4.2, with built-in slow-motion video support at 120 fps at a quality of 720p) according to previously validated methodologies [20]. Notably, Romero et al. [20] used a 40-m track, whereas the current study implemented a 30-m track, consequently slightly altering our parallax measurements (Figure 2). The iPad was mounted on a tripod (height, 1.46 m) to record each sprint from the side. The video parallax was corrected to ensure that the 5, 10, 15, 20, 25, and 30 m split times were measured accurately. As per [20], the marking poles were not placed exactly at the associated distances but, rather, at adjusted positions, so that the iPad camera captured the participants' hips crossing the markers when they were precisely at the targeted distances (Figure 3).
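The pole-position adjustment described above amounts to a similar-triangles calculation: the sight line from the camera through the athlete's hip is extended past the running line to find where the pole must stand. All geometry values below (camera position, camera distance, pole offset) are hypothetical illustrations, not the study's actual setup.

```python
# Hypothetical geometry: camera perpendicular to the lane at the 15 m point
CAM_POSITION = 15.0   # camera's position along the track (m)
CAM_DIST = 10.0       # perpendicular distance from camera to running line (m)
POLE_OFFSET = 1.0     # distance from running line to the pole line (m)

def adjusted_pole_position(target_m: float) -> float:
    """Where to place a pole so that the athlete's hip at `target_m`
    lines up with the pole in the camera image (similar triangles)."""
    scale = (CAM_DIST + POLE_OFFSET) / CAM_DIST
    return CAM_POSITION + (target_m - CAM_POSITION) * scale

for split in (5, 10, 15, 20, 25, 30):
    print(split, round(adjusted_pole_position(split), 2))
```

Poles beyond the camera's perpendicular shift away from it, and poles before it shift toward it; only the marker directly opposite the camera needs no adjustment.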

Independent Observers
Two independent observers were asked to time stamp the exact start of the sprint initiated by the participants' movement and each split time for the entire sprint. The rationale for using two independent observers with the MySp app was to ensure that each observer accurately identified the start of the sprint and the crossing of the hips (i.e., split times) at each marking point. The start of the sprint was defined as the moment when the back leg plantar flexed, indicating that the participant applied force to the ground. Stamping the start time from the participants' movement, instead of from an external verbal cue (e.g., "set, go"), avoided any confounding effects of participants' reaction time. The first three successful repetitions for each testing day were used in the analysis.

Statistical Analysis
Data analysis was performed using IBM SPSS, Version 27.0 (SPSS Inc., Chicago, IL, USA) software for Windows. Descriptive data for participants' characteristics and experimental variables were calculated as means and standard deviations with 95% confidence intervals. Normality of the distributions for each dependent variable was tested using a Shapiro-Wilk test. Absolute reliability was calculated using ICCs for the within-day (within the three sprints of each session) and between-day (mean of each session) data to determine the reliability of the MLS.
The ICC results were interpreted as 0.2-0.49 = low, 0.50-0.74 = moderate, 0.75-0.89 = high, 0.9-0.98 = very high, and 0.99 = extremely high [41]. Internal consistency for the MLS was examined by calculating the coefficient of variation (CV). A coefficient of variation (SD/mean) of <10% was considered acceptable for reliability and <5%, acceptable for fitness testing standards [42]. The CV was calculated for each participant for the three sprints across both testing days. The average CV of all participants for each dependent variable was reported.
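The per-participant CV computation described above can be sketched as follows; the sprint times are hypothetical, not study data.

```python
import numpy as np

# Hypothetical 30-m sprint times (s): rows = participants, columns = 3 trials
times = np.array([
    [4.52, 4.48, 4.55],
    [4.21, 4.25, 4.19],
    [4.90, 4.84, 4.88],
])

# CV (SD/mean, as a percentage) for each participant across their three sprints
cv_per_participant = times.std(axis=1, ddof=1) / times.mean(axis=1) * 100

# The value reported for the variable is the average CV over participants
mean_cv = cv_per_participant.mean()
print(f"mean CV = {mean_cv:.1f}%")
```

The reported figure is the mean of individual CVs rather than a single pooled CV, so one erratic participant inflates the average rather than being diluted across the whole sample.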
The validated MySp app was selected as the device to assess the level of agreement with the MLS [20]. Level of agreement was assessed by an examination of the significance of the paired t-test mean difference, ICCs, and Pearson product-moment correlation (rP) with 95% confidence intervals. Bland-Altman plots were created (GraphPad Prism version 9.2 for Windows; GraphPad Software, La Jolla, CA, USA) to examine the linear regression of the difference scores against the average scores across the devices [20,43]. Examination of the 95% confidence intervals for the slope and y-intercept across the split times and FVP variables was used to determine proportional and fixed bias, respectively.
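The Bland-Altman bias analysis described above can be sketched as follows; the paired device readings are hypothetical. A slope confidence interval excluding zero signals proportional bias, and an intercept confidence interval excluding zero signals fixed bias.

```python
import numpy as np

# Hypothetical paired VMAX readings (m/s) from the two devices
mls  = np.array([7.2, 7.5, 7.8, 8.0, 8.3, 8.6])
mysp = np.array([7.4, 7.8, 8.1, 8.4, 8.7, 9.1])

diff = mysp - mls          # y-axis of the Bland-Altman plot
avg = (mysp + mls) / 2     # x-axis of the Bland-Altman plot

# Regress the differences on the averages to check for proportional bias
slope, intercept = np.polyfit(avg, diff, 1)

mean_bias = diff.mean()           # fixed (mean) difference between devices
loa = 1.96 * diff.std(ddof=1)     # 95% limits of agreement: mean_bias ± loa
```

In this made-up example the between-device difference grows with sprint speed (positive slope), the pattern reported for VMAX in the study.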
A one-way repeated-measures analysis of variance (ANOVA) was used to calculate the mean square error for the within- and between-session data. The standard error of measurement (SEM) was calculated by taking the square root of the mean square error [44], and the MDC was determined at the 95% confidence interval (MDC95), which provides assurance that a true change has occurred beyond measurement error: MDC95 = SEM × √2 × 1.96 [45]. The smallest worthwhile change (SWC) was calculated for within-day testing by selecting the best sprint trial for each session and then multiplying the between-participant standard deviation by 0.2. For between-day testing, the average standard deviation across both days was multiplied by 0.2 [46].
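A minimal sketch of the SEM, MDC95, and SWC calculations, assuming hypothetical times for five participants over two days. With only two trials, the repeated-measures ANOVA error term reduces to half the variance of the day-to-day differences, which the sketch exploits.

```python
import math
import numpy as np

# Hypothetical best 30-m times (s) for five participants on two testing days
day1 = np.array([4.50, 4.22, 4.88, 4.61, 4.35])
day2 = np.array([4.47, 4.26, 4.84, 4.66, 4.31])

# SEM = sqrt(ANOVA mean square error); for two trials the error term
# equals half the variance of the day-to-day differences
diff = day2 - day1
sem = math.sqrt(np.var(diff, ddof=1) / 2)

# Minimal detectable change at the 95% level
mdc95 = sem * math.sqrt(2) * 1.96

# Smallest worthwhile change: 0.2 x the average between-participant SD
swc = 0.2 * (day1.std(ddof=1) + day2.std(ddof=1)) / 2
```

An observed change smaller than mdc95 cannot be distinguished from measurement noise, even if it exceeds swc; comparing the two quantifies the device's sensitivity.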
To ensure the accuracy of the MySp app analysis, two independent observers analyzed each participant's video for sprint times. Six independent t-tests were used to calculate the mean differences across the six splits (5, 10, 15, 20, 25, and 30 m) to determine whether significant differences in sprint times existed between observers. If no significant differences were found, the data from the primary investigator were used in the analysis; if significant differences were found, the average of the two observers was recorded.

Within-Day Reliability
Normality was satisfied for 95.8% of the data, using the Shapiro-Wilk test (p > 0.05); out of 120 variables (10 scores × 6 sprints × 2 devices), five did not satisfy normality. Raw data means, standard deviations, ICCs, CVs, SWCs, SEMs, and MDC95 values with 95% confidence intervals were calculated for the sprint times and FVP profile for the three sprints across both testing days (Tables 1-4). Except for HZT F0 (0.71 and 0.69) and Day 1 for SFV (0.71), nine of the ten variables reported high to very high (0.80-0.98) within-day reliability scores for both days. The average ICCs across all variables for Day 1 and Day 2 were 0.90 and 0.92, respectively. The average coefficients of variation across all variables for Day 1 and Day 2 were highly acceptable, at 2.4% and 1.8%, respectively. The MDC95 values for split times ranged from 0.06 to 0.11 s on Day 1 and from 0.05 to 0.14 s on Day 2. The MDC95 values for FVP on Day 1 were 1.16 N·kg−1 (HZT F0), 0.25 m·s−1 (VMAX), 1.99 W·kg−1 (HZT P0), and 0.29 N·s·m−1·kg−1 (SFV); for Day 2, they were 0.83 N·kg−1 (HZT F0), 0.30 m·s−1 (VMAX), 1.41 W·kg−1 (HZT P0), and 0.12 N·s·m−1·kg−1 (SFV).

Between-Day Reliability
Between-day reliability values are presented in Table 5 as an average across both testing days. Seven out of ten ICCs were above 0.9, and all CVs were ≤5%, indicating very high to acceptable reliability. The average ICC and CV across all variables between testing days were 0.90 and 1.6%, respectively. Between days, the MDC95 values for split times ranged from 0.08 to 0.14 s, and the values for FVP were 0.66 N·kg−1 (HZT F0), 0.28 m·s−1 (VMAX), 1.43 W·kg−1 (HZT P0), and 0.11 N·s·m−1·kg−1 (SFV).

MySp App Reliability
The descriptive and reliability data for the MySp app are reported in Table 6. The average ICCs across all variables for Day 1 and Day 2 were excellent, at 0.91 and 0.90, respectively. The average coefficients of variation across all variables for Day 1 and Day 2 were highly acceptable, at 2.6% and 2.5%, respectively.

Level of Agreement
The mean difference score and rP were calculated to assess the level of agreement between the MLS and the MySp app (Table 7). The rP ranged from 0.44 to 0.98, with low agreement for HZT F0 (0.44), moderate agreement for SFV (0.60), and high levels of agreement (greater than 0.88) for the remaining variables. Three of the ten variables had no statistical difference in mean difference scores (p > 0.05) based on the paired t-test. A statistical difference (p < 0.05) was reported for the four split times from 15 to 30 m, with the MySp app reporting faster times than the MLS. A statistical difference (p < 0.05) was also reported for HZT F0, with the MySp app reporting lower values than the MLS (6.85 < 7.10, mean difference = 0.25), as well as for VMAX (p < 0.001; MySp faster than MLS, 8.08 > 7.77 m·s−1, mean difference = 0.31 m·s−1) and SFV (p = 0.01; MySp > MLS, −0.83 > −0.87, mean difference = 0.04). No proportional or fixed bias was observed for the split times, HZT F0, HZT P0, or SFV, as indicated visually by the Bland-Altman plots (Figures 4-7), which show a low R2 for the linear regressions of HZT F0, HZT P0, and SFV (0.12, 0.14, and 0.11, respectively). In addition, except for VMAX, all confidence intervals, including those for the split times, contained zero for the slope and intercept, indicating that the differences between the devices were the same across the six splits. For VMAX, the R2 of the linear regression plots has a large effect

Inter-Observer Analysis
Six independent t-tests were used to determine significant differences among six sprint times between two independent observers. Mean difference and p-values for the 5 m


Discussion
This study reported the level of agreement, reliability, and MDC of the MLS device during maximal 30-m sprints in a sample of Division II collegiate athletes. The major finding is that the MLS displayed a moderate to excellent level of agreement and reliability for nine of ten variables, the exception being HZT F0. A second finding is that there was a significant proportional bias for VMAX when comparing the MLS to the MySp app, with the MySp app detecting faster velocities (mean difference = 0.31 m·s−1). Finally, significant differences occurred in four split times (15-30 m), HZT F0, and SFV between the two devices, with the MySp app reporting faster times, lower HZT F0, and a more velocity-dominant slope.
The low to moderate ICC and rP for HZT F0 and the moderate ICC and rP for SFV indicate a partial rejection of our primary hypothesis that the MLS would agree with the MySp app for all components of the FVP profile. Our second hypothesis, that the MLS is reliable for within- and between-day testing of split times and the FVP profile, was supported for nine of ten variables. Despite the proportional bias for VMAX, our finding is consistent with a previous study showing the mathematical model for the MySp app to have a 0.32 m·s−1 bias compared to force plate analysis [47].
Our ICCs were both higher [25,28] than and similar [27,31] to those of previous studies on the reliability of sports lasers. Within- and between-day ICCs and CVs across all split times were above 0.83 and less than 5%, respectively. Both HZT P0 and VMAX within- and between-day data showed ICCs above 0.92 and CVs less than 5%. Notably, HZT F0 showed acceptable internal consistency, with CVs less than 5% for both within-day (4.7% and 3.8%) and between-day (2.7%) sessions. Pearson product-moment correlations and absolute-agreement ICCs for all split times, HZT P0, VMAX, and SFV ranged from 0.60 to 0.98 and from 0.68 to 0.95, respectively. Similar to our reliability data, the MLS did not agree with the MySp app data for HZT F0, with rP and ICC at 0.44 and 0.55, respectively. Bland-Altman plots showed no fixed or proportional bias, except for the proportional bias for VMAX (MySp > MLS). The absence of bias is supported by all split times, HZT F0, HZT P0, and SFV having zero within the 95% CI for the slope and intercept. VMAX did not have zero in the 95% CI for the slope (−0.20 to −0.02) but did for the intercept (−0.15 to 1.29), indicating proportional, but not fixed, bias.
MDC is defined as the smallest change in a variable that reflects a true change in performance [48]. MDC is important, particularly when monitoring athletes over several trials, as between-trial variation may suggest a change that has not actually exceeded the threshold of error [36,49]. Studies by Edwards et al. [36] and Ferro et al. [37], which used radar and laser technology, respectively, reported MDCs at the 90% confidence interval. Our split time and FVP profile MDC data were similar to those of Edwards et al. [36] when comparing the average of three trials. SWC is defined as the smallest change in a metric that is likely of practical importance [50]. In four splits (15-30 m) and VMAX, the CV% was approximately equal to the SWC%, indicating that the MLS had an "okay" sensitivity rating in terms of detecting real change [51]. In contrast, the CV% for HZT F0, HZT P0, and SFV was greater than the SWC%. For Day 1 and Day 2, respectively, HZT F0 had a CV of 4.7% vs. an SWC of 1.6% and 3.8% vs. 1.2%; HZT P0, 3.7% vs. 2.5% and 3.5% vs. 2.2%; and SFV, 7.0% vs. 1.5% and 4.0% vs. 1.1%. Therefore, all three measurements had "marginal to poor" within-day sensitivity [51,52].
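The CV%-versus-SWC% comparison above can be expressed as a simple rating function. The tolerance band below is an assumption for illustration only; the cited scheme [51] compares CV% to SWC% qualitatively ("good" when noise is below the worthwhile change, "okay" when roughly equal, "marginal to poor" when above).

```python
def sensitivity_rating(cv_pct: float, swc_pct: float, tol: float = 0.25) -> str:
    """Rate a device's sensitivity to real change by comparing its noise
    (CV%) to the smallest worthwhile change (SWC%). `tol` is an assumed
    tolerance defining 'approximately equal'."""
    if cv_pct < swc_pct * (1 - tol):
        return "good"
    if cv_pct <= swc_pct * (1 + tol):
        return "okay"
    return "marginal to poor"

# Day 1 HZT F0 from the text: CV 4.7% vs. SWC 1.6%
print(sensitivity_rating(4.7, 1.6))  # -> "marginal to poor"
```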
Coaches find FVP profiling useful as it allows for a more individualized training approach, and, if the correct data collection methodology is used, FVP profiling can provide accurate monitoring of progression [1]. However, the value of FVP profiling is subject to debate [14,16]. Studies report HZT F0 to have moderate reliability [15,17,36,53], specifically for sprints < 10 m [15,36]. Our data are consistent with this finding for HZT F0, which achieved moderate ICCs for within-day (0.71 and 0.69) and between-day (0.77) testing. Similar inconsistencies for HZT F0 have been reported when monitoring jumping, with large variations between trials noted [14].
Comparisons between the MLS and the MySp app can be made in terms of their practical uses in real-world settings, considering that both are accurate and valid sprint testing devices. The first advantage of the MLS is that it provides immediate feedback to the coach and sprinter, allowing for better coaching instruction during the training session, whereas the MySp app uses video technology requiring further analysis after the sprint. The second is that the MLS accommodates a variety of settings, such as a field or gymnasium, with minimal setup time, whereas the MySp app requires a 15- to 20-min setup (with two people) to ensure that all of the distances and parallaxes are accurate and that the viewing area is clear. This kind of setup can be challenging for short-staffed strength and conditioning departments that might have only one coach per training group. A final advantage is that the MLS can be combined with other measurement devices (e.g., an IMU or contact grid) to access more sophisticated data, such as step velocity, step length, and contact time [32]. However, the MLS costs approximately $6000, whereas the MySp app costs $10. Further, the MLS does not display the entire FVP profile, as the MySp app does: the authors of [27] suggest that SFV and DRF are the most important factors in the FVP profile, and the MLS does not record these, whereas the MySp app does. Nevertheless, according to [27], the use of laser technology with trained sprinters resulted in poor reliability for both SFV and DRF.
This study has certain limitations. First, it lacked a research-grade criterion measure, such as a high-speed camera or force plate system, which would have provided a better means to validate the MLS device. The rationale for using the MySp app was that the investigators wanted to compare two easily accessible devices for practitioners. A suggestion for future research would be a follow-up validation study that compares the MLS to such a criterion measure. Second, the significant differences across the four split times and HZT F0, and the proportional bias for VMAX, between the MLS and the MySp app could be due to the sampling rate of the iPad camera (i.e., 120 fps), which is equivalent to a high-speed smartphone [19]. Romero et al. [20] suggest using an iPhone at a sampling rate of 240 fps when utilizing the MySp app. A camera with a higher sampling rate might have reduced some of the statistical differences across the split times, as well as our bias, by providing a more accurate identification of the sprinter's start and of the hips as they crossed the specified marking poles. Notably, previous literature has used 120 fps to assess sprint performance [54] and treadmill running [55]; however, a practical suggestion for coaches is to have an iPhone, as opposed to an iPad, readily available when using the MySp app. Third, the inherent limitations of testing a sample of novice sprinters may have caused variation in our HZT F0 data. A contributor to the unwanted variation in HZT F0, particularly in the early phase of the sprint (<10 m), is that the distance from the lumbar point (where the laser is aimed) to the center of mass of the sprinter decreases as the sprinter rises to an upright posture [30]. This notion is supported by Talukdar et al. [56], who reported HZT F0 to have the highest CV in their data set of young female team sport athletes (nevertheless, that study reported acceptable overall reliability for HZT F0 [ICC = 0.89, CI 0.77-0.94], using radar). One reason for this variation may be that novice sprinters rise too early and are inconsistent at the start of the sprint and, thus, are not consistent in applying force at the start [56]. The start position (two-point vs. three-point) may also be a source of error for HZT F0, as different athletes are accustomed to different starts. The current study used a two-point stance, which is similar to recent research assessing 30 m sprint performance [57]. Nonetheless, teaching athletes to have more consistency in their start position and sprinting technique (i.e., avoiding rising too fast) in the first 10 m may reduce this variation. Therefore, caution should be used with HZT F0 data when evaluating progression and prescribing training.

Conclusions
The current study found that the MLS displays excellent agreement with the MySp app for most performance measures and that the MLS is a reliable (within- and between-day) device for measuring 30-m split times, VMAX, HZT P0, and SFV. Nevertheless, the MLS has moderate to poor accuracy in measuring HZT F0. Coaches and practitioners need to be aware of the significant proportional bias for VMAX, with the MySp app reporting higher sprint velocities than the MLS in the faster runners, and of the more velocity-dominant slope for SFV. The MLS is sensitive to change in the 15-30 m splits and VMAX. Finally, the MDC data add to the knowledge available for coaches and practitioners in terms of identifying an actual change in performance.