Assessing Jump Performance: Intra- and Interday Reliability and Minimum Difference of Countermovement Jump and Drop Jump Outcomes, Kinetics, Kinematics, and Jump Strategy



Introduction
In high-performance sports settings, monitoring an athlete's readiness, fatigue, and subsequent recovery in response to training loads is critical to optimize performance outcomes [1,2]. Effective athlete monitoring strategies should be minimally invasive, reliable, and time-efficient to avoid stress or fatigue on the athlete [2,3]. The countermovement jump (CMJ) and drop jump (DJ) tests are widely used for evaluating lower-limb power and neuromuscular function in athletes due to their simplicity, nonfatiguing nature, and time efficiency [4][5][6][7][8][9]. This type of vertical jump-based testing is not only simple and time-efficient but has also proven sensitive to exercise-induced fatigue of the neuromuscular system in various sports [10,11]. For instance, the CMJ is capable of reflecting changes following both chronic [12] and acute training interventions [13][14][15]. Similarly, the CMJ and the DJ have been shown to be the two vertical jump tests that exhibit the most prolonged performance decrements post-exercise, suggesting extended sensitivity compared to other tests such as the squat jump (SJ) or 20 m sprint times [16]. Therefore, the assessment of vertical jump performance can be a valuable tool for coaches and trainers to monitor athletes' physical progress and make informed decisions based on objective estimations of neuromuscular system function.
These tests have been studied in various populations, especially the CMJ [4,6], but research on their reliability in female athletes specialized in jumping is limited. Most studies examining the reliability of vertical jump tests have been conducted in populations not specialized in jumping (i.e., sports in which vertical jumping is not the primary movement pattern during competition and/or training). The characteristics of the specific sporting activity can lead to adaptations over time in contraction time and force production, in both the CMJ and the DJ, producing distinct jump patterns, as previously shown through principal component analysis [17]. On the other hand, when the reliability of these tests has been analyzed in volleyball populations, the research has primarily focused on men. To date, no study has reported the absolute and relative reliability of the jumping metrics of both tests in female volleyball players, despite possible differences that may arise from athletes' sporting background and gender [18].
Furthermore, force plate jump testing is becoming more accessible to coaches due to advances in technology and reductions in cost [19]. Utilizing force platform technology to analyze vertical jump kinetics, kinematics, and jump strategy offers several advantages, as it allows a more comprehensive neuromuscular evaluation beyond outcome measures alone, such as jump height or contact time [20]. By including force-time and strategy metrics, coaches can gain useful insights into an athlete's neuromuscular function. These metrics appear to be more responsive to change than jump height measures, which is especially noticeable after intense exercise, during recovery from injury, throughout long-term athlete development, and when evaluating neuromuscular function in various age groups [7,21]. While the advantages of force platform technology are clear, several factors need careful consideration. Numerous studies have recognized the limitations of relying solely on outcome measures and emphasized the importance of concurrently monitoring kinetics, kinematics, and jump strategy as well [7,20]. However, several factors may affect the reliability of volitional tests, such as fatigue, learning effects, motivation, and/or hormonal status, which may be considered sources of measurement error during testing [22].
Even though CMJ and DJ metrics obtained with force platforms can offer valuable information about female jumping players' performance [23][24][25], and their force-time metrics likely differ from those of other athletes [17,18], the reliability of CMJ and DJ force-time metrics within a session (intrasession) and between different days or weeks (interday) has not been thoroughly researched. It therefore seems crucial to evaluate the reliability of these tests with the aim of providing accurate information and minimum cut-off values for metrics associated with athletes' performance and neuromuscular status. The CMJ and DJ assessments represent an easy method of assessing performance that demands minimal equipment. Their reliability has been examined mainly in male team athletes; the intra- and intersession reproducibility of these evaluations in an ecologically valid setting, specifically among female volleyball players, remains to be thoroughly investigated. Given the potential influence of diverse sporting backgrounds on these metrics, the main objective of the present study was to analyze the absolute and relative reliability of both CMJ and DJ metrics within and between sessions in semi-professional female volleyball players. The hypothesis was that concentric metrics of the CMJ and DJ tests would exhibit the highest reliability, both within a single testing session (intrasession reliability) and across different testing days (interday reliability), in semi-professional female volleyball players.
In summary, this study aims to contribute to the existing literature by providing valuable information on the reliability of CMJ and DJ outcomes, kinetics, kinematics, and jump strategy measured with force platforms in semi-professional female volleyball players. This information can help volleyball coaches and strength and conditioning professionals better understand the reliability of several vertical jump metrics.

Participants
Sixteen volunteers from the same semi-professional female volleyball team were recruited for this study (Table 1). Prior to testing, all participants underwent a thorough medical screening as per their team's medical protocols to ensure they were free from any lower-body injuries that could potentially impact their jumping performance. Players with a lower-body injury in the 3 months prior to the first testing session were excluded from the study. Additionally, all participants had at least two years of experience in strength and power training and provided written consent to participate in the university research ethics committee-approved project (16_23_RNM_FP).

Procedures
To identify the intra- and interday reliability of the CMJ and DJ metrics, a randomized cross-over within-subject design was used (Figure 1). Participants performed 3 CMJ and 3 DJ trials on a force platform, in a randomized order, separated by one week. Each attempt was separated by at least 3 min of passive recovery. Each jump for both tests was performed on a ForceDecks FD4000 dual force platform (ForceDecks, London, United Kingdom) with a sampling rate of 1000 Hz. The vertical ground reaction force data obtained from each jump were inputted into the ForceDecks software (ForceDecks, London, United Kingdom) for analysis. A fourth-order Butterworth low-pass filter with a cut-off frequency of 50 Hz was used to generate all the dependent variables for each jump. All dependent variables were calculated using forward dynamics [19]. One week prior to the first testing session, participants performed a familiarization session to ensure appropriate CMJ and DJ technique. For the participants' description, body composition was analyzed through bioelectrical impedance [26]. All assessments were conducted in a room adjacent to a multi-sport indoor court within the same sports facility. During the in-season training period, all testing was conducted within a 3-week time frame. Each subject participated in three different sessions of jump assessments, consisting of one familiarization session and two evaluation sessions. Participants performed 3 CMJ and 3 DJ trials in each session. The order of the jump type was randomized, and during the second evaluation session, subjects performed the jumps in the same order. A rest period of at least 3 min was given between each jump trial. To ensure consistency, both testing sessions were conducted within the same hour in the afternoon, between 19:00 and 20:00, as the previous literature has indicated the influence of time of day on jump performance [27,28]. To standardize the two assessments without compromising
the ecological validity of the results, the tests were conducted on the same training day within the players' typical weekly microcycle structure (three training sessions per week and one competition on Sunday). The assessments were performed on match day (MD)-3 (Thursday). Additionally, the training loads of the two preceding workouts of that week (MD + 1 and MD-4) were analyzed to ensure the same training load. A previously described training load (TL) quantification method was employed: sRPE = volume in minutes × RPE [29,30]. In both evaluation weeks, the training load was similar in both MD + 1 and MD-4 (p > 0.05). Additionally, subjects were instructed not to engage in any physical exertion before arriving for testing. For ecological validity, subjects wore their standard practice gear, including their chosen shoes, and were required to wear the same pair of shoes during both testing sessions. All testing was conducted at the participants' volleyball training facility. No dietary restrictions were implemented; however, athletes were advised to maintain their normal dietary intake. On testing days, participants were asked to refrain from alcohol and caffeine for 24 h prior to the session to minimize their impact on jump performance [31,32]. To minimize the effect of instructions on jumping performance, the instructions provided to participants were standardized. A 3-2-1 countdown followed by a verbal stimulus was used to encourage maximal effort during jump execution. A standardized warm-up was performed before each testing session using the RAMP (Raise-Activate and Mobilize-Potentiate) method [33], consisting of 5 min of jogging; 5 min of dynamic stretching, mobility, and core activation; and ending with submaximal 1 × 5 CMJ and 1 × 5 rebound CMJ. A total of 2 min of passive rest was provided between each warm-up block. The warm-up gradually increased in intensity to prepare participants for maximal performance during jump testing.
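The sRPE training load calculation described above is simple enough to express directly. A minimal sketch (the function name is illustrative, not from the source):

```python
def session_rpe_load(duration_min: float, rpe: float) -> float:
    """Session training load via the sRPE method described in the text:
    sRPE = session volume in minutes x rating of perceived exertion (AU).
    Function name is illustrative, not from the original study."""
    return duration_min * rpe

# e.g., a 90-minute training session rated 6 on the RPE scale
print(session_rpe_load(90, 6))  # 540
```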
To ensure proper weighing of the subjects (and thus a proper forward dynamics process), the force platforms were zeroed. Immediately afterwards, the subjects stood on the force platforms and remained as still as possible for at least 1 s to ensure proper weighing [21]. The center-of-mass (COM) velocity was calculated by dividing the vertical ground reaction force (minus body weight) by body mass and then integrating the product using the trapezoid rule [34]. Instantaneous power was calculated by multiplying the vertical force by the COM velocity. COM displacement was determined by double integrating the vertical force data. For a jump to be considered successful, participants had to perform it with their arms akimbo and remain completely still for at least one second during the weighing phase. The onset of movement was determined when a drop of 20 N from the baseline force (recorded during the weighing phase) was observed. Several variables derived from the force-time data were included in the reliability analysis because they may be of interest to strength and conditioning coaches for different reasons [7,13]. For intersession reliability, the mean of the 3 jumps was used for analysis, while intrasession reliability was calculated with the 3 jumps of the first testing day.
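The forward-dynamics steps above (trapezoidal integration of net force to obtain COM velocity, power as force × velocity, and a second integration for displacement) can be sketched as follows. This is an illustrative reconstruction under the stated assumptions, not the ForceDecks implementation:

```python
import numpy as np

def forward_dynamics(force, body_mass, fs=1000.0):
    """Derive COM velocity, power, and displacement from vertical ground
    reaction force, as described in the text. Illustrative sketch only:
    assumes the athlete starts at rest during the weighing phase.

    force     -- vertical GRF samples (N)
    body_mass -- athlete mass from the weighing phase (kg)
    fs        -- sampling rate (Hz); 1000 Hz in this protocol
    """
    g = 9.81
    dt = 1.0 / fs
    # Net COM acceleration: (GRF - body weight) / body mass
    accel = (np.asarray(force, dtype=float) - body_mass * g) / body_mass
    # Trapezoid-rule integration to COM velocity (starts at rest)
    velocity = np.concatenate(
        ([0.0], np.cumsum((accel[:-1] + accel[1:]) / 2.0) * dt))
    # Instantaneous power = vertical force x COM velocity
    power = np.asarray(force, dtype=float) * velocity
    # Second integration gives COM displacement
    displacement = np.concatenate(
        ([0.0], np.cumsum((velocity[:-1] + velocity[1:]) / 2.0) * dt))
    return velocity, power, displacement
```

During quiet standing (force equal to body weight), all three signals remain at zero, which is the check the weighing phase relies on.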
The drop jump test was performed by dropping from a 30 cm box, which has proven to be one of the optimal heights for this test [35], following previous guidelines using just one force platform [5]. Before stepping onto the 30 cm box, the participants were weighed on the force platforms, and the weight recorded during this weighing phase was used throughout the drop jump test. For this purpose, the mean force over at least 5 s was recorded until body weight fluctuated by no more than +/− 0.1 kg. At this point, the ForceDecks® software (v2.0.7782; Vald Performance, Brisbane, QLD, Australia) accepted this value as the weight of the subject.
After weighing, participants remained for one second on top of the box and then dropped onto the force platforms after a 3-2-1 countdown. The moment the force plates recorded the landing was determined by detecting the initial force exceeding a threshold of 20 N. The landing velocity was estimated from the height of the box using the conservation of mechanical energy principle, as the square root of 2 × 9.81 × box height (in m). Similar to the CMJ, various kinetic, kinematic, and strategy variables of different phases of the jump were incorporated into the reliability analysis (Tables 2 and 3).
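The touchdown-velocity estimate can be written out directly; a small sketch of the stated formula (function name illustrative):

```python
import math

def landing_velocity(box_height_m: float) -> float:
    """Estimate touchdown velocity from drop height via conservation of
    mechanical energy, as stated in the text: v = sqrt(2 * g * h)."""
    g = 9.81  # gravitational acceleration (m/s^2)
    return math.sqrt(2 * g * box_height_m)

# For the 30 cm box used in this protocol:
print(round(landing_velocity(0.30), 2))  # 2.43 (m/s)
```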

Statistical Analysis
Inferential statistical tests were carried out using IBM SPSS Statistics v26.0 (IBM Corp., Armonk, NY, USA), while reliability analyses were carried out with previously published Excel® spreadsheets [36].
The sample size estimation was conducted according to the guidelines established by Borg et al. [37] regarding sample size calculation in reliability research. The method of estimation for two repeated samples was employed, assuming a precision of 0.1 for the intra-class correlation coefficient (ICC) and a true ICC of 0.9, based on previous reports in team sports related to vertical jump height [3,38,39]. This resulted in an estimated sample size of 15 participants with a confidence level of 95%.
Intrasession reliability (repeatability) was computed using the three CMJs and DJs recorded during the first experimental session, while intersession reliability was calculated using the mean of the three trials of each experimental day (days 2 and 3). Between-trial mean differences and repeated-measures ANOVA were used to identify intrasession bias, with Bonferroni's post-hoc test used to check pairwise comparisons. Similarly, between-day mean differences and paired t-tests were carried out to identify between-session bias [40]. The Shapiro-Wilk test was conducted to assess the normal distribution of the data; if any dependent variable did not meet this assumption, the corresponding nonparametric statistical test was employed. Intra-class correlation coefficients (ICC) were calculated as a measure of relative reliability [41]. ICCs with 95% confidence intervals (95%CI) were interpreted as follows: poor reliability, <0.5; moderate reliability, 0.5-0.75; good reliability, 0.75-0.90; and excellent reliability, >0.90 [42]. Absolute reliability was analyzed using the coefficient of variation (CV) [43] and was also expressed as the standard error of measurement (SEM) and the minimum difference (MD) to be considered "real". The CV was calculated as the between-trials SD/mean × 100, with acceptable CV set at <10% [43]. The SEM was calculated as SD × √(1 − ICC). The MD was calculated by constructing 90% and 95% confidence intervals (CI) for the SEM using the z-score associated with each CI percentage [41]. Group data are presented as means ± SD, and the level of significance was set at p < 0.05.
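The absolute-reliability statistics above follow standard formulas and can be sketched in a few lines. One assumption is flagged: the MD is computed as SEM × z × √2, the common formulation (the √2 reflecting the variance of a difference between two measurements), which we take to be what [41] prescribes:

```python
import math

def reliability_stats(between_trial_sd: float, mean: float, icc: float,
                      z: float = 1.96):
    """CV%, SEM, and minimum difference (MD) as described in the text.
    z = 1.96 for a 95% CI (use 1.645 for 90%). MD = SEM * z * sqrt(2)
    is an assumed formulation, not quoted verbatim from the source."""
    cv = between_trial_sd / mean * 100.0            # coefficient of variation (%)
    sem = between_trial_sd * math.sqrt(1.0 - icc)   # standard error of measurement
    md = sem * z * math.sqrt(2.0)                   # minimum "real" difference
    return cv, sem, md

# e.g., jump height: mean 30 cm, between-trial SD 2 cm, ICC = 0.90
cv, sem, md = reliability_stats(2.0, 30.0, 0.90)
print(round(cv, 1), round(sem, 2), round(md, 2))  # 6.7 0.63 1.75
```

In this hypothetical example, an athlete's jump height would need to change by more than ~1.75 cm before the change could be considered "real" at the 95% CI.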

CMJ
The repeatability of the CMJ variables is displayed in Table 2. Several variables presented differences between trials (p < 0.05). For all jump outcomes, kinetics, and kinematics, the highest score was obtained in trial 3, with the largest differences observed between trials 3 and 1 (Table 2). Regarding jumping strategy, the deepest countermovement was also observed during the last trial (p = 0.034). Jump outcomes and concentric kinetics and kinematics displayed excellent relative reliability (ICC ranging from 0.91 to 0.98), while some eccentric variables displayed lower reliability than concentric metrics (Table 2). The absolute reliability of the dependent variables is also shown in Table 2. The intersession reliability of the CMJ is displayed in Table 3. No significant intersession bias was identified for any metric (p > 0.05). However, CMJ eccentric kinematics tended to be lower during day 2, presenting good, but not excellent, relative reliability (ICC range: 0.80 to 0.85).

DJ
Similarly, Table 4 displays the repeatability of the DJ metrics. A total of 7 out of the 15 variables analyzed in the DJ test displayed a main effect of trial (Table 4), with the third trial presenting the highest values. Relative reliability was excellent (ICC > 0.91) for jump height (imp-mom), jump height (flight time), concentric impulse, and concentric velocity. Acceptable absolute reliability (CV < 10%) was observed in all variables except RSI (flight time/contact time), RSI (JH/contact time), and contact time. The DJ presented a systematic bias in the weekly reliability of most of the variables analyzed, across jump outcomes, kinetics, kinematics, and jumping strategy (Table 5). Acceptable CVs were observed for all DJ variables except RSI (JH/contact time) (CV = 10.64%).

Discussion
The objective of this study was to present valuable findings regarding the reliability of CMJ and DJ outcomes, as well as the kinetics, kinematics, and jump strategy of female volleyball players, using force platforms. The initial hypothesis was partially fulfilled. There was a significant increase in the main outcomes of the CMJ and DJ within the same session, with the third attempt showing the highest performance values (i.e., increases in jump height (imp-mom), mean concentric force, and mean concentric power), accompanied by changes in jump strategy, namely greater countermovement depths. Moreover, the CMJ did not show intersession differences, whereas such differences occurred in 11 out of the 15 variables analyzed in the DJ. According to the reliability criteria, most of the analyzed variables in the CMJ, both within and between sessions, showed high reliability (ICC > 0.9; CV < 5%). However, some eccentric variables exhibited lower and questionable (ICC > 0.78; CV < 11.6%) intersession reliability compared to concentric metrics, indicating potential variability in the eccentric phase of the jump (Table 3). Intrasession reliability of the DJ variables was excellent for jump height (imp-mom), concentric impulse, concentric velocity, and jump height via the flight-time method. However, some variables, such as RSI (flight time/contact time), RSI (JH/contact time), and contact time, showed a lower ICC (range: 0.68 to 0.79) and a CV above the 10% cut-off, indicating potential limitations in their use due to their lower reliability. Only concentric and eccentric impulses met the excellent reliability criteria in the intersession analysis (Table 5), supporting their use as potential metrics for the longitudinal assessment of the DJ.

CMJ
Vertical jump height (imp-mom) and, more recently, RSImod are commonly reported CMJ metrics in the literature due to their association with various performance markers and their sensitivity to fatigue [7,23]. In this study, significant differences were observed in these metrics between trials, indicating an improvement in jump capacity with shortened contraction time, probably attributable to a learning and/or warm-up effect, which enables jump height (imp-mom) to be maximized. Another potential source of error that could have influenced this increase in vertical jump performance is the inherent technical variability of human movement. In this case, executing the jump with a shorter contraction time may be associated with a shallower countermovement depth and/or higher eccentric velocities [44,45], which could result in greater concentric impulses due to a more effective utilization of the stretch-shortening cycle [46]. Both the impulse-momentum method and flight-time estimation, along with the modified RSI, exhibited excellent intraclass correlation coefficients and coefficients of variation (Table 2). Comparing our findings with previous research [4], a similar pattern emerged: they also reported a percentage difference of 4.0 ± 3.3% between trial 1 and trial 2 in NCAA D-1 volleyball players and excellent reliability (ICC > 0.93), which aligns with our investigation (Table 2). Despite the excellent reliability observed and the participants' high familiarity with vertical jumping tasks, these results suggest that it may be necessary to discard the first attempt in order to minimize sources of error from a possible learning effect or incomplete warm-up. The majority of the kinetic, kinematic, and jump strategy variables in the present study demonstrated excellent reliability, except for certain eccentric-phase variables such as peak force/BM, braking impulse, braking RFD, peak power, and peak power/BM (Table 2). Heishman et al.
[39] also reported excellent intrasession reliability for no-arm-swing CMJ "typical variables", including performance metrics (ICC range = 0.873 to 0.967; CV range = 1.9 to 8.3%), concentric mean force (ICC = 0.965; CV = 2.8%) and power (ICC = 0.968; CV = 4.3%), concentric impulse (ICC = 0.987; CV = 2.2%), and concentric peak velocity (ICC = 0.958; CV = 1.9%). However, the reliability of kinetic, kinematic, and strategy variables during the eccentric phase of the jump generally exhibited lower values than the concentric phase (ICC range = 0.319 to 0.999; CV range = 0.4% to 23.5%). Considering these findings, it is important to note that variables with an ICC > 0.9 and a CV < 5% exhibit excellent reliability and can be confidently utilized. The decision to incorporate these variables into practice or research should be based on their internal logic and their sensitivity to changes induced by intense exercise, as they have the potential to effectively detect fatigue [7]. Ultimately, coaches and sports scientists should carefully evaluate and select the most appropriate variables based on these considerations. No differences (p > 0.05) were detected between the measured variables across testing days when using the mean of three jumps. The intersession analysis showed that 52% (16/31) of the variables examined in the CMJ demonstrated excellent relative reliability, and 71% (22/31) exhibited a coefficient of variation (CV) below 5%. Except for eccentric braking RFD, all variables displayed an ICC above 0.7 and a CV below 10%, indicating satisfactory reliability across all analyzed variables. Notably, the variables that exhibited the highest reliability were eccentric force/BM (ICC = 1.00; CV = 0.78%), concentric impulse (ICC = 0.98; CV = 1.66%), and concentric peak velocity (ICC = 0.93; CV = 1.61%). Similarly, Anicic et al.
[9] showed that jump height, regardless of the calculation method, as well as RSImod, met excellent intersession reliability criteria. Their previously reported jump height data showed a CV of < 5%, a slightly lower reliability than observed in the present study (CV = 3.75%). This is likely due to the greater familiarity of our population compared to active individuals who are not specialized in jumping sports [9]. In agreement with previous research [9,39], variables related to force production and impulse were the most reliable, particularly in the concentric phase of the jump (CV < 4.35%), as also occurred in the intrasession analysis. On the other hand, propulsive RFD has been shown to have low reliability [9]. In this sense, to detect longitudinal changes in force production rate within early time windows (50-100 ms), concentric impulse at both 50 ms and 100 ms has proven to be a highly reliable and less variable alternative (ICC > 0.88; CV < 4.35%), with minimum differences of 6.37 and 10.34 N·s. CMJ jump height (imp-mom) and RSImod demonstrated intrasession MD90CI values of 10.22% (2.9 cm) and 15.14% (0.07 m/s), respectively. Moreover, among the intrasession variables examined, the most sensitive were concentric mean force/BM, eccentric mean force, concentric peak velocity, and concentric impulse, all of which exhibited a minimum difference of less than 5.18% (Table 2). The most sensitive intersession variables were eccentric mean force, concentric impulse, and concentric peak velocity, with cut-off thresholds at the 90%CI of 12.79 N, 12.79 N·s, and 0.11 m/s, respectively. Our cut-off thresholds are larger than previously observed due to differences in the statistical technique used [9,39] (MD vs. typical error [36]). These results indicate that these cut-off thresholds are of practical significance for assessing acute changes in female jumping-dominant sports.

DJ
Neuromuscular function can be assessed through drop jump (DJ) tests using one dual force plate to directly measure force-time data, a procedure shown to be valid in recent studies [5,8]. However, the reliability of the metrics derived from forward dynamics procedures has not yet been established. Limited research has examined the between-session reliability of certain variables, including GCT [47,48], RSI, and jump height (imp-mom) [47], but none of these studies have comprehensively investigated the intraday reliability of various kinetic, kinematic, and jump strategy variables. Consequently, the comparability of our findings is limited. A noteworthy finding was the significant increase in performance metrics such as force and power observed in jumps 2 and 3 compared to jump 1 (Table 4). This suggests that the first jump may not accurately represent an individual's true performance. As a result, and as for CMJ testing, it is recommended to exclude the initial jump when conducting the DJ test to enhance the reliability of the data. The use of augmented feedback could have contributed to the intrasession performance improvement in both jump tests. In this context, players had immediate access to feedback, as they could observe the recorded data on the computer. This practice may have increased motivation, ultimately resulting in higher jump heights. This motivational effect is supported by evidence that augmented feedback was the only method (compared to internal or external attentional focus) that enhanced CMJ performance [49]. Another possible reason for the improvement could be short-term adaptation in technical learning, reinforced by immediate performance feedback during jump tests, allowing players to consolidate better technical efficiency in jumping [49]. Moreover, relative reliability was excellent for 5 of the 15 metrics, while just 3 presented a CV lower than 5% (concentric impulse, eccentric impulse, and concentric peak
velocity). Jump height (imp-mom) met acceptable criteria (CV < 6.78%). In addition, these four variables were also the most sensitive, exhibiting an MD90CI of 8-18%. In contrast, the reliability observed in our study for RSI and contact time did not meet acceptable thresholds (Table 4). A total of 11 out of the 15 variables in the DJ demonstrated significant differences between day 1 and day 2 (Table 5). Only two variables exhibited excellent reliability (concentric and eccentric impulse). Moreover, acceptable between-day reliability has also been observed for jump height with a similar drop height (30 cm) in adults with resistance training experience [47], despite the differences in the time window between sessions (two vs. seven days). Nevertheless, it is important to note that contact time and countermovement depth did not meet the established reliability thresholds, indicating that the DJ test may involve a non-reproducible jumping strategy. This finding directly affects the duration athletes spend applying vertical force to the ground, thereby impacting the reliability of force production and RSI [50]. However, it does not seem to affect the reliability of concentric impulse and concentric peak velocity, as participants adapted their jumping strategy to compensate for the lower force production, yielding similar impulses and changes in center-of-mass velocity [51]. According to these results, the concentric and eccentric impulse should be prioritized when assessing and monitoring DJ performance over time. Practitioners are advised to carefully consider the jump strategy when assessing the drop jump to ensure consistent contact times and countermovement depths, which is crucial to minimize any potential influence on other metrics [50].
While the present study provides valuable insights into the intra- and intersession reliability of various performance metrics, it is important to acknowledge certain limitations when interpreting the findings. The study included a relatively small sample of 16 female volleyball players. This limited sample size has direct implications for the interpretation of intersession differences, as statistical power is severely limited; low statistical power increases the probability of committing type II errors, restricting the ability of the statistical tests to detect true changes. Variations in jump technique, joint kinematics, and skill levels across different populations could also influence the reliability of the measured variables. Although attempts were made to standardize the testing conditions, variations in environmental conditions, participant readiness, or instructions given may introduce additional sources of noise. Therefore, from a practical standpoint, strength and conditioning coaches should establish a highly standardized data collection protocol that minimizes jump technique modifications and learning effects and ensures proper preparation for subsequent SSC movements.

Conclusions and Practical Applications
The first repetition of an assessment does not seem to accurately represent true jumping capacity, for either the CMJ or the DJ, in the analyzed cohort. Despite this, most variables analyzed for the CMJ showed excellent reliability, justifying their use. In contrast, the DJ demonstrated lower reliability than the CMJ, both intrasession and intersession. However, the performance, kinetic, and kinematic metrics of the DJ overall meet the minimum acceptable threshold, while the jumping strategy metrics do not. Both tests can be a suitable alternative for monitoring neuromuscular performance, with the CMJ being more suitable for detecting smaller changes due to its greater reliability. Both tests are reliable between weeks; however, coaches should be aware that the DJ failed to be reliable in terms of jumping strategy and kinetics. Therefore, while the outcome is reliable, the way that outcome is achieved is not.

Figure 1 .
Figure 1. Flow chart of the experiment.

Table 1 .
Descriptive characteristics of the participants.

Table 2 .
Repeatability of CMJ F-T derived variables.

Table 3 .
Within week reliability of CMJ F-T derived variables.

Table 4 .
Repeatability of DJ F-T derived variables.

Table 5 .
Within week reliability of DJ F-T derived variables.