Article

Concurrent Validity and Reliability of Chronojump Photocell in the Acceleration Phase of Linear Speed Test

by Lamberto Villalón-Gasch, Jose M. Jimenez-Olmedo *, Alfonso Penichet-Tomas and Sergio Sebastia-Amat
Research Group in Health, Physical Activity, and Sports Technology (Health-Tech), Faculty of Education, University of Alicante, 03690 San Vicente del Raspeig, Spain
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(16), 8852; https://doi.org/10.3390/app15168852
Submission received: 14 July 2025 / Revised: 30 July 2025 / Accepted: 5 August 2025 / Published: 11 August 2025
(This article belongs to the Special Issue Advances in Sports Science and Novel Technologies)

Abstract

This study examined the criterion validity of the Chronojump photocell compared to the Witty photocell and the instrument’s within-session test–retest reliability. Forty-five physically active male university students performed ten trials of 10 m linear sprints, with times recorded simultaneously by both devices to obtain paired outcomes. The main results showed a significant mean difference between devices (diff = 0.05 s, p < 0.001; trivial ES = 0.1), a high Spearman’s correlation (rs = 0.99; p < 0.0001), a substantial CCC (0.98), and an SEE of 0.05 s. The Bland-Altman plot denoted a low systematic error (0.05 s) with no heteroscedasticity (R2 = 0.004). The within-session reliability (internal consistency) was high (ICC = 0.88; SEM = 0.14 s; CV = 0.23). In addition, the sensitivity of the instrument showed values of SWC = 0.08 s and SDC = 0.4 s. In conclusion, the Chronojump photocell is a valid instrument for estimating time in the acceleration phase of a linear sprint in physically active university students. Additionally, the device is a reliable tool for measuring consecutive trials of linear sprints.

1. Introduction

Accurately measuring athletic performance has always been a key objective in sports performance [1], since enhanced precision in assessing physical and physiological parameters enables more informed and rigorous decisions in exercise prescription and ultimately allows athletes to boost their performance and to minimize injury risks [2]. Among the various physical attributes related to sports, speed stands out as a fundamental ability integral to most disciplines [3]. Likewise, specific abilities derived from speed, such as linear or curved sprinting, acceleration and deceleration, agility, change of direction (COD), and the capacity to repeat and combine these elements, are crucial determinants of success in many sports, particularly in team sports [4,5,6]. Consequently, precise speed measurements are essential for optimizing performance and advancing research in sports science [7,8].
Several devices are available to achieve that precision. Fully automatic timing (FAT) systems (i.e., photo finish) are considered the gold standard due to their precision (±0.005 s), although they are expensive and not very accessible. At the other extreme, manual timing is the cheapest and easiest-to-use option but entails a considerable loss of precision and reliability [9]. Between these two options there is a wide range of possibilities, among which the following are frequently used: radar and laser systems, which have great precision but are limited to linear courses [10,11]; systems based on high-speed videography, which are also highly precise and, in fact, are considered the gold standard in the absence of a photo finish, although data acquisition and processing are laborious [12]; global position systems (GPS), which can only be used outdoors; and local position systems (LPS), which can be used indoors but require expensive equipment and offer lower precision, with a bias of 7.2% when compared with photocells [13], especially in COD [14]. Finally, there are electronic timing gates, devices that are frequently used to measure speed in sports [15].
Timing gates, commonly called photocells, have emerged as a fundamental tool for determining split times and total race time. These devices automatically detect the passage of an object or person when the light beam, reflected back by a mirror, is interrupted [7], offering high temporal resolution and high reliability.
Nevertheless, the accuracy and precision of photocells are critical for ensuring the reliability of performance assessments. Timing gates with lower error rates reduce bias and provide a more valid basis for interpreting human performance variability, minimizing erroneous inferences driven by device limitations rather than actual performance differences [7,16]. Additionally, experimental conditions, such as starting positions and procedural consistency, can significantly influence the precision of results [9]. Thus, understanding the validity and reliability of photocells under various conditions is essential for selecting appropriate systems tailored to specific objectives.
However, the diversity of models available on the market raises questions about the comparability of the data obtained and about the influence of each photocell’s technical characteristics on the results. Single-beam systems are popular due to their affordability and accessibility; however, they are prone to measurement errors, as they can be prematurely triggered by movements such as a swinging arm or leg [17]. To mitigate such errors, more advanced systems incorporating double-beam setups and signal-processing algorithms have been developed [15], although this increase in precision comes with a higher price. Despite these advancements, inconsistent practices persist regarding the placement of timing gates, with setups varying from knee to hip and even head height [18,19,20]. In the acceleration phase, the first 10 m, results are also influenced by the starting procedure [8]; for these reasons, data obtained with different starting methods are not interchangeable. Additionally, the limited research on the effects of gate height on performance measurements, particularly for single-beam systems, highlights a significant gap in the literature [18,19].
While direct reports of Spearman’s correlations comparing photocells to the ultimate gold standards (FAT with dual-beam photocells or high-speed video) for linear sprint time are somewhat limited, several studies offer valuable insights through comparisons with other measurement tools in which photocells serve as a reference. In this sense, Altmann et al. (2018) found low validity values (Intraclass Correlation Coefficient (ICC) 0.134 to 0.597 and r 0.351 to 0.597) when comparing photocells (TAG Heuer) with high-speed cameras in the first 10 m of a linear sprint. However, for distances longer than 20 m, the validity of photocells seems to be excellent [21,22]. Regarding the reliability of single-beam photocells, these devices are considered reliable tools but are sensitive to factors such as starting position, device height, and those previously mentioned. Bond et al. (2017) [23] analyzed the intra-session and inter-device reliability of the Test Center (TC) Photogate, finding typical errors of less than 0.05 s in both conditions. In the same way, the TC Photogate has been shown to be a reliable tool, with a coefficient of variation (CV) of 1.4% and a standard error of measurement (SEM) of 0.02 s [24]. In the 20 m sprint, the single-beam Cronox Sports photocell shows an ICC of 0.982 and a CV of 1.4% [25].
Therefore, it seems clear that single-beam photocells are reliable for distances of more than 20 m, but for shorter distances, both their reliability and validity must be studied. It is important to acknowledge that the validity and reliability of photocells can be diminished over very short distances, that is, in the acceleration phase. This phenomenon arises because the inherent system delay and the resolution of the timing gate’s clock become more impactful when the total time measured is extremely brief. For instance, a 1–3% error that might be negligible over a 30-m sprint becomes substantially more significant, proportionally, over a 5-m dash. At shorter distances, even minuscule variations in trigger points (e.g., due to limb swing or subtle differences in body position at the gate) or the slight latency in sensor activation can represent a larger percentage of the total time, thus reducing the true accuracy and consistency of the measurement. Consequently, while photocells are highly precise tools, their practical validity and reliability for measuring ultra-short sprints may be compromised, leading to potentially misleading interpretations of performance [7,23].
One notable example of a photocell is the Witty single-beam photocell (Microgate Witty Wireless Training Timer, Bolzano, Italy), which has become a widely used tool due to its precision and reliability in linear sprint [26,27] and change-of-direction [28] tests. On the other hand, the Chronojump photocell (Chronojump Boscosystem, Barcelona, Spain) offers a more affordable solution. It also uses a single-beam design but, unlike the Witty, transmits its data through a wired connection to a laptop running the open-source Chronojump Boscosystem software, which offers functionalities such as race analysis and allows customization and adaptation to specific research or training needs. However, to the best of the authors’ knowledge, the validity and reliability of the Chronojump photocell have not been previously studied in linear sprint and, therefore, not in the acceleration phase of a sprint.
Given the importance of obtaining accurate and reliable measurements, this paper examines the role of the Chronojump single-beam timing gates in sports analysis, emphasizing the need for high-quality devices that ensure both validity and reliability [29]. For this purpose, this study aims to analyze the concurrent validity of the Chronojump photocell compared with the widely used Microgate Witty photocell, which is considered the criterion instrument. In addition, it aims to provide a succinct analysis of the internal consistency of the Chronojump, expressed as within-session test–retest reliability across multiple attempts of a 10 m sprint.

2. Materials and Methods

2.1. Participants

The required sample size was determined a priori using G*Power software (version 3.1.9.6, University of Düsseldorf, Düsseldorf, Germany). Based on a correlation analysis with an expected effect size of r = 0.7 under the alternative hypothesis (H1), a significance level of α = 0.05, and a statistical power of 0.95, a minimum of 42 participants was estimated [23]. Ultimately, forty-six students from the Sports Science degree program at the University of Alicante voluntarily participated in this study (mean ± SD: body mass = 68.3 ± 10.0 kg; height = 1.73 ± 0.08 m). The participants were healthy, with no injuries or illnesses that might affect the results. All participants were informed about the procedures and gave their written consent. They were informed of the characteristics of the intervention in accordance with the Ethical Principles for Medical Research Involving Human Subjects of the Declaration of Helsinki of the World Medical Association (WMA) of 1975 (revised in Fortaleza, Brazil, in 2013). This research was approved by the ethics committee of the University of Alicante (UA-2019-01-31).

2.2. Procedure

Before data collection, the participants completed a standardized 15-min warm-up consisting of three phases: 5 min of light jogging, followed by 5 min of joint mobility exercises and dynamic stretching, and concluding with 5 min of progressively intense sprints over a 10-m distance. After the warm-up, timing measurements were recorded for 10-m sprints. The order of participants was randomized, and each participant completed 10 trials, with a 3-min rest interval between attempts. The starting position was set 40 cm behind the photocells to reduce the risk of false triggers. To ensure consistency, both photocells were positioned at an equal distance from the starting and finishing points. The photocells were placed at heights of 65 cm and 75 cm from the ground to minimize errors caused by arm movements interrupting the sensor beams [22,30].

2.3. Instruments

This study used two photocell systems: a commercial model (Microgate Witty Wireless Training Timer, Bolzano, Italy) and an open-source model (Chronojump Boscosystem, Barcelona, Spain). Each system was composed of a set of photocells placed at the initial mark (start) and at 10 m (stop). A pair of single-beam photocells consists of a transmitter that emits an infrared beam towards an infrared reflector located opposite it, so that the beam returns to the transmitter. The infrared photocells were connected to a handheld (Witty timer) or PC-based (Chronojump) microcontroller. The Witty system used a photocell-integrated radio system: the single-beam timing gate was triggered when the light beam was interrupted, and a signal was transmitted remotely to the timer device, which recognized the photocell ID number. The redundant radio transmission ensured that the collected data were transmitted to the Witty timer with high precision (±0.004 s). The Witty Manager software for Windows 7 was used to import the test data, and the Witty system was used as the reference standard to measure sprint performance [26]. The Chronojump photocells, on the other hand, were connected to the computer by cable and sent a signal to the software whenever the light beam was interrupted. Chronojump 2.1.0–36 (Windows) software was employed for importing data. All measurements were collected simultaneously with both photocell systems. The infrared cells of both models were aligned on the same vertical axis using a single tripod, with the Witty infrared cell positioned at a height of 65 cm and the Chronojump infrared cell at 75 cm. There was 1.5 m between each infrared cell and its corresponding reflector. The devices were calibrated before their placement.

2.4. Statistical Analysis

Descriptive statistics were used to describe the data, reporting the mean and 95% confidence limits of the sprint times of both photocells. Normality was assessed with a Kolmogorov-Smirnov test, which indicated a non-normal distribution. To analyze the existence of a systematic error, significant differences in the time values were calculated using a t-test for paired samples, and the effect size (ES) was determined as Hedges’ corrected effect size g, interpreted as trivial (<0.19), small (0.2–0.59), moderate (0.6–1.19), large (1.2–1.99), very large (2.0–3.99), and huge (>4.0) [29,31,32].
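For readers who wish to reproduce this kind of paired comparison, the sketch below (Python, not the authors’ original MedCalc workflow; the example arrays, the pooled-SD denominator, and the degrees of freedom used in the small-sample correction are illustrative assumptions) computes the paired mean difference, a paired-samples t-test, and a Hedges’-corrected effect size g.

```python
import numpy as np
from scipy import stats

def hedges_g_paired(x, y):
    """One common formulation of Hedges' g for paired data: mean difference
    divided by the pooled SD of the two measurements, with a small-sample
    bias correction based on df = n - 1 (an assumed choice)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sd_pooled = np.sqrt((np.var(x, ddof=1) + np.var(y, ddof=1)) / 2.0)
    d = (x - y).mean() / sd_pooled
    correction = 1.0 - 3.0 / (4.0 * (n - 1) - 1.0)  # Hedges' small-sample correction
    return d * correction

# Illustrative paired 10 m times in seconds (not the study data).
witty      = np.array([2.31, 2.45, 2.38, 2.52, 2.40])
chronojump = np.array([2.26, 2.40, 2.34, 2.46, 2.35])

t_stat, p_val = stats.ttest_rel(witty, chronojump)  # paired-samples t-test
print(f"mean diff = {np.mean(witty - chronojump):.3f} s, "
      f"p = {p_val:.4f}, g = {hedges_g_paired(witty, chronojump):.2f}")
```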

2.4.1. Agreement Between Devices

The agreement between Chronojump and Witty photocell times was assessed with Bland-Altman plots, allowing for the visualization of the differences between the two photocells and the extent to which the differences vary depending on the magnitude of the measurement. The limits of agreement (LoA) were calculated as the mean difference ± 1.96 × SD of the differences and represent the range within which 95% of the observed differences between the two methods of measurement are expected to fall. To check for random errors and proportional bias between devices, the differences were regressed against the magnitude of the measurement and the resulting coefficient of determination (R2) was calculated, where an R2 value greater than 0.1 would indicate their presence.
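A minimal sketch of the Bland-Altman computation described above (Python; the paired arrays are illustrative, and regressing the differences on the pairwise means is an assumed, conventional way to screen for proportional bias):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman summary: systematic bias, 95% limits of agreement, and the
    R^2 of the differences against the pairwise means (proportional-bias check)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff, mean = a - b, (a + b) / 2.0
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    r2 = np.corrcoef(mean, diff)[0, 1] ** 2      # R^2 > 0.1 would flag proportional bias
    return bias, loa, r2

# Illustrative paired times in seconds (not the study data).
witty      = [2.31, 2.45, 2.38, 2.52, 2.40, 2.60]
chronojump = [2.26, 2.40, 2.34, 2.46, 2.35, 2.56]
bias, (lo, hi), r2 = bland_altman(witty, chronojump)
print(f"bias = {bias:.3f} s, LoA = [{lo:.3f}, {hi:.3f}] s, R^2 = {r2:.4f}")
```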
The degree of concordance was assessed with Lin’s concordance index (CCC); this statistic is expressed as ρc = ρ × Cb, providing information on how closely the data from both instruments are paired (ρ, precision) and how close the relationship is to the ideal (Cb, accuracy) [33]. The results obtained were categorized as poor (≤0.9), moderate (0.89–0.95), substantial (0.94–0.99), and near perfect (≥0.99) [34]. Additionally, a correlation analysis was conducted to evaluate the strength and direction of associations between measurements from the different methods by computing Spearman’s bivariate correlation (rs). The following thresholds were used for the interpretation of rs: impractical (≤0.45), very poor (0.45–0.70), poor (0.70–0.85), good (0.85–0.95), very good (0.95–0.995), and excellent (≥0.995) [16,34].
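The decomposition ρc = ρ × Cb can be computed directly from paired data; the sketch below (Python, illustrative data, the standard textbook formula for Lin’s coefficient rather than the authors’ MedCalc output) also shows the Spearman correlation used for the association analysis.

```python
import numpy as np
from scipy import stats

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient rho_c, with its precision
    (Pearson r) and accuracy (bias-correction factor C_b) components."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    ccc = 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
    precision = sxy / (x.std() * y.std())  # rho
    accuracy = ccc / precision             # C_b, so that ccc = rho * C_b
    return ccc, precision, accuracy

# Illustrative paired times in seconds (not the study data).
witty      = [2.31, 2.45, 2.38, 2.52, 2.40]
chronojump = [2.26, 2.40, 2.34, 2.46, 2.35]
ccc, rho, cb = lin_ccc(witty, chronojump)
rs, p = stats.spearmanr(witty, chronojump)  # Spearman's correlation (rs)
print(f"CCC = {ccc:.3f} (rho = {rho:.3f}, Cb = {cb:.3f}); rs = {rs:.3f}, p = {p:.4f}")
```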
In addition, a Passing and Bablok regression [35] for non-parametric samples was conducted to test the linear relationship between the paired data from both instruments. The relationship was described by the equation y = ax + b, whereby the values of variable y (Microgate time) can be predicted as a function of variable x (Chronojump time). The slope a, whose ideal value is 1, provides information on the proportional differences between the two methods, while the intercept b, whose ideal value is 0, quantifies the systematic differences between the two devices. The standard error of estimate (SEE) was also determined; low SEE values indicate that the points lie closer to the regression line and that the magnitude of the estimation error is smaller.
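Passing-Bablok regression is not available in SciPy; as a hedged stand-in, the sketch below uses the closely related Theil-Sen estimator (the median of pairwise slopes) to illustrate the slope/intercept interpretation, together with the standard error of estimate (SEE). It is a simplified illustration, not the exact Passing-Bablok procedure applied in MedCalc, and the data are invented.

```python
import numpy as np
from scipy import stats

# Illustrative paired 10 m times in seconds (not the study data).
chronojump = np.array([2.20, 2.31, 2.38, 2.45, 2.52, 2.60])
witty      = np.array([2.26, 2.37, 2.43, 2.51, 2.57, 2.66])

# Median-of-pairwise-slopes fit: a slope close to 1 means no proportional
# difference, and an intercept close to 0 means no systematic difference.
slope, intercept, lo_slope, hi_slope = stats.theilslopes(witty, chronojump)

# Standard error of estimate: dispersion of the points around the fitted line.
residuals = witty - (intercept + slope * chronojump)
see = np.sqrt(np.sum(residuals ** 2) / (len(witty) - 2))

print(f"witty = {intercept:.3f} + {slope:.3f} x chronojump, SEE = {see:.3f} s")
```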

2.4.2. Reliability

Regression and Bland-Altman analyses were used once more to determine the reliability of the instrument in the measurement of consecutive trials within the session. Additionally, test-retest reliability was assessed via intraclass correlation coefficients (ICC) following the spreadsheet for the consecutive pairwise analysis of trials for reliability [34]. The ICCs were interpreted as poor (<0.50), moderate (0.50–0.74), good (0.75–0.89), and excellent (≥0.90) [36]. The magnitude of the error was estimated by calculating the standard error of measurement (SEM), calculated as SEM = Sd/√2, where Sd is the standard deviation of the differences between the values [37]. This statistic provides information on the error in absolute terms from the analysis of the dispersion of the values around the true value [38]. Relative reliability was established by the coefficient of variation (CV), calculated from the SEM as CV = SEM/mean. The sensitivity analysis was carried out using the smallest worthwhile change (SWC) and the smallest detectable change (SDC).
All statistical analyses were performed via the MedCalc Statistical Software version 20.100 (MedCalc Software Ltd., Ostend, Belgium). For reliability, a spreadsheet developed by W. G. Hopkins was used [34].
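A minimal sketch of the reliability indicators for one pair of consecutive trials is shown below (Python; the trial arrays are invented, and the SWC and SDC formulas, 0.2 × between-subject SD and 1.96 × √2 × SEM respectively, are common definitions assumed here because the text does not spell them out).

```python
import numpy as np

def pairwise_reliability(trial_a, trial_b):
    """Within-session test-retest indicators for one pair of consecutive trials.
    SEM = Sd / sqrt(2) and CV = SEM / mean follow the text; SWC and SDC use
    the assumed conventional definitions noted above."""
    a, b = np.asarray(trial_a, float), np.asarray(trial_b, float)
    diff = a - b
    change_in_mean = diff.mean()
    sem = diff.std(ddof=1) / np.sqrt(2)       # SEM = Sd / sqrt(2)
    subject_means = (a + b) / 2.0
    cv = sem / subject_means.mean()           # CV = SEM / mean
    swc = 0.2 * subject_means.std(ddof=1)     # smallest worthwhile change (assumed)
    sdc = 1.96 * np.sqrt(2) * sem             # smallest detectable change (assumed)
    return change_in_mean, sem, cv, swc, sdc

# Illustrative 10 m times in seconds for two consecutive trials (not the study data).
trial_1 = [2.31, 2.45, 2.38, 2.52, 2.40, 2.61]
trial_2 = [2.29, 2.47, 2.36, 2.50, 2.42, 2.58]
print(pairwise_reliability(trial_1, trial_2))
```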

3. Results

Table 1 shows a statistically significant difference between devices (0.046 s) with a trivial effect size. However, a substantial correlation is observed (CCC = 0.98), with near-perfect accuracy (Cb = 0.99) and substantial precision (ρ = 0.98).
Concerning the concordance between both devices, as can be appreciated in Figure 1a, a near-perfect correlation is observed (rs = 0.99; p < 0.0001), and there is no significant deviation from linearity (p = 0.12) in the Cusum test. This strong linear association is also reflected in a slope very close to unity (0.989). Furthermore, regarding systematic and random bias, an SEE of 0.05 s is observed, with a y-axis intercept of 0.066 s.
The Bland-Altman plot (Figure 1b) gives a similar value of systematic error, with a mean difference of −0.05 s, and the existence of a proportional error cannot be deduced from its observation, as also demonstrated by the low value of R2 (0.0004). Regarding random error, most of the points fall within the LoA (96.5% of points). The internal consistency of the Chronojump photocell was studied through the analysis of the within-session test–retest reliability of the first three trials and compared with the values of the Witty photocell (Table 2). Both devices show similar values for most of the indicators. In this sense, a small change in mean can be observed, with a standardized change in mean fluctuating from −0.18 to 0.19; additionally, moderate noise is observed, with slightly more reliable SEM and standardized SEM values for the Chronojump. Regarding sensitivity, the Chronojump again shows lower SDC values but higher SWC values; in any case, both devices are very similar. Along the same lines, the CV and ICC values are very similar for both devices and indicate good reliability.
Focusing on the analysis of the internal consistency of the Chronojump photocell, Figure 2 visually exhibits a linear dependency between attempts, with low errors and high correlations. Additionally, no proportionality of error is observed in any of the compared trials. Similarly, the systematic errors are on the order of hundredths of a second.

4. Discussion

This study aimed to analyze the concurrent validity of the Chronojump photocell compared with the widely used Microgate Witty photocell, which was considered the criterion instrument. In addition, it was intended to provide a succinct analysis of the internal consistency of the Chronojump, expressed as within-session test–retest reliability for multiple attempts in a 10 m sprint. To the best of the authors’ knowledge, there are no previous studies in which the validity and reliability of the Chronojump photocell have been examined. The main results of this research show that the Chronojump photocell is a valid and reliable instrument for measuring sprints in the acceleration phase (first 10 m) in physically active university students.
Regarding the concordance between the Chronojump and Witty photocells, even though the Wilcoxon test reports significant differences in the 10 m times, these values present a trivial ES. Additionally, an almost perfect correlation is observed, which implies a linear relationship between both devices, as can be seen in Figure 1. The systematic error can be placed around 0.066 s (intercept of the regression equation), and a random error of 0.05 s is detected (SEE). As can be seen, the difference in the means, the intercept, and the mean of the differences in the Bland-Altman analysis are all near 0.05 s; that is, the Chronojump photocell underestimates the measurements by 0.05 s compared to the Witty photocell. This relationship can be expressed as t_Witty = 0.066 + 0.989 × t_Chronojump, which allows measurements from both devices to be used interchangeably. In addition, substantial concordance is shown (CCC = 0.984), both in its precision (ρ = 0.99) and accuracy (Cb = 0.99) factors. These results do not agree with those found by Altmann et al. (2018), in which it is suggested that validity below 10 m in the sprint is questionable (r = 0.592; ICC = 0.278); however, in that study, a high-speed video recording system (100 Hz) was used as the criterion instrument. On the other hand, Bastida Castillo et al. (2017) did find high validity values, with perfect correlations and an SEM of 0.02 s, although in that case the distances were always greater than 10 m and, again, a different device was used as the criterion instrument, namely an integrated global position system (GPS) and inertial measurement unit (WIMU Pro, RealTrack Systems, Almeria, Spain). A similar study found differences between the TC Photogate photocell (Brower Timing System LLC, Draper, Utah, USA) and the GPEXE GPS (Exelio SRL, Udine, Italy), observing a systematic error of 2.28 s and a very large effect size [6,21,23,39]. As mentioned before, research comparing photocells to the ultimate gold standards (FAT with dual-beam photocells or high-speed video) for linear sprint time is somewhat limited, but several studies offer valuable insights through comparisons with other measurement tools in which photocells serve as a reference. In this sense, Marco-Contreras et al. (2024) compared a smartphone application (Photo Finish app) to Witty photocells to measure 10 m and 20 m sprint times. Although Spearman’s correlation was not explicitly reported, the study found high coefficients of determination (R2 = 0.996 and 0.990 for 10 m and 20 m, respectively) between the application and the photocells, and an SEE of 0.03 s over 10 m [40]. Differences between the two devices may arise from slight variations in their position on the vertical axis, which can lead to the beam cuts not occurring at the same instant, since height affects validity [22]. Consequently, taking these values into account, we can conclude that the validity of the Chronojump for physically active subjects over the 10 m distance is acceptable. However, the systematic error of around 0.05 s to 0.06 s, depending on the analysis used, may be a handicap, especially for populations that require greater precision, such as elite athletes, since it represents a relative error of around 2% to 3%, which can be critical for performance purposes.
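As a practical illustration, the adjustment equation reported above can be applied directly; the short snippet below is only a sketch, and the coefficients are specific to this study’s setup and would need to be re-estimated for other devices, heights, or distances.

```python
def chronojump_to_witty(t_chronojump: float) -> float:
    """Convert a Chronojump 10 m time (s) into a Witty-equivalent time (s)
    using the regression equation reported in this study."""
    return 0.066 + 0.989 * t_chronojump

print(chronojump_to_witty(2.342))  # ~2.38 s, close to the Witty mean of 2.390 s
```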
Concerning reliability, the Chronojump photocell seems to be a reliable device when its within-session test–retest reliability is examined. This internal consistency can be observed in absolute values, such as the SEM (0.14 s), when the first three attempts are analyzed. In addition, acceptable sensitivity is also observed, since the SWC is 0.08 s and the SDC has a somewhat higher value (0.40 s). As can be seen, the SDC is greater than the SEM; therefore, the instrument can detect the minimum relevant change without it being masked by the instrument’s noise. However, in the case of the SWC, the opposite occurs, which can be a problem if the aim is to detect changes smaller than 0.14 s, since the noise would exceed the 0.08 s at which the SWC is set. The values of absolute reliability are very similar to, and even slightly better than, those obtained by the criterion Witty photocell (SEM = 0.15 s, SWC = 0.07 s, and SDC = 0.42 s). Therefore, according to these results, the Chronojump can be considered a reliable device in the acceleration phase of a linear sprint. These results agree with those found by Bond et al. (2017), who reported an SEM of 0.05 s for the TC Photogate photocell over a distance of 9 m [23], also in a within-session analysis of five trials. On the other hand, lower SEM values (0.02 s) were reported by Haugen et al. (2014) for the TC Photogate, but over longer distances [24].
In terms of the relative reliability of the Chronojump, good ICC values (from 0.82 to 0.92) are observed and, once again, they are higher than those obtained by the Witty (from 0.78 to 0.93). Additionally, a standardized SEM of 0.36 is observed, which can be considered a moderate error [41]. These values are comparable to those found for the reference Newtest 300-series photocell (Powertimer 2000, Oulu, Finland) [42], for which higher ICC values (0.98) and lower standardized SEM values (0.017) were observed; in that case, however, only two attempts were analyzed at distances of 30 m [43]. On the other hand, the coefficients of variation are very similar for both methods, although they are greater than those found by Haugen et al., albeit over a distance of 20 m. In a 20-m test with five attempts per subject, the measurements recorded by the single-beam Cronox Sports system (Madrid, Spain) yielded comparable, albeit slightly better, ICC values and improved CV values in terms of reliability [25].
In summary, although some reliability values (especially the CV) are worse than those observed for other devices considered valid, the Chronojump photocell can indeed be considered a reliable instrument. This is because those other devices were analyzed over longer distances and were therefore less exposed to the errors caused by unintended beam interruptions that are typical of the acceleration phase, which is the focus of this study. On the other hand, it is important to emphasize that, in terms of ease of implementation, the Chronojump photocell, being a wired device, is less manageable and requires prior planning for data collection, which may pose challenges in certain contexts. However, its ability to provide data with a precision comparable to that of the Witty photocell, at a significantly lower cost, makes the Chronojump a valid option for conducting 10-m linear speed tests.

Limitations and Future Research

Certain limitations of this research should be acknowledged. First, the common practice of considering photocells a gold standard in many studies presents a limitation. Instead of employing a true gold standard, such as fully automatic timing (FAT), for time measurement, we employed a device widely used and accepted by the scientific community as valid (the Witty photocell) [27,40]. It is crucial to clarify that our study evaluates concurrent validity rather than absolute validity, given that only FAT systems are considered true gold standards. This distinction is vital for the accurate interpretation of our findings.
Second, the sample consisted exclusively of physically active male university students, which may limit the generalizability of our findings to other populations. While our results are valuable for similar demographics, the validity of the instrument is specifically confirmed for this type of population. For highly trained groups, such as elite athletes, the observed systematic error (around 2–3%) might be too high to be practically acceptable for performance monitoring in which marginal gains are critical. This limitation arises because specific physiological characteristics could result in different photocell trigger events in different populations [43].
Finally, the experimental setup resulted in non-identical vertical positioning of the photocells, potentially leading to variations in trigger times between them [19].
As for future research directions, we propose investigating a more heterogeneous population, including female athletes, to further assess the instrument’s validity across different demographics. Additionally, implementing a crossover design for photocell heights would allow for a more comprehensive understanding of their impact on timing accuracy. Further studies could also explore the impact of different starting methods (e.g., block starts vs. standing starts) or even flying starts on timing accuracy and device reliability. Finally, future research should also delve into how fatigue affects both the validity and reliability of these timing devices by incorporating numerous repeated trials to comprehensively analyze the interplay between biological factors and instrument performance.

5. Conclusions

In conclusion, this study demonstrates that the Chronojump photocell is a valid instrument for measuring times in the acceleration phase of a linear sprint, when compared to the Witty photocell, in physically active university students. Therefore, under specific conditions, data obtained with the Chronojump can be used as a substitute for those from the Witty photocell, especially when the adjustment equation is applied. Furthermore, the Chronojump photocell exhibits reliable internal consistency in intra-session test-retest measurements. Consequently, the Chronojump photocell should be considered by physical activity and sports performance specialists as a practical tool for measuring and monitoring training processes over short linear distances.

Author Contributions

Conceptualization, A.P.-T.; formal analysis, L.V.-G. and A.P.-T.; funding acquisition, J.M.J.-O.; investigation, L.V.-G. and S.S.-A.; methodology, S.S.-A.; project administration, A.P.-T. and J.M.J.-O.; resources, A.P.-T.; validation, J.M.J.-O.; writing—original draft, L.V.-G. and S.S.-A.; writing—reviewing and editing, L.V.-G. and J.M.J.-O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the University of Alicante (IRB No. UA-2019-01-31).

Informed Consent Statement

Informed consent was obtained from all the subjects involved in this study.

Data Availability Statement

The data presented in this study are available on reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
COD: Change of direction
FAT: Fully automatic timing
GPS: Global position systems
LPS: Local position systems
ICC: Intraclass correlation coefficient
CV: Coefficient of variation
SEM: Standard error of measurement
WMA: World Medical Association
PC: Personal computer
ID: Identity
ES: Effect size
LoA: Limits of agreement
CCC: Lin’s concordance index
rs: Spearman’s bivariate correlation
SEE: Standard error of estimate
SWC: Smallest worthwhile change
SDC: Smallest detectable change
Sd: Standard deviation of the differences between the values
ρ: Precision factor in Lin’s concordance index
Cb: Accuracy factor in Lin’s concordance index
CI: Confidence intervals at 95%
p: Probability at 95%
R2: Coefficient of determination
U-CI: Upper confidence intervals at 95%
L-CI: Lower confidence intervals at 95%

References

  1. Currell, K.; Jeukendrup, A.E. Validity, Reliability and Sensitivity of Measures of Sporting Performance. Sports Med. 2008, 38, 297–316. [Google Scholar] [CrossRef]
  2. Cronin, J.B.; Hansen, K.T. Strength and Power Predictors of Sports Speed. J. Strength. Cond. Res. 2005, 19, 349–357. [Google Scholar] [CrossRef]
  3. Miller, J.; Comfort, P.; McMahon, J. Laboratory Manual for Strength and Conditioning, 1st ed.; Taylor and Francis, Ed.; Routledge: New York, NY, USA, 2023; ISBN 9781003186762. [Google Scholar]
  4. Horníková, H.; Zemková, E. Relationship between Physical Factors and Change of Direction Speed in Team Sports. Appl. Sci. 2021, 11, 655. [Google Scholar] [CrossRef]
  5. Freitas, T.T.; Pereira, L.A.; Alcaraz, P.E.; Comyns, T.M.; Azevedo, P.H.S.M.; Loturco, I. Change-of-Direction Ability, Linear Sprint Speed, and Sprint Momentum in Elite Female Athletes: Differences between Three Different Team Sports. J. Strength. Cond. Res. 2022, 36, 262–267. [Google Scholar] [CrossRef]
  6. Altmann, S.; Ringhof, S.; Neumann, R.; Woll, A.; Rumpf, M.C. Validity and Reliability of Speed Tests Used in Soccer: A Systematic Review. PLoS ONE 2019, 14, e0220982. [Google Scholar] [CrossRef]
  7. Haugen, T.; Buchheit, M. Sprint Running Performance Monitoring: Methodological and Practical Considerations. Sports Med. 2016, 46, 641–656. [Google Scholar] [CrossRef] [PubMed]
  8. Multhuaptff, W.; Fernández-Peña, E.; Moreno-Villanueva, A.; Soler-López, A.; Rico-González, M.; Manuel Clemente, F.; Bravo-Cucci, S.; Pino-Ortega, J. Concurrent-Validity and Reliability of Photocells in Sport: A Systematic Review. J. Hum. Kinet. 2023, 92, 53–71. [Google Scholar] [CrossRef] [PubMed]
  9. Haugen, T.A.; Tønnessen, E.; Seiler, S.K. The Difference Is in the Start: Impact of Timing and Start Procedure on Sprint Running Performance. J. Strength. Cond. Res. 2012, 26, 473–479. [Google Scholar] [CrossRef] [PubMed]
  10. Thron, M.; Düking, P.; Ruf, L.; Härtel, S.; Woll, A.; Altmann, S. Assessing Anaerobic Speed Reserve: A Systematic Review on the Validity and Reliability of Methods to Determine Maximal Aerobic Speed and Maximal Sprinting Speed in Running-Based Sports. PLoS ONE 2024, 19, e0296866. [Google Scholar] [CrossRef] [PubMed]
  11. Simperingham, K.D.; Cronin, J.B.; Ross, A. Advances in Sprint Acceleration Profiling for Field-Based Team-Sport Athletes: Utility, Reliability, Validity and Limitations. Sports Med. 2016, 46, 1619–1645. [Google Scholar] [CrossRef]
  12. Healy, R.; Kenny, I.C.; Harrison, A.J. Profiling Elite Male 100-m Sprint Performance: The Role of Maximum Velocity and Relative Acceleration. J. Sport. Health Sci. 2022, 11, 75–84. [Google Scholar] [CrossRef] [PubMed]
  13. Schulze, E.; Julian, R.; Skorski, S. The Accuracy of a Low-Cost GPS System during Football-Specific Movements. J. Sports Sci. Med. 2021, 20, 126–132. [Google Scholar] [CrossRef]
  14. Fischer-Sonderegger, K.; Taube, W.; Rumo, M.; Tschopp, M. How Far from the Gold Standard? Comparing the Accuracy of a Local Position Measurement (LPM) System and a 15 Hz GPS to a Laser for Measuring Acceleration and Running Speed during Team Sports. PLoS ONE 2021, 16, e0250549. [Google Scholar] [CrossRef]
  15. Earp, J.E.; Newton, R.U. Advances in Electronic Timing Systems: Considerations for Selecting an Appropriate Timing System. J. Strength. Cond. Res. 2012, 26, 1245–1248. [Google Scholar] [CrossRef]
  16. Hopkins, W.G. Measures of Reliability in Sports Medicine and Science. Sports Med. 2000, 30, 1–15. [Google Scholar] [CrossRef] [PubMed]
  17. Altmann, S.; Hoffmann, M.; Kurz, G.; Neumann, R.; Woll, A.; Haertel, S. Different Starting Distances Affect 5-m Sprint Times. J. Strength. Cond. Res. 2015, 29, 2361–2366. [Google Scholar] [CrossRef]
  18. Yeadon, M.R.; Kato, T.; Kerwin, D.G. Measuring Running Speed Using Photocells. J. Sports Sci. 1999, 17, 249–257. [Google Scholar] [CrossRef]
  19. Cronin, J.B.; Templeton, R. Timing Light Height Affects Sprint Times. J. Strength. Cond. Res. 2008, 22, 318–320. [Google Scholar] [CrossRef]
  20. Shalfawi, S.A.I.; Enoksen, E.; Tønnessen, E.; Ingebrigtsen, J. Assessing Test-Retest Reliability of the Portable Brower Speed Trap Ii Testing System. Kinesiology 2012, 44, 24–30. [Google Scholar]
  21. Bastida Castillo, A.; Gómez Carmona, C.D.; Pino Ortega, J.; de la Cruz Sánchez, E. Validity of an Inertial System to Measure Sprint Time and Sport Task Time: A Proposal for the Integration of Photocells in an Inertial System. Int. J. Perform. Anal. Sport. 2017, 17, 600–608. [Google Scholar] [CrossRef]
  22. Altmann, S.; Spielmann, M.; Engel, F.A.; Neumann, R.; Ringhof, S.; Oriwol, D.; Haertel, S. Validity of Single-Beam Timing Lights at Different Heights. J. Strength. Cond. Res. 2017, 31, 1994–1999. [Google Scholar] [CrossRef]
  23. Bond, C.W.; Willaert, E.M.; Noonan, B.C. Comparison of Three Timing Systems: Reliability and Best Practice Recommendations in Timing Short-Duration Sprints. J. Strength. Cond. Res. 2017, 31, 1062–1071. [Google Scholar] [CrossRef]
  24. Haugen, T.A.; Tønnessen, E.; Svendsen, I.S.; Seiler, S. Sprint Time Differences between Single- and Dual-Beam Timing Systems. J. Strength. Cond. Res. 2014, 28, 2376–2379. [Google Scholar] [CrossRef]
  25. Thapa, R.K.; Sarmah, B.; Singh, T.; Kushwah, G.S.; Akyildiz, Z.; Ramirez-Campillo, R. Test-Retest Reliability and Comparison of Single- and Dual-Beam Photocell Timing System with Video-Based Applications to Measure Linear and Change of Direction Sprint Times. Proc. Inst. Mech. Eng. Part P J. Sports Eng. Technol. 2023, 30, 17543371231203440. [Google Scholar] [CrossRef]
  26. Bataller-Cervero, A.V.; Gutierrez, H.; Derentería, J.; Piedrafita, E.; Marcén, N.; Valero-Campo, C.; Lapuente, M.; Berzosa, C. Validity and Reliability of a 10 Hz GPS for Assessing Variable and Mean Running Speed. J. Hum. Kinet. 2019, 67, 17–24. [Google Scholar] [CrossRef] [PubMed]
  27. Vachon, A.; Berryman, N.; Mujika, I.; Paquet, J.B.; Monnet, T.; Bosquet, L. Reliability of a Repeated High-Intensity Effort Test for Elite Rugby Union Players. Sports 2020, 8, 72. [Google Scholar] [CrossRef] [PubMed]
  28. Moreno-Azze, A.; López-Plaza, D.; Alacid, F.; Falcón-Miguel, D. Validity and Reliability of an IOS Mobile Application for Measuring Change of Direction Across Health, Performance, and School Sports Contexts. Appl. Sci. 2025, 15, 1891. [Google Scholar] [CrossRef]
  29. Hopkins, W.G.; Marshall, S.W.; Batterham, A.M.; Hanin, J. Progressive Statistics for Studies in Sports Medicine and Exercise Science. Med. Sci. Sports Exerc. 2009, 41, 3–12. [Google Scholar] [CrossRef]
  30. Altmann, S.; Spielmann, M.; Engel, F.A.; Ringhof, S.; Oriwol, D.; Härtel, S.; Neumann, R. Accuracy of Single Beam Timing Lights for Determining Velocities in a Flying 20-m Sprint: Does Timing Light Height Matter? J. Hum. Sport. Exerc. 2018, 13, 601–610. [Google Scholar] [CrossRef]
  31. Tomczak, M.; Tomczak, E. The Need to Report Effect Size Estimates Revisited. An Overview of Some Recommended Measures of Effect Size. Trends Sport. Sci. 2014, 1, 19–25. Available online: https://www.wbc.poznan.pl/publication/413565 (accessed on 1 May 2025).
  32. Lin, L.; Hedayat, A.S.; Sinha, B.; Yang, M. Statistical Methods in Assessing Agreement: Models, Issues, and Tools. J. Am. Stat. Assoc. 2002, 97, 257–270. [Google Scholar] [CrossRef]
  33. McBride, G.B. A Proposal for Strength-of-Agreement Criteria for Lin’s Concordance Correlation Coefficient. NIWA Client. Rep. 2005, HAM2005, 307–310. [Google Scholar]
  34. Hopkins, W.G. Spreadsheets for Analysis of Validity and Reliability. Sportscience 2015, 19, 36–45. Available online: https://go.gale.com/ps/i.do?p=AONE&sw=w&issn=11749210&v=2.1&it=r&id=GALE%7CA562004416&sid=googleScholar&linkaccess=abs&userGroupName=anon%7E8659cdc2&aty=open-web-entry (accessed on 1 May 2025).
  35. Passing, H.; Bablok, W. Comparison of Several Regression Procedures for Method Comparison Studies and Determination of Sample Sizes. Application of Linear Regression Procedures for Method Comparison Studies in Clinical Chemistry, Part II. J. Clin. Chem. Clin. Biochem. 1984, 22, 431–445. [Google Scholar] [CrossRef] [PubMed]
  36. Koo, T.K.; Li, M.Y. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. J. Chiropr. Med. 2016, 15, 155–163. [Google Scholar] [CrossRef] [PubMed]
  37. Portney, L.G. Foundations of Clinical Research: Applications to Evidence-Based Practice, 4th ed.; F.A. Davis: Philadelphia, PA, USA, 2020; ISBN 9780803661134. [Google Scholar]
  38. Atkinson, G.; Nevill, A.M. Statistical Methods for Assessing Measurement Error (Reliability) in Variables Relevant to Sports Medicine. Sports Med. 1998, 26, 217–238. [Google Scholar] [CrossRef]
  39. Sašek, M.; Miras-Moreno, S.; García-Ramos, A.; Cvjetičanin, O.; Šarabon, N.; Kavčič, I.; Smajla, D. The Concurrent Validity and Reliability of a Global Positioning System for Measuring Maximum Sprinting Speed and Split Times of Linear and Curvilinear Sprint Tests. Appl. Sci. 2024, 14, 6116. [Google Scholar] [CrossRef]
  40. Marco-Contreras, L.A.; Bataller-Cervero, A.V.; Gutiérrez, H.; Sánchez-Sabaté, J.; Berzosa, C. Analysis of the Validity and Reliability of the Photo Finish® Smartphone App to Measure Sprint Time. Sensors 2024, 24, 6719. [Google Scholar] [CrossRef]
  41. Smith, T.B.; Hopkins, W.G.; Lowe, T.E. Are There Useful Physiological or Psychological Markers for Monitoring Overload Training in Elite Rowers? Int. J. Sports Physiol. Perform. 2011, 6, 469–484. [Google Scholar] [CrossRef]
  42. Hunter, S.K.; Senefeld, J.W. Sex Differences in Human Performance. J. Physiol. 2024, 602, 4129–4156. [Google Scholar] [CrossRef]
  43. Manouras, N.; Batatolis, C.; Ioakimidis, P.; Karatrantou, K.; Gerodimos, V. The Reliability of Linear Speed with and without Ball Possession of Pubertal Soccer Players. J. Funct. Morphol. Kinesiol. 2023, 8, 147. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Concordance between Chronojump and Witty Microgate photocells (a) Correlation analysis between Witty and Chronojump. Red line: regression; dotted line: x = y line; rs: Spearman’s correlation coefficient; SEE = standard error of estimate; (b) Bland–Altman plot for the study of the grade of concordance in the time between Witty Microgate and Chronojump photocells. The red solid line is the mean of the differences; the solid black line characterizes the line of perfect agreement; the green shaded area represents confidence intervals for 95% of the mean and LoA; the purple dotted line is the line of perfect agreement; the green dashed line is the regression line of differences.
Figure 2. Test–retest within-session reliability (internal consistency) for trials of Chronojump photocell. Top: Regression plots for pairs of 10 m sprint trials, where plot (a) represents pair 1–2, plot (b) pair 3–2, and plot (c) pair 1–3. The solid red line represents regression line; the dotted line signifies line of perfect linearity. Bottom: Bland–Altman plots for pairs of 10 m sprint trials plot, where plot (d) represents pair 1–2, plot (e) pair 3–2, and plot (f) pair 1–3. The solid red line represents the mean of the differences (systematic error); the dashed line represents upper and lower LoA (random error); the solid black line characterizes the line of perfect agreement; the dotted green line represents the regression of the differences (proportional error).
Table 1. Descriptive statistics for 10 m sprint times, mean, difference of means, significant differences between means, and concordance correlations between Chronojump and Witty photocells.
Chronojump (s) (95% CI) | Witty (s) (95% CI) | Difference (s) (95% CI) | ES (g) (95% CI) | CCC (95% CI) | CCC (ρ) | CCC (Cb)
2.342 (2.299 to 2.385) | 2.390 (2.347 to 2.433) | 0.046 * (0.038 to 0.053) | 0.11 (0.09 to 0.13) | 0.983 (0.988 to 0.995) | 0.988 | 0.995
95% CI: confidence intervals at 95%; s: seconds; ES: effect size; CCC: concordance coefficient correlation; * p < 0.01. ρ = precision factor of CCC; Cb = accuracy factor of CCC.
Table 2. Matched reliability of Chronojump and Witty photocells in the test–retest within-session.
Absolute Reliability
                   | Chronojump: 2-1 | 3-2 | 1-3 | Mean | L-CI | U-CI | Witty: 2-1 | 3-2 | 1-3 | Mean | L-CI | U-CI
Change in mean (s) | 0.00 | −0.07 | 0.07 |  |  |  | 0.02 | −0.03 | 0.01 |  |  |
SEM (s)            | 0.13 | 0.12 | 0.17 | 0.14 | 1.29 | 2.01 | 0.15 | 0.11 | 0.19 | 0.15 |  |
SDC (s)            | 0.36 | 0.34 | 0.48 | 0.40 | 0.35 | 0.46 | 0.41 | 0.31 | 0.52 | 0.42 | 0.37 | 0.49
SWC (s)            | 0.08 | 0.08 | 0.07 | 0.08 | 0.05 | 0.09 | 0.07 | 0.08 | 0.07 | 0.07 | 0.05 | 0.09
Relative Reliability
                   | Chronojump: 2-1 | 3-2 | 1-3 | Mean | L-CI | U-CI | Witty: 2-1 | 3-2 | 1-3 | Mean | L-CI | U-CI
ICC                | 0.90 | 0.92 | 0.82 | 0.88 | 0.81 | 0.93 | 0.85 | 0.93 | 0.78 | 0.85 | 0.77 | 0.91
CV                 | 0.22 | 0.21 | 0.26 | 0.23 |  |  | 0.23 | 0.20 | 0.26 | 0.23 |  |
Standardized SEM   | 0.34 | 0.31 | 0.48 | 0.36 | 0.31 | 0.41 | 0.43 | 0.29 | 0.55 | 0.39 | 0.35 | 0.46
L-CI = Lower confidence intervals at 95%, U-CI = Upper confidence intervals at 95%, SEM = standard error of measurement, SDC = smallest detectable change, SWC = small worthwhile change, ICC = intraclass correlation coefficient, CV = coefficient of variation.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
