Review

The Way to Modern Shutter Speed Measurement Methods: A Historical Overview

1 Alba Regia Technical Faculty, Óbuda University, 8000 Székesfehérvár, Hungary
2 Department of Computer Science and Systems Technology, University of Pannonia, 8200 Veszprém, Hungary
* Author to whom correspondence should be addressed.
Sensors 2022, 22(5), 1871; https://doi.org/10.3390/s22051871
Submission received: 24 January 2022 / Revised: 15 February 2022 / Accepted: 24 February 2022 / Published: 27 February 2022
(This article belongs to the Section Sensing and Imaging)

Abstract

Exposure time is a fundamental parameter for the photographer when a photo is composed, and the exact length of the exposure may be an essential determinant of performance in certain camera-based applications, e.g., optical camera communication (OCC) systems. There can be several reasons to measure the shutter speed of a camera: shutter speed may be checked at the time of manufacturing; it may need to be rechecked in the case of an older camera model; it may need to be measured if its exact value is not provided by the manufacturer; or a precise measurement may be necessary for a demanding application. In this paper, various methods for shutter speed measurement are reviewed, presenting and analyzing methods that are still relevant today for manufacturers, service personnel, amateur photographers, or the developers of camera-based systems. Each presented method is illustrated by real measurement results, and the performance properties of the methods are also presented.

1. Introduction

Exposure time (often referred to as shutter speed) is the length of time for which the film of a traditional camera or the sensor of a digital camera is exposed to the incoming light in order to create the image. In most cases, darker scenes (e.g., nighttime photos) require longer exposure times, while bright scenes (e.g., a sunny landscape) can be photographed with short exposure times. The exposure time has a crucial effect in photography when a picture is composed: short exposure times allow for catching fast movements, while long exposure times allow motion blur and thus create artistic effects [1]. These effects are such important parts of the photographic experience that even virtual reality photography simulates them [2]. While the deliberate choice of exposure time (and the corresponding aperture) allows the photographer to express creativity and feelings about the scene, technical systems may use various exposure times to create optimal results, e.g., in High Dynamic Range (HDR) photography [3,4,5,6]. Imaging systems are utilized in various technical fields, where the shutter speed is set to meet the requirements of the application; e.g., when high-speed fluid flows are measured using interferometry, the exposure time is set very short (as low as a few microseconds) to prevent motion blur [7,8], while in astronomical photography extremely long exposure times (even days) may be used to provide a good signal-to-noise ratio [9,10,11]. Object tracking systems usually require short exposure times to provide sharp images [12,13,14]. For speed estimation, longer exposure times may also be utilized, where the amount of motion blur contains information about the speed of the object [15,16,17]. The ubiquity of smartphones equipped with good-quality cameras stimulates the research of OCC systems. Here, the cameras are utilized as sensors to receive visually coded information [18,19,20], and the value of the exposure time (among other factors) determines the data rate, the bit error rate, and the achievable communication distance [21,22,23]. In camera-based positioning systems, OCC is often utilized to allow beacon identification [24,25].
In most cameras that use mechanical shutters, the exposure time is determined by two moving shutter curtains deployed in front of the focal plane. The mechanism, called a rolling shutter, is illustrated in Figure 1. The upper and lower rows illustrate the cases of longer and shorter exposure times, respectively. Before the exposure, the first (or front) curtain blocks the way of the light, and thus the shutter is closed, as shown in Figure 1a. At the beginning of the exposure, the front curtain starts to fall, opening the shutter so light can reach the sensor (see Figure 1b,c). For longer exposure times, the shutter may be fully open for a while, as shown in Figure 1d. After a while, the second (rear) curtain starts to roll down (Figure 1e,f) and finally closes the shutter again, as shown in Figure 1g. For shorter exposure times, the process starts similarly, as shown in Figure 1h,i, but the shutter may never be completely open: the rear curtain starts to fall while the front curtain is still falling (see Figure 1j). In this case, part of the picture is covered by the front curtain and another part by the rear curtain. Since both curtains move at the same speed, the narrow gap between them rolls down, as shown in Figure 1j,k. It is clear from this operation that the exposure is not abrupt: first the upper part of the sensor (or film) is exposed to the light, and then gradually, as the front curtain falls, the lower part is also exposed. The closure is similar: starting at the upper part, the sensor is gradually blocked from the light as the rear curtain falls.
Nowadays, most small and inexpensive digital cameras use electronic shutters, which require no mechanical parts. In this case, the light sensor is switched on and off for the time of the exposure. Electronic shutters may behave very similarly to their mechanical rolling shutter counterparts: for easier operation, the sensor is operated line-wise: first, the uppermost line of pixels is switched on (exposed, and then the sensor values are stored), then the second line follows, and so on to the last line. Thus, the upper part of the picture is exposed somewhat earlier than the lower part, similar to the case of Figure 1, causing distortion in the case of a moving target. More expensive digital cameras, however, may contain global shutter mechanisms, where every part of the sensor is switched on and off (and thus exposed) at the same time. Such cameras may provide distortion-free pictures for demanding industrial or scientific applications.
Technically, the exposure time is defined as the time span for which the center of the sensor is exposed to the incoming light. This definition is valid for both rolling shutter and global shutter cameras: although in rolling shutter cameras various parts of the image are exposed at different times, every part has (approximately) the same exposure time. Global shutter cameras simply expose every part of the picture at the same time for the time span of the exposure time.
The exposure time of a camera can usually be set in discrete steps, e.g., by the exposure dial of a traditional camera or by software in the case of a digital camera. The exact value of the exposure time, however, may differ from the nominal value: old mechanical shutters may deteriorate over time, so the shutter speed may significantly differ from the nominal values, and the actual exposure time of a digital camera may also differ from the nominal value reported by the manufacturer. It also happens that manufacturers of inexpensive cameras do not provide timing values at all. Thus, the measurement of the exposure time is not performed only during manufacturing and production control by the manufacturer itself; the critical user may also need to measure it if the camera is used for high-precision applications.
Various camera types and different accuracy requirements led to the development of several methods to measure the exposure time of cameras, the earliest solutions dating back to the 1890s, when electro-opto-mechanical equipment was proposed to measure the speed of camera shutters [26]. In this paper, those measurement principles and approaches are reviewed which still have relevance today, also showing a historical path towards modern solutions. In addition to the introduction of the measurement methods, their performance properties (accuracy, measurement range) will also be discussed and illustrated. The following types of methods will be discussed in detail:
  • The direct method allows the measurement of shutter speed by observing the operation of the mechanical shutter mechanism. This method requires access to the camera’s focal plane: for vintage and traditional film cameras it is straightforward, but for most digital cameras it is only possible during the manufacturing process;
  • The most common indirect way to measure the shutter speed is to take photos of a moving object and calculate the exposure time from the motion blur observed on the picture and the speed of the moving object. For simple measurements, the moving object can be a real physical object with known velocity, but more precise measurements use electronically simulated movements;
  • The shutter time of cameras capable of recording video streams can be measured using equivalent sampling. In this case, a blinking light source is recorded by the camera under test, and from the change of the recorded light intensity vs. time, the shutter speed is calculated.
This paper is organized as follows: in Section 2, the test equipment, used for illustration throughout the paper, is introduced. In Section 3, the direct method is presented. In Section 4 various methods, based on motion blur, are discussed. In Section 5, a different approach is presented, which uses equivalent sampling of a blinking light source. Each method is illustrated by real measurements and their performance properties are evaluated. In Section 6, the discussed methods are compared.

2. Test Equipment

Three cameras from different eras were used to illustrate the measurement processes and their performance properties. The cameras are shown in Figure 2.
The oldest model is a Zenit TTL SLR (Single Lens Reflex) film camera from the late 1970s, produced by KMZ (Krasnogorsk, Soviet Union), shown in Figure 2a. It has an all-mechanical cloth shutter with 5 selectable exposure times from 1/30 s to 1/500 s. From now on, this camera will be referred to as C1.
The EOS 350D, shown in Figure 2b, is one of the earliest DSLR (Digital Single Lens Reflex) cameras produced by Canon (Tokyo, Japan). It is equipped with an electromechanical shutter system with exposure times between 1/4000 s and 30 s in 1/3-stop increments. This camera will be referred to as C2.
The third camera, C3, is the FLIR Grasshopper3 (GS3-U3-23S6M), an industrial camera targeted mainly at machine vision applications, produced by Teledyne FLIR (Wilsonville, OR, USA). It is equipped with a global electronic shutter and can take photos and videos with predefined exposure times from 8 μs to 31.9 s. The camera is shown in Figure 2c.
The properties of the cameras, relevant to this research, are summarized in Table 1.

3. Direct Method

A wide range of direct methods has been applied to measure exposure time, the common factor between them being that a light source is placed in front of the shutter and the length of the passing light pulse is measured behind the shutter. The earliest systems used the film itself as a sensor [26], but later electronic sensors (e.g., photocells, photodiodes, phototransistors) were utilized [27,28]. The length of the exposure was often estimated by using a series of light sources [29] or the stroboscopic effect [26]; thus, the time measurement could be replaced by counting. Other solutions integrated the sensed light pulse by a capacitor and calculated the length of the pulse from it [27]. Later devices used digital circuits to present the measurement results to the user [28]. Using the same measurement principle, smartphone apps with simple external hardware, to be connected to the microphone input, are available for modest accuracy requirements [30]. Professional equipment using the direct method can measure the shutter time with 5–10 μs uncertainty [31,32].
ISO standard 516 also defines a direct measurement method to determine the shutter speed of a camera [33]. The measurement scheme is shown in Figure 3. A constant illumination is provided in front of the camera, which has a light sensor (e.g., photodiode or phototransistor) placed behind its shutter at the center of the focal plane. The detected light intensity is observed on an oscilloscope. During the exposure, the light sensor detects increased light intensity. Thus, the width of the detected impulse is the exposure time.
A set of measurements is shown in Figure 4. The first measurement was made with an exposure time setting of 1/30 s, but the measurement shows a significantly different time of 23.4 ms = 1/43 s. The second measurement was made with the setting 1/500 s, and the corresponding measurement shows 1.94 ms = 1/515 s.
In the direct method, the exposure time is read directly from the oscilloscope. If the pulse width is determined from the screen of the oscilloscope by the user, the reading uncertainty $h_{read}$ can be as high as 2–5%. However, digital oscilloscopes provide built-in measurement features, typically reducing the reading error to approx. 0.5%. In addition to the reading error, measurement noise may cause uncertainty in the measurement, as follows. Let us suppose that the magnitude of the noise is $A_n$, the measured signal amplitude is $A$, and the rising and falling times of the signal are $T_r$, as shown in Figure 5.
If the slope of the signal is $m$, then the time measurement uncertainty $\Delta t$, caused by the magnitude uncertainty $A_n$, is

$$\Delta t = \frac{A_n}{m} = \frac{A_n}{A/T_r} = T_r \frac{A_n}{A} = \frac{T_r}{SNR}, \tag{1}$$

where $SNR = A/A_n$ is the signal-to-noise ratio. Since the uncertainty is present on both the rising and falling edges, in the worst case the measurement uncertainty $\Delta T_{exp}$ will be

$$\Delta T_{exp} = 2\Delta t = \frac{2 T_r}{SNR}. \tag{2}$$

Since $T_r$ can be considered constant (it is the time during which the curtain moves a distance equal to the light sensor's size), the measurement uncertainty depends only on the signal-to-noise ratio. The relative uncertainty $h_{exp}$ of the exposure time measurement is the following:

$$h_{exp} = \frac{2 T_r}{SNR \cdot T_{exp}}. \tag{3}$$
According to (3), the relative measurement uncertainty due to noise increases for small exposure times. The total relative uncertainty, as the sum of the reading error $h_{read}$ and the noise uncertainty $h_{exp}$, is shown in Figure 6 for the measured camera C1. The value $T_r$ was set to 0.6 ms, as measured for C1 (see Figure 4b). The SNR in the measurements was close to 20 dB; thus, the blue curve shows the maximum expected relative uncertainty for camera C1. For the estimation of (3), the values $A$ and $T_r$ were simply read from the scope, and $A_n$ was estimated as the RMS of the horizontal section of the measured signal.
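The uncertainty bounds above are easily checked numerically. The following Python sketch (our illustration, not part of the original measurement setup; the function name is our own) evaluates Eqs. (1)–(3) for given signal parameters:

```python
def direct_method_uncertainty(T_r, snr, T_exp):
    """Worst-case timing uncertainty of the direct (oscilloscope) method.

    T_r   -- rise/fall time of the detected pulse [s]
    snr   -- signal-to-noise amplitude ratio A / A_n (e.g., 10 for 20 dB)
    T_exp -- measured exposure time [s]
    """
    dt = T_r / snr          # uncertainty of one edge, Eq. (1)
    dT_exp = 2.0 * dt       # both edges contribute, Eq. (2)
    h_exp = dT_exp / T_exp  # relative uncertainty, Eq. (3)
    return dt, dT_exp, h_exp

# Example with C1-like values: T_r = 0.6 ms, SNR = 20 dB (ratio 10), T_exp = 23.4 ms
dt, dT, h = direct_method_uncertainty(0.6e-3, 10.0, 23.4e-3)
```

With these values, the noise-induced relative uncertainty comes out to about 0.5%, consistent with the long-exposure end of the blue curve in Figure 6.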
The measurement results of camera C1 are shown in Table 2. The results show that the camera has a large bias in the longer exposure time region; the largest error, almost 30%, occurs at 1/30 s. These errors are much higher than the expected maximum measurement error shown in Figure 6; thus, the camera surely has problems with its timing, which, considering the age of the camera, is not surprising. At shorter exposure times, the camera shows acceptable performance.
Comparing the results of Table 2 with the measurement errors in Figure 6, the following statements can be made:
  • The camera behaves reasonably well at shutter speeds of 1/500 and 1/250; the error is around or below 3–4%. In this range, the uncertainty of the measurement is comparable to the error of the camera; thus, the error level cannot be determined more precisely.
  • At longer exposure times, the error of the camera is much higher. Since here the uncertainty of the measurement is smaller, it can be stated that the shutter time error of the camera at 1/30, 1/60, and 1/125 is 30 ± 1%, 14 ± 2%, and 8 ± 3%, respectively.
The direct method proposed in the standard can be used for a wide range of cameras, but unfortunately access to the focal plane is necessary to perform the measurement. In traditional film cameras, it can be done easily by opening the back cover and placing the sensor in place of the film [34]. In most digital cameras, however, access to the focal plane is not possible without breaking the integrity of the camera; thus, the application of the direct method is limited to the manufacturing phase or service activities; regular users must use other indirect and non-intrusive methods, which utilize pictures taken in the normal operation mode of the camera.
Notice that the exposure time is defined at the center of the image. Although there might be slight differences in the exposure times in different parts of the picture, depending on the actual properties of the shutter, this variation is usually neglected when indirect methods are utilized.

4. Methods Based on Motion Blur

Since cameras integrate the incoming light during the exposure time, an image taken of a moving object may be blurred. The blurring effect depends on the speed of the object (the higher the speed, the greater the blur) and the exposure time (the longer the exposure time, the greater the blur). The latter effect can be utilized to measure the length of the exposure.
The method was applied in many forms to provide an estimate of the exposure time. Since rotating movements are easier to handle in measuring equipment than lateral movements, numerous methods used some form of rotating target; e.g., in the first published method a rotating disk with holes was applied [26], while later a camera was rotated while taking a photo of a small fixed light source [35]. For the sake of convenience, simple measurement setups often used conventional turntables to create controlled movement, as will be described in Section 4.1. Later, instead of moving physical objects, electronic systems were utilized to simulate movement. In the era of cathode ray tubes (CRTs), the sweeping electron beam on the display provided the moving target, which allowed higher precision and a wider measurement range, as will be shown in Section 4.2. Today's measurement equipment utilizes LED arrays, which will be discussed in Section 4.3.

4.1. Moving Physical Target

In a convenient measurement setup, an image is taken of a small light object, which is rotated with known angular velocity. During the time of the exposure the object moves, so blurring will occur on the picture: instead of a point, an arc will be shown. For the sake of convenience, a turntable may be used for driving purposes [36], and the measurement process may be automated [37]. In Figure 7, a measurement setup with a turntable is shown. The angular velocity $\omega$ of the turntable is known, e.g., $\omega = 33\frac{1}{3}$ RPM (revolutions per minute) or 45 RPM, and the measured angle of the blur is $\alpha$. The length of the exposure $T_{exp}$ can be calculated as follows:

$$T_{exp} = \frac{\alpha}{\omega}. \tag{4}$$
The measurement equipment and a photo taken with $T_{exp} = 1/10$ s can be seen in Figure 8.
The measurement uncertainty of (4) depends on the accuracy of the angular velocity $\omega$ and the accuracy of the angle measurement $\alpha$. Since normally the uncertainty of $\omega$ is negligible compared to that of $\alpha$, the uncertainty $\Delta T_{exp}$ can simply be approximated as follows:

$$\Delta T_{exp} = \frac{\Delta \alpha}{\omega}, \tag{5}$$

and the corresponding relative uncertainty is

$$h_{exp} = \frac{\Delta T_{exp}}{T_{exp}} = \frac{\Delta \alpha}{\alpha}. \tag{6}$$
Since the maximum reading uncertainty of the angle $\alpha$ on a photo is approx. $\Delta \alpha = 0.5$ degrees, based on our experiments, and this uncertainty is independent of the actual value of $\alpha$, the relative uncertainty (6) is practically inversely proportional to $\alpha$. The maximum relative uncertainty of (6) is shown in Figure 9 for a turntable with $\omega = 33\frac{1}{3}$ RPM, where the measured angle (in degrees), as a function of the exposure time, is the following:

$$\alpha = T_{exp} \cdot \frac{360° \cdot 33\frac{1}{3}}{60 \text{ s}} = 200°/\text{s} \cdot T_{exp}. \tag{7}$$
According to the results shown in Figure 9, the uncertainty is around 1%, when the measured exposure time is higher than 1/4 s. For exposure times shorter than 1/40 s, the relative measurement uncertainty may be higher than 10%; thus, this measurement method is suitable only for long exposure times.
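As a quick numerical check of Eqs. (4)–(7), the sketch below (our illustration; the function names are our own) converts a measured blur angle into an exposure time for a 33 1/3 RPM turntable and evaluates the reading-based relative uncertainty:

```python
def turntable_exposure(alpha_deg, rpm=100.0 / 3.0):
    """Eq. (4): T_exp = alpha / omega, with omega in degrees per second."""
    omega = rpm * 360.0 / 60.0   # 33 1/3 RPM -> 200 deg/s, cf. Eq. (7)
    return alpha_deg / omega

def turntable_rel_uncertainty(alpha_deg, dalpha_deg=0.5):
    """Eq. (6): relative uncertainty h_exp = dalpha / alpha."""
    return dalpha_deg / alpha_deg
```

For the photo of Figure 8 ($T_{exp} = 1/10$ s), Eq. (7) gives an arc of 20°, and the 0.5° reading uncertainty corresponds to a relative uncertainty of 2.5%, in line with the trend of Figure 9.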
Measurements for cameras C2 and C3 were made with the turntable method; the results are shown in Table 3, and the measured relative errors are also plotted in Figure 9. In the case of C3, the error trend is similar to the theoretical measurement uncertainty, although the error is somewhat lower at shorter times, indicating a reading uncertainty smaller than 0.5 degrees. Since the error in this case is lower than the measurement uncertainty, it can be stated that the camera is more accurate than the measurement itself. In the case of C2, however, the error is significantly higher at longer exposure times: here, the camera has a detectable deviation (in the range of 1–2%) from the nominal exposure values.

4.2. Moving Electron Beam: CRT Monitor

Instead of mechanical movement, a moving electron beam can be used to estimate the exposure time [36]. A few decades ago, Cathode Ray Tubes (CRTs) were generally used in monitors and TV sets. These devices provided an easily available and straightforward way to produce the moving electron beam necessary for the measurement. The principle is shown in Figure 10. The CRT display creates the picture row by row, moving the electron beam from left to right in each row, then starting at the beginning of the next row. The picture is redrawn on the screen frequently enough (60–120 times per second) that the human eye does not see the blinking. When a photo is taken of the display, only those rows are shown in the picture which were refreshed during the time of the exposure. (Note that other parts of the picture may also be visible, but not bright, due to the phosphor persistence.) In Figure 10, only a small slice of the full picture is visible. From the size of the visible part, the exposure time can be calculated.
A simple approach to estimate the exposure time is the following: let us count the number $N$ of rows in the taken picture; for this, the display should contain a carefully designed pattern containing horizontal lines, e.g., in every tenth row. The time $T_{line}$ necessary to draw one line can be calculated from the refresh rate and the number of lines of the monitor. A simple estimate of the exposure time is then the following:

$$T_{exp} = N \cdot T_{line}. \tag{8}$$
This method, however, is biased, and it may be significantly improved by investigating the process of the exposure, as shown in Figure 11a. Let us suppose that the monitor is refreshed from top to bottom, i.e., first the uppermost row is drawn, then the next, until the last row at the bottom of the screen. Let us also suppose that the camera is placed so that the curtains fall in the same direction as the rows follow each other on the image (since cameras create inverted images, this happens if the camera is oriented upside down).
As shown in Figure 11a, the image is refreshed from top to bottom with speed $v_e$ rows per second, where $v_e = 1 \text{ row}/T_{line}$. For the sake of convenience, let us define the speed of the curtains as $v_c$ rows per second (i.e., in one second the curtain would cover/uncover $v_c$ rows of the monitor in the taken picture), with the speed vector $v_c$ pointing from top to bottom. Notice that in practice $v_c > v_e$.
At time instant $T_0$, the monitor draws row A and the camera's front curtain just opens before row A: it will be the first row shown in the picture. Since the curtain is faster than the beam, the front curtain will uncover the area below row A, followed by the slower beam. At time instant $T_0 + T_{exp}$, the rear curtain reaches row A and covers it. At time instant $T_0 + T_{M1}$, the rear curtain reaches the actually refreshed row B and covers it. This is the last row shown in the picture. The picture contains the rows between A and B; their number is denoted by $N_1$.
Notice that the number of lines refreshed during $T_{exp}$ is always smaller than $N_1$: the fact that the speed of the shutter is finite causes a bias in the measurement; the measured time according to (8) with $N = N_1$ is always longer than the real length of the exposure.
Notice that between time instants $T_0 + T_{exp}$ and $T_0 + T_{M1}$ the rear curtain covered the distance between rows A and B, thus $N_1$ rows. The beam covered the same distance between time instants $T_0$ and $T_0 + T_{M1}$. Thus, the following equation holds:

$$N_1 = v_e T_{M1} = v_c (T_{M1} - T_{exp}). \tag{9}$$
Let us consider the case when the camera is rolled 180 degrees (it is now in its normal position), as shown in Figure 11b. Now, the vertical speed of the beam points downwards, while the speed of the curtains points upwards. The process of the exposure is the following: at time instant $T_0$, the front curtain uncovers row C, which will be the first row shown in the picture. At time instant $T_0 + T_{M2}$, the rear curtain reaches and covers the currently refreshed row D. This will be the last row shown in the picture. Some time later, at $T_0 + T_{exp}$, the rear curtain reaches the position of row C. The taken picture shows the rows between C and D; their number is $N_2$.
Now let us notice that the rear curtain, between time instants $T_0 + T_{M2}$ and $T_0 + T_{exp}$, covered the distance between rows C and D, altogether $N_2$ rows. The same distance was covered by the beam between time instants $T_0$ and $T_0 + T_{M2}$; thus, the following equation can be constructed:

$$N_2 = v_e T_{M2} = v_c (T_{exp} - T_{M2}). \tag{10}$$
Note that in this case there is a bias, too, if the naïve approach of (8) is used, but now the measured time is always smaller than $T_{exp}$.
From (9) and (10), the unbiased estimate of $T_{exp}$ can be expressed as follows:

$$T_{exp} = \frac{2 T_{M1} T_{M2}}{T_{M1} + T_{M2}} = \frac{2}{v_e} \cdot \frac{N_1 N_2}{N_1 + N_2}. \tag{11}$$
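The biased estimate (8) and the unbiased estimator (11) can be sketched in a few lines of Python (our illustration; the function name is our own):

```python
def crt_exposure(N1, N2, T_line):
    """Exposure time from CRT bright-line counts.

    N1     -- line count with the curtains moving in the refresh direction
              (camera upside down); the naive Eq. (8) overestimates here.
    N2     -- line count with the curtains moving against the refresh
              (camera in normal position); the naive Eq. (8) underestimates.
    T_line -- time needed to draw one monitor line [s].
    """
    naive1 = N1 * T_line                           # Eq. (8), biased high
    naive2 = N2 * T_line                           # Eq. (8), biased low
    unbiased = 2.0 * T_line * N1 * N2 / (N1 + N2)  # Eq. (11)
    return naive1, naive2, unbiased
```

With the C2 measurement at 1/500 s ($T_{line} = 14.56$ μs, line counts of 163 and 120 in the two orientations), the two naive estimates are 2.37 ms and 1.75 ms, while the unbiased estimate is 2.01 ms.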
The uncertainty of the estimated exposure time can be calculated as follows. Using the partial derivatives $\frac{\partial T_{exp}}{\partial T_{M1}} = \frac{2 T_{M2}^2}{(T_{M1}+T_{M2})^2}$ and $\frac{\partial T_{exp}}{\partial T_{M2}} = \frac{2 T_{M1}^2}{(T_{M1}+T_{M2})^2}$ of (11), the variation of $T_{exp}$, in the presence of measurement uncertainties $\Delta T_{M1}$ and $\Delta T_{M2}$, can be approximated as follows:

$$\Delta T_{exp} \approx \frac{\partial T_{exp}}{\partial T_{M1}} \Delta T_{M1} + \frac{\partial T_{exp}}{\partial T_{M2}} \Delta T_{M2} = \frac{2 T_{M2}^2}{(T_{M1}+T_{M2})^2} \Delta T_{M1} + \frac{2 T_{M1}^2}{(T_{M1}+T_{M2})^2} \Delta T_{M2}. \tag{12}$$

If the reading uncertainties $\Delta T_{M1}$ and $\Delta T_{M2}$ are bounded by $\Delta T$, i.e.,

$$\Delta T \geq \max(\Delta T_{M1}, \Delta T_{M2}), \tag{13}$$

then the maximum estimation uncertainty $\Delta T_{exp,max}$ is the following:

$$\Delta T_{exp,max} = \frac{2 (T_{M1}^2 + T_{M2}^2)}{(T_{M1}+T_{M2})^2} \Delta T, \tag{14}$$

and the maximum relative uncertainty is

$$\frac{\Delta T_{exp,max}}{T_{exp}} = \frac{T_{M1}^2 + T_{M2}^2}{T_{M1} T_{M2} (T_{M1}+T_{M2})} \Delta T. \tag{15}$$

Using the approximation $T_M \approx T_{M1} \approx T_{M2}$, the maximum relative uncertainty is estimated as follows:

$$\frac{\Delta T_{exp,max}}{T_{exp}} \approx \frac{\Delta T}{T_M} = \frac{\Delta N}{T_{exp} v_e}. \tag{16}$$
Since $v_e$ is constant, and the reading uncertainty $\Delta N$ is also approximately constant (an uncertainty of 1–3 lines was experienced during the measurements), the relative estimation uncertainty (16) is inversely proportional to the exposure time. Figure 12 shows the theoretical relative uncertainty for $\Delta N = 1, 2, 3$. Thus, reasonable measurements are possible between 1/125 and 1/4000; the uncertainty of the estimation may be below 1% above 1/500, but in the short exposure time region it can be as high as 10%.
An example measurement of C2 is shown in Figure 13, with an exposure time of 1/500 s. The results were $N_1 = 163$ and $N_2 = 120$, in the upside-down and normal camera positions, respectively. The monitor draws one line in 14.56 μs ($v_e = 1/14.56$ lines/μs); thus, the naïve measurement results, according to (8), are 2.37 ms and 1.75 ms. The unbiased estimate of (11) is 2.01 ms, which is a good estimate of the nominal 1/500 s = 2 ms value.
More measurement results are shown in Table 4; the errors are also plotted in Figure 12 for cameras C2 and C3. Since C3 has a global shutter, for this camera $N_1 = N_2$, so either approach gives the same unbiased estimate. The measurement results are quite close to the nominal values: for longer exposure times the error is below 1%, while for shorter exposure times the error increased to 3%, possibly due to measurement inaccuracies, as was expected according to (16). C2 has a rolling shutter; thus, the naïve approach resulted in high errors. Notice that the error is always negative in the normal position and always positive in the upside-down position. The unbiased estimator shows good agreement between 1/125 s and 1/500 s, but there are significant differences for shorter shutter times (higher than the expected measurement uncertainties); thus, the timing of the camera is probably not accurate in this range.

4.3. Running LED Array

The moving object can be replaced by an LED array: in this setup, as shown in Figure 14, one LED is switched on at a time, for a duration $T_{ON}$. The LEDs light up one after another, creating an effect as if one LED were running circularly along the array. Such devices may use LED strips, as in [38], where strips of 100 LEDs were proposed, while the commercial product [39] utilizes a 10 × 10 array of LEDs.
Trivially, if a picture contains $N_{ON}$ bright LEDs, then the exposure time $T_{exp}$ can be calculated as follows:

$$T_{exp} = N_{ON} \cdot T_{ON}. \tag{17}$$
Notice that the array setup shown in Figure 14 results in the same problem that was discussed in the CRT case: the finite speed of the rolling shutter will cause a bias. When the LEDs are arranged in a single row, this effect is not present.
In practice, the first and last LEDs in the bright series may not be as bright as the other ones. Although the observed light intensity could be used to refine the estimate, it is safer to state that the reading uncertainty of the count $N_{ON}$ is not more than $\pm 2$. The timing inaccuracy of the LEDs can be neglected; thus, the relative uncertainty of the measurement can be estimated as

$$h_{exp} = \frac{2 T_{ON}}{T_{exp}} = \frac{2}{N_{ON}}. \tag{18}$$
Due to the limited resolution, the best result that can be obtained, according to (18), is $2/N_{TOTAL}$, where $N_{TOTAL}$ is the total number of LEDs in the device. In a device containing 100 LEDs, the relative uncertainty would be around 2%. The resolution, and thus the accuracy, can be improved using multiple LED timers, as shown in Figure 15.
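Eqs. (17) and (18) are easily scripted; the sketch below (our illustration; names are our own) also reproduces the 2% bound quoted above for a 100-LED device:

```python
def led_array_exposure(n_on, T_on):
    """Eq. (17): exposure time from the number of bright LEDs."""
    return n_on * T_on

def led_array_rel_uncertainty(n_on):
    """Eq. (18): a +/-2 LED reading uncertainty gives h_exp = 2 / N_ON."""
    return 2.0 / n_on
```

A fully lit 100-LED device thus bounds the best achievable relative uncertainty at 2/100 = 2%.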
In the multi-timer device, the central LED, illustrated as a wider LED, has on-time $T_{CENT}$, while the on-time of the side LEDs is $T_{SIDE}$. Notice that $T_{CENT}$ may be significantly higher than $T_{SIDE}$. In the illustration, $T_{SIDE} = 1$ ms and $T_{CENT} = 500$ ms. If the central LED is bright, on the left side there is $N_{LEFT} = 1$ bright LED, and on the right side there are $N_{RIGHT} = 2$ bright LEDs, as shown in Figure 15, then the exposure time is

$$T_{exp} = T_{CENT} + (N_{LEFT} + N_{RIGHT}) \cdot T_{SIDE}, \tag{19}$$

resulting in $T_{exp} = 503$ ms. Notice that the resolution is now determined by $T_{SIDE}$, which may be a very small value, providing high resolution and high accuracy with a small number of LEDs. The uncertainty is now estimated as

$$h_{exp} = \frac{2 T_{SIDE}}{T_{exp}}, \tag{20}$$

which in the example of Figure 15 results in an error of 0.4%.
When the multi-timer device is used for an unknown exposure time, usually an iterative approach is necessary: first, the approximate exposure time $T_{exp}$ is determined with $T_{CENT} = T_{SIDE}$; then $T_{SIDE}$ is reduced and $T_{CENT}$ is set so that $T_{CENT} + 4 T_{SIDE} \approx T_{exp}$. Using the increasingly accurate estimate of $T_{exp}$, the values of $T_{CENT}$ and $T_{SIDE}$ are updated with smaller and smaller $T_{SIDE}$ values, until the required resolution is reached.
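The multi-timer evaluation of Eq. (19) and one refinement step of the iterative procedure described above can be sketched as follows (our illustration; the helper names are our own, and the $4 T_{SIDE}$ margin follows the rule given in the text):

```python
def multitimer_exposure(T_cent, T_side, n_left, n_right):
    """Eq. (19): T_exp = T_CENT + (N_LEFT + N_RIGHT) * T_SIDE."""
    return T_cent + (n_left + n_right) * T_side

def next_settings(T_exp_estimate, T_side_next):
    """One iteration step: shrink T_SIDE and choose T_CENT so that
    T_CENT + 4 * T_SIDE is approximately the current estimate."""
    return T_exp_estimate - 4.0 * T_side_next, T_side_next
```

For the pattern of Figure 15 ($T_{CENT} = 500$ ms, $T_{SIDE} = 1$ ms, one bright LED on the left and two on the right), the first function returns 503 ms, with a relative uncertainty of about 0.4% according to (20).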
The utilization of a device, similar to Figure 15, has disadvantages, too. Notice that the LED pattern must fulfill the following requirements, R1 and R2, in order to contain meaningful measurement:
R1:
The leftmost and rightmost LEDs must be dark (otherwise the counts N_LEFT or N_RIGHT would not be meaningful);
R2:
The two side LEDs next to the central LED must be bright (otherwise it would not be certain that the central LED was on for the full time T_CENT).
To capture such a pattern, either the camera must be synchronized to the measurement device, or the user must be really lucky: the higher the ratio T_CENT/T_SIDE, the less probable it is that the image satisfies the requirements. If camera synchronization is not possible but the camera is able to record video, an alternative ‘quasi-synch’ method can be used, as follows:
The running LED is not cycling continuously: the LED runs along the line once and stops at the last LED. The cycle then starts again, so that the repeat time of the cycle is T_rep. If the camera’s frame rate is f_frame, then T_rep is tuned around 1/f_frame:

T_rep = 1/f_frame − ΔT_rep.    (21)
For ΔT_rep = 0, the camera and the running LEDs would be perfectly synchronized, thus pictures of the running LED would be taken at exactly the same phase and all frames of the video would contain the same image. Instead, ΔT_rep ≈ T_SIDE is used during the measurement, so that each frame of the video is taken at a phase ΔT_rep later than the previous one. Thus, the captured video stream scans the running LED sequence, with the offset changing by steps of ΔT_rep in each frame, eventually catching a desired time instant, similar to Figure 15. After the video recording, a suitable frame, satisfying requirements R1 and R2, is selected and N_LEFT and N_RIGHT are measured on the frame. Finally, (19) is used to calculate the exposure time estimate. Notice that the ‘quasi-synch’ method in fact uses equivalent sampling [40,41], which will be discussed in Section 5.
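The tuning rule can be made concrete with a small helper. This is a sketch under the assumption that T_rep is set slightly below the frame period; the function name is ours:

```python
def quasi_synch_repeat_time(f_frame_hz, t_side_s):
    """Repeat time of the running-LED cycle for the 'quasi-synch' method:
    T_rep = 1/f_frame - dT_rep with dT_rep ~ T_SIDE, so that every video
    frame samples the LED pattern T_SIDE later than the previous one."""
    return 1.0 / f_frame_hz - t_side_s

# 30 FPS camera, 1 ms side-LED timing: the sampling phase advances 1 ms
# per frame, sweeping the whole LED pattern frame by frame.
t_rep = quasi_synch_repeat_time(30.0, 1e-3)
print(round(t_rep * 1e3, 3))  # 32.333 (ms)
```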
Figure 16 shows the multi-timer measurement equipment and a photo taken by C3 with settings T_CENT = 1005 μs, T_SIDE = 1 μs. From Figure 16b, the values N_LEFT = 1 and N_RIGHT = 2 can be read; thus, according to (19), the measured exposure time is T_exp = 1008 μs.
Measurement results of C3 can be seen in Table 5. The nominal value, set by the user, is internally rounded and slightly modified by the camera, and the exact value can be queried; thus, the column Reported exposure time shows the exact timing reported by the camera. Column T_exp shows the estimated exposure times, along with the maximum uncertainty. The uncertainty values were calculated as 2 T_SIDE; e.g., 1008 ± 2 μs means that the side LEDs’ on-time was 1 μs, causing at most 2 μs reading uncertainty. As the results show, there is a systematic error of approximately 6 μs, which is especially visible in the short-time region: the actual exposure time is longer than reported by the camera’s software. Similar effects were observed with other camera types of the same manufacturer [42]. The last column, Relative error(br), shows the relative measurement error after the 6 μs bias was subtracted from the reported values. The accuracy of the measurement is very good: at longer exposure times the relative error is well below 1%, while in the few-microseconds range the error increases to 7%. This measurement method allows exposure time measurement even with 1 μs accuracy.

5. Measurement Using Equivalent Sampling

A completely different approach was proposed in [42] for digital cameras with a video mode. The method is illustrated in Figure 17. The camera is used in video mode, i.e., it captures frames with period T_CAM, where T_CAM = 1/f_frame and f_frame is the frame rate, e.g., 30 FPS (frames per second). The input is a blinking light, produced by an LED driven by a symmetrical square wave with period T_LED. With a properly chosen blinking frequency, the camera will record a slowly blinking LED. The intensity function of the recorded LED is used to compute the exposure time.
The operation is illustrated in Figure 18. The blinking frequency is selected so that T_CAM is approximately, but not exactly, a multiple of T_LED:

T_CAM = n T_LED + ΔT,    (22)

where ΔT ≪ T_LED and n ≥ 1 is an integer. For a moment, let us suppose that we sample the LED signal with ideal (impulse) sampling, as shown by the red dots in Figure 18. Let us denote the original signal by x(t), and the sampled signal by x_s(k) = x(k T_CAM). Notice that, due to (22), consecutive samples have the following property:

x_s(k+1) = x((k+1) T_CAM) = x(k T_CAM + n T_LED + ΔT).    (23)
Because of the periodicity of x(t), x(t + n T_LED) = x(t) for any integer n; thus

x_s(k+1) = x(k T_CAM + ΔT).    (24)
According to (24), sample k+1 is effectively taken ΔT after sample k. This is exactly the principle of equivalent sampling: a periodic signal x(t) is sampled with a low sampling frequency 1/T_CAM, but the sampled signal is the same as if x(t) were sampled with the high frequency 1/ΔT, as shown in Figure 18 [40].
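The replay property (24) is easy to verify numerically. The sketch below uses exact integer time ticks (the values are our illustrative choices) so that no floating-point rounding blurs the comparison:

```python
# Equivalent sampling of a blinking LED, in exact integer time ticks.
T_LED = 6000                 # blinking period
N, DT = 5, 37                # T_CAM = N*T_LED + DT, with DT << T_LED
T_CAM = N * T_LED + DT

def led(t):
    """Symmetric square wave x(t): on for the first half period, off after."""
    return 1 if t % T_LED < T_LED // 2 else 0

slow = [led(k * T_CAM) for k in range(300)]   # camera samples, every T_CAM
fast = [led(k * DT) for k in range(300)]      # dense samples, every DT
print(slow == fast)  # True: the slow stream replays x(t) at step DT, per (24)
```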
Cameras, however, do not use ideal sampling; rather, the operation of light sensors (both films and electronic sensors) can be modelled as integration: the camera integrates the incoming light for the length of the exposure. Thus, the real sample x_i(k), taken by the camera and shown with blue dots in Figure 18, is computed as the integral of x(t) between time instants k T_CAM and k T_CAM + T_exp, as follows:

x_i(k) = ∫_{k T_CAM}^{k T_CAM + T_exp} x(t) dt.    (25)
The integral of a symmetrical square wave is a symmetrical trapezoid, where the lengths of the rising and falling edges are T_exp, as shown in Figure 18.
Let us denote the number of samples on the rising (or falling) edge by N_exp, and the number of samples in a full period by N_LED. Then the following approximate equations hold:

T_exp ≈ N_exp ΔT,  T_LED ≈ N_LED ΔT.    (26)
From (26) the exposure time estimate can be calculated as follows:

T_exp = T_LED N_exp / N_LED.    (27)
The exposure time measurement is performed as follows:
Step 1.
The generator’s period T_LED is set according to (22), using some integer n.
Step 2.
The output of the camera is observed and T_LED is fine-tuned so that the video stream shows a slowly blinking LED. The period of this slow blinking may be several seconds or even minutes. After the tuning, the value of T_LED is read.
Step 3.
A sufficiently long record is gathered (at least one full blinking period).
Step 4.
One pixel of the LED (preferably at the center of the screen) is selected, and the intensity of this pixel as a function of time is used.
Step 5.
The number of samples N_exp on the rising (or falling) edge is counted.
Step 6.
The number of samples N_LED in the full period is counted.
Step 7.
The exposure time is estimated using (27).
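Steps 4–7 can be sketched as follows, on a synthetic, noise-free trace. The edge-detection thresholds and the trapezoid generator are our illustrative assumptions; a real trace would additionally need the noise handling (e.g., linear regression) discussed in the text:

```python
def estimate_exposure(trace, t_led):
    """Steps 5-7 on a clean, normalized trapezoidal intensity trace:
    count N_exp (samples on an edge) and N_LED (samples per period),
    then apply (27): T_exp = T_LED * N_exp / N_LED."""
    # N_exp: longest run of samples strictly between the low and high levels
    runs, run = [], 0
    for v in trace:
        if 0.0 < v < 1.0:
            run += 1
        else:
            if run:
                runs.append(run)
            run = 0
    if run:
        runs.append(run)
    n_exp = max(runs)
    # N_LED: spacing of successive upward mid-level crossings
    ups = [i for i in range(1, len(trace)) if trace[i - 1] < 0.5 <= trace[i]]
    n_led = ups[1] - ups[0]
    return t_led * n_exp / n_led

# Synthetic trace mimicking Figure 19a: N_LED = 1540, N_exp = 233.
def trapezoid(k, n_led=1540, n_exp=233):
    p, half = k % n_led, n_led // 2
    if p < n_exp:
        return p / n_exp                     # rising edge
    if p < half:
        return 1.0                           # LED fully on
    if p < half + n_exp:
        return 1.0 - (p - half) / n_exp      # falling edge
    return 0.0                               # LED fully off

trace = [trapezoid(k) for k in range(2 * 1540 + 300)]
t_exp = estimate_exposure(trace, 1 / 150)    # T_LED ~ 1/150 s
print(round(t_exp * 1e6, 1))  # 1004.3 us, within ~1 count of the ideal 1008.7
```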
The uncertainty of T_LED can be neglected when a good quality oscillator is used; thus, the uncertainty of the measurement can be estimated from (27), using the partial derivatives ∂T_exp/∂N_exp and ∂T_exp/∂N_LED, as follows:

ΔT_exp = |∂T_exp/∂N_exp| ΔN_exp + |∂T_exp/∂N_LED| ΔN_LED = (T_exp/N_exp) ΔN_exp + (T_exp/N_LED) ΔN_LED = T_exp h_N_exp + T_exp h_N_LED,    (28)
thus, the relative uncertainty of T_exp is the following:

h_exp = ΔT_exp / T_exp = h_N_exp + h_N_LED.    (29)
The reading uncertainty depends on the measurement noise, and can be reduced, e.g., using linear regression [42]. In our test environment, the reading uncertainty was 2–5 samples. According to (28) and (29), the higher the counts N_exp and N_LED, the better the accuracy; thus, for high quality measurements, T_LED must be tuned so that the blinking period on the image is long enough. An example is provided to illustrate the determination of the measurement parameters, given the accuracy needs.
Let us suppose that the exposure time to be measured is approximately T_exp ≈ 1/1000 s, and we want to determine its exact value with 1% accuracy. The camera’s sampling interval is T_CAM = 1/30 s and parameter n = 5; thus, according to (22), T_LED ≈ 1/150 s. If the reading accuracy is ΔN_exp ≈ ΔN_LED ≈ 2 = ΔN samples, then, according to (29), the accuracy requirement can be written as h_exp ≈ ΔN (1/N_exp + 1/N_LED) < 1%. From (27), N_exp = N_LED T_exp / T_LED = N_LED / 6.7; thus, the accuracy requirement becomes 2 (6.7/N_LED + 1/N_LED) < 1%, from which N_LED > 1533. Thus, one blinking period must contain at least 1533 samples, which means that T_LED in Step 2 must be tuned until the observed blinking period is longer than 1533 · (1/30) s ≈ 51 s.
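This parameter planning can be scripted. The sketch below uses the exact ratio T_LED/T_exp instead of the rounded 6.7, so it yields the bound N_LED ≥ 1534, matching the N_LED > 1533 derived above; the function name is ours:

```python
import math

def required_n_led(t_exp_s, t_led_s, dn=2, h_target=0.01):
    """Minimum N_LED for a target relative uncertainty, combining (27) and
    (29): h_exp ~ dn * (1/N_exp + 1/N_LED), with N_exp = N_LED*T_exp/T_LED."""
    return math.ceil(dn * (t_led_s / t_exp_s + 1) / h_target)

t_cam, n = 1 / 30, 5
t_led = t_cam / n                    # ~1/150 s, from (22) with dT ~ 0
n_led = required_n_led(1 / 1000, t_led)
print(n_led, round(n_led * t_cam))   # 1534 samples -> observe for ~51 s
```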
Two example measurements, with nominal exposure times of 1/1000 s and 1/125,000 s, are detailed in Table 6, using camera C3. The table contains the exposure times reported by the camera, the counted values N_exp and N_LED, the frequency f_LED of the LED, the estimated exposure time according to (27), and the estimated maximum relative uncertainty h_exp, according to (29). Figure 19 shows the plots of the corresponding measurements.
The maximum estimation uncertainty h_exp was calculated using counting uncertainty ΔN_exp ≈ ΔN_LED ≈ 2. For the first case, using (29), the maximum relative measurement uncertainty is h_exp ≈ 2/233 + 2/1540 ≈ 1%. In the second case the maximum relative uncertainty is h_exp ≈ 2/23 + 2/1239 ≈ 9%.
The measurement results of C3 are summarized in Table 7. In addition to the reported exposure time, the table also contains the corrected exposure times, accounting for the systematic bias discussed in Section 4.3. Values in column Relative error are calculated with respect to the reported nominal values, while column Relative error(br) shows the error with respect to the corrected (unbiased) values. The measured exposure times correspond very well with the corrected values, with the relative error growing above 1% only for the very short exposure times.

6. Comparison and Evaluation

In this section, the reviewed measurement methods are compared; Table 8 summarizes their main features.
The direct method is applicable only for cameras where the focal plane is accessible, i.e., film cameras or cameras where the camera frame can be opened; the method is thus also suitable for testing during manufacturing or servicing. The method can be applied for exposure times longer than 1/10,000 s; the minimum measurable exposure time is limited by the measurement noise. The measurement uncertainty at shorter times may be as high as 10%, but as the exposure time increases, the uncertainty decreases to approximately 0.5%. The measurement process is simple: only one exposure is required, followed by a simple time measurement on the oscilloscope. No special equipment is required: only a light source, a photosensor, and an oscilloscope are necessary. The direct method can be used to measure the exposure time according to the standard at the center of the picture frame, or alternatively the measurement can be made at any point of the picture frame.
The turntable method (or any alternative blur-based method using a moving physical object) is a simple method requiring only a single shot with any type of camera. The measurement range is quite narrow, from 1/125 s to 2 s: at short exposure times the angle to be measured is very small, resulting in poor accuracy, while at long exposure times the angle would exceed 360 degrees, which cannot be detected. Thus, at shorter times the uncertainty is high (up to 10%), but at longer times an uncertainty around 1% can be reached.
The method using a monitor has a somewhat wider measurement range, from 1/10,000 s to 1/125 s. The measurement process is simple, in general requiring two exposures (one in the case of global shutters). Here the measurement range is limited by the fact that at short exposure times the number of exposed rows is small (possibly fractional), causing large detection uncertainties, while at long exposure times the exposed rows fill the full monitor, prohibiting the measurement. The accuracy is modest at short exposure times but can be better than 1% at longer times. The application of the method is increasingly difficult, since CRT monitors in good operating condition are hard to find.
The running LED method with a single timer requires an LED array with a large number of LEDs; the commercial equipment is quite expensive. The accuracy over the full operating range is good, within a few percent. Here the accuracy is limited by the detection error, which is uniformly around 1–2 LEDs, independently of the measurement range. The measurement process is simple: only one exposure is required, followed by the counting of the bright LEDs, which can be automated. This is a general and convenient method, suitable for most requirements.
The multiple-timer version of the running LED method offers a much simpler measurement device and potentially much higher accuracy, at the price of a more complicated measurement process. The measurement may require multiple iterations until the required precision is reached. Moreover, either the camera must be synchronized to the measurement device, or a video-based quasi-synch measurement is required, in order to provide a picture containing the information necessary to calculate the shutter time. This method is suitable for very high accuracy measurements. The measurement range at very short exposure times is practically limited by the minimal timing of the side LEDs.
The accuracy of the equivalent sampling-based method is also excellent, similarly to the multi-timer LED method. The measurement equipment is very simple, containing only a signal generator and an LED. The measurement process requires the tuning of the generator frequency while observing the under-sampled camera output, and may take several minutes in order to gather the necessary amount of data. This method is applicable only for cameras with a video mode. The range of measurable exposure times is limited by the detection error of the equivalent period length, allowing measurements in the microsecond range with modest accuracy, while for longer exposure times very high precision can be achieved.
The applicable measurement regions, along with the achievable accuracy, for all methods are summarized in Figure 20.

7. Conclusions

This paper reviewed several methodologies and measurement devices for measuring the exposure time of cameras. The direct method, several motion blur-based methods, and the equivalent sampling method were discussed, along with their performance properties. All methods were illustrated by real measurement examples.
The direct method is applicable for cameras where the focal plane is accessible. Its accuracy may be better than 1% for exposure times longer than 1/100 s. The turntable and monitor-based methods have modest accuracy and much narrower ranges, from 1/125 s to 2 s and from 1/10,000 s to 1/125 s, respectively. The running LED method with a uniform timer has consistently good performance, with a few percent uncertainty, starting from exposure times even as low as 1/100,000 s. The running LED with multiple timers and the equivalent sampling methods provide wide measurement ranges starting from 1/100,000 s and can provide excellent precision, with estimation uncertainties well below 1%.

Author Contributions

Conceptualization, G.S.; methodology, G.S. and G.V.; software, M.R. and G.V.; hardware, G.V.; writing G.S. and G.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kelby, S. The Digital Photography Book; Rocky Nook: San Rafael, CA, USA, 2020. [Google Scholar]
  2. Tanizaki, K.; Tokiichiro, T. A real camera interface enabling to shoot objects in virtual space. In Proceedings of the International Workshop on Advanced Imaging Technology (IWAIT), Online, 5–6 January 2021; Volume 11766, p. 1176622. [Google Scholar]
  3. Banterle, F.; Artusi, A.; Debattista, K.; Chalmers, A. Advanced High Dynamic Range Imaging, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  4. Várkonyi-Kóczy, A.R.; Rövid, A.; Hashimoto, T. Gradient-Based Synthesized Multiple Exposure Time Color HDR Image. IEEE Trans. Instrum. Meas. 2008, 57, 1779–1785. [Google Scholar] [CrossRef]
  5. McCann, J.J.; Rizzi, A. The Art and Science of HDR Imaging; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  6. Gnanasambandam, A.; Chan, S.H. HDR Imaging with Quanta Image Sensors: Theoretical Limits and Optimal Reconstruction. IEEE Trans. Comput. Imaging 2020, 6, 1571. [Google Scholar] [CrossRef]
  7. Psota, P.; Çubreli, G.; Hála, J.; Šimurda, D.; Šidlof, P.; Kredba, J.; Stašík, M.; Lédl, V.; Jiránek, M.; Luxa, M.; et al. Characterization of Supersonic Compressible Fluid Flow Using High-Speed Interferometry. Sensors 2021, 21, 8158. [Google Scholar] [CrossRef] [PubMed]
  8. Wu, T.; Valera, J.D.; Moore, A.J. High-speed, sub-Nyquist interferometry. Opt. Express 2011, 19, 10111–10123. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Beckwith, S.V.; Stiavelli, M.; Koekemoer, A.M.; Caldwell, J.A.; Ferguson, H.C.; Hook, R.; Lucas, R.A.; Bergeron, L.E.; Corbin, M.; Jogee, S.; et al. The Hubble Ultra Deep Field. Astron. J. 2006, 132, 1729. [Google Scholar] [CrossRef]
  10. Feltre, A.; Bacon, R.; Tresse, L.; Finley, H.; Carton, D.; Blaizot, J.; Bouché, N.; Garel, T.; Inami, H.; Boogaard, L.A.; et al. The MUSE Hubble Ultra Deep Field Survey-XII. Mg II emission and absorption in star-forming galaxies. Astron. Astrophys. 2018, 617, A62. [Google Scholar] [CrossRef] [Green Version]
  11. Borlaff, A.; Trujillo, I.; Román, J.; Beckman, J.E.; Eliche-Moral, M.C.; Infante-Sáinz, R.; Lumbreras-Calle, A.; De Almagro, R.T.; Gómez-Guijarro, C.; Cebrián, M.; et al. The missing light of the Hubble Ultra Deep Field. Astron. Astrophys. 2019, 621, A133. [Google Scholar] [CrossRef] [Green Version]
  12. Wang, S.; Xu, Y.; Zheng, Y.; Zhu, M.; Yao, H.; Xiao, Z. Tracking a Golf Ball with High-Speed Stereo Vision System. IEEE Trans. Instrum. Meas. 2019, 68, 2742–2754. [Google Scholar] [CrossRef]
  13. Gyongy, I.; Dutton, N.A.W.; Henderson, R.K. Single-Photon Tracking for High-Speed Vision. Sensors 2018, 18, 323. [Google Scholar] [CrossRef] [Green Version]
  14. Li, J.; Long, X.; Xu, D.; Gu, Q.; Ishii, I. An Ultrahigh-Speed Object Detection Method with Projection-Based Position Compensation. IEEE Trans. Instrum. Meas. 2020, 69, 4796–4806. [Google Scholar] [CrossRef]
  15. Cortés-Osorio, J.A.; Gómez-Mendoza, J.B.; Riaño-Rojas, J.C. Velocity Estimation from a Single Linear Motion Blurred Image Using Discrete Cosine Transform. IEEE Trans. Instrum. Meas. 2018, 68, 4038–4050. [Google Scholar] [CrossRef]
  16. Ma, B.; Huang, L.; Shen, J.; Shao, L.; Yang, M.; Porikli, F. Visual Tracking Under Motion Blur. IEEE Trans. Image Processing 2016, 25, 5867–5876. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Zhang, Y.; Wang, C.; Maybank, S.J.; Tao, D. Exposure Trajectory Recovery from Motion Blur. IEEE Trans. Pattern Anal. Mach. Intell. 2021. [Google Scholar] [CrossRef] [PubMed]
  18. Saha, N.; Ifthekhar, M.S.; Le, N.T.; Jang, Y.M. Survey on optical camera communications: Challenges and opportunities. IET Optoelectron. 2015, 9, 172–183. [Google Scholar] [CrossRef]
  19. Hasan, M.K.; Chowdhury, M.Z.; Shahjalal, M.; Nguyen, V.T.; Jang, Y.M. Performance Analysis and Improvement of Optical Camera Communication. Appl. Sci. 2018, 8, 2527. [Google Scholar] [CrossRef] [Green Version]
  20. Liu, W.; Xu, Z. Some practical constraints and solutions for optical camera communication. Phil. Trans. R. Soc. A 2020, 378, 20190191. [Google Scholar] [CrossRef] [Green Version]
  21. Nguyen, H.; Nguyen, V.; Nguyen, C.; Bui, V.; Jang, Y. Design and Implementation of 2D MIMO-Based Optical Camera Communication Using a Light-Emitting Diode Array for Long-Range Monitoring System. Sensors 2021, 21, 3023. [Google Scholar] [CrossRef]
  22. Jurado-Verdu, C.; Guerra, V.; Matus, V.; Almeida, C.; Rabadan, J. Optical Camera Communication as an Enabling Technology for Microalgae Cultivation. Sensors 2021, 21, 1621. [Google Scholar] [CrossRef]
  23. Rátosi, M.; Simon, G. Robust VLC Beacon Identification for Indoor Camera-Based Localization Systems. Sensors 2020, 20, 2522. [Google Scholar] [CrossRef]
  24. Simon, G.; Zachár, G.; Vakulya, G. Lookup: Robust and Accurate Indoor Localization Using Visible Light Communication. IEEE Trans. Instrum. Meas. 2017, 66, 2337–2348. [Google Scholar] [CrossRef]
  25. Chavez-Burbano, P.; Guerra, V.; Rabadan, J.; Perez-Jimenez, R. Optical Camera Communication system for three-dimensional indoor localization. Optik 2019, 192, 162870. [Google Scholar] [CrossRef]
  26. Method of Measuring the Speed of Camera Shutters. Sci. Am. 1897, 76, 69–71. [CrossRef]
  27. Kelley, J.D. Camera Shutter Tester. U.S. Patent 2168994, 8 August 1939. [Google Scholar]
  28. Springer, B.R. Camera Testing Methods and Apparatus. U.S. Patent 4096732, 27 June 1978. [Google Scholar]
  29. Fuller, A.B. Electronic Chronometer. U.S. Patent 1954313, 10 April 1934. [Google Scholar]
  30. The Photoplug. Optical Shutter Speed Tester for Your Smartphone. Available online: https://www.filmomat.eu/photoplug (accessed on 22 January 2022).
  31. ALVANDI Shutter Speed Tester. Available online: https://www.mr-alvandi.com/technique/Alvandi-shutter-speed-tester.html (accessed on 22 January 2022).
  32. Shutter Tester—7FR-80D. Available online: https://www.jpu.or.jp/eng/shutter-tester/ (accessed on 12 January 2022).
  33. ISO 516:2019. Camera Shutters—Timing—General Definition and Mechanical Shutter Measurements. International Organization for Standardization, 2019. Available online: https://www.iso.org/obp/ui/#iso:std:iso:516:ed-4:v1:en (accessed on 12 January 2022).
  34. Asakura, Y.; Takahashi, S.; Doi, K.; Watanabe, A.; Ushiyama, T.; Inoue, A. Exposure Precision Tester and Exposure Precision Testing Method for Camera. U.S. Patent 5895132, 20 April 1999. [Google Scholar]
  35. LaRue, R.S. Shutter Speed Measurement Techniques. Master’s Thesis, Boston University, Boston, MA, USA, 1949. [Google Scholar]
  36. Davidhazy, A. Calibrating Your Shutters with TV Set and Turntable. Available online: https://people.rit.edu/andpph/text-calibrating-shutters.html (accessed on 12 January 2022).
  37. Budilov, V.N.; Volovach, V.I.; Shakurskiy, M.V.; Eliseeva, S.V. Automated measurement of digital video cameras exposure time. In Proceedings of the East-West Design & Test Symposium (EWDTS 2013), Rostov on Don, Russia, 27–30 September 2013; pp. 344–347. [Google Scholar]
  38. Masson, L.; Cao, F.; Viard, C.; Guichard, F. Device and algorithms for camera timing evaluation. In Proceedings of the IS&T/SPIE Electronic Imaging Symposium, San Francisco, CA, USA, 2–6 February 2014. [Google Scholar] [CrossRef]
  39. Image Engineering LED-Panel. Available online: https://www.image-engineering.de/products/equipment/measurement-devices/900-led-panel (accessed on 12 January 2022).
  40. D’Antona, G.; Ferrero, A. Digital Signal Processing for Measurement Systems: Theory and Applications; Springer: New York, NY, USA, 2006. [Google Scholar]
  41. Shize, G.; Shenghe, S.; Zhongting, Z. A novel equivalent sampling method using in the digital storage oscilloscopes. In Proceedings of the 1994 IEEE Instrumentation and Measurement Technology Conference, Hamamatsu, Japan, 10–12 May 1994; Volume 2, pp. 530–532. [Google Scholar]
  42. Rátosi, M.; Vakulya, G.; Simon, G. Measuring Camera Exposure Time Using Equivalent Sampling. In Proceedings of the 2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Online, 17–20 May 2021; pp. 1–6. [Google Scholar]
Figure 1. The operation of a mechanical focal plane shutter with two curtains. (a–g) Long exposure time: the shutter becomes fully open. (h–l) Short exposure time: only a band of the shutter is open.
Figure 2. Cameras used for testing. (a) Zenit TTL (C1), (b) EOS 350D (C2), (c) Grasshopper3 (C3).
Figure 3. Direct method using a light sensor and an oscilloscope.
Figure 4. Measurements of C1 using the direct method. Settings: (a) 1/30 s, (b) 1/500 s.
Figure 5. Measurement error of the direct method due to measurement noise. (a) Measured quantities, (b) measurement error of the rising edge.
Figure 6. Relative measurement uncertainty of the direct method.
Figure 7. Motion blur method with a rotating object on a turntable.
Figure 8. (a) The turntable with an LED light source. (b) Measurement with T_exp = 1/10 s.
Figure 9. Relative measurement uncertainty of the turntable measurement.
Figure 10. Measurements using a CRT monitor.
Figure 11. Exposure of a monitor screen. The refreshing of the monitor is done from top to bottom. (a) Inverted camera (in upside-down position): the curtain falls from top to bottom on the image. (b) Camera in normal position: the curtains move from bottom to top on the image.
Figure 12. Maximum theoretical relative uncertainty of the exposure time estimates of the monitor measurement. Measurement errors of C2 and C3, using the unbiased estimate of (11), are also shown.
Figure 13. Example measurements of C2 with a monitor screen with 1/500 s. (a) camera in normal position, showing 120 lines. (b) camera upside-down, showing 163 lines.
Figure 14. Measuring exposure time with a running LED on an LED array. (a) One LED is on at a time (b) On the exposed image multiple LEDs are bright.
Figure 15. LED array using multiple timers.
Figure 16. (a) LED array equipment. (b) Measurement of camera C3, with nominal T_exp = 1/1000 s and settings T_CENT = 1005 μs, T_SIDE = 1 μs.
Figure 17. The equivalent sampling-based measurement method.
Figure 18. Operation of the equivalent sampling-based measurement method. Black rectangular signal: intensity of the blinking LED. Red dots: samples using ideal (impulse) sampling. Blue dots: samples using integrating sampling with integration time T_exp.
Figure 19. Measurements of camera C3 with equivalent sampling. (a) 1/1000 s, (b) 1/125,000 s.
Figure 20. Accuracy and regions of applicability of the discussed methods.
Table 1. Shutter properties of the cameras used for testing in the paper.
 | C1 | C2 | C3
Shutter type | mechanical | electromechanical | electronic
Rolling/global | rolling | rolling | global
Shutter time range | 1/500–1/30 s | 1/4000–30 s | 1/125,000–31.9 s
Focal plane available | yes | no | no
Video mode | no | no | yes
Table 2. Measurement results of C1, with the direct measurement.
Nominal Exposure Time (s) | Measured Exposure Time (ms) | Relative Error (%)
1/30 | 23.4 | −29.8
1/60 | 14.34 | −14.0
1/125 | 7.37 | −7.9
1/250 | 4.15 | 3.75
1/500 | 1.94 | −3.0
Table 3. Turntable measurement results for C2 and C3.
Nominal Exposure Time | C2 T_exp (μs) | C2 Rel. Error (%) | C3 T_exp (μs) | C3 Rel. Error (%)
1/1 | 978,492 | −2.1 | 1,001,986 | 0.5
1/2 | 494,514 | −1.1 | 495,435 | −0.9
1/4 | 242,970 | −2.8 | 250,896 | 0.4
1/8 | 127,179 | 1.7 | 123,449 | −1.2
1/15 | 65,129 | −2.3 | 63,894 | −4.2
1/30 | 33,037 | −0.9 | 32,221 | −3.4
1/60 | 13,679 | −17.9 | 15,789 | −5.2
1/125 | 6772 | −15.4 | 7072 | −11.6
Table 4. Measurement results of the CRT method.
Nominal Exposure Time | C3, (7) or (10): T_exp (μs) | Rel. Error (%) | C3, (10): T_exp (μs) | Rel. Error (%) | C2, (7), normal position: T_exp (μs) | Rel. Error (%) | C2, (7), upside down: T_exp (μs) | Rel. Error (%)
1/125 | 7979 | −0.3 | 7983 | −0.2 | 6873 | −14 | 9523 | 19
1/250 | 3909 | −0.4 | 3982 | −0.4 | 3407 | −15 | 4791 | 20
1/500 | 1980 | −1.1 | 2013 | 0.6 | 1747 | −13 | 2373 | 19
1/1000 | 976 | −2.5 | 1069 | 6.9 | 917 | −8 | 1281 | 28
1/2000 | 495 | −1.7 | 544 | 8.8 | 451 | −10 | 684 | 37
1/4000 | 248 | 2.9 | 284 | 13.6 | 233 | −7 | 364 | 46
Table 5. Measurement results of C3, using the multi-timer running LED method.
Nominal Exposure Time (s) | Reported Exposure Time (μs) | T_exp (μs) ± 2 T_SIDE | Relative Error (%) | Relative Error(br) (%) (Bias Removed)
1/60 | 16,667 | 16,665 ± 10 | −0.01 | −0.05
1/125 | 8000 | 8004 ± 4 | 0.05 | −0.03
1/250 | 4004 | 4008 ± 4 | 0.1 | −0.05
1/500 | 2002 | 2006 ± 4 | 0.2 | −0.1
1/1000 | 1001 | 1008 ± 2 | 0.6 | 0.1
1/2000 | 496 | 503 ± 2 | 1.4 | 0.2
1/4000 | 252 | 259 ± 2 | 2.8 | 0.4
1/10,000 | 98 | 104 ± 2 | 6.1 | 0
1/20,000 | 49 | 56 ± 2 | 14.3 | 1.8
1/125,000 | 8 | 15 ± 2 | 87.5 | 7.1
Table 6. Parameters of two example measurements of C3, using the equivalent sampling method.
Reported Exposure Time | N_exp | N_LED | f_LED | Estimated Exposure Time (27) | h_exp (29)
1001 μs | 233 | 1540 | 150.15 Hz | 1007.6 μs | 1%
8.1 μs | 23 | 1239 | 1201.098 Hz | 15.5 μs | 9%
Table 7. Measurement results of C3, using the equivalent sampling method.

| Nominal Exposure Time (s) | Reported Exposure Time (μs) | T_exp (μs) | T_exp, Bias Removed (μs) | Relative Error (%) | Relative Error(br) (%) |
|---|---|---|---|---|---|
| 1/60 | 16,667 | 16,673 | 16,650 | −0.1 | −0.1 |
| 1/125 | 8000 | 8006 | 8002 | 0.03 | −0.1 |
| 1/250 | 4004 | 4010 | 4005 | 0.03 | −0.1 |
| 1/500 | 2002 | 2008 | 2008 | 0.3 | 0 |
| 1/1000 | 1001 | 1007 | 1005 | 0.4 | −0.2 |
| 1/2000 | 496 | 502 | 502 | 1.2 | 0 |
| 1/4000 | 252 | 258 | 259 | 2.8 | 0.4 |
| 1/10,000 | 98 | 104 | 104 | 6.1 | 0 |
| 1/20,000 | 49 | 55 | 56 | 12.2 | 1.8 |
| 1/125,000 | 8 | 14 | 15 | 87.5 | 7.1 |
Table 8. Comparison of the discussed methods.

| | Direct Method | Turntable | Monitor | Running LED, Uniform Timer | Running LED, Multi Timer | Equivalent Sampling |
|---|---|---|---|---|---|---|
| Applicability | film cameras, manufacturing | any camera | any camera | any camera | cameras with synchronization or video | video |
| Meas. range (s) | 1/10,000 < | 1/125–2 | 1/10,000–1/125 | 1/100,000 < | 1/100,000 < | 1/100,000 < |
| Uncertainty for short T_exp | ≅10% | ≅10% < | ≅10% | 1–3% | 1–10% | 1–10% |
| Uncertainty for long T_exp | ≅1% | ≅1% | ≅1% | 1–3% | <<1% | <<1% |
| Meas. time | 1 exposure | 1 exposure | 2 exposures | 1 exposure | minutes (iterative, video) | minutes (freq. tuning, video) |
| Equipment cost | high | low | low | high | medium | medium |
| Measurement complexity | low | low | medium | low | high | medium |
| Pros | fast, simple | simple | simple, moderate range | fast, accurate, simple, wide range | inexpensive, very accurate, wide range | inexpensive, very accurate, wide range |
| Cons | opening of camera frame is necessary | narrow range, modest accuracy | obsolete technology (CRT) | expensive equipment | long and cumbersome measurement | long measurement, video only |

Simon, G.; Vakulya, G.; Rátosi, M. The Way to Modern Shutter Speed Measurement Methods: A Historical Overview. Sensors 2022, 22, 1871. https://doi.org/10.3390/s22051871