Real-Time Detection of Sporadic Meteors in Intensified TV Imaging Systems

The automatic observation of the night sky with wide-angle video systems, aimed at detecting meteors and fireballs, is currently among routine astronomical observations. The observation is usually done in multi-station or network mode, so it is possible to estimate the direction and the speed of the body's flight. The high velocity of a meteoroid flying through the atmosphere dictates an important feature of the camera systems, namely a high frame rate. Because of these high frame rates, such imaging systems produce a large amount of data, of which only a small fraction has scientific potential. This paper focuses on methods for the real-time detection of fast-moving objects in video sequences recorded by intensified TV systems with frame rates of about 60 frames per second. The goal of our effort is to remove all unnecessary data during the daytime and free up hard-drive capacity for the next observation. The processing of data from the MAIA (Meteor Automatic Imager and Analyzer) system is demonstrated in the paper.


Introduction
Meteors are streaks of light appearing in the sky when meteoroids ablate in the Earth's atmosphere. Observation of meteors is a cost-effective way to understand the distribution of material in our solar system. Meteor observations are typically performed using radar [1], passive radio detectors [2], all-sky photographic [3] and CCD (charge-coupled device) cameras [4], digital video cameras [5] or television (TV) cameras optionally equipped with an image intensifier. While the radio-based detection methods can be performed during the daytime, and are thus suitable for estimating total meteor activity, camera-based methods are limited to nighttime. Regardless of this limitation, camera-based observations allow building the light curve (i.e., the time-dependent fluctuations of light emitted by a meteor), which may contain information about the mass and structure of the original particle or its parent object: comets [6] and asteroids [7]. Wide-band observation with a suitably-designed bank of photometric filters additionally allows obtaining information about the chemical composition of the meteoroid [8,9]. Although camera-based systems are more common, combinations of multiple observation methods are also used [10]. All-sky cameras with high spatial resolution and long exposure times are well suited to detecting intense light phenomena, like bolides or fireballs. However, for the calculation of the atmospheric trajectory, it is necessary to observe meteors simultaneously from at least two different places, optionally with high temporal resolution. Moreover, a higher frame rate brings more data for the modeling of the meteoroid structure [11].
Meteor observation with two or more camera systems is currently a standard technique for the measurement of meteoroid trajectories. There are networks of different scales and technologies: the Spanish Meteor Network (SPMN) [12] has about 25 video and CCD stations; another example is the Cameras for Allsky Automatic Real-time Detection (ASGARD) network [31]. CAMS and CMN use a method of detecting meteors in frames of digital video that have been compacted into a single bit-mapped color file [32]. Since this method produces a large number of false detections (up to 50%), Siladi et al. [33] proposed a method using neural networks and a support vector machine (SVM) to reach a 10% false rate. Molau and Gural [34] reported a real-time success rate of meteor detection better than 80% for both MetRec and MeteorScan, with a false alarm rate of less than 20%.
All of the methods mentioned above perform well when processing video sequences with less than VGA spatial resolution and a temporal resolution of no more than 25 fps. However, in TV systems using frame rates of typically 25 fps or faster, meteor paths can be only a couple of pixels long; see Figure 1 for an example. As shown in Figure 2, a meteor streak sampled at a high frame rate (here 61 fps) has a shape similar to that of stars. Figure 3 displays radial profiles of sampled meteor streaks in comparison with a sampled stellar object at the same times. Moreover, when an observing system employs non-linear devices like an MCP (microchannel plate), the algorithms have to deal with strong noise with a generally unknown distribution. Since we are targeting highly automated camera systems with minimal human interaction, our goal is also to minimize false alerts. The paper is organized as follows. Section 2 introduces the characteristics of the second-generation image intensifier. Section 3 describes the proposed algorithm for meteor detection in video sequences acquired with an intensified TV system. Section 4 gives an overview of the graphics processing unit (GPU)-based acceleration of our algorithm. Section 5 presents the results of real data processing, and, finally, Section 6 concludes the paper.

Characterization of the MCP Image Intensifier
In this section, we summarize the characteristics of a second-generation MCP image intensifier. One representative of this branch of imaging devices is the Philips (Photonis) XX1332. The XX1332 image intensifier has a highly nonlinear input-output conversion function as a result of its automatic gain control (AGC). The AGC helps to accommodate an extremely high dynamic range, but it also introduces strong nonlinearity, which is especially critical for photometric measurements. It calculates the mean intensity in the image and adjusts the multiplication gain accordingly: the gain increases when fewer photons are present (which results in higher excess noise) and decreases for higher overall fluxes.
The AGC feature naturally affects the opto-electronic conversion function (OECF) of the instrument. To capture this characteristic, we used the ISO 14524 chart [35] (see Figure 4) illuminated under laboratory conditions. We used 17 illuminance levels ranging between 1.6 mlx (millilux) and 2.4 lx, which leads to background luminance levels between 125 µcd/m² (microcandela per square meter) and 187 mcd/m² (millicandela per square meter). From the known illuminance and the optical density of a particular patch, it is possible to calculate the patch luminance. Figure 5a displays the OECF measured for six of the 17 background levels (gain levels in the image intensifier). It can be seen that the OECF for a fixed gain is not perfectly linear, although a high degree of linearity is achieved. The same ISO 14524 chart allows characterizing the dependency of the noise characteristics on the spatially-variable signal level (represented by the patches of the chart) and on the automatic gain control in the image intensifier. We also investigated the behavior of the image intensifier at different working conditions by changing the chart illumination [36]. Figure 5b shows the curves for the six chosen illumination levels. Every curve represents the dependency of the standard deviation on the patch luminance. The curves show that the system is highly signal dependent even in the case of a constant illumination level (contrary to the assumption of signal-independent noise, the standard deviation grows with growing patch luminance). Furthermore, the standard deviation decreases with growing background luminance.
The above-mentioned features, typical for intensified TV systems, cause speckle noise components in the acquired video sequences. The level of individual bright spots in a video frame fluctuates significantly, while the overall signal level remains roughly constant (i.e., a few bright spots increase their level, while the level of other bright spots decreases). This phenomenon degrades the scalability and performance of conventional subtraction-based image processing algorithms. Together with the findings from the measurement of the noise standard deviation, it naturally leads to the assumption that brighter parts (pixels) of the video sequence have higher variance. This type of image acquisition system therefore requires new methods of meteor detection. From the previous analysis it follows that it is difficult to exploit the relationships between pixels within one frame. We propose an algorithm that takes into account single-pixel probability characteristics calculated across a certain number of frames. Figure 6 shows consecutive frames and the sliding window w_i of size N. The mean µ_i(x, y) and the standard deviation σ_i(x, y) of the pixel at spatial position (x, y) in the i-th frame are calculated from the values of the pixel within the window. To detect a meteor, the algorithm searches for the relationship between the pixel characteristics valid for the i-th frame and a model calculated from the first M frames of a video sequence. The model builds on the relation between the mean value and the standard deviation of the pixels in the frame. In Figure 7, the circle marks show this relation in frames without a meteor, and the figure demonstrates how this relation changes when the sliding window includes frames with a meteor.
One can see a deviation in a certain interval of pixel intensity values caused by the temporary appearance of the meteor on the dark background, which increases the standard deviation of pixels with low intensity. Both meteor appearance and noise can increase the mean and standard deviation of a pixel across a window. Thus, it is not enough to keep these parameters for each pixel; the model has to describe an estimate of the permitted standard deviation as a function of the mean intensity. We therefore propose to construct the model by approximating the relation between the mean value and the standard deviation in frames without a meteor. We consider video sequences with a duration of 10 min and propose renewing the model from the last M frames labeled as frames without a meteor, which compensates for variations during the night.

Description of the Algorithm
Based on the idea described above, we propose the frame classification method shown in Figure 8. The statistical analysis block calculates the mean value µ_i(x, y) and standard deviation σ_i(x, y) of each pixel over N frames. We use a recursive calculation of these characteristics based on the known µ_{i−1}(x, y) and σ_{i−1}(x, y). With the model built from frames in which only static objects are present, the algorithm can detect transient (i.e., moving) objects. To reduce false detections, we also introduce a post-analysis block that examines how many times the algorithm marked each pixel and its neighbors as candidate objects.

Statistical Analysis
Widely-used methods of computing the standard deviation require two passes through the data. Since our effort focuses on real-time data processing, single-pass algorithms are more suitable for implementation. Our pipeline uses the robust iterative formula by Welford [37]. Once we have determined the mean µ_0 and variance σ²_0 of a single pixel for the window covering frames (i, i + 1, ..., i + N − 1), we can estimate how the parameters µ_1 and σ²_1 change when we slide the window by one position:

µ_1 = µ_0 + (I_{i+N} − I_i) / N,

where I_i is the pixel intensity in the i-th frame and N is the window size. To evaluate the difference of the variances, the unbiased sample variance is used:

σ² = (1 / (N − 1)) Σ_j (I_j − µ)².

Hence, we obtain:

σ²_1 = σ²_0 + (I_{i+N} − I_i)(I_{i+N} − µ_1 + I_i − µ_0) / (N − 1).

This means that we can use the iteration formula to calculate the mean value and the variance of a pixel across a window of N frames. In our algorithm, we use a window size of 15 frames, which is enough to follow the changes of the standard deviation and to detect a meteor in a frame.
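The sliding-window update above can be sketched in a few lines; this is an illustrative implementation, not the paper's code, and the function name is our own.

```python
import numpy as np

def slide_stats(mu0, var0, i_old, i_new, n):
    """Update the windowed mean and unbiased variance in O(1) when the
    oldest sample i_old leaves the window of size n and i_new enters.
    Works elementwise, so mu0/var0 may also be whole per-pixel arrays."""
    mu1 = mu0 + (i_new - i_old) / n
    var1 = var0 + (i_new - i_old) * (i_new - mu1 + i_old - mu0) / (n - 1)
    return mu1, var1

# Sanity check against a direct two-pass computation:
x = np.array([3.0, 7.0, 1.0, 9.0, 4.0, 6.0])
mu1, var1 = slide_stats(x[:5].mean(), x[:5].var(ddof=1), x[0], x[5], 5)
```

Because only one subtraction and one multiplication are needed per pixel and per frame, the cost is independent of the window size N, which is what makes the 15-frame window affordable at 61 fps.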

Comparison with the Model
To get the list of candidate objects in the i-th frame, we perform the statistical analysis across a moving window of size N. The calculated standard deviation of a single pixel at spatial position (x, y) with a certain mean value is compared with the corresponding value of the a priori model σ_M = f(µ_M). If the standard deviation of the pixel is significantly higher than the model standard deviation, we label the pixel as a candidate object.
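The comparison step can be sketched as a vectorized threshold test. The significance factor k below is a hypothetical illustration of "significantly higher"; the paper does not give a specific value.

```python
import numpy as np

def candidate_mask(mu, sigma, model, k):
    """Label pixels whose windowed standard deviation exceeds the model
    prediction sigma_M = f(mu_M) for their mean value by a factor k.
    `model` is any callable mapping mean intensity to expected sigma;
    k is an assumed significance factor (not taken from the paper)."""
    return sigma > k * model(mu)

# Toy example with a hypothetical model sigma_M = 0.5 * sqrt(mu):
mu = np.array([10.0, 100.0])
sigma = np.array([5.0, 5.0])
mask = candidate_mask(mu, sigma, lambda m: 0.5 * np.sqrt(m), k=2.0)
```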
The model represents the relationship between the mean µ_M of pixel values and the standard deviation σ_M, and it is constructed from data samples of M frames containing static objects only (typically the first 15 frames of a video sequence). To get these data samples, we calculate the mean value and standard deviation of each pixel across the M frames. In this case, the number of samples associated with the background is significantly larger. To get an equal number of samples in different intervals of the dynamic range, we average these parameters within single intervals.
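The averaging within intensity intervals might be sketched as follows; the number of bins and the 10-bit dynamic range are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bin_model_samples(mu, sigma, n_bins=32, dynamic_range=1024):
    """Average (mean, std) per-pixel samples inside equal-width intensity
    bins so that the huge number of background pixels does not dominate
    the model fit. Returns one averaged (mu, sigma) sample per
    non-empty bin."""
    edges = np.linspace(0, dynamic_range, n_bins + 1)
    idx = np.clip(np.digitize(mu.ravel(), edges) - 1, 0, n_bins - 1)
    occupied = [b for b in range(n_bins) if np.any(idx == b)]
    mu_b = np.array([mu.ravel()[idx == b].mean() for b in occupied])
    sg_b = np.array([sigma.ravel()[idx == b].mean() for b in occupied])
    return mu_b, sg_b

# Four samples, two intensity intervals:
mu = np.array([10.0, 20.0, 600.0, 620.0])
sigma = np.array([1.0, 3.0, 5.0, 7.0])
mu_b, sg_b = bin_model_samples(mu, sigma, n_bins=2, dynamic_range=1024)
```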
The precision of the model is a crucial factor affecting algorithm performance. We found that the model is well described by a parametric formula in which B is an estimate of the background and a_{1,2} and b_{1,2} are parameters approximating the data samples from the M frames. Accurate background estimation significantly reduces the number of detection errors. There are different methods, for example sigma-clipping [38], multiresolution support [39], the modified distance transform [40], etc. The trade-off between efficiency, simplicity and speed leads to the use of convolution with an 11 × 11 pixel averaging filter [41] for this particular task.
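The background estimate by averaging-filter convolution is a one-liner with standard tools; this sketch uses SciPy's box filter, whereas the paper's actual implementation is not specified.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_background(frame_mean, size=11):
    """Estimate the slowly-varying background B by convolving the
    per-pixel mean image with an 11 x 11 averaging (box) filter,
    the method chosen in the paper for its efficiency/simplicity/speed
    trade-off. Border handling (here 'reflect') is an assumption."""
    return uniform_filter(frame_mean.astype(float), size=size, mode="reflect")

# On a flat image, the background estimate must reproduce the constant level:
bg = estimate_background(np.full((20, 20), 5.0))
```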
An example of how the model fits the data samples can be seen in Figure 9a. Figure 9b shows the dependency of the estimated background value on the size of the filter.

Post-Analysis
The list of candidate objects includes both true and false positive detections. False candidate objects are typically single pixels or small connected areas; most of them can be removed efficiently by morphological transformations [42]. In the proposed algorithm, we apply dilation followed by erosion (Figures 10 and 11). Dilation with a 2 × 2 structuring element connects candidates that are close to each other and ensures that the meteor trajectory survives the subsequent erosion (Figure 10). Erosion with a 3 × 3 structuring element removes all candidates that have no eight-connectivity, which is an effective way to eliminate negative candidates (Figure 11). Using bigger structuring elements for the morphological transformation could remove the meteor trajectory. Classification of the residual candidate objects requires further analysis. Our algorithm uses a counter that records how many times each pixel was marked as a candidate object in previous frames. We analyzed the results of this calculation for a meteor and for a static object. In Figure 12, the green path is a meteor trajectory. The counters associated with positive candidate objects tend to decrease in the direction of motion of a smoothly moving object. As we can see, the counters of single pixels do not differ much from those of their non-zero neighbors. This allows excluding candidates whose counters differ significantly from those of their neighbors, which is typical of negative candidates (Figure 13). Based on this assumption, we compute the difference between a pixel's counter and those of its non-zero neighbors. If the biggest difference is lower than four, we mark the pixel as an object. The result of the post-analysis is a list of detected meteors in the frame, which is the basis for frame classification. If the list is not empty, we mark the frame as containing a meteor.
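The dilation-then-erosion step can be sketched with SciPy's binary morphology; this is an illustrative reimplementation, and details such as the structuring-element origin may differ from the paper's code.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def filter_candidates(mask):
    """Suppress isolated false candidates in a boolean candidate mask:
    a 2 x 2 dilation first bridges nearby detections, then a 3 x 3
    erosion removes pixels that have no eight-connected support."""
    grown = binary_dilation(mask, structure=np.ones((2, 2), bool))
    return binary_erosion(grown, structure=np.ones((3, 3), bool))

# An isolated one-pixel candidate is removed; a dense cluster survives:
mask = np.zeros((9, 9), bool)
mask[1, 1] = True        # isolated false candidate
mask[4:8, 4:8] = True    # dense 4 x 4 candidate cluster
out = filter_candidates(mask)
```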

GPU Acceleration
Besides the true positive detection of meteors, the second most important parameter of the algorithm is its execution time. The algorithm was designed to be implemented on a GPU using CUDA (Compute Unified Device Architecture), a highly parallel multi-threaded architecture [43]. A block diagram of this implementation is shown in Figure 14. One of the main bottlenecks of GPU acceleration is inefficient data transfer between the host and the device, which negatively affects overall application performance. Thus, our GPU implementation processes several frames simultaneously, as proposed by Vítek in [44]. We transfer frames (i, i + 1, ..., i + N_gpu − 1) to the GPU global memory, where i = 1, N_gpu + 1, 2 · N_gpu + 1, ..., and N_gpu is the number of frames transferred at once, chosen based on the parameters of the GPU. After some experiments, we found that processing six frames simultaneously represents a good trade-off between accuracy and execution time.
The recursive calculation of the statistical characteristics described in Section 3.1 is used for all transferred frames except the last one. To get (µ_i, µ_{i+1}, ..., µ_{i+N_gpu−2}) and (σ_i, σ_{i+1}, ..., σ_{i+N_gpu−2}), only one set of referent parameters, µ_{i−1} and σ_{i−1}, is used. This is the main difference from the CPU implementation, which uses referent parameters for each frame. The statistical characteristics of the last frame in the batch have to be calculated from all frames in the window without using the recursion, because they define the accuracy of detection for the next frames.
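The batched recursion from a single referent parameter set can be sketched by chaining the sliding-window update; this is a CPU-side illustration of the scheme, not the CUDA kernel itself, and the function name is our own.

```python
import numpy as np

def batch_stats(mu_ref, var_ref, leaving, entering, n):
    """Chain the one-step sliding-window update across a batch of
    frames starting from a single reference (mu_ref, var_ref),
    mirroring the GPU scheme in which only one referent parameter set
    is kept. `leaving`/`entering` are the samples that exit/enter the
    window at each slide step."""
    mus, variances = [], []
    mu_prev, var = mu_ref, var_ref
    for i_old, i_new in zip(leaving, entering):
        mu = mu_prev + (i_new - i_old) / n
        var = var + (i_new - i_old) * (i_new - mu + i_old - mu_prev) / (n - 1)
        mus.append(mu)
        variances.append(var)
        mu_prev = mu
    return np.array(mus), np.array(variances)

# One pixel over nine frames, window size 5, batch of four slides:
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0, 3.0])
mus, variances = batch_stats(x[:5].mean(), x[:5].var(ddof=1), x[0:4], x[5:9], 5)
```

Because each step reuses the previous step's output, the chained results are exactly equal to recomputing each window from scratch, so no accuracy is lost within a batch.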

Verification of Algorithm Performance
During the tests, we focused on two main features of the algorithms: (a) the ability to detect meteors in a single frame and (b) the ability to detect an event as such. The frame classifier has four possible outcomes: true positive, shown in Figure 15 (TP, the case when a meteor is present in the frame and is correctly detected by the algorithm); false positive (FP, the case when no meteor is present in the frame, but one is falsely detected by the algorithm); true negative (TN, the case when no meteor is present in the frame and the algorithm does not produce any alert); and false negative, shown in Figure 16 (FN, the case when a meteor is present in the frame and the algorithm does not produce any alert).
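From these counts, the usual frame-level metrics follow directly. The counts below are illustrative values chosen only to roughly reproduce the precision and recall reported in the Conclusions; they are not the paper's actual confusion-matrix entries.

```python
def frame_metrics(tp, fp, fn):
    """Precision and recall of the frame classifier from the
    confusion-matrix counts defined above (TN is not needed)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts consistent with precision ~0.9881, recall ~0.8503:
p, r = frame_metrics(tp=994, fp=12, fn=175)
```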
The performance of the detection algorithm depends on the geocentric velocity of the meteoroid and the geometry of the meteor appearance. When a meteoroid enters the top layers of the Earth's atmosphere, its movement is not accompanied by any significant change in brightness. Thus, it is hard to distinguish the beginning of the event from fluctuations caused by speckle noise, and frames capturing the beginning of the meteor trail are the main source of FN classifications. Another problem for the processing algorithm is faint meteors, for example meteoroids entering the atmosphere at a small angle, so that the overall duration of the event is short and the changes in brightness are weak. It is therefore difficult to track the brightness changes of neighboring pixels, and the false detection rate is higher for frames capturing shorter events.

Experimental Setup
For the purpose of this paper, the testing data were acquired with the MAIA (Meteor Automatic Imager and Analyzer) system [45]. This system uses the XX1332 image intensifier and a GigE (Gigabit Ethernet) progressive-scan camera, the JAI CM-040GE, running at a frame rate of 61 fps with a bit depth of 10 bits. The spatial resolution of the camera is 776 × 582 pixels (approximately 6 arcmin/pixel), corresponding to a field of view of 52°. The limiting stellar magnitude is +8. The astrometric precision of the system is quite good: the standard deviation is better than 0.04° for both the naked and the intensified system. MAIA works in a double-station configuration, with camera systems deployed at two sites, Ondřejov and Kunžak; the distance between the stations is 92.5 km.
To evaluate the performance of the proposed algorithm, we processed 30 video sequences with a total of 2419 frames, acquired during different nights with the MAIA system. All video sequences contain a meteor, and we manually labeled all 1169 frames in which meteors are recorded. The frames at the beginning of each video sequence contain only static objects, so it is possible to build the model. We compared our algorithm with three other methods: the first is the algorithm currently in use within the MAIA project; the second is the widely-used UFOCapture; and the third is our reimplementation of the meteor detector used within CMN [46]. Originally, this Python-based software, the RPi Meteor Station (RMS [47]), was running on the Raspberry Pi platform.
The algorithm currently in use within the MAIA project takes into account the high temporal resolution of the video sequences. It creates a list of static objects and detects new objects in subsequent frames. Each new object is placed in a list of temporary objects as an object for further investigation. To find a meteor, the trajectories of these temporary objects are followed. The algorithm is implemented in the pipeline known as dMAIA. The goal of the pipeline is to obtain sequential measurements of the meteor and its apparent coordinates relative to the real stars in the background. Detected meteors are the subject of further measurements, particularly the measurement of brightness, the measurement of the range of heights (especially the beginning heights) and the determination of the atmospheric trajectory. Details about the measurements may be found, for example, in [48,49].
The most common methods of meteor detection discover meteor tracks in video sequences with low temporal resolution. In this case, the meteor track appears as a line in each frame. The RMS algorithm is based on this frame feature. Its basic concept is line detection by a kernel-based Hough transform in an image reconstructed from 64 frames. In our implementation of this algorithm, we reconstructed images from 15 frames, which was enough to detect a meteor.
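The frame-stack reconstruction that the line detector operates on can be sketched as a max-pixel collapse; this is our illustrative reading of the compaction step, and the function name is our own.

```python
import numpy as np

def maxpixel(frames):
    """Collapse a stack of frames into a single image by taking the
    per-pixel maximum, so a meteor moving across the stack leaves a
    line-like track that a Hough transform can then detect."""
    return np.max(np.stack(frames), axis=0)

# Two tiny frames; each pixel keeps its brightest value over time:
f1 = np.array([[1, 5], [0, 2]])
f2 = np.array([[3, 1], [4, 0]])
img = maxpixel([f1, f2])
```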

Results
To compare the execution times of the tested algorithms, we used a personal computer with an Intel Core i5-3210M 2.5 GHz x4 processor, 16 GB of DDR3/1600 MHz memory and an NVIDIA GeForce GT 635M 2 GB GDDR5 graphics card. To include UFOCapture in the test, we developed a custom virtual DirectShow camera. As we can see in Table 1, the implementation of the proposed algorithm significantly reduces the time needed to process one frame of a video sequence. Note that the time needed to build the model is 1.19 s, so when we update the model once per 36,600 frames (i.e., ten minutes of recording), this adds an overhead of 0.03 ms per processed frame. Table 2 summarizes the results of the particular algorithms. Following our hypothesis that shorter events are more difficult to detect, we performed a test on the subset of video sequences containing events longer than 25 frames. The results of those tests are summarized in Table 2b, and one can see a significantly lower number of FN detections for events longer than 25 frames.
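The per-frame overhead figure follows directly from the numbers above; the short computation below just retraces that arithmetic.

```python
# Model rebuild cost amortized over the update interval,
# using the values stated in the text (61 fps, 10 min, 1.19 s build):
fps = 61
update_period_s = 600                      # model renewed every ten minutes
frames_per_update = fps * update_period_s  # 36,600 frames between rebuilds
model_build_s = 1.19
overhead_ms = model_build_s / frames_per_update * 1000.0  # ~0.03 ms/frame
```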
Furthermore, we evaluated the ability of the algorithms to find a meteor event (i.e., a streak of light in consecutive frames) in a video sequence. Our algorithm was able to detect all meteors in the video sequences; the currently used algorithm missed two meteors, while the algorithm based on RMS missed three. UFOCapture missed only one meteor, but it also produced a high number of false positives. It is worth noting that we also had the opportunity to investigate the usability of the tracking algorithm incorporated in the University of Western Ontario processing pipeline for video sequences with high temporal resolution. This algorithm is an evolution of the Astrobiology Instrumentation for Meteor Imaging and Tracking system [50]. It has an advantage in processing time compared with the proposed algorithm: the time to process a single frame is 6.3 ms. However, this algorithm requires an accurate setting of the input parameters for each video sequence, which has a significant effect on the precision of meteor detection. The algorithm proposed in this paper removes this disadvantage.

Conclusions
This paper focused on methods of meteor detection in video sequences with high frame rates. We proposed a frame classification algorithm based on the comparison between the temporal statistical characteristics of a pixel and a model built on the relation between the mean and the standard deviation of the pixel.
The results show high performance in both accuracy and speed. The precision of the proposed algorithm is 0.9881, and the recall is 0.8503. The proposed algorithm is significantly faster than state-of-the-art algorithms. The GPU implementation reduced the processing time of a single frame to 10.3 ms, which means that it is possible to process a single frame in less than the exposure time (16.4 ms at a frame rate of 61 fps). The parameters of this implementation need further investigation to obtain the best trade-off between accuracy and speed.