Article

An Image-Free Single-Pixel Detection System for Adaptive Multi-Target Tracking

1 School of Instrument Science and Opto-Electronic Engineering, Beijing Information Science and Technology University, Beijing 100192, China
2 Department of Precision Instrument, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2025, 25(13), 3879; https://doi.org/10.3390/s25133879
Submission received: 22 May 2025 / Revised: 19 June 2025 / Accepted: 20 June 2025 / Published: 21 June 2025

Abstract

Conventional vision-based sensors face limitations such as low update rates, restricted applicability, and insufficient robustness in dynamic environments with complex object motions. Single-pixel tracking systems offer high efficiency and minimal data redundancy by directly acquiring target positions without full-image reconstruction. This paper proposes a single-pixel detection system for adaptive multi-target tracking based on the geometric moment and the exponentially weighted moving average (EWMA). The proposed system leverages geometric moments for high-speed target localization, requiring merely 3N measurements to resolve centroids for N targets. Furthermore, the output values of the system are used to continuously update the weight parameters, enabling adaptation to varying motion patterns and ensuring consistent tracking stability. Experimental validation using a digital micromirror device (DMD) operating at 17.857 kHz demonstrates a theoretical tracking update rate of 1984 Hz for three objects. Quantitative evaluations under 1920 × 1080 pixel resolution reveal a normalized root mean square error (NRMSE) of 0.00785, confirming the method’s capability for robust multi-target tracking in practical applications.

1. Introduction

Multi-target tracking technology is a core requirement in many modern applications, such as drone navigation, autonomous driving and remote sensing [1,2,3,4,5,6,7,8]. These tasks typically demand high localization accuracy, low latency, and energy-efficient hardware. Existing multi-target tracking systems are generally categorized into two groups: image-based and image-free. With the advancement of computer vision, image-based target tracking has become widely adopted in daily life due to its cost-effectiveness. High-speed cameras [9,10], which rely on image-based methods, are widely used to continuously capture scene images and extract targets for position estimation.
However, achieving high-precision localization with high-speed cameras often requires considerable computational resources. Higher precision requirements lead to more complex algorithms and increased processing time. In image-based tracking, the contradiction between temporal resolution and spatial resolution is a persistent challenge. Some researchers have focused on refining the hardware design, providing alternatives to traditional software processing, and working to mitigate the conflict between temporal and spatial resolution in target tracking [11,12,13]. Wei et al. [12] combined a line-by-line structure with an electronic rolling shutter technique, employing hardware logic circuits instead of software code for real-time information processing. Teman et al. [13] employed a CMOS sensor window readout mode to focus on the region of interest, thus reducing the computational burden. However, these approaches were still limited to tracking targets through imaging, and the aforementioned conflict has not been effectively addressed. As a device capable of high-frequency modulation of optical fields, the digital micromirror device (DMD) has attracted increasing attention [14,15,16]. Researchers have attempted to integrate it with a single-pixel detector for data acquisition, enabling image-free tracking [17,18,19]. Scholars have explored various coding masks to obtain the trajectory of the target without reconstructing the image [20,21,22]. Zhang et al. [21] used Fourier basis patterns to illuminate targets and captured the optical signals through a single-pixel detector, achieving a tracking update rate of up to 1666 Hz. Yang et al. [22] proposed a tracking method based on the discrete cosine transform, achieving target tracking at only 0.59% of the Nyquist–Shannon sampling rate and reducing measurement time in complex backgrounds. Furthermore, some scholars have used geometric moments for centroid localization, significantly reducing the number of masks and improving the tracking update rate [23,24,25,26].
The rapid advancements in high-speed, high-precision single-target tracking with single-pixel detection have laid a solid foundation for further research in multi-target tracking. Some scholars extended Fourier spectral properties to multi-target tracking [27,28,29]. Zhang et al. [28] established equations based on the correspondence between object displacement and Fourier phase, solving them to obtain multiple target positions for tracking. Yu et al. [29] optimized the selection of speckle patterns using Fourier spectrum characteristics, effectively reducing the sampling rate for localization, and combined geometric moments for multi-target positioning. Others have proposed projection methods that convert two-dimensional image information into projection curves, enabling multi-target localization [30,31]. Zheng et al. [31] designed two sets of patterns using a DMD, which were projected onto targets at different angles, and calculated the positions of individual targets from the obtained projection curves.
However, existing single-pixel-based multi-target localization systems face limitations such as low update rates, restricted applicability, and insufficient robustness in dynamic environments with complex object motions. In [31], in order to obtain the positions of multiple targets at a given moment, a large number of masks need to be projected. The number of masks required is related to the imaging size of the targets on the DMD. This process is time-consuming and limits the update rate of the method. In [28], the windowing method used suffers from some hysteresis and may not accurately track the target during rapid motion changes. The method maps target changes to the Fourier domain, treating horizontal and vertical displacements as unknowns in a system of nonlinear equations. However, Fourier-domain measurement errors are often unavoidable, and even small errors can cause significant deviations in the solution. These deviations may lead to results far from the actual values or, in some cases, no solution at all, reducing the method’s stability.
To address these challenges, we present an image-free multi-target tracking system based on single-pixel detection and an adaptive exponentially weighted moving average (EWMA) framework. The system employs DMD as a spatial light modulator to establish a direct mapping between modulation patterns and spatial moment values, thereby integrating sensing and computation into a unified process. This approach significantly reduces data storage requirements and computational latency compared to conventional image reconstruction methods. The tracking process begins by estimating the initial positions of multiple targets using Radon projections, which capture projection curves from multiple angular perspectives. The adaptive EWMA algorithm then generates dynamically updated tracking windows, which are applied in conjunction with a centroid-based moment localization method to extract the precise coordinates of each target. To further improve performance in scenarios involving closely spaced targets, we introduce an EWMA-based joint-window localization strategy. This method combines real-time measurements with motion predictions, continuously adjusting weighting parameters based on tracking discrepancies. The iterative update mechanism forms a positive feedback loop, enabling robust, high-precision tracking even under significant spatial interference. Experimental results demonstrate that the proposed system effectively performs the adaptive tracking of multiple targets across different rates and relative positions. For N targets, it requires only 3N measurements to obtain the position of each target.

2. Principle and Method

2.1. Single-Pixel System Design for Multi-Target Tracking

The system schematic proposed in this paper is shown in Figure 1a. The core principle of the system is to use a DMD to load various encoded masks that modulate the optical field of the target. A photomultiplier tube (PMT) then collects the modulated light and converts it into an electrical signal, which is subsequently digitized and transferred to a computer for processing.
A crucial step in the measurement process is the generation of mask patterns by the DMD (Figure 1c), which modulates the light field of a scene containing multiple targets. Initially, the DMD performs projections of the targets from multiple angles, and the combined projection results are used to estimate the initial position of each target. Then, a window function is assigned to each target to separate them spatially. By integrating a geometric moment localization algorithm, the multi-target localization task is decomposed into several independent single-target localization problems. Because the PMT responds far faster than the modulator, the time for each measurement is determined by the DMD’s flipping speed, namely 56 µs per flip.
However, the deflection range of the DMD micromirrors is only 24° (±12° about the hinge axis), making it challenging to fully separate the incident and reflected light paths in the optical design. Extending the optical path to achieve separation would significantly increase the overall size of the system. To address this issue, this study introduces a total internal reflection (TIR) prism [32]. By efficiently redirecting the reflected light within the limited angular range, the TIR prism enables the effective spatial decoupling of the incident and reflected beams (Figure 1b). This not only solves the beam separation problem but also significantly improves the compactness and integration of the system. The proposed optical configuration provides strong support for the development of miniaturized, modular single-pixel detection systems.

2.2. DMD-Based Multi-Target Localization

2.2.1. Initial Multi-Target Localization Based on Radon Projection

To obtain the initial positions of multiple targets, we use the Radon projection method for localization. Radon projection enables the transformation of high-dimensional image data into low-dimensional projection data. The core concept involves performing line integrals of a two-dimensional image along specific angles. This process sums the grayscale values of each pixel along the projection direction, generating one-dimensional projection values, as illustrated in Figure 2a.
The rapid flipping of micromirrors in the DMD enables projection modulation of the target light field from different angles. Micromirrors arranged at predefined angles flip in sequence, thereby scanning the entire DMD. As illustrated in Figure 2b, the DMD reflects the scene light field at 0°, 45°, 90°, and 135°, producing projection curves at these four angles. The scene light field containing the target is focused onto the DMD through the lenses. Micromirrors, arranged at predetermined angles, flip column by column to achieve modulation at the specified projection angle. The DMD modulates the light at this angle, which is collected by a photomultiplier tube and processed by a computer to generate the projection curve. This projection curve reflects the intensity variations at the corresponding positions in the image, providing information on the light intensity distribution.
By combining projection curves from multiple angles, the approximate positions of objects can be determined. We consider a specific region to contain an object if it yields non-zero values in the projections from all angles. This approach is particularly useful for distinguishing between multiple objects, as the peaks in the projection curves reveal the relative positions of the objects. For closely spaced targets, choosing projections from different angles helps mitigate the effects of proximity, thereby enhancing localization accuracy.
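To make the projection-and-intersection step concrete, the following is a minimal numerical sketch (our illustration, with hypothetical function names, not the authors’ code): the 0° and 90° curves are column and row sums, the 45° and 135° curves are sums over the two diagonal families, and a pixel is kept as a target candidate only if all four curves are non-zero at its projection bins.

```python
import numpy as np

def radon_projections(scene):
    """Projection curves of a 2D intensity scene at 0, 45, 90 and 135 degrees.
    0 deg sums along columns (profile over x) and 90 deg along rows (profile
    over y); the 45/135 deg curves sum over the two diagonal families."""
    h, w = scene.shape
    p0 = scene.sum(axis=0)                                    # length w
    p90 = scene.sum(axis=1)                                   # length h
    p45 = np.array([np.trace(scene, offset=k) for k in range(-h + 1, w)])
    p135 = np.array([np.trace(scene[::-1], offset=k) for k in range(-h + 1, w)])
    return p0, p45, p90, p135

def candidate_mask(scene, eps=1e-6):
    """Keep pixels whose projection bins are non-zero at all four angles;
    connected blobs of the returned mask seed the tracking windows."""
    h, w = scene.shape
    p0, p45, p90, p135 = radon_projections(scene)
    y, x = np.mgrid[0:h, 0:w]
    keep = (p0[x] > eps) & (p90[y] > eps)
    keep &= p45[(x - y) + (h - 1)] > eps       # diagonal bin of pixel (y, x)
    keep &= p135[x + y] > eps                  # anti-diagonal bin
    return keep
```

Labeling the connected components of the returned mask (e.g., with scipy.ndimage.label) and averaging each blob’s extents gives initial window centers analogous to those reported in Table 1.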

2.2.2. Continuous Tracking via Geometric Moments

For a 2D continuous function G(x,y), the formula for calculating its centroid in a Cartesian coordinate system can be expressed as follows:
x_c = \frac{\iint x \, G(x, y) \, dx \, dy}{\iint G(x, y) \, dx \, dy}, \qquad y_c = \frac{\iint y \, G(x, y) \, dx \, dy}{\iint G(x, y) \, dx \, dy}

where x_c and y_c represent the centroid coordinates of the desired region. The principle behind this calculation is to perform a weighted average over the spatial positions of the function. When the region of interest is a discrete area D(x,y), the integral operations can be converted into summations. The centroid formulas are as follows:

x_c = \frac{\sum_x \sum_y x \cdot D(x, y)}{\sum_x \sum_y D(x, y)}, \qquad y_c = \frac{\sum_x \sum_y y \cdot D(x, y)}{\sum_x \sum_y D(x, y)}
When the target scene is imaged on the discrete micromirror array of the DMD, the scene’s light field D(i,j) represents the light intensity at position (i,j). On a DMD of size I × J, (i,j) corresponds to the coordinates of each micromirror. By controlling the DMD to generate a position-encoded mask L(i,j), each position’s light intensity D(i,j) serves as a weight. The modulation process is achieved by computing the weighted sum over the scene’s light field area. Position information in the x-axis and y-axis directions is represented by L_1 and L_2, respectively, as shown in the following equations:

L_1 = \begin{pmatrix} 1 & 2 & \cdots & I \\ 1 & 2 & \cdots & I \\ \vdots & \vdots & & \vdots \\ 1 & 2 & \cdots & I \end{pmatrix}_{I \times J}, \qquad L_2 = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 2 & 2 & \cdots & 2 \\ \vdots & \vdots & & \vdots \\ J & J & \cdots & J \end{pmatrix}_{I \times J}
The above matrices can be represented by grayscale coding masks. However, the DMD can only generate a binary coding mask instead of a grayscale coding mask. To address this problem, we applied a spatial dithering method to produce grayscale coding masks [33]. This method quantizes the grayscale coding mask into 0 and 1 based on a threshold, and then the quantization error is diffused to adjacent pixels to reduce the error in local regions, achieving binarization of the grayscale mask, as shown in Figure 3a. The DMD is controlled to flip according to the binarized mask, with two flip angles corresponding to two gray values.
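As an illustration of the dithering step, below is a minimal error-diffusion sketch using the classic Floyd–Steinberg weights; the paper itself uses Ostromoukhov’s variable-coefficient algorithm [33], so the fixed 7/16, 3/16, 5/16, 1/16 weights here are a simplification.

```python
import numpy as np

def error_diffusion_binarize(gray, threshold=0.5):
    """Quantize a grayscale mask (values in [0, 1]) to {0, 1}, diffusing
    each pixel's quantization error to its unvisited neighbors so that
    the local average intensity is preserved."""
    g = gray.astype(float).copy()
    h, w = g.shape
    out = np.zeros_like(g)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if g[y, x] >= threshold else 0.0
            err = g[y, x] - out[y, x]
            if x + 1 < w:                g[y, x + 1] += err * 7 / 16
            if y + 1 < h and x - 1 >= 0: g[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:                g[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:  g[y + 1, x + 1] += err * 1 / 16
    return out
```

The binarized masks drive the two micromirror flip angles, one gray value per angle.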
The centroid of the scene light field D(i,j) can be calculated as follows:
x_c = \frac{\sum_{\omega} L_1(i, j) \cdot D(i, j)}{\sum_{\omega} D(i, j)}, \qquad y_c = \frac{\sum_{\omega} L_2(i, j) \cdot D(i, j)}{\sum_{\omega} D(i, j)}

where ω represents the modulated window region on the DMD. By utilizing two encoded masks, L_1 and L_2, generated by the DMD, the scene light field D(i,j), containing target information, is modulated. A photodetector sequentially collects and measures the light intensities, thereby achieving the summation operations in the formulas, with the modulated light intensities denoted as S_1 and S_2:

S_1 = \sum_{\omega} D(i, j) \cdot L_1(i, j), \qquad S_2 = \sum_{\omega} D(i, j) \cdot L_2(i, j)

S_3 = \sum_{\omega} D(i, j)

After the DMD rapidly flips to modulate the light field, the photodetector instantly receives the reflected light and calculates the centroid of the target. The calculated centroid coordinates (x_c, y_c) can be expressed as follows:

x_c = \frac{S_1}{S_3}, \qquad y_c = \frac{S_2}{S_3}
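The three-measurement recipe is easy to simulate; the sketch below (again our illustration, with hypothetical names) builds the two position-encoding masks, applies an optional window w(i,j), and recovers the centroid as (S1/S3, S2/S3). In hardware, the grayscale masks would first be binarized with the dithering step above.

```python
import numpy as np

def centroid_from_three_measurements(scene, window=None):
    """Simulate the three DMD modulations S1, S2, S3 and return the
    centroid (S1/S3, S2/S3). `scene` is the light field D(i, j);
    `window` is an optional boolean mask w(i, j) restricting the
    measurement to a single tracking window."""
    rows, cols = np.indices(scene.shape)
    L1 = cols + 1                        # x-position encoding mask
    L2 = rows + 1                        # y-position encoding mask
    w = np.ones_like(scene) if window is None else window
    S1 = np.sum(scene * w * L1)          # one DMD flip + one PMT sample each
    S2 = np.sum(scene * w * L2)
    S3 = np.sum(scene * w)
    return S1 / S3, S2 / S3              # (x_c, y_c)
```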

2.3. Multi-Target Window Tracking Method Based on Adaptive EWMA

When the light field containing multiple targets is projected onto the DMD, the geometric moment calculation will yield the average centroid of multiple objects. To acquire the individual positions of each target, we propose a window-based geometric moment (WGM) localization method, converting the multi-target localization problem into multiple single-target localization problems. We first obtain each target’s initial position using the Radon projection method mentioned earlier and then apply a window around each target’s corresponding position. As shown in Figure 3c, white dots on the DMD represent micromirrors flipped to a positive angle, while black dots represent those flipped to a negative angle. The photodetector is positioned to receive only light reflected by micromirrors set to a positive angle. A rectangular window is centered on the target’s initial location, and a new modulation mask is generated within this window on the DMD. Micromirrors outside the window are flipped to a negative angle to isolate the interference of light intensity from other targets. The modulation can be expressed as
S_{w1} = \sum_{\omega} D(i, j) \cdot w(i, j) \cdot L_{w1}(i, j)

S_{w2} = \sum_{\omega} D(i, j) \cdot w(i, j) \cdot L_{w2}(i, j)

S_{w3} = \sum_{\omega} D(i, j) \cdot w(i, j)

where w(i,j) is the window function and L_{w1}(i,j) and L_{w2}(i,j) are the grayscale masks generated within the window. Each target requires only three flips to determine its position. We employ the centroid position as the center of the window function at the next moment, which allows continuous tracking.
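In terms of the sketch above, the window function is just the boolean mask argument, so one pass over N targets costs 3N measurements:

```python
# windows[t]: boolean mask centered on target t's last estimated position
positions = [centroid_from_three_measurements(scene, w) for w in windows]
```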
However, this tracking method cannot track fast-moving targets due to its lag. Furthermore, when the targets come close to each other, multiple objects may appear within a single window, causing interference, as shown in Figure 4a. To address this problem, we propose an adaptive multi-target window tracking method based on EWMA. EWMA is a statistical technique that smooths measurements and predictions, often used to monitor changing processes [34]. Leveraging the fast flipping frequency of the DMD, it can be assumed that the speed ratio of the target over a short period of consecutive measurements remains constant. Therefore, we can make a prediction based on the previous speed and obtain a predicted position. When objects are far apart, we allocate weights between the measured centroid P_obj(t_i) and the predicted centroid P_obj(t_{i+1}) according to the EWMA weighting parameter. The computed value is taken as P_window(t_{i+1}), which serves as the center of the window function for the next moment, eliminating lag. The weight parameter is then adjusted based on the difference between the subsequent measured centroid P_obj(t_{i+1}) and P_window(t_{i+1}), achieving positive feedback. When the targets come close to each other, we propose a joint tracking method combined with EWMA, which effectively mitigates interference and allows for stable tracking. The derivation of joint tracking is as follows.
For a scene with N targets, at time t_0, we predict that n targets might move into close proximity at t_0 + Δt, causing them to appear in the same window. Consider Δt as a detection cycle and assume that it takes T_0 for the DMD to flip once. Then, a detection cycle can be expressed as

\Delta t = 3N \cdot T_0
At and before the moment t_0, each window contains only one object, and the positions of all objects are known. After the moment t_0, the n objects move into a single window in which the centroid of each object can no longer be obtained directly. Denote the n objects as g_1 to g_n, where g_i is the i-th object. Taking the horizontal direction as an example, the average horizontal displacement of the n objects over the time period (t_0, t_0 + (q_j/N)·Δt) is Δx̄(t_0 + (q_j/N)·Δt), where q_j denotes the serial number, among the N sub-intervals of a detection cycle, of the time period corresponding to each of these n measurements. Similarly, Δx̄(t_0 − Δt + (q_j/N)·Δt) denotes the average horizontal displacement of the n objects over the time period (t_0 − Δt, t_0 − Δt + (q_j/N)·Δt).
Because of the high flip frequency of the DMD, two adjacent displacements separated by the same time interval Δt can be considered to be in a fixed proportion, as in the following equation:

k_{g_i}(t_0 + \Delta t) = \frac{\Delta x_{g_i}\left(t_0 + \frac{q_j}{N} \Delta t\right)}{\Delta x_{g_i}\left(t_0 - \Delta t + \frac{q_j}{N} \Delta t\right)}
For object g_i, its displacement during the measurement process is distributed proportionally across multiple intervals, indexed by q_j, where q_j takes values from 1 to N. The parameter j ranges from 1 to n, indexing the n proportional displacement measurements. The value q_j divides the time span into fractional steps within the measurement intervals. The displacement Δx_{g_i}(t_0 + (q_j/N)·Δt) corresponds to the motion of g_i during each of these n measurements relative to t_0, while Δx_{g_i}(t_0 − Δt + (q_j/N)·Δt) represents the displacement during the previous interval Δt relative to (t_0 − Δt). These proportional displacements, spaced by Δt, form n pairs, and their ratio is denoted as k_{g_i}(t_0 + Δt). Since the displacement Δx_{g_i}(t_0 − Δt + (q_j/N)·Δt) belongs to the known time before t_0, it can be multiplied by the ratio k_{g_i}(t_0 + Δt) to estimate the unknown displacement Δx_{g_i}(t_0 + (q_j/N)·Δt).
After the moment t_0, we can only obtain the mean horizontal centroid of the n objects at the moment (t_0 + (q_j/N)·Δt) through the large window; subtracting the mean horizontal centroid at t_0 from it gives the mean displacement Δx̄(t_0 + (q_j/N)·Δt) over this time period. The expression for the mean displacement then follows from the centroid formula:

\Delta \bar{x}\left(t_0 + \frac{q_j}{N} \Delta t\right) = \frac{S_{g_1} \cdot \Delta x_{g_1}\left(t_0 + \frac{q_j}{N} \Delta t\right) + \cdots + S_{g_n} \cdot \Delta x_{g_n}\left(t_0 + \frac{q_j}{N} \Delta t\right)}{S_{g_1} + \cdots + S_{g_n}}
where S_{g_i} is the reflected light intensity of target g_i, used as a weight to compute the average displacement of all n objects over the period (t_0, t_0 + (q_j/N)·Δt). Each object g_i corresponds to a parameter k_{g_i}(t_0 + Δt), and combining (15) and (16) yields n equations in the n unknown parameters k. Solving this system gives k_{g_i}(t_0 + Δt) for each object, and back-substitution into (15) yields the calculated displacement:

\Delta x_{g_i}(t_0 + \Delta t)_{\mathrm{mea}} = k_{g_i}(t_0 + \Delta t) \cdot \Delta x_{g_i}(t_0 - \Delta t)
The horizontal displacement of object g_i over the period (t_0, t_0 + Δt) is thus obtained, achieving independent tracking even when multiple objects share one window. However, solving the equations for the target displacements demands high accuracy of the known parameters. If the value measured through the centroid moment deviates substantially from the true value, the error propagates through the solution and can yield an incorrect result. We therefore correct the measured values using the proposed adaptive EWMA method.
In the experiment, the trajectory of object g_i before the moment t_0 is recorded, and a predicted value Δx_{g_i}(t_0 + Δt)_pre is derived from it. The computed value Δx_{g_i}(t_0 + Δt)_mea obtained by solving the equations is then weighted against the prediction as follows:

\Delta x_{g_i}(t_0 + \Delta t)_{\mathrm{real}} = \alpha_{g_i}(t_0 + \Delta t) \cdot \Delta x_{g_i}(t_0 + \Delta t)_{\mathrm{mea}} + \left(1 - \alpha_{g_i}(t_0 + \Delta t)\right) \cdot \Delta x_{g_i}(t_0 + \Delta t)_{\mathrm{pre}}

where α_{g_i}(t_0 + Δt) is the smoothing coefficient for object g_i during the interval (t_0, t_0 + Δt), with value range (0, 1]. In practice, the smoothing coefficient is adaptively adjusted according to the motion dynamics of the target. For instance, when the object exhibits rapid or abrupt motion changes, increasing the smoothing coefficient allows the system to respond more promptly to the measurement, thereby better adapting to fast variations. Δx_{g_i}(t_0 + Δt)_real is used as the final displacement value. To bring the obtained value closer to the true value, the smoothing coefficient is continuously adjusted according to the difference between the measured and predicted values:

\alpha_{g_i}(t_0 + 2\Delta t) = \gamma \cdot \left| \Delta x_{g_i}(t_0 + \Delta t)_{\mathrm{mea}} - \Delta x_{g_i}(t_0 + \Delta t)_{\mathrm{pre}} \right| + \alpha_{g_i}(t_0 + \Delta t)

where α_{g_i}(t_0 + 2Δt) is the smoothing coefficient of object g_i in the horizontal direction for the next time period and γ is the gain factor. If the difference between the measured and predicted values is large, the motion state of object g_i in the horizontal direction is changing drastically; increasing the smoothing coefficient at the next moment allows the system to respond faster to changes in the measured value.
If the difference between the measured and predicted values is small, the motion state of object g_i in the horizontal direction is changing gently, and the system updates smoothly. Varying the smoothing coefficient adaptively keeps the estimates close to the real motion. The same procedure applies to the vertical direction.
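A compact sketch of one adaptive-EWMA step follows (our notation; the clipping bounds are our assumption, since the text only states that the coefficient lies in (0, 1]):

```python
def ewma_step(dx_mea, dx_pre, alpha, gamma, alpha_min=0.05, alpha_max=1.0):
    """One adaptive-EWMA update per target and per axis: blend the measured
    and predicted displacements, then grow the smoothing coefficient in
    proportion to their disagreement (positive feedback)."""
    dx_real = alpha * dx_mea + (1.0 - alpha) * dx_pre
    alpha_next = gamma * abs(dx_mea - dx_pre) + alpha
    alpha_next = min(max(alpha_next, alpha_min), alpha_max)  # keep in (0, 1]
    return dx_real, alpha_next
```

With settings like those used in the experiments (initial α between 0.4 and 0.7 depending on target speed, and a small gain factor γ), the coefficient rises after abrupt motion and stays low during smooth motion.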
The four objects are tracked locally, as shown in Figure 4b. At t_0, we predict from the trajectories that window N_2 and window N_4 will cross in the next detection cycle, so the system will not be able to obtain the centroids of the two objects directly in that cycle. We merge the two small windows into a large one and calculate the average centroid of the two objects instead.

The objects in windows N_2 and N_4 are designated as g_1 and g_2, respectively. The flowchart for the adaptive EWMA method is shown in Figure 5, illustrating the tracking process for the horizontal positions of g_1 and g_2.

The first step involves measurement and prediction. First, the scaling parameters are set based on the displacement values of the previous detection cycle: the displacement over a past time period stands in for the displacement over the corresponding future period. Since N_2 and N_4 share the same window, centroid measurements of g_1 and g_2 are taken at the midpoint and end of the detection cycle, yielding the average centroids of g_1 and g_2 at these two moments. The differences of these average centroids relative to t_0 give the average centroid displacements of the two objects during the intervals (t_0, t_0 + Δt/2) and (t_0, t_0 + Δt). Two equations for the average centroid displacements are formulated, weighted by the total reflected light intensities of g_1 and g_2; the two unknowns are the scale parameters set beforehand. Solving the system of equations yields the displacement Δx_mea of each individual object. Meanwhile, the motion state of each object is judged from the previous three detection cycles to predict the displacement Δx_pre. In the second step, the adaptive EWMA method is employed to obtain a displacement value closer to the true value: an appropriate smoothing factor is chosen as in (18) to allocate weights between the measured and predicted values, and the smoothing factor is then updated for the next moment according to their difference. Finally, the horizontal displacements of g_1 and g_2 over the respective time intervals are obtained, yielding outputs Δx_{g_1}(t_0 + Δt) and Δx_{g_2}(t_0 + Δt). Additionally, Δx_{g_1}(t_0 + Δt/2) and Δx_{g_2}(t_0 + Δt/2) are calculated and used for the next prediction and measurement cycle, enabling continuous high-precision tracking.
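Under our reading of the derivation, the two-object case of Figure 5 reduces to a 2 × 2 linear system in the ratios (k_{g_1}, k_{g_2}); the sketch below is a reconstruction on that assumption, with hypothetical variable names, not the authors’ implementation.

```python
import numpy as np

def solve_joint_ratios(mean_disp, prev_disp, intensities):
    """Solve for the displacement ratios of two objects sharing one window.

    mean_disp   : (2,) intensity-weighted mean displacements measured at the
                  midpoint and end of the current cycle, relative to t0
    prev_disp   : (2, 2) known displacements over the corresponding
                  sub-intervals of the previous cycle; prev_disp[j, i] is
                  object g_(i+1) during sub-interval j
    intensities : (2,) reflected intensities S_g1, S_g2 used as weights
    """
    S = np.asarray(intensities, dtype=float)
    A = np.asarray(prev_disp, dtype=float) * S[None, :] / S.sum()
    return np.linalg.solve(A, np.asarray(mean_disp, dtype=float))

# Multiplying each object's previous full-cycle displacement by its ratio
# gives dx_mea, which is then blended with its prediction via ewma_step().
```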
Therefore, the proposed method continuously monitors the relative distance between objects, enabling the adaptive tracking of multiple moving targets. For a DMD with a flip frequency of f, the localization update rate for N targets can reach f/(3N).
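As a worked example with the hardware used in Section 3: f = 17.857 kHz and N = 3 give an update rate of 17,857/(3 × 3) ≈ 1984 Hz, matching the theoretical rate quoted in the Abstract, with one detection cycle lasting Δt = 9 × 56 µs ≈ 504 µs.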

3. Simulation and Experiment

3.1. Simulation

3.1.1. Radon Projection

The process of Radon projection is shown in Figure 6. Firstly, Radon projection is performed on the target from angles of 0°, 45°, 90°, and 135°. The projection curves of the target region are obtained from multiple angles. Then, region extraction is performed on the basis of the intensity of these curves. Regions with non-zero projection values indicate the presence of a target at those angles. Subsequently, areas that overlap from different angles are marked. Positions where the projection curves exhibit non-zero values across all four angles are identified as target areas.
We measure the target dimensions on the basis of the size of the identified target region. The center position of the window function corresponding to the target is determined by averaging the projection region’s values in the horizontal and vertical directions. The measurement results are shown in Table 1. The results indicate that the center of the window is close to the object’s centroid. Selecting an appropriately sized window ensures that the target remains fully captured.

3.1.2. Multi-Target Window Tracking Method Based on Adaptive EWMA

Upon ascertaining the positions of multiple targets, subsequent tracking can be achieved using the WGM localization method. Considering the varying distances between targets, we categorize the tracking into independent tracking and joint tracking. When the targets are far apart, they do not interfere with each other, and each window function corresponds to a single target for independent tracking. When the targets are close together, multiple targets may appear within a single window, making it impossible to directly calculate the centroid of each target. In this case, we employ the EWMA-based WGM method, which combines the information of the targets for computation, to track continuously. To validate the proposed method, we performed a tracking simulation of multiple moving objects, realized by sequentially inputting 60 pictures. There are four objects in each picture, each with a diameter of 40 pixels, and their motion trajectories are shown in Figure 7a. During the motion, objects N_2 and N_4 come into close proximity to each other.
We use the improved WGM-based localization method for tracking, which integrates the adaptive EWMA principle, as illustrated in Figure 7b. In independent tracking, the EWMA-based WGM method predicts the position at t_{i+1} using the trajectory before t_i, ensuring that the target remains centered within the window rather than drifting outside it. In joint tracking, the EWMA-based WGM method dynamically monitors the distance between objects. When objects come close, two smaller windows merge into a larger one, integrating multiple data sources for computation and localization.
We track four objects, and the localization accuracy of objects N_2 and N_4 is illustrated in Figure 7c. The two objects meet at frame 33 and separate at frame 40. Prior to their encounter, both methods demonstrate stable and reliable tracking performance. As the objects approach each other, the standalone WGM method fails to maintain continuous tracking and eventually loses track of object N_4. In contrast, the EWMA-based WGM method ensures stable and high-precision tracking throughout the process. The tracking results for the four moving targets are shown in Table 2. The EWMA-based WGM method exhibits high robustness regardless of the relative distances between the targets, enabling consistent and stable tracking performance.

3.2. Experiment

We further validated the proposed method through experiments, and the experimental setup is shown in Figure 8. A light source combined with a collimator produces parallel light that is projected onto the micromirrors of DMD1 (DLP7000, Texas Instruments, Dallas, TX, USA). DMD1 has a resolution of 1024 × 768 and a maximum flipping frequency of 22.2 kHz. The rapid flipping of its micromirrors is used to simulate the light field of multiple moving objects. The optical information of the objects is focused onto DMD2 (DLP9500, Texas Instruments, Dallas, TX, USA) through the imaging lenses. DMD2 has a resolution of 1920 × 1080 and a maximum flipping frequency of 17.857 kHz and is used to modulate the illumination light field. The frame loading frequency of DMD1 is set to match the measurement cycle of DMD2. The modulated light is directed by a total internal reflection prism and then collected and detected by a photomultiplier tube (PMT1001/M, Thorlabs, Newton, NJ, USA). The detected light information is transmitted to the computer (Intel(R) Core(TM) i5-10210U CPU) through a data acquisition system (USB3133A, ART Technology, Beijing, China). The reflected light intensity is recorded synchronously using data acquisition software.
To test the accuracy of the proposed system in tracking objects with different motion states, we designed tracking experiments with objects moving at different speeds. The objects were simulated using DMD1. The frequency of target image loading was kept consistent in DMD1, and the displacement between two consecutive frames was defined as the object’s speed, measured in pixels per frame. We conducted the tracking on Object 1 (triangle) and Object 2 (circle), and each object occupied an area of approximately 50 × 50 pixels on DMD1. Their motion trajectories are shown in Figure 9a. DMD2 was used for modulation and calibrated with DMD1 for position alignment.
We conducted three tracking experiments. The speeds of the two objects were set at 5 pixels per frame, 10 pixels per frame, and 20 pixels per frame. For each speed scenario, we verified that selecting an appropriate initial smoothing coefficient significantly improves tracking accuracy. As the object speed increased, the initial value of the smoothing coefficient was set to 0.4, 0.5, and 0.7, respectively. A larger smoothing coefficient allows the system to respond more quickly to current measurements, which is beneficial in high-speed scenarios. Meanwhile, the gain factor was kept small to ensure gradual adjustment of the smoothing coefficient, thereby maintaining tracking stability and avoiding abrupt fluctuations due to noise. The real trajectories of the targets and the tracking results at different speeds are shown in Figure 9b. To assess the tracking accuracy of the proposed method, we adopted the normalized root mean square error (NRMSE) as our performance metric. Compared to the root mean square error (RMSE) and mean absolute error (MAE), NRMSE stands out due to its normalization, which is particularly advantageous in high-resolution settings (1920 × 1080 pixels). Moreover, by reflecting errors in proportion to the target’s motion range, NRMSE offers a more balanced evaluation of tracking accuracy. This metric is widely recognized in the tracking literature for its robust interpretability [29], defined as follows:
\mathrm{NRMSE} = \frac{\sqrt{\frac{1}{M} \sum_{j=1}^{M} \left[ \left( x_t(j) - x_e(j) \right)^2 + \left( y_t(j) - y_e(j) \right)^2 \right]}}{\sqrt{\frac{1}{M} \sum_{j=1}^{M} \left[ x_t(j)^2 + y_t(j)^2 \right]}}
where (x_t, y_t) represents the true coordinates of the object’s movement, (x_e, y_e) represents the estimated coordinates, and M is the total number of frames. The denominator normalizes by the magnitude of the target’s position, ensuring that the error calculation is unaffected by the target’s location. We calculated the NRMSE at different speeds, as shown in Table 3. It can be observed that the proposed system achieves stable tracking for multi-target motion scenarios at various speeds.
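The metric is straightforward to compute from the logged trajectories; a minimal sketch:

```python
import numpy as np

def nrmse(true_xy, est_xy):
    """NRMSE between true and estimated trajectories, each of shape (M, 2).
    Numerator: RMS Euclidean position error; denominator: RMS magnitude of
    the true positions, which makes the score scale-free."""
    t = np.asarray(true_xy, dtype=float)
    e = np.asarray(est_xy, dtype=float)
    num = np.sqrt(np.mean(np.sum((t - e) ** 2, axis=1)))
    den = np.sqrt(np.mean(np.sum(t ** 2, axis=1)))
    return num / den
```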
We also experimented with more complex multi-target scenarios. The experiment increased the tracking target to three objects, and their trajectories are shown in Figure 10. Object 1 (triangle) and Object 2 (circle) will move closer to each other and eventually meet, while Object 1 and Object 3 (square) will move closer later and then meet. We selected 60 frames of measurement data, and the estimated trajectory is shown in Figure 11. In the figure, the red line represents the motion trajectory of Object 1, the blue line represents the motion trajectory of Object 2, and the green line represents the motion trajectory of Object 3.
The solid lines represent the real trajectories, while the colored data points correspond to the estimated values. As shown in Figure 11, the discrete trajectory formed by the estimated values matches the real trajectory well. We define the error as the absolute difference between the estimated coordinates and the real coordinates for each data point. The error charts for the x-axis and the y-axis are shown in Figure 12. It can be seen that the error does not change significantly with the variation in the coordinates. Object 1 and Object 3 meet each other in frame 12, followed by Object 1 and Object 2 meeting each other in frame 38. However, no significant errors are observed during these interactions. During the tracking of three moving objects, only nine masks were needed to obtain the current centroid of each target. Compared with previously reported single-pixel multi-target tracking methods, the proposed approach requires significantly fewer masks, offering a clear advantage in speed. In addition, we provide a more detailed evaluation of tracking accuracy. The NRMSE values for the three objects are calculated as 0.00850, 0.00839, and 0.00785. It can be concluded that the proposed method demonstrates strong robustness in multi-target tracking.

4. Discussion

During window-based tracking, keeping the window center close to the target centroid enhances both localization accuracy and tracking stability. To precisely adjust the window position, we introduced a weighting parameter α to balance the influence of the measured centroid P_obj(t_i) and the predicted centroid P_obj(t_{i+1}). Experiments show that when target motion is smooth, assigning a higher weight to the prediction yields better accuracy. In contrast, during abrupt motion, relying more on measurements proves more effective. The adaptive adjustment of α is thus critical for robust tracking.
When targets are in close proximity, we jointly estimate target positions and incorporate EWMA smoothing to enhance localization accuracy, enabling continuous tracking. However, the proposed method experiences a sharp increase in tracking error when occlusion occurs between targets, due to the loss of reliable centroid information. Addressing this limitation will be a key focus of future work, with efforts directed toward developing occlusion-resilient models for more robust multi-target tracking performance. Furthermore, since the proposed method relies on real-time control of the DMD for dynamic mask generation, this control latency was not accounted for in the current experimental results. Future research will focus on practical application and propose corresponding optimization strategies to address this limitation.

5. Conclusions

In this paper, we presented an adaptive EWMA-based multi-target tracking system using a single-pixel detector, achieving significant reductions in mask utilization while enhancing tracking update rates. By leveraging geometric moment theory, the proposed method resolves target centroids with only 3N measurements for N objects, outperforming conventional localization approaches in both computational efficiency and dynamic adaptability. Furthermore, the integration of EWMA enables continuous weight parameter updates through positive feedback, ensuring robust adaptation to diverse motion patterns and environmental perturbations. Numerical simulations reveal tracking accuracies of 0.7 pixels under complex motion states and 1.1 pixels in near-interference scenarios, validating the method’s resilience to target proximity. Experimental validation, conducted using a DMD operating at 17.857 kHz, achieves stable tracking of three moving targets with an NRMSE of 0.00785 under 1920 × 1080 pixel resolution. These results underscore the capability of the proposed system, which maintains high-speed detection and high-precision tracking in scenarios involving multiple targets with complex and time-varying trajectories.

Author Contributions

Conceptualization, Y.P., J.Y. and S.Y.; methodology, Y.P. and J.Y.; software, Y.P. and Y.F.; validation, Y.P., J.Y. and T.S.; formal analysis, F.X. and T.S.; investigation, Y.P., J.Y. and Y.F.; data curation, F.X. and T.S.; writing, Y.P. and J.Y.; supervision, S.Y. and T.S.; funding acquisition, S.Y. and T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) under Grant 62375022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request due to confidentiality requirements associated with the project.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EWMA: Exponentially weighted moving average
WGM: Window-based geometric moment
NRMSE: Normalized root mean square error
DMD: Digital micromirror device
PMT: Photomultiplier tube
TIR: Total internal reflection

References

1. Masmitja, I.; Martin, M.; O’Reilly, T.; Kieft, B.; Palomeras, N.; Navarro, J.; Katija, K. Dynamic robotic tracking of underwater targets using reinforcement learning. Sci. Robot. 2023, 8, eade7811.
2. Verma, V.; Maimone, M.W.; Gaines, D.M.; Francis, R.; Estlin, T.A.; Kuhn, S.R.; Rabideau, G.R.; Chien, S.A.; McHenry, M.M.; Graser, E.J.; et al. Autonomous robotics is driving Perseverance rover’s progress on Mars. Sci. Robot. 2023, 8, eadi3099.
3. Pardhasaradhi, B.; Cenkeramaddi, L.R. GPS spoofing detection and mitigation for drones using distributed radar tracking and fusion. IEEE Sens. J. 2022, 22, 11122–11134.
4. Gabr, K.; Abdelkader, M.; Jarraya, I.; AlMusalami, A.; Koubaa, A. SMART-TRACK: A novel Kalman filter-guided sensor fusion for robust UAV object tracking in dynamic environments. IEEE Sens. J. 2024, 25, 3086–3097.
5. Vo-Doan, T.T.; Titov, V.V.; Harrap, M.J.; Lochner, S.; Straw, A.D. High-resolution outdoor videography of insects using Fast Lock-On tracking. Sci. Robot. 2024, 9, eadm7689.
6. Xiao, D.; Kedem Orange, R.; Opatovski, N.; Parizat, A.; Nehme, E.; Alalouf, O.; Shechtman, Y. Large-FOV 3D localization microscopy by spatially variant point spread function generation. Sci. Adv. 2024, 10, eadj3656.
7. Jiao, L.; Zhang, X.; Liu, X.; Liu, F.; Yang, S.; Ma, W.; Li, L.; Chen, P.; Feng, Z.; Guo, Y.; et al. Transformer meets remote sensing video detection and tracking: A comprehensive survey. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 1–45.
8. Zhu, Y.; Zhao, X.; Li, C.; Tang, J.; Huang, Z. Long-term motion assisted remote sensing object tracking. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5407514.
9. Kondo, Y.; Takubo, K.; Tominaga, H.; Hirose, R.; Tokuoka, N.; Kawaguchi, Y.; Takaie, Y.; Ozaki, A.; Nakaya, S.; Yano, F.; et al. Development of ‘HyperVision HPV-X’ high-speed video camera. Shimadzu Rev. 2012, 69, 285–291.
10. Fuller, P. An introduction to high speed photography and photonics. Imaging Sci. J. 2009, 57, 293–302.
11. Wei, M.; Xing, F.; You, Z. An implementation method based on ERS imaging mode for sun sensor with 1 kHz update rate and 1″ precision level. Opt. Express 2013, 21, 32524–32533.
12. Wei, M.S.; Xing, F.; You, Z. A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images. Light Sci. Appl. 2018, 7, 18006.
13. Teman, A.; Fisher, S.; Sudakov, L.; Fish, A.; Yadid-Pecht, O. Autonomous CMOS image sensor for real time target detection and tracking. In Proceedings of the 2008 IEEE International Symposium on Circuits and Systems (ISCAS), Seattle, WA, USA, 18–21 May 2008; pp. 2138–2141.
14. Deng, Q.; Zhang, Z.; Zhong, J. Image-free real-time 3-D tracking of a fast-moving object using dual-pixel detection. Opt. Lett. 2020, 45, 4734–4737.
15. Zhang, H.; Liu, Z.; Zhou, M.; Zhang, Z.; Chen, M.; Geng, Z. Prior-free 3D tracking of a fast-moving object at 6667 frames per second with single-pixel detectors. Opt. Lett. 2024, 49, 3628–3631.
16. Zhao, Y.; Yang, J.; Liu, C.; Wang, C.; Zhang, G.; Ding, Y. Study on exposure time difference compensation method for DMD-based dual-path multi-target imaging spectrometer. Remote Sens. 2025, 17, 2021.
17. Zheng, J.L.; Xu, D.S.; Yang, Z.H.; Yu, Y.J. Fast image-free high precision target tracking using single pixel detection. In Proceedings of the 2023 38th Youth Academic Annual Conference of Chinese Association of Automation (YAC), Hefei, China, 19–21 May 2023; pp. 1131–1136.
18. Yu, Y.; Yang, Z.; Li, W.; Shao, H. Image-free positioning tracking scheme via single pixel detection. In Advances in Guidance, Navigation and Control: Proceedings of the 2020 International Conference on Guidance, Navigation and Control (ICGNC 2020), Tianjin, China, 23–25 October 2020; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1349–1357.
19. Peng, Y.; Sun, T.; Yang, J.; Yu, S.; Feng, Y.; Liu, H. Research on enhancing the precision of real-time target centroid localization based on digital micromirror device. In Proceedings of the 2024 Academic Conference of China Instrument and Control Society (ACCIS), Chengdu, China, 28–31 July 2024; pp. 335–339.
20. Shi, D.; Yin, K.; Huang, J.; Yuan, K.; Zhu, W.; Xie, C.; Liu, D.; Wang, Y. Fast tracking of moving objects using single-pixel imaging. Opt. Commun. 2019, 440, 155–162.
21. Zhang, Z.; Ye, J.; Deng, Q.; Zhong, J. Image-free real-time detection and tracking of fast moving object using a single-pixel detector. Opt. Express 2019, 27, 35394–35401.
22. Yang, Z.H.; Chen, X.; Zhao, Z.H.; Song, M.Y.; Liu, Y.; Zhao, Z.D.; Lei, H.D.; Yu, Y.J.; Wu, L.A. Image-free real-time target tracking by single-pixel detection. Opt. Express 2022, 30, 864–873.
23. Yang, J.; Liu, X.; Zhang, L.; Zhang, L.; Yan, T.; Fu, S.; Sun, T.; Zhan, H.; Xing, F.; You, Z. Real-time localization and classification of the fast-moving target based on complementary single-pixel detection. Opt. Express 2025, 33, 11301–11316.
24. Zha, L.; Meng, W.; Shi, D.; Huang, J.; Yuan, K.; Yang, W.; Chen, Y.; Wang, Y. Complementary moment detection for tracking a fast-moving object using dual single-pixel detectors. Opt. Lett. 2022, 47, 870–873.
25. Fu, S.; Xing, F.; You, Z. Dual-pixel tracking of the fast-moving target based on window complementary modulation. Opt. Express 2022, 30, 39747–39761.
26. Zhang, Y.; Wang, H.; Yin, Y.; Jiang, W.; Sun, B. Mask-based single-pixel tracking and imaging for moving objects. Opt. Express 2023, 31, 32554–32564.
27. Wu, Q.F.; Yu, Y.J.; Ji, P.C.; Zhou, S.J.; Zhang, H.J. Image-free single-pixel fast localization of multi-target using small window projection. In Proceedings of the 2024 43rd Chinese Control Conference (CCC), Kunming, China, 28–31 July 2024; pp. 4036–4041.
28. Zhang, J.; Hu, T.; Shao, X.; Xiao, M.; Rong, Y.; Xiao, Z. Multi-target tracking using windowed Fourier single-pixel imaging. Sensors 2021, 21, 7934.
29. Yu, Y.; Yang, Z.H.; Liu, Y.X.; Li, M.F.; Wu, F.L.; Yu, Y.J. Long-range fast single-pixel localization of multiple moving targets. IEEE Sens. J. 2024, 24, 24699–24707.
30. Meng, W.; Shi, D.; Yang, W.; Zha, L.; Zhao, Y.; Wang, Y. Multi-object positioning and imaging based on single-pixel imaging using binary patterns. Sensors 2022, 22, 3211.
31. Zheng, J.; Yu, Y.; Chen, S.; Yang, Z.; Li, G. Image-free localization and tracking of multi-targets based on single pixel detection. In Proceedings of the 2021 China Automation Congress (CAC), Beijing, China, 22–24 October 2021; pp. 3814–3819.
32. Bowron, J.W.; Jonas, R.P. Off-axis illumination design for DMD systems. In Design of Efficient Illumination Systems; SPIE: Bellingham, WA, USA, 2003; Volume 5186, pp. 72–82.
33. Ostromoukhov, V. A simple and efficient error-diffusion algorithm. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 12–17 August 2001; pp. 567–572.
34. Crowder, S.V.; Hamilton, M.D. An EWMA for monitoring a process standard deviation. J. Qual. Technol. 1992, 24, 12–21.
Figure 1. Scheme of the DMD-based multi-target tracking system. (a) System schematic. The core principle is to use a DMD to load encoding masks that modulate the light field of targets, while the crucial step involves performing angular projections and geometric moment-based centroid localization to achieve high-speed multi-target tracking via PMT signal acquisition. (b) Working principle of the TIR prism. (c) Schematic of micromirrors on the DMD mounted at a 45° rotation angle.
Figure 2. Radon projection based on the DMD. (a) Schematic of the Radon projection principle. (b) Micromirror flips for the 0-degree, 90-degree, 45-degree, and 135-degree projections.
Figure 3. Window function for multi-target positioning. (a) Dithering for grayscale mask binarization. (b) Targets. (c) Window function to intercept multiple targets.
Figure 4. Multi-target tracking under close-range interference. (a) Interference from target proximity on window function positioning. (b) Schematic of close-range multi-target tracking.
Figure 5. Multi-target tracking algorithm based on adaptive EWMA.
Figure 6. Radon projection for multi-target initial localization.
Figure 7. Simulation of multi-target tracking using the EWMA-based WGM method. (a) Target trajectories. (b) Tracking principle of the EWMA-based WGM method in two different scenarios. (c) Tracking accuracy of targets N_2 and N_4 using the WGM method with and without EWMA.
Figure 8. Image-free multi-target tracking system.
Figure 9. Tracking multiple targets at different speeds. (a) Target trajectories. (b) Estimation results for targets with different speeds. The two trajectories represent two different moving objects. The green, blue, and yellow points represent the estimated trajectories of the objects moving at speeds of 5, 10, and 20 pixels per frame, respectively, while the solid red line represents the real trajectory.
Figure 10. Target trajectories and motion process.
Figure 11. Estimated trajectories in the experiment.
Figure 12. Tracking error of three objects on the x-axis and y-axis. The blue bar represents the actual value, and the red bar represents the error relative to the actual value. (a) Object 1. (b) Object 2. (c) Object 3.
Table 1. Initial localization results using Radon projection.

Objects | Object Centroid Value (pixel) | Window Function Center Value (pixel)
Moon    | (200, 212) | (200.50, 258.00)
Plane   | (794, 366) | (783.25, 358.50)
Car 1   | (212, 925) | (209.25, 918.50)
Car 2   | (730, 903) | (742.50, 908.00)

Table 2. Estimation results of the two methods.

Objects | WGM Method Without EWMA, RMSE (pixel) | EWMA-Based WGM Method, RMSE (pixel)
N_1 | 1.21  | 0.73
N_2 | 16.29 | 1.17
N_3 | 1.47  | 1.16
N_4 | N.A.  | 1.57

Table 3. Estimation results of objects at different motion speeds (NRMSE at a motion speed of S pixels per frame).

Objects  | S = 5   | S = 10  | S = 20
Object 1 | 0.00618 | 0.00694 | 0.00722
Object 2 | 0.00783 | 0.00814 | 0.00903