Communication

Real-Time Object Classification via Dual-Pixel Measurement

Jianing Yang, Ran Chen, Yicheng Peng, Lingyun Zhang, Ting Sun and Fei Xing
1 Department of Precision Instrument, Tsinghua University, Beijing 100084, China
2 Department of Automation, Tsinghua University, Beijing 100084, China
3 School of Instrument Science and Opto-Electronic Engineering, Beijing Information Science and Technology University, Beijing 100192, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(18), 5886; https://doi.org/10.3390/s25185886
Submission received: 8 July 2025 / Revised: 1 September 2025 / Accepted: 18 September 2025 / Published: 20 September 2025

Abstract

Achieving rapid and accurate object classification holds significant importance in various domains. However, conventional vision-based techniques suffer from several limitations, including high data redundancy and strong dependence on image quality. In this work, we present a high-speed, image-free object classification method based on dual-pixel measurement and normalized central moment invariants. Leveraging the complementary modulation capability of a digital micromirror device (DMD), the proposed system requires only five tailored binary illumination patterns to simultaneously extract geometric features and perform classification. The system achieves a classification update rate of up to 4.44 kHz, offering significant improvements in both efficiency and accuracy compared to traditional image-based approaches. Numerical simulations verify the robustness of the method under similarity transformations, including translation, scaling, and rotation, while experimental validations further demonstrate reliable performance across diverse object types. This approach enables real-time, reconstruction-free classification with low data throughput, opening new possibilities for optical computing and edge intelligence applications.

1. Introduction

Object classification plays a crucial role in various fields, including remote sensing [1], autonomous navigation [2], security monitoring [3], and industrial inspection [4]. The rapid advancement of machine vision and deep learning technologies has revolutionized classification tasks, while also imposing more stringent demands on real-time performance and low power consumption. However, image-based classification techniques rely on visual data captured by imaging devices, which often contain a significant amount of irrelevant, redundant information. The resulting high-speed data throughput has become a critical bottleneck limiting a system’s real-time performance. Moreover, image-based methods are highly dependent on image quality, leading to poor performance in challenging conditions, such as loss of detail from low-resolution sensors, limited spectral range, and motion blur caused by high-speed movement [5].
Recent approaches such as optical computing [6,7,8,9,10] and single-pixel imaging (SPI) [11,12,13,14] have opened new horizons for classification systems. Optical computing leverages optical elements to perform various computational tasks, delivering advantages in terms of high speed, low power consumption, and parallel data processing [7]. Qu et al. developed a super-pixel diffractive neural network for classification tasks, utilizing digital micromirror devices (DMDs) to simplify the optical system, with a computational speed of 326 Hz per layer [8]. Bian et al. achieved multi-character recognition at an update rate of 100 Hz [9]. However, these approaches require the construction of multi-layer diffractive optical networks, hindering their potential for miniaturization and integration. SPI is a promising computational technique known for its wide spectral bandwidth, high sensitivity, and fast timing response [13,14]. However, because it relies on sequential time-domain illumination patterns to acquire intensity signals for reconstruction, it can be time-consuming and unsuitable for fast-moving targets.
Several researchers have investigated image-free classification using single-pixel detection [15,16,17,18]. By employing a spatial light modulator (SLM) for time-domain extended structured illumination to pre-code the target optical field, and using a single-pixel detector to measure the modulation results, dimensionality reduction of the target’s feature information can be achieved. This enables a shift in the classification paradigm from image-centric to information-centric. A reconstruction-free classification framework was developed by Latorre-Carmona et al. [15] in 2019. Meng et al. also achieved classification by computing the moment invariants of the image [16,17]. Peng et al. presented a framework for classifying fast-moving objects with shear distortion using single-pixel detection, without performing image reconstruction [18]. However, these methods do not exploit the complementary properties of the DMD, resulting in an excessive number of templates for optical field encoding. The polynomial moments calculated are also relatively complex, and they fail to represent other image information such as area, centroid, orientation, and ellipticity.
In this communication, we demonstrate a real-time classification method via dual-pixel measurement, based on normalized central moments and the complementary nature of the DMD. The proposed approach employs just five tailored illumination patterns to concurrently acquire the normalized central moments of different objects. Object classification is performed through feature recognition within the multidimensional space of moment invariants. Owing to the optimized system architecture and the rapid modulation speed of the DMD, the classification update rate can reach 4.44 kHz. A theoretical investigation was conducted to assess the method’s robustness under similarity transformations, and experimental validations confirm that the system delivers accurate object classification across diverse scenarios.

2. Materials and Methods

The DMD serves as the core of the classification system, utilizing a micromirror array to perform high-speed binary modulation of the light field at frequencies up to 22.2 kHz. As illustrated in Figure 1, light emitted from the object is first focused through a lens (Lens 0) and directed onto the DMD via a total internal reflection (TIR) prism. Each micromirror within the array can switch between ±12° tilt angles, corresponding to ‘on’ (+12°) and ‘off’ (−12°) states. Due to the small angular separation of ±12°, the reflected and incident beams lie close to each other, which can introduce optical path conflicts and stray light interference. To overcome this, a custom-designed triple-pass TIR prism is integrated into the system [19]. In the ‘on’ state, reflected light is directed to photomultiplier tube 1 (PMT1), while in the ‘off’ state, it is sent to PMT2. Data acquisition (DAQ) systems transmit the signals from the PMTs to a computer. The binary masking of the DMD ensures that the two output channels form a complementary detection mechanism. The DMD binary mask sequence for the optical path towards PMT1 is designated as mask sequence 1, while the corresponding mask sequence for the optical path towards PMT2 is referred to as mask sequence 2.
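As a minimal numerical sketch of this complementary readout (illustrative Python with a made-up random scene and pattern, not the authors’ hardware parameters), each binary mask splits the incident flux between the two PMTs, so the two channel sums always add up to the total intensity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene and one binary DMD pattern (1 = 'on' -> PMT1, 0 = 'off' -> PMT2).
scene = rng.random((64, 64))              # light field incident on the DMD
mask = rng.integers(0, 2, size=(64, 64))  # one binary modulation pattern

pmt1 = float(np.sum(scene * mask))        # light reflected at +12 deg
pmt2 = float(np.sum(scene * (1 - mask)))  # light reflected at -12 deg

# Complementarity: every frame jointly measures the total intensity,
# i.e. the zeroth-order moment m00 of the scene.
assert np.isclose(pmt1 + pmt2, scene.sum())
```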
Moments and their derived functions have been widely leveraged as invariant global features for object classification [20]. Among these various types, geometric moments are the most commonly applied, primarily for characterizing the object’s shape, localization, distribution and symmetry. For a 2D image $I(i,j)$, the (p + q)-order geometric moment $m_{pq}$ is given by:

$$m_{pq} = \sum_{i=1}^{M} \sum_{j=1}^{N} I(i,j)\, i^{p} j^{q}$$
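A direct numerical counterpart of Eq. (1) can be written in a few lines; the helper below is a hypothetical sketch (not from the paper) using the same 1-based pixel indices as the formula:

```python
import numpy as np

def geometric_moment(image: np.ndarray, p: int, q: int) -> float:
    """(p+q)-order geometric moment of a 2D image, cf. Eq. (1)."""
    M, N = image.shape
    i = np.arange(1, M + 1).reshape(-1, 1)  # 1-based coordinate along axis 0
    j = np.arange(1, N + 1).reshape(1, -1)  # 1-based coordinate along axis 1
    return float(np.sum(image * i**p * j**q))
```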
To construct a mathematically invariant form under translation, the central moment of the image $I(i,j)$ is defined as:

$$\bar{m}_{pq} = \sum_{i=1}^{M} \sum_{j=1}^{N} I(i,j)\,(i-\bar{x})^{p}(j-\bar{y})^{q}$$
where $\bar{x}$ and $\bar{y}$ denote the centroid coordinates of the given image, with $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$. It can be proved that the zeroth-order moments satisfy $\bar{m}_{00} = m_{00}$. In order to obtain scale invariance [21], the normalized central moment is calculated as:
$$\mu_{pq} = \frac{\bar{m}_{pq}}{m_{00}^{1+\frac{p+q}{2}}}$$
Normalized central moments provide invariance to both translation and scaling. In 1962, seven famous moment invariants to rotation were first proposed by Hu [22]. For the sake of simplicity, the first and second moment invariants were chosen for classification tasks in our method, which are:
$$\Phi_1 = \mu_{20} + \mu_{02}, \qquad \Phi_2 = (\mu_{20} - \mu_{02})^{2} + 4\mu_{11}^{2}$$
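Chaining Eqs. (1)–(4), the two invariants can be checked in software against the optically measured values; the following standalone sketch (our own reference implementation, not the DMD pipeline) computes $\Phi_1$ and $\Phi_2$ directly from an image:

```python
import numpy as np

def hu_phi12(image: np.ndarray) -> tuple[float, float]:
    """First two Hu invariants (Eq. 4) computed directly from an image."""
    M, N = image.shape
    i = np.arange(1, M + 1).reshape(-1, 1)
    j = np.arange(1, N + 1).reshape(1, -1)

    def m(p, q):
        # Raw geometric moment, Eq. (1).
        return float(np.sum(image * i**p * j**q))

    m00 = m(0, 0)
    xb, yb = m(1, 0) / m00, m(0, 1) / m00  # centroid, as in Eq. (2)

    def mu(p, q):
        # Central moment (Eq. 2) followed by normalization (Eq. 3).
        central = float(np.sum(image * (i - xb)**p * (j - yb)**q))
        return central / m00 ** (1 + (p + q) / 2)

    phi1 = mu(2, 0) + mu(0, 2)
    phi2 = (mu(2, 0) - mu(0, 2))**2 + 4 * mu(1, 1)**2
    return phi1, phi2
```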
Given that the centroids $\bar{x}$ and $\bar{y}$ vary from image to image, it is not practical to directly construct a normalized-central-moment template on the DMD for calculation. However, the normalized central moments can be expanded using the binomial theorem [23]:

$$\mu_{pq} = \sum_{k=0}^{p} \sum_{l=0}^{q} \binom{p}{k} \binom{q}{l} (-\bar{x})^{p-k} (-\bar{y})^{q-l} \frac{m_{kl}}{m_{00}^{1+\frac{p+q}{2}}}$$
The normalized central geometric moments can thus be directly represented as linear combinations of the target’s geometric moments, enabling efficient construction of moment invariants. This mathematical formulation provides a critical foundation for implementing DMD-based hardware acceleration in subsequent processing stages. In our previous work [24], we developed an optical computing protocol that utilizes dynamically reconfigurable DMD modulation patterns to directly compute geometric moments of targets through light field manipulation, achieving 93.3% accuracy in classifying 30 different objects. However, that method estimated object circularity using geometric moment templates and performed classification solely based on shape, so its accuracy diminished as the number of object categories grew. Here, we utilize the two normalized central moment invariants defined above to characterize object features and perform classification within the corresponding two-dimensional feature space. This approach significantly improves classification accuracy and demonstrates robustness in more complex scenarios. The complementary dual-channel design effectively reduces the total number of required measurements; in addition, each individual measurement inherently captures the zeroth-order moment of the target, enhancing the accuracy of the system. These measurements jointly encode the area, centroid, orientation, and ellipticity of objects, thereby providing a compact yet discriminative feature representation. Increasing the number of projections could introduce additional descriptors, but it would also increase acquisition time and system complexity, which conflicts with the goal of real-time classification.
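For the second-order quantities entering Eq. (4), Eq. (5) reduces to the following worked instance (our own simplification, added for clarity, using $\bar{x} m_{00} = m_{10}$ and $\bar{y} m_{00} = m_{01}$):

$$\mu_{20} = \frac{m_{20} - \bar{x}\, m_{10}}{m_{00}^{2}}, \qquad \mu_{02} = \frac{m_{02} - \bar{y}\, m_{01}}{m_{00}^{2}}, \qquad \mu_{11} = \frac{m_{11} - \bar{x}\, m_{01}}{m_{00}^{2}}$$

Hence $\Phi_1$ and $\Phi_2$ follow from five measured raw moments ($m_{10}$, $m_{01}$, $m_{20}$, $m_{11}$, $m_{02}$), with $m_{00}$ supplied in every frame by the complementary channel sum, consistent with the five tailored patterns described above.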
According to Equation (5), the normalized central moments of an object can be computed from its lower-order geometric moments. In our previous work, we successfully employed projected DMD patterns to directly acquire the geometric moment values of the target [24]. The ideal DMD pattern $P_{pq}$ for the (p + q)-order moment should be:

$$P_{pq} = \frac{1}{M^{p} N^{q}} \begin{pmatrix} 1^{p} 1^{q} & 2^{p} 1^{q} & \cdots & M^{p} 1^{q} \\ 1^{p} 2^{q} & 2^{p} 2^{q} & \cdots & M^{p} 2^{q} \\ \vdots & \vdots & \ddots & \vdots \\ 1^{p} N^{q} & 2^{p} N^{q} & \cdots & M^{p} N^{q} \end{pmatrix}$$
where M and N represent the number of micromirrors in the transverse and longitudinal directions of the DMD, respectively. To convert this continuous-valued mask into a binary format, one effective approach is error diffusion dithering, which mitigates quantization artifacts by propagating the error from each pixel to its neighboring unprocessed pixels. The dithered patterns are shown in Figure 2. Therefore, the first and second moment invariants of an object can be acquired by leveraging the binary masks generated by the DMD.
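A common realization of error diffusion is the Floyd–Steinberg kernel; the paper does not specify which kernel it employs, so the sketch below should be read as one plausible way to binarize a continuous pattern for the DMD, not as the authors’ exact procedure:

```python
import numpy as np

def floyd_steinberg(pattern: np.ndarray) -> np.ndarray:
    """Binarize a continuous-valued mask in [0, 1] via error-diffusion dithering."""
    work = pattern.astype(float).copy()
    out = np.zeros(work.shape, dtype=np.uint8)
    rows, cols = work.shape
    for y in range(rows):
        for x in range(cols):
            out[y, x] = 1 if work[y, x] >= 0.5 else 0
            err = work[y, x] - out[y, x]
            # Propagate the quantization error to unprocessed neighbours.
            if x + 1 < cols:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < rows:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < cols:
                    work[y + 1, x + 1] += err * 1 / 16
    return out

# Example: dither the first-order pattern whose ideal value at pixel (i, j)
# is i/M, a (p, q) = (1, 0) instance of Eq. (6).
M, N = 128, 128
P10 = np.repeat(np.arange(1, M + 1).reshape(-1, 1) / M, N, axis=1)
mask10 = floyd_steinberg(P10)
```

Because the error is pushed only rightward and downward, the diffusion is not isotropic, which corresponds to the systematic error discussed later in Section 3.1.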

3. Results

3.1. Numerical Simulation

The invariance of the proposed method under different similarity transformations is validated through numerical simulations in MATLAB R2021b (MathWorks, Natick, MA, USA), as shown in Figure 3. The scaling and rotational invariance of the DMD-based object classification method were first characterized. To demonstrate the generality of the method, we selected two different objects (a moon and an airplane) as test subjects. In the simulation of scale invariance, we gradually reduced the size of the initial objects to 60% of their original size, with the scaling factor decreasing by 0.01 at each step, resulting in 41 measurement values in total (Figure 3a). By incorporating complementary dual-pixel detection, we computed and plotted the corresponding invariants 1 and 2 with respect to the scaling factor (Figure 3b).
In the simulation of rotation invariance, each object was rotated around its centroid, starting from 0° and gradually increasing the angle to 360° in 10° intervals, while the corresponding invariants were calculated at each rotation angle (Figure 3c). It can be observed that invariant 1 remains almost unchanged with rotation. Due to image digitization and the binarization effect of our method [24], invariant 2 exhibits some fluctuation with respect to the rotation angle. In addition, it should be noted that the error-diffusion dithering algorithm we currently employ propagates quantization errors along the horizontal or vertical directions. As a result, the error transmission is not strictly isotropic, which constitutes an inherent systematic error of our method. However, by considering both invariants together, accurate classification results can still be obtained. The details of this process will be described in the following experimental section.
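This rotational behaviour can be reproduced numerically with a short sweep (our own sketch; it requires SciPy, reuses the hu_phi12 helper from the Section 2 sketch, and the test shape and angle grid are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import rotate

# Arbitrary asymmetric test shape, standing in for the moon/airplane objects.
img = np.zeros((128, 128))
img[40:90, 50:75] = 1.0

for angle in range(0, 361, 10):
    # Rotate about the image centre; the invariants are translation-invariant,
    # so rotating about the centroid is not required.
    rot = np.clip(rotate(img, angle, reshape=True, order=1), 0.0, 1.0)
    phi1, phi2 = hu_phi12(rot)  # defined in the Section 2 sketch
    print(f"{angle:3d} deg: phi1={phi1:.4f}, phi2={phi2:.6f}")
# Expected: phi1 stays nearly constant, phi2 shows small digitization ripple.
```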
To further demonstrate the effectiveness of our method, we translated the object within a specified region of the image plane and calculated the error between the theoretical values of the two invariants and the values obtained using our method. The simulation results (Figure 4) demonstrate that the proposed method maintains strong performance under translational transformation, with RMSEs for invariants 1 and 2 of 0.0230 and 0.0027, respectively.

3.2. Experimental Validation

Building upon the theoretical framework and simulations presented above, we have constructed an experimental system (Figure 5) to further evaluate the effectiveness of the proposed method. The components of the experimental system and their respective functions are outlined as follows: A DLP technology-driven dynamic object simulator is connected to computer 1 for real-time simulation of various objects. Lens 0 focuses the light emitted by the target simulator onto the front surface of the DMD (DLP7000, Texas Instruments, Dallas, TX, USA), while the total internal reflection (TIR) prism is utilized to fold the optical path and compress the system’s spatial layout, facilitating dual-pixel detection. The DMD generates structured mask sequences in real time, which, when applied to the incident light field, enable the computation of the object’s normalized central moments of various orders. The two optical paths exiting the TIR prism are symmetrical. Mirrors 1 and 2 are employed to redirect the optical path, ensuring a compact system. The outgoing light is then converged by lenses 1 and 2 onto the PMTs (PMT1001/M, Thorlabs, Newton, NJ, USA) for detection, and the signals are transmitted to the computer via a data acquisition card for real-time analysis and object classification.
To demonstrate the versatility of our method and further validate its invariance to rotation, translation, and scaling, we selected five distinct objects from the MPEG-7 dataset [25] and applied various similarity transformations randomly. Using the dynamic target simulator, we generated images corresponding to different objects, which were then modulated by the DMD. The signals collected by the PMTs were subsequently used for classification. Since the numerical values of the moment invariants are relatively small, a logarithmic transformation was applied to facilitate visualization. The transformed values $\phi_1$ and $\phi_2$ were then used as the basis for classification, where $\alpha$ is a weighting coefficient:

$$\phi_1 = \alpha \cdot \log_{10} \Phi_1, \qquad \phi_2 = \log_{10} \Phi_2$$
As shown in Figure 6, five object categories were classified using the proposed method. Since the maximum modulation rate of the DMD is 22.2 kHz and each classification requires the projection of five binary patterns, the system achieves a real-time classification rate of 4.44 kHz. The coefficient α is selected to be 5. Furthermore, Figure 6b illustrates that relying on a single normalized central moment invariant as the classification descriptor is insufficient for certain object pairs. For instance, when only invariant 1 is used, the lmfish and deer are difficult to distinguish; conversely, using only invariant 2 makes it challenging to differentiate the apple from the crown. By constructing a two-dimensional feature space using both invariants, the system not only maintains high-speed real-time classification but also significantly improves classification accuracy.
To quantitatively assess the proposed method, we used the first column in Figure 6a as the reference objects (numbered 0), providing standard moment invariant values. The remaining four columns, obtained through similarity transformations, serve as test samples. For each object, we calculated its distance from all five reference categories in the moment invariant space, and we computed the average distance between the four test objects within each category and the standard objects of every category. The results are summarized in Table 1. Experimental results demonstrate the robustness of our classification method under various similarity transformations, showing clear differentiation among multiple object categories. Part of the experimental error originates from the fact that the proposed method is not perfectly isotropic, as discussed in Section 3.1. In practical applications, our method involves a pre-calibration step, in which representative invariant values are computed in advance for each known object category. Classification is then achieved by calculating the moment invariants of a given object and measuring its two-dimensional distance from the reference values of the standard categories; if this distance falls below a predefined threshold derived from pre-calibration, the object is assigned to the corresponding category. In this experiment, the threshold is chosen to be between 0.1 and 0.15. To further demonstrate the applicability of our method on larger test sets and to provide a performance comparison with previous approaches, we used all 70 object categories from the MPEG-7 dataset as the experimental test set. The results are shown in Table 2: our method maintains an accuracy of over 80% even when the number of object categories increases to 70.
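The pre-calibration and thresholding logic described above can be condensed into nearest-reference matching in the $(\phi_1, \phi_2)$ plane. The sketch below uses made-up reference invariants (in practice these come from the calibration measurements); the weighting $\alpha = 5$ and the decision threshold follow the text:

```python
import numpy as np

ALPHA = 5.0       # weighting coefficient from the text
THRESHOLD = 0.12  # within the 0.1-0.15 range used in the experiment

def features(Phi1: float, Phi2: float) -> np.ndarray:
    """Log-scaled feature vector, cf. Eq. (7)."""
    return np.array([ALPHA * np.log10(Phi1), np.log10(Phi2)])

# Pre-calibrated reference invariants per category (illustrative values only).
references = {
    "apple": features(0.18, 2.1e-3),
    "deer":  features(0.35, 9.5e-3),
}

def classify(Phi1: float, Phi2: float) -> str:
    f = features(Phi1, Phi2)
    name, dist = min(
        ((k, float(np.linalg.norm(f - v))) for k, v in references.items()),
        key=lambda t: t[1],
    )
    return name if dist < THRESHOLD else "unknown"

print(classify(0.18, 2.0e-3))  # -> 'apple' (distance well below threshold)
```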

4. Discussion

The proposed dual-pixel classification system addresses key limitations of conventional image-based methods by eliminating the need for full image acquisition and reconstruction. By encoding object features directly through structured binary DMD masks, the method significantly reduces data redundancy and computation time.
However, the study also reveals some challenges. Although invariant 1 is highly stable across transformations, invariant 2 exhibits minor fluctuations under rotation, likely due to binarization artifacts. Nevertheless, combining both invariants ensures reliable object differentiation. Moreover, the proposed method is currently applicable only to single-object classification. In scenarios involving multiple objects, a preliminary segmentation step is required to isolate individual targets before projecting the corresponding masks for information acquisition. Further improvements could be achieved by designing a multi-object classification method and incorporating more moments and their invariants to enhance classification separability in higher-dimensional spaces.

5. Conclusions

In this communication, we have developed a real-time, image-free object classification method that leverages dual-pixel detection and DMD-based structured illumination. By extracting low-order normalized central moment invariants through optical modulation, the system enables fast and accurate classification across varying object types and transformations. Simulation and experimental results confirm that the method achieves high robustness and efficiency with an update rate of up to 4.44 kHz. The compact, complementary detection architecture and elimination of image reconstruction offer a promising foundation for future integration into miniaturized optical computing systems. Future work may involve learning-based enhancements to enable more complex recognition tasks across a broader range of applications.

Author Contributions

Conceptualization, J.Y. and F.X.; methodology, J.Y.; software, R.C.; validation, J.Y., Y.P. and T.S.; investigation, J.Y. and L.Z.; resources, J.Y. and Y.P.; writing—original draft preparation, J.Y.; writing—review and editing, J.Y. and R.C.; supervision, T.S. and F.X.; funding acquisition, F.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) under grant U22A6006.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SPI: Single-pixel imaging
DMD: Digital micromirror device
SLM: Spatial light modulator
PMT: Photomultiplier tube
DAQ: Data acquisition system
TIR: Total internal reflection
RMSE: Root mean square error

References

  1. Turner, K.J.; Tzortziou, M.; Grunert, B.K.; Goes, J.; Sherman, J. Optical classification of an urbanized estuary using hyperspectral remote sensing reflectance. Opt. Express 2022, 30, 41590–41612. [Google Scholar] [CrossRef]
  2. Turay, T.; Vladimirova, T. Toward performing image classification and object detection with convolutional neural networks in autonomous driving systems: A survey. IEEE Access 2022, 10, 14076–14119. [Google Scholar] [CrossRef]
  3. Dedeoğlu, Y. Moving Object Detection, Tracking and Classification for Smart Video Surveillance. Master’s Thesis, Bilkent University, Ankara, Turkey, 2004. [Google Scholar]
  4. Hridoy, M.W.; Rahman, M.M.; Sakib, S. A framework for industrial inspection system using deep learning. Ann. Data Sci. 2024, 11, 445–478. [Google Scholar] [CrossRef]
  5. Yang, G.; Yao, M.; Li, S.; Zhang, J.; Zhong, J. High-accuracy image-free classification of high-speed rotating objects with fluctuating rotation periods. Appl. Phys. Lett. 2024, 124, 041107. [Google Scholar] [CrossRef]
  6. McMahon, P.L. The physics of optical computing. Nat. Rev. Phys. 2023, 5, 717–734. [Google Scholar] [CrossRef]
  7. Bai, B.; Li, Y.; Luo, Y.; Li, X.; Çetintaş, E.; Jarrahi, M.; Ozcan, A. All-optical image classification through unknown random diffusers using a single-pixel diffractive network. Light Sci. Appl. 2023, 12, 69. [Google Scholar] [CrossRef] [PubMed]
  8. Qu, Y.; Lian, H.; Ding, C.; Liu, H.; Liu, L.; Yang, J. High-frame-rate reconfigurable diffractive neural network based on superpixels. Opt. Lett. 2023, 48, 5025–5028. [Google Scholar] [CrossRef] [PubMed]
  9. Bian, L.; Wang, H.; Zhu, C.; Zhang, J. Image-free multi-character recognition. Opt. Lett. 2022, 47, 1343–1346. [Google Scholar] [CrossRef]
  10. Jiao, S.; Feng, J.; Gao, Y.; Lei, T.; Xie, Z.; Yuan, X. Optical machine learning with incoherent light and a single-pixel detector. Opt. Lett. 2019, 44, 5186–5189. [Google Scholar] [CrossRef]
  11. Edgar, M.P.; Gibson, G.M.; Padgett, M.J. Principles and prospects for single-pixel imaging. Nat. Photonics 2019, 13, 13–20. [Google Scholar] [CrossRef]
  12. Gibson, G.M.; Johnson, S.D.; Padgett, M.J. Single-pixel imaging 12 years on: A review. Opt. Express 2020, 28, 28190–28208. [Google Scholar] [CrossRef]
  13. Ji, P.; Wu, Q.; Cao, S.; Zhang, H.; Yang, Z.; Yu, Y. Single-pixel imaging of a moving object with multi-motion. Chin. Opt. Lett. 2024, 22, 101101. [Google Scholar] [CrossRef]
  14. Li, Y.; Shi, J.; Sun, L.; Wu, X.; Zeng, G. Single-pixel salient object detection via discrete cosine spectrum acquisition and deep learning. IEEE Photonics Technol. Lett. 2020, 32, 1381–1384. [Google Scholar] [CrossRef]
  15. Latorre-Carmona, P.; Traver, V.J.; Sánchez, J.S.; Tajahuerce, E. Online reconstruction-free single-pixel image classification. Image Vis. Comput. 2019, 86, 28–37. [Google Scholar] [CrossRef]
  16. Meng, Q.; Lai, W.; Lei, G.; Liu, H.; Cui, W.; Shi, D.; Wang, Y.; Han, K. Fast object imaging and classification based on circular harmonic Fourier moment detection. Opt. Express 2023, 31, 34527–34541. [Google Scholar] [CrossRef]
  17. Meng, Q.; Lai, W.; Lei, G.; Cui, W.; Liu, H.; Wang, Y.; Han, K. Rapid imaging and classification with single-pixel detector based on radial Tchebichef moments. Opt. Lasers Eng. 2024, 181, 108257. [Google Scholar] [CrossRef]
  18. Peng, Y.; Yang, J.; Zhang, L.; Xing, F.; Sun, T. Real-time target recognition and multi-motion parameters acquisition via single-pixel detection. Opt. Express 2025, 33, 37204–37219. [Google Scholar] [CrossRef]
  19. Zhao, Y.; Yang, J.; Liu, C.; Wang, C.; Zhang, G.; Ding, Y. Study on Exposure Time Difference Compensation Method for DMD-Based Dual-Path Multi-Target Imaging Spectrometer. Remote Sens. 2025, 17, 2021. [Google Scholar] [CrossRef]
  20. Liao, S.X.; Pawlak, M. On image analysis by moments. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 254–266. [Google Scholar] [CrossRef]
  21. Huang, Z.; Leng, J. Analysis of Hu’s moment invariants on image scaling and rotation. In Proceedings of the 2010 2nd International Conference on Computer Engineering and Technology, Chengdu, China, 16–18 April 2010; pp. V7-476–V7-480. [Google Scholar]
  22. Hu, M.-K. Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar]
  23. Hosny, K.M. New set of rotationally Legendre moment invariants. Int. J. Electr. Comput. Syst. Eng. 2010, 4, 176–180. [Google Scholar]
  24. Yang, J.; Liu, X.; Zhang, L.; Zhang, L.; Yan, T.; Fu, S.; Sun, T.; Zhan, H.; Xing, F.; You, Z. Real-time localization and classification of the fast-moving target based on complementary single-pixel detection. Opt. Express 2025, 33, 11301–11316. [Google Scholar] [CrossRef] [PubMed]
  25. Sikora, T. The MPEG-7 visual standard for content description—An overview. IEEE Trans. Circuits Syst. Video Technol. 2001, 11, 696–702. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the real-time object classification system.
Figure 2. DMD binary mask for calculating the object’s geometric moment.
Figure 3. The scaling and rotational invariance of the proposed method. (a) The test object. (b) The calculated results of Invariants 1 and 2 under the scaling transformation. (c) The calculated results of Invariants 1 and 2 under rotational transformation.
Figure 4. Visualization of Invariant 1 and 2 errors across the DMD plane. (a) Error of invariant 1. (b) Error of invariant 2.
Figure 5. Experimental setup for real-time object classification.
Figure 6. Experimental validation of system classification performance. (a) Different test objects displayed on the dynamic target simulator. (b) The measured invariant values $\phi_1$ and $\phi_2$ of different test objects. The reference standard objects in each category are marked by bold black bounding boxes.
Table 1. Average distance between test objects and standard (reference) objects; columns give the reference object of each category (numbered 0).

Test Object   Apple0    Butterfly0   Crown0    Deer0     Lmfish0
Apple         0.1377    1.7904       0.5275    2.2612    2.5639
Butterfly     1.8443    0.0741       1.4515    0.4512    0.7365
Crown         0.6666    1.4585       0.1251    1.8700    2.2151
Deer          2.3799    0.6006       1.9496    0.1027    0.3141
Lmfish        2.5929    0.7953       2.2035    0.4015    0.0484
Table 2. Classification accuracy for different numbers of object categories, in comparison with the previous method.

Number of Categories   Our Method   Previous Method [24]
30                     96.7%        93.3%
50                     90%          82%
70                     81.4%        68.6%