Proceeding Paper

Multi-Emitter Infrared Sensor System for Reliable Near-Field Object Positioning †

Department of Electronics and Communication Engineering, Istanbul Technical University, İstanbul 34469, Turkey
Presented at the 12th International Electronic Conference on Sensors and Applications, 12–14 November 2025; Available online: https://sciforum.net/event/ECSA-12.
Eng. Proc. 2025, 118(1), 98; https://doi.org/10.3390/ECSA-12-26549
Published: 7 November 2025

Abstract

Infrared (IR) proximity sensors measure distance using either time-of-flight (ToF) or reflection intensity methods. While ToF offers greater precision, it requires costly, specialized components. Reflection-based sensors use simpler circuits, enabling lower-cost designs. This study presents a multi-emitter reflection intensity IR sensor as an economical alternative for near-field object positioning. Six IR LEDs, sequentially driven, surround a central photodiode that captures backscattered signals. A machine learning pipeline estimates the object coordinates, cross-section, and height. Tested on 20 objects and 13,750 labeled data points, the system achieved a <1 cm mean positioning error, competitive with multi-zone ToF accuracy at a reduced cost.

1. Introduction

1.1. Background on Infrared Proximity Sensing

Infrared (IR) proximity sensors are commonly used in industry and consumer devices due to their reliability, low cost, low power consumption, and compact form [1]. Applications like presence detection, robotic automation, gesture control, and safety systems are typically performed with IR proximity sensors [2,3].
In practical applications, two major forms of IR proximity sensors are used: time-of-flight (ToF) and reflection intensity sensors [4,5].

1.1.1. Time-of-Flight IR Sensors

ToF sensors determine distance by measuring the time required for emitted infrared light to travel from the emitter LED to the receiver photodiode and back after reflecting from a target [6]. They can achieve greater precision and detect objects at greater distances, but this demands high-speed timing and specialized, costlier integrated components. They are less affected by the target’s surface reflectivity, as the measurement is based on the light’s travel time rather than the intensity of the reflected signal. They typically employ SPAD (Single-Photon Avalanche Diode) pixel arrays in combination with time-to-digital converters (TDCs) capable of resolving extremely narrow optical pulses [6,7,8].
The basic principle is like that of sonar sensors, which measure the travel time of sound waves. However, measuring the time of flight of photons is significantly more challenging due to the much higher speed of light compared to sound. For this reason, such sensors incorporate dedicated integrated circuits to handle the precise timing measurements. Their manufacturing is complex and requires high-speed electronics, often implemented as custom ASICs [8,9].
Multi-zone ToF sensors operate on the same principle, but instead of a single measurement axis, they steer the emitter’s laser or employ multiple emitters to capture distance information across an array, forming image-like matrices with resolutions of up to 8 × 8 [10]. Such sensors are frequently used for near-field object detection in various applications [11,12].

1.1.2. Reflection Intensity Type Active IR Sensors

Reflection intensity type proximity sensors use one emitter IR LED paired with one receiver photodiode, similar to ToF sensors [13,14]. However, instead of measuring distance via time of flight, they measure the amplitude of reflected light, which depends on both the object’s distance and its reflectivity. For this reason, it is not easy to determine the distance of a target with unknown reflectivity or color [15]. Reflection is also easily affected by the object’s shape, environmental conditions, etc. [16]. Due to these limitations, such sensors often employ IR emitter LEDs with a wider viewing angle and higher forward current to increase the amount of reflected light received from targets under varying conditions. This sensor type is nevertheless advantageous because it does not need to perform complex timing calculations [17]. A simple current-to-voltage circuit and an analog-to-digital converter are enough to digitize the reflection value. The simplicity of reflection intensity IR sensors enables the production of more cost-effective sensors [18].
Single-LED reflection sensors produce a single-axis backscatter value. Since only one value is read, it is sensitive to color, reflectivity, surface slope, and environmental variables. A single reading cannot separate the intensity decrease caused by these parameters from that caused by distance [17].

1.2. Positioning with Infrared Proximity Sensors

In the domain of positioning with IR proximity sensors, multi-zone ToF sensors are often employed in various research studies. A 2019 study by A. Adamides et al. demonstrated the use of a multizone ToF sensor ring for human detection and positioning in industrial environments [19]. Another study in 2024 by A. Fasolino et al. utilized data from an 8 × 8 multi-zone ToF IR sensor to perform classification using a convolutional neural network (CNN), achieving an accuracy exceeding 92% [20]. Single-zone ToF sensors were also used in a 2019 study by U. Himmelsbach et al. for object detection and self-localization in robotic arms [21].
Low-cost reflection intensity active IR sensors are predominantly used for distance measurements rather than precise positioning. In a 1999 study, P. M. Novotny et al. employed a reflection-based IR sensor implementing the Phong illumination model to perform distance estimation [22]. Furthermore, by arranging reflection-based active IR sensors in an array configuration and applying echo analysis, object identification can also be achieved [23,24]. In some studies, ultrasonic sensors have been integrated to enhance the accuracy of such systems, enabling distance tracking solutions based on reflection type IR proximity sensors that are less affected by variations in target reflectivity and color [25].
Furthermore, near-field object positioning is closely related to visual perception and can therefore be processed using deep learning and machine learning (ML) methods like CNNs to extract deeper features beyond those attainable through classical approaches [26].
Numerous studies have employed higher-cost multi-zone ToF systems in combination with histogram analysis [27,28]. However, despite these advancements, no reflection-based IR sensor model has been specifically developed for near-field object positioning in consumer electronics. This motivated the development of an extremely low-cost, machine learning-assisted object positioning system. The proposed design employs a circular-array, multi-emitter sensor system capable of performing object detection and positioning within defined limits.
In the literature, low-cost reflection-based IR sensors are mostly used for distance measurements [13,14], while multi-zone ToF arrays have been used for positioning [19,20,21]. This study aims to perform positioning with a single-receiver, multi-transmitter architecture and a learned multi-channel pattern.

2. Materials and Methods

The use of a single LED in a reflection intensity type sensor makes the measurement sensitive to the target’s reflectivity and surface orientation. To reduce this dependency, six IR LEDs were placed as emitters in a circular arrangement at a 25 mm radius from the center. The receiver photodiode was held fixed at the center, allowing different illumination angles to be sampled. While measurements were collected from the receiver, the transmitter LEDs were triggered separately and sequentially.
Time-shared triggering limits inter-channel interference. The setup captures six-channel data instead of a single absolute amplitude. The photodiode output is conditioned by a transimpedance amplifier, a gain stage, and a low-pass RC filter before being fed to the ADC. This chain increases repeatability and noise immunity.

2.1. Infrared Emitter and Receiver Photodiode

In the sensor system to be developed, a central IR receiver diode is surrounded by six emitter LEDs arranged in a circular pattern at an equal distance of 25 mm from the center. The layout is shown in Figure 1.
As the emitter LED, the TSAL6200 (Vishay Semiconductors, Malvern, PA, USA) was selected, which operates at high power and emits at a peak wavelength of 940 nm (λp = 940 nm) [29]. It has a viewing angle of 34° (ϕ = ±17°). Because its emission spectrum matches the sensitivity of silicon photodiodes well, it is suitable for such emitter-to-receiver applications [30,31].
For the IR photodiode, the SFH 213 FA silicon PIN photodiode (OSRAM Opto Semiconductors, Regensburg, Germany) was selected. It has a spectral sensitivity range of 750–1100 nm, with a peak sensitivity at approximately 900 nm [32]. The device features a half-angle (φ) of 10°, corresponding to a total viewing angle of 20°. The photocurrent is ≥72 μA, with a typical value of 90 μA [33]. For these reasons, it can be coupled with the selected IR LED and used together.

2.2. IR Emitter Driver

In the hardware implementation, each LED is driven by an N-channel MOSFET controlled by an independent pin. The PJA3441 (PANJIT Inc., Kaohsiung, Taiwan) was selected due to its low cost and ability to handle currents up to 3.1 A. A total of six MOSFETs were used, one for each emitter LED. Each emitter LED was connected in series with a 100 Ω SMD resistor, resulting in a current consumption of approximately 50 mA at a 5 V supply [34].
The active circular IR LED array is driven sequentially, with each LED triggered individually and the corresponding receiver readings recorded separately. To achieve this, an ESP32-WROOM-32D microcontroller unit (MCU) (Espressif Systems, Shanghai, China) with a 32-bit architecture was employed. Using an internal timer within the MCU, each LED is triggered rapidly in sequence. The trigger duration for each LED is 35 µs, followed by a 50 ms interval between triggers. Two consecutive measurements are performed with the same LED before proceeding to the next one. In total, 100 ms is allocated to each LED, and the complete cycle repeats every 600 ms.
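As a quick consistency check of the timing budget above, a minimal Python sketch (constants taken from the text; the function names are illustrative, not part of the firmware):

```python
# Trigger schedule described in the text: 35 µs on-time, 50 ms interval,
# two consecutive measurements per LED, six LEDs per cycle.
TRIGGER_US = 35           # LED on-time per measurement (35 µs)
INTERVAL_MS = 50          # interval after each measurement (50 ms)
MEASUREMENTS_PER_LED = 2  # two consecutive readings per LED
NUM_LEDS = 6

def per_led_ms() -> float:
    """Time budget allocated to one emitter LED, in milliseconds."""
    return MEASUREMENTS_PER_LED * (TRIGGER_US / 1000.0 + INTERVAL_MS)

def cycle_ms() -> float:
    """Duration of one complete six-LED measurement cycle."""
    return NUM_LEDS * per_led_ms()
```

With these numbers, each LED consumes roughly 100 ms and the full cycle repeats roughly every 600 ms, matching the values stated above.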

2.3. IR Receiver Circuit

To enable analog measurements from the IR photodiode used as the receiver, a transimpedance amplifier (TIA), a non-inverting gain amplifier, and a subsequent RC filter were employed [32]. The OPA380 (Texas Instruments, Dallas, TX, USA) precision op-amp was selected for both the TIA and gain stages. It is a low-noise op-amp with a high transimpedance gain bandwidth (90 MHz), low input bias current (0.05 pA), and low input voltage noise density (5.5 nV/√Hz), making it well suited for photodiode signal conditioning [32]. The diagram of the circuit can be seen in Figure 2.
In the TIA stage, the 100 kΩ feedback resistor sets the transimpedance gain, converting the photodiode’s current output into a voltage. The 47 pF capacitor in parallel with this resistor limits the bandwidth to suppress high-frequency noise and maintain stability, especially with the photodiode’s junction capacitance [35,36].
In the non-inverting amplifier stage, the gain is 10, determined by Equation (1). This stage boosts the TIA’s output to a voltage level suitable for the ADC input of the microcontroller. The RC low-pass filter consists of a 1 kΩ resistor and a 100 nF capacitor, providing a cutoff frequency of 1.6 kHz, which can be found using Equation (2) [32]:
G = 1 + (90 kΩ / 10 kΩ) = 10    (1)
f_c = 1 / (2πRC) ≈ 1.6 kHz    (2)
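Equations (1) and (2) can be verified numerically with the component values given above (a short sketch; the variable names are illustrative):

```python
import math

# Component values from the text.
R_F, R_G = 90e3, 10e3          # non-inverting stage: feedback / ground resistors (Ω)
R_FILT, C_FILT = 1e3, 100e-9   # RC low-pass filter: 1 kΩ and 100 nF

gain = 1 + R_F / R_G                       # Equation (1): G = 1 + Rf/Rg = 10
f_c = 1 / (2 * math.pi * R_FILT * C_FILT)  # Equation (2): fc = 1/(2πRC) ≈ 1.6 kHz
```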

2.4. IR Receiver Value Acquisition

The IR signal is finally sampled through the microcontroller using an integrated analog-to-digital converter (ADC) module. Measurements are acquired in the one-shot ADC mode with a 12-bit resolution and 11 dB attenuation [37]. Each reading is stored in a buffer and indexed according to the sequence of the six emitter LEDs. This approach allows signals from six different emitters to be captured through the analog output of a single receiver circuit within each measurement cycle. The buffered data is processed through a software-based IIR filter before being transferred to the computer. A Python 3.10 script with a serial port reader stores the data into a comma-separated value (CSV) table file. Using the labeling tools provided in the program interface, the data is manually annotated and saved.
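The host-side part of this acquisition chain can be sketched as follows. The paper only states that a software IIR filter and CSV logging are used; the first-order filter form, the coefficient, and the column names below are assumptions for illustration:

```python
import csv
import io

def iir_filter(samples, alpha=0.2):
    """First-order IIR (exponential smoothing) over buffered ADC readings.
    The filter order and coefficient are assumptions; the text only
    specifies 'a software-based IIR filter'."""
    out, y = [], None
    for x in samples:
        y = x if y is None else alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def write_csv(rows, header=("ch1", "ch2", "ch3", "ch4", "ch5", "ch6",
                            "x", "y", "height", "cross_section")):
    """Serialize labeled six-channel readings to CSV, as the Python
    logging script does. Column names are hypothetical."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()
```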

2.5. Dataset Preparation

The system is designed to detect cylindrical objects in proximity, with the sensor mounted above the targets. For this purpose, 20 different cylindrical objects were prepared. Each cylinder has a cardboard inner structure and is wrapped with the same color paper tape to standardize reflective properties. Three of the objects used in data recording are shown as examples in Figure 3.
The 20 object classes were defined based on two parameters: cross-sectional area and height. Two classes of target objects were used for the cross-sectional area, with values of 20 cm2 and 40 cm2. For each height class, two cross-sectional examples were included. In addition, ten height classes were defined, ranging from 5 cm to 15 cm in increments of 1 cm.
Each of the 20 object classes was positioned individually on metric grid paper placed beneath the sensor, ensuring that only one object was present for each measurement. The origin (0, 0) of the metric grid corresponds to the vertical projection of the IR receiver photodiode. The grid area covers a total of 12 cm × 12 cm, providing a coordinate range from (−6, −6) to (+6, +6).
The sensor array was mounted horizontally, with the central IR photodiode aligned to the center of the coordinate plane, at a height of 22 cm above the metric paper. For measurements, the sensor array was connected via cables to an ESP32-WROOM-32D microcontroller board, which in turn was connected to a computer via a USB cable.
For data acquisition, each of the 20 object classes was placed at grid points spaced at 0.5 cm intervals, covering all positions from (−6, −6) to (+6, +6), resulting in a total of 625 measurement points per object (Figure 4). The figure illustrates that the response from a single transmitter exhibits a direction-dependent and narrow distribution. This manual positioning process was monitored using a Python-based computer interface, which recorded the measurements in CSV format. To accelerate the manual measurement process, data augmentation techniques, such as symmetrization and Gaussian blur, were applied. The object height was entered manually into the interface, while the coordinate values were automatically assigned by the program.
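The symmetrization mentioned above can be illustrated with a small sketch. Mirroring a sample across one axis flips the coordinate label and permutes the six channels; the exact permutation below assumes LED k sits at angle 60°·k, which is an illustrative assumption, not a detail stated in the text:

```python
def mirror_augment(channels, x, y):
    """Mirror one labeled sample across the x-axis.
    With six emitters every 60°, reflection maps angle a -> -a,
    so LED k maps to LED (6 - k) mod 6 (assumed geometry)."""
    perm = [0, 5, 4, 3, 2, 1]
    mirrored = [channels[i] for i in perm]
    return mirrored, x, -y
```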
The resulting dataset contains the reflective intensity values from the six emitter LEDs along with their corresponding labels. The dataset covers the entire coordinate plane at a resolution of 0.5 cm (Figure 5). Combining signals from six transmitters creates a distinctive channel pattern and provides data for position estimation. A total of 13,750 discrete data entries were recorded across 20 classes. Of the recorded data, 6974 entries are located within 6 cm of the center of the coordinate plane and are classified as near-field measurements. Objects positioned at a radial distance greater than 5 cm from the center fall outside the field of view of the IR emitter and receiver diodes, resulting in lower measurement stability.

2.6. Model Development and Training

The developed model is a three-stage supervised learning pipeline. Its primary goal is to determine the spatial coordinates (X, Y) of the target object, the class of its cross-sectional area, and the object’s height. The inputs consist of infrared reflective intensity values from six independent channels. Using these six separate inputs, the model estimates four target properties. All training and testing procedures were implemented in Python. The Scikit-learn package was used for general model implementations, and XGBoost was used for gradient boosting models. Our method emphasized minimizing information leakage by employing out-of-fold (OOF) predictions in intermediate stages and performing model selection exclusively on cross-validated results.

2.6.1. Data Partitioning and Preprocessing

The dataset underwent an automated preliminary check to ensure that no values were missing or empty. For each combination of the cross-section class, height, and coordinate, the dataset had six raw readings from the central IR diode and six different emitters. Cross-section classes were encoded into integer labels using a label encoder, with the mapping stored for reproducibility. All regression inputs were standardized using z-score normalization when required by the model type. Before model training, the dataset was split into training (80%) and test (20%) sets, stratified by the cross-section class to preserve a class balance across subsets.
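The preprocessing steps above map directly onto standard Scikit-learn utilities. A minimal sketch, assuming the features arrive as a NumPy array of six channels per row (function name and seed are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler

def preprocess(X, cross_section_labels, seed=0):
    """Encode the two cross-section classes, z-score the six channels,
    and make an 80/20 split stratified by class, as described above."""
    le = LabelEncoder()
    y = le.fit_transform(cross_section_labels)   # mapping stored in `le`
    Xz = StandardScaler().fit_transform(X)       # z-score normalization
    X_tr, X_te, y_tr, y_te = train_test_split(
        Xz, y, test_size=0.2, stratify=y, random_state=seed)
    return X_tr, X_te, y_tr, y_te, le
```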

2.6.2. Coordinate Regression

During the initial decision-making stage of the AI model, the primary objective was to estimate the object’s X and Y coordinates on the plane. Since the object’s position directly influences subsequent predictions, this first decision layer was implemented as a regression stage. At this stage, multiple methods were tested to interpret the six raw IR sensor readings.
The tested models and their parameters are listed as follows:
  • Ridge Regression with z-score standardization and regularization strength α = 50.
  • Multi-Output Random Forest Regressor (n = 400, min. split size = 4 samples).
  • Multi-Output Gradient Boosting Regressor (n = 300, rate = 0.05, max. depth = 3).
  • Multi-Output XGBoost Regressor (n = 300, rate = 0.05, max. depth = 3).
Test results were evaluated using 5-fold Stratified Cross-Validation (SCV). For each method, performance metrics were computed for X and Y coordinates as well as Euclidean distance, including the R-squared, mean absolute error (MAE), root mean squared error (RMSE), and custom-defined axis-wise success rate and radial success rate.
The axis-wise success criterion was defined as a prediction error within 1.2 cm on each axis, while the radial success criterion was defined as a Euclidean distance error below 1.6 cm. These thresholds follow the system’s general performance target of sub-centimeter positional accuracy with a 0.2 cm noise allowance. At this stage, all candidate models were trained and evaluated, and the model achieving the highest test success rate was selected for progression to the next stage.
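The two custom success metrics defined above can be sketched directly (the function name is illustrative):

```python
import numpy as np

def success_rates(y_true, y_pred, axis_tol=1.2, radial_tol=1.6):
    """Axis-wise and radial success rates over (X, Y) predictions:
    axis-wise = error within 1.2 cm on both axes,
    radial = Euclidean error below 1.6 cm."""
    err = np.abs(y_true - y_pred)
    axis_rate = float(np.mean(np.all(err < axis_tol, axis=1)))
    radial_rate = float(np.mean(
        np.linalg.norm(y_true - y_pred, axis=1) < radial_tol))
    return axis_rate, radial_rate
```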

2.6.3. Cross-Section Classification

Since the object could belong to one of two cross-section classes (20 cm2 or 40 cm2), a class-based model rather than a numeric regression model was developed for cross-section analysis. Predictions from the coordinate regression stage were concatenated with the original six sensor readings to form an augmented feature vector. This vector was fed into candidate classifiers:
  • Logistic Regression (max. iterations = 500);
  • Random Forest Classifier (n = 400, min. split size = 4 samples);
  • Gradient Boosting Classifier (n = 300, rate = 0.05, max. depth = 3).
The evaluation used SCV on the training split, reporting the mean and standard deviation of the classification accuracy, macro-F1 score, and weighted-F1 score. Three classification models were trained in parallel, and the results from the model with the highest accuracy were used for the subsequent stage.
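Constructing the augmented feature vector with out-of-fold (OOF) stage-1 predictions, as the pipeline does to avoid information leakage, can be sketched as follows (hyperparameters are reduced here for illustration and do not match the paper's settings):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def stage2_features(X_raw, y_xy):
    """Stage-2 input: six raw channels concatenated with OOF (X, Y)
    coordinate predictions from the stage-1 regressor."""
    reg = RandomForestRegressor(n_estimators=20, random_state=0)
    xy_oof = cross_val_predict(reg, X_raw, y_xy, cv=5)  # out-of-fold estimates
    return np.hstack([X_raw, xy_oof])                   # shape: (n, 6 + 2)
```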

2.6.4. Height Regression

In the previous coordinate regression stage, the X and Y coordinates were predicted, and in the cross-section classification stage, the class prediction was produced. In the third stage, these outputs were used in a height regression model. The models were trained using the six raw inputs in addition to the coordinate and cross-section class data. The same models as those used in the coordinate regression stage were applied to estimate the object height in the 5–15 cm range.
The trained models were evaluated, as in the coordinate regression stage, using 5-fold Stratified Cross-Validation (SCV), with metrics including the coefficient of determination (R2), mean absolute error (MAE), and root mean squared error (RMSE). The difference was that the definition of model success was changed from sub-centimeter prediction consistency to ±2 cm error consistency. Among the models trained in parallel, the one with the highest success rate was carried forward to the final stage.
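The relaxed success definition for this stage is a simple tolerance check (a sketch; the function name is illustrative):

```python
import numpy as np

def height_success(h_true, h_pred, tol=2.0):
    """Fraction of height predictions within ±2 cm of the ground truth,
    the success criterion used in the height regression stage."""
    err = np.abs(np.asarray(h_true) - np.asarray(h_pred))
    return float(np.mean(err <= tol))
```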

2.6.5. Final Model Training and Testing

After all models were evaluated and tested within their respective stages, the selected models were retrained and executed sequentially on the entire dataset. Stage-specific and overall success rates were recalculated.
Finally, for visualization purposes, coordinate-wise success mapping was performed. These visualizations plotted the test set grid positions and color-coded each cell by the proportion of successful predictions within that spatial region. The maps illustrated how results varied across different points of the coordinate plane for each of the three stages, enabling the identification of potential blind spots or biased conditions in the sensor system.
Based on these visual inspections, points determined to be within the sensor system’s blind spots were excluded, and a separate dataset, referred to as near-field measurements, was created. To prevent errors arising from the physical limitations of the sensor from affecting the machine learning model, only the near-field measurements dataset was used during training and testing.

3. Results

3.1. Dataset and Exploratory Checks

A large dataset comprising N = 13,750 unique samples was compiled. For the model training, a reduced dataset with N = 6974 unique samples was constructed. As features, the dataset contains readings from six distinct IR emitters. The cross-section classes were perfectly balanced (20 cm2: 3487; 40 cm2: 3487). For positioning, measurements were available at every grid point from (−6, −6) to (+6, +6) at 0.5 cm intervals. Additionally, entries were collected for object heights from 5 cm to 15 cm at 1 cm increments.

3.2. Coordinate Regression

In the first stage of the pipeline, the model estimates the target object’s location in the coordinate plane as (X, Y). The four candidate regression models specified in the Materials and Methods were evaluated using the R2, per-axis MAE/RMSE, axis-wise success, and radial success. The Multi-Output Random Forest (MORF) achieved the best mean radial success and was selected for downstream stages. Performance results are reported in Table 1. The success criterion was defined as an absolute error < 1.2 cm on both X and Y axes relative to the ground truth.
The selected model achieves an overall radial success of approximately 60%, with a mean absolute error (MAE) below 1 cm. This demonstrates that the trained model can deliver sub-centimeter accuracy. Figure 6a presents the coordinate-based success rate, whereas Figure 6b,c show ground truth–prediction comparisons.

3.3. Cross-Section Classification

In the second stage, the classification model used the six raw IR value features together with the outputs of the coordinate regression model as inputs. Three classifier models were evaluated in this stage. The model performance was assessed using the 5-fold Stratified Cross-Validation, with metrics including accuracy, macro-F1, and weighted-F1. The best-performing model was selected for progression to the next stage. Detailed results for all models are presented in Table 2.
The selected Random Forest Classifier achieved an accuracy of 91.2% (macro-F1 = 0.912) on the test set. The coordinate-based success rate is shown in Figure 7a, while the confusion matrix is presented in Figure 7b.

3.4. Height Regression

The final regression stage was trained using the six raw sensor readings, the predicted coordinates, and the cross-section classification output as inputs to a height regression model. The goal was to predict the object height as a numerical value. Predictions within ±2 cm of the ground truth were considered successful. Table 3 presents the results of four different candidate models.
The Multi-Output Random Forest (MORF) regression model, which achieved the highest custom-defined success rate, was selected. The coordinate-based success rate is shown in Figure 8a, while Figure 8b presents the comparison between the model’s predictions and the ground truth. When the error tolerance in the success definition was reduced to ±1 cm, the MORF regression model achieved a success rate of 75.3%.

4. Conclusions

In this study, we demonstrated that our multi-emitter IR sensor system can recover an object’s 2-D position, cross-section class, and height from six IR reflections using a three-stage machine learning model.
The multi-transmitter approach provides angular diversity with six illumination directions. This reduces the uncertainty related to reflectivity, color, and surface orientation. Since only a single receiver is retained, circuit complexity and cost remain low. The measurement relies on the relative pattern and differences between channels rather than the absolute intensity. As a result, the machine learning model predicts the position, cross-section class, and height more accurately.

Limitations

Experimental results show that a position MAE of approximately 1 cm can be achieved in the near field. The fundamental limiting factor is the photodiode viewing angle, and the reliable operating region is limited to a diameter of approximately 10 cm at the working position. In other mounting configurations, the effective operating region widens as the distance between the sensor and the target object increases.
In this study, all objects were wrapped with the same surface material to eliminate differences in reflectivity and color. Therefore, the effect of the object’s color or surface reflectivity on the measurement results was not directly evaluated. Surfaces with darker or less reflective coatings are expected to reduce the signal level, while lighter or reflective surfaces may increase it. Although the multi-emitter configuration relies on relative inter-channel patterns, which may compensate for part of this variation, performance degradation is possible for objects with very low reflectivity. Future studies should systematically investigate the influence of different colors and surface materials on prediction accuracy.
To observe the individual effect of target color on the sensor, three differently colored objects were used as targets: white crepe paper, gray cellulose paper, and black printing paper. Relative to the white target, measurements revealed a decrease in signal strength of up to 52% on the gray paper and 75% on the black paper. This reduction degrades distance detection accordingly and lowers the signal-to-noise ratio (SNR). Only white objects were used in the current dataset; the effect of differently colored objects on sensor performance will be the subject of future studies. Results of this experiment can be seen in Figure 9.
The sensor can perform self-calibration by reading the ambient signal level without triggering any of its emitters. However, strong sunlight may saturate the photodiode and cause signal distortion. Rapidly varying infrared sources in the environment may also increase overall noise. Since all experiments and the dataset collection were conducted indoors under controlled lighting, further research is needed to evaluate the system’s robustness in outdoor conditions and under direct sunlight.
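The self-calibration idea above amounts to subtracting a no-emitter ambient reading from each triggered reading. A minimal sketch, assuming the 12-bit ADC range stated earlier (the function name is illustrative):

```python
def ambient_corrected(reading, ambient, adc_max=4095):
    """Subtract the ambient level (sampled with all emitters off) from a
    triggered reading, clamped to the valid 12-bit ADC range."""
    return max(0, min(adc_max, reading - ambient))
```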

5. Discussion

Overall, the ultra-low-cost integrated hardware and staged learning approach meet the reliability goals in the near field: a 60% radial XY success rate (<1.6 cm), ≥68% per-axis success, a sub-centimeter positioning MAE, a ~1.5 cm height MAE, and >80% cross-section classification accuracy from a compact, low-cost sensor. The model’s performance is primarily bounded by the sensor’s physical field-of-view constraints. Owing to the IR receiver diode’s narrow field of view, reliable positioning can be achieved only within an operational region of at most 10 cm in diameter. This boundary is expected to widen by increasing the photodiode’s viewing angle or the distance between the photodiode and the target object.
Overall, the results of this study are promising for deeper analyses and a wide range of area applications of this sensor system. In addition to contributing a dataset to the literature in this field, this study provides data on how a single-photodiode circular active IR reflective intensity array sensor system performs in a narrow-area, short-distance object detection scenario.

Funding

This research was partially funded by Deka Electronic Research and Development Center.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset generated and analyzed during the current study is publicly available in the Kaggle repository, “Multi-Emitter IR Sensor System Dataset” by Eren Bülbül (2025), available at https://www.kaggle.com/datasets/erenbulbulx/multi-emitter-ir-sensor-data (accessed on 21 December 2025). This repository contains all raw and processed IR sensor readings, coordinate labels, cross-section class annotations, and height measurements used to train and evaluate our three-stage machine learning pipeline.

Acknowledgments

The authors would like to thank Mesut Can Özbıçakcı for their valuable assistance during the development and testing phases of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kejík, P.; Kluser, C.; Bischofberger, R.; Popovic, R.S. A Low-Cost Inductive Proximity Sensor for Industrial Applications. Sens. Actuators A Phys. 2004, 110, 93–97. [Google Scholar] [CrossRef]
  2. Thakare, A.; Bhagat, G.; Indulkar, C.; Kale, P.; Powalkar, A. A Survey of Proximity and Range Sensing Technologies for Reliable Distance Estimation. In Congress on Control, Robotics, and Mechatronics; Springer: Singapore, 2024; pp. 23–38. [Google Scholar] [CrossRef]
  3. Everett, H.R.; Flynn, A. A Programmable Near-Infrared Proximity Detector for Robot Navigation. In Mobile Robots I; SPIE: Bellingham, WA, USA, 1987. [Google Scholar] [CrossRef]
  4. Um, D.; Ryu, D.; Kal, M. Multiple Intensity Differentiation for 3-D Surface Reconstruction with Mono-Vision Infrared Proximity Array Sensor. IEEE Sens. J. 2011, 11, 3352–3358. [Google Scholar] [CrossRef]
  5. Yang, M. Feedback Controlled Infrared Proximity Sensing System. Measurement 2015, 69, 81–86. [Google Scholar] [CrossRef]
  6. Frey, L.; Marty, M.; Andre, S.; Moussy, N. Enhancing Near-Infrared Photodetection Efficiency in SPAD with Silicon Surface Nanostructuration. IEEE J. Electron Devices Soc. 2018, 6, 392–395. [Google Scholar] [CrossRef]
  7. Chen, E.-C.; Shih, C.-Y.; Dai, M.-Z. Polymer Infrared Proximity Sensor Array. IEEE Trans. Electron Devices 2011, 58, 1215–1220. [Google Scholar] [CrossRef]
  8. Kuttner, A.; Hauser, M.; Zimmermann, H.; Hofbauer, M. Highly Sensitive Indirect Time-of-Flight Distance Sensor with Integrated Single-Photon Avalanche Diode in 0.35 µm CMOS. IEEE Photonics J. 2022, 14, 6835806. [Google Scholar] [CrossRef]
  9. Niclass, C.; Soga, M.; Matsubara, H.; Ogawa, M.; Kagami, M. A 0.18-µm CMOS SoC for a 100-m-Range 10-Frame/s 200 × 96-Pixel Time-of-Flight Depth Sensor. IEEE J. Solid-State Circuits 2014, 49, 315–330. [Google Scholar] [CrossRef]
  10. Rahman, A. Precision and Accuracy of Ultrasonic and Infrared Laser ToF IoT Sensors. J. Inform. Telecommun. Eng. 2025, 8, 219–226. [Google Scholar]
  11. Ismail, O. 3D Head Tracking and Gesture Recognition Using an 8-By-8 Array of Infrared Sensors. Doctoral Dissertation, Technische Universität Wien, Vienna, Austria, 2024. [Google Scholar] [CrossRef]
  12. Liu, Z.; Xu, X.; Mamishev, M.; Mamishev, A.V. Sensor Fusion for Non-Intrusive Adaptive Distancing of Visually Impaired Users. In Proceedings of the 2024 IEEE 20th International Conference on Body Sensor Networks (BSN), Chicago, IL, USA, 15–17 October 2024; pp. 1–4. [Google Scholar] [CrossRef]
  13. Corsi, C. History Highlights and Future Trends of Infrared Sensors. J. Mod. Opt. 2010, 57, 1663–1686. [Google Scholar] [CrossRef]
  14. Rogalski, A. Infrared Detectors: An Overview. Infrared Phys. Technol. 2002, 43, 187–210. [Google Scholar] [CrossRef]
  15. Kipp, S.; Mistele, B.; Urs, S. The Performance of Active Spectral Reflectance Sensors as Influenced by Measuring Distance, Device Temperature and Light Intensity. Comput. Electron. Agric. 2014, 100, 24–33. [Google Scholar] [CrossRef]
  16. Benet, G.; Blanes, F.; Simó, J.E.; Pérez, P. Using Infrared Sensors for Distance Measurement in Mobile Robots. Robot. Auton. Syst. 2002, 40, 255–266. [Google Scholar] [CrossRef]
  17. Pavlov, V.; Ruser, H.; Horn, M. Model-Based Object Characterization with Active Infrared Sensor Array. In Proceedings of the 2007 IEEE Sensors, Atlanta, GA, USA, 28–31 October 2007; pp. 360–363. [Google Scholar] [CrossRef]
  18. Korba, L.; Elgazzar, S.; Welch, T. Active Infrared Sensors for Mobile Robots. IEEE Trans. Instrum. Meas. 1994, 43, 283–287. [Google Scholar] [CrossRef]
  19. Odysseus, A.A.; Anmol, S.M.; Kumar, S.; Sahin, F. A Time-of-Flight On-Robot Proximity Sensing System to Achieve Human Detection for Collaborative Robots. In Proceedings of the 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), Vancouver, BC, Canada, 22–26 August 2019. [Google Scholar] [CrossRef]
  20. Fasolino, A.; Vitolo, P.; Liguori, R.; Benedetto, L.D.; Rubino, A.; Gian, D.L.; Pau, D. Object Classification Using Ultra Low Resolution Time-of-Flight Sensor and Tiny Convolutional Neural Network. In Proceedings of the 2024 IEEE Sensors Applications Symposium (SAS), Naples, Italy, 23–25 July 2024; pp. 1–6. [Google Scholar] [CrossRef]
  21. Himmelsbach, U.B.; Wendt, T.M.; Nikolai, H.; Gawron, P. Single Pixel Time-of-Flight Sensors for Object Detection and Self-Detection in Three-Sectional Single-Arm Robot Manipulators. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019. [Google Scholar] [CrossRef]
  22. Novotny, P.M.; Ferrier, N.J. Using Infrared Sensors and the Phong Illumination Model to Measure Distances. In Proceedings of the Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), Detroit, MI, USA, 10–15 May 1999. [Google Scholar] [CrossRef]
  23. Ruser, H. Object Recognition with a Smart Low-Cost Active Infrared Sensor Array. In Proceedings of the 1st International Conference on Sensing Technology, Palmerston North, New Zealand, 21–23 November 2005. [Google Scholar]
  24. Tar, Á.; Cserey, G. Object Outline and Surface-Trace Detection Using Infrared Proximity Array. IEEE Sens. J. 2011, 11, 2486–2493. [Google Scholar] [CrossRef]
  25. Sabatini, A.M.; Genovese, V.; Guglielmelli, E.; Mantuano, A.; Ratti, G.; Dario, P. A Low-Cost, Composite Sensor Array Combining Ultrasonic and Infrared Proximity Sensors. Available online: https://ieeexplore.ieee.org/abstract/document/525872 (accessed on 21 December 2025).
  26. Zhao, Z.-Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object Detection with Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
  27. Klank, U.; Carton, D.; Beetz, M. Transparent Object Detection and Reconstruction on a Mobile Platform. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011. [Google Scholar] [CrossRef]
  28. Haeske, N. Viability of Low-Cost Infrared Sensors for Short Range Tracking. arXiv 2024. [Google Scholar] [CrossRef]
  29. Jain, P.; Joshi, A.M.; Mohanty, S.P. IGLU: An Intelligent Device for Accurate Noninvasive Blood Glucose-Level Monitoring in Smart Healthcare. IEEE Consum. Electron. Mag. 2020, 9, 35–42. [Google Scholar] [CrossRef]
  30. Kesuma, H.; Ahmed, A.; Paul, S.; Sebald, J. Bit-Error-Rate Measurement of Infrared Physical Channel Using Reflection via Multi Layer Insulation inside in ARIANE 5 Vehicle Equipment Bay for Wireless Sensor Network Communication. In Proceedings of the 2015 IEEE International Conference on Wireless for Space and Extreme Environments (WiSEE), Orlando, FL, USA, 14–16 December 2015. [Google Scholar] [CrossRef]
  31. Huang, J.; Li, Z. Infrared-Based Short-Distance FSO Sensor Network System. Int. J. Online Biomed. Eng. 2018, 14, 43–56. [Google Scholar] [CrossRef]
  32. Syifaul, F.; Trio, A.; Putra, A.P.; Aska, Y. Noise Analysis in VLC Optical Link Based Discrette OP-AMP Trans-Impedance Amplifier (TIA). TELKOMNIKA (Telecommun. Comput. Electron. Control.) 2017, 15, 1012. [Google Scholar] [CrossRef]
  33. Little, T.D.C.; Dib, P.; Shah, K.; Barraford, N.; Gallagher, B. Using LED Lighting for Ubiquitous Indoor Wireless Networking. In Proceedings of the 2008 IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, Avignon, France, 12–14 October 2008. [Google Scholar] [CrossRef][Green Version]
  34. PANJIT International Inc. PJA3441—N-Channel Enhancement Mode Power MOSFET Datasheet; PANJIT International Inc.: Hsinchu, Taiwan, 2023. Available online: https://www.panjit.com.tw/upload/datasheet/PJA3441.pdf (accessed on 21 December 2025).
  35. Wright, P.; Ozanyan, K.B.; Carey, S.J.; McCann, H. Design of High-Performance Photodiode Receivers for Optical Tomography. IEEE Sens. J. 2005, 5, 281–288. [Google Scholar] [CrossRef]
  36. Kamrani, E.; Lesage, F.; Sawan, M. Low-Noise, High-Gain Transimpedance Amplifier Integrated with SiAPD for Low-Intensity Near-Infrared Light Detection. IEEE Sens. J. 2014, 14, 258–269. [Google Scholar] [CrossRef]
  37. Maier, A.; Sharp, A.; Vagapov, Y. Comparative Analysis and Practical Implementation of the ESP32 Microcontroller Module for the Internet of Things. Available online: https://ieeexplore.ieee.org/abstract/document/8101926/ (accessed on 21 December 2025).
Figure 1. Placement of emitter LEDs and receiver photodiode: (a) view of the printed circuit board (PCB) layout; (b) bottom view diagram; and (c) horizontal view diagram.
Figure 2. IR photodiode receiver circuit with TIA (left), non-inverting gain stage, and RC low-pass filter for noise reduction.
Figure 3. Sample target objects used for dataset generation, from left to right: heights of 5 cm, 7 cm, and 9 cm and cross-sectional areas of 20 cm2, 20 cm2, and 40 cm2, respectively.
Figure 4. The map shows the IR reflection intensity values of an object positioned at varying coordinates when only a single emitter LED is active. Rx denotes the central receiver photodiode, while the point labeled T1 represents the IR emitter LED. Object height = 9 cm; cross-section area = 20 cm2. A Gaussian blur filter was applied for visualization (σ = 2.5). The color scale represents the normalized IR reflection intensity, where brighter (yellow) colors indicate higher intensity and darker (purple) colors indicate lower intensity.
Figure 5. On the left are six coordinate-wise IR reflection intensity maps, plotted separately for each emitter LED. On the right is the simulated composite map obtained by merging the data from all six emitter LEDs. Rx denotes the central receiver photodiode, while the points labeled T1 through T6 represent the IR emitter LEDs. A Gaussian blur filter was applied for visualization (σ = 2.5). The color scale represents the normalized IR reflection intensity, where brighter (yellow) colors indicate higher intensity and darker (purple) colors indicate lower intensity.
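The captions of Figures 4 and 5 state that the per-emitter intensity maps were smoothed with a Gaussian blur (σ = 2.5) purely for visualization. A minimal sketch of that step, assuming a NumPy grid of normalized intensities and SciPy's `gaussian_filter` (the paper does not specify the tooling, and `intensity_map` here is synthetic stand-in data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical 2D grid of normalized reflection intensities in [0, 1];
# in the paper each cell corresponds to one object coordinate on the test plane.
rng = np.random.default_rng(0)
intensity_map = rng.random((25, 25))

# Smooth the raw map for display only, matching the sigma = 2.5 stated in the
# figure captions; the underlying measurement data are left untouched.
smoothed = gaussian_filter(intensity_map, sigma=2.5)

print(smoothed.shape)  # same grid shape as the input
```

The blur only suppresses grid-level noise in the rendered heatmap; any quantitative analysis would use the raw `intensity_map`.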
Figure 6. Coordinate regression results. (a) Coordinate-wise XY success heatmap; each cell shows the proportion of samples with per-axis error ≤ 1.2 cm. (b) True vs. predicted X and (c) Y; the red dashed line denotes the perfect model reference.
Figure 7. Cross-section classifier results. (a) Coordinate-wise classification success heatmap; each cell shows the accuracy at specific coordinates. (b) Confusion matrix.
Figure 8. Height regression results. (a) Coordinate-wise regression success (error ≤ 1 cm) heatmap; each cell shows the success ratio in specific coordinates. (b) True vs. predicted. The red dashed line represents the ideal prediction line (y = x), indicating perfect agreement between the true and predicted heights.
Figure 9. Mean sensor responses for three paper-covered targets of different colors (white crepe paper, gray cellulose paper, and black print paper) as a function of horizontal target distance up to 5 cm. The sensor response decreases with increasing distance and with decreasing target reflectivity.
Table 1. Coordinate regression candidate model performance results.

Regression Model | R² (X) | R² (Y) | MAE (cm) | Success (%) 1
Ridge | 0.083 | 0.074 | 1.98 | 12.5
Multi-Output Random Forest | 0.633 | 0.615 | 0.94 | 60
Multi-Output Gradient Boosting | 0.578 | 0.566 | 1.13 | 46
Multi-Output XGBoost Regressor | 0.624 | 0.609 | 1.01 | 55

1 Radial success percentage.
Table 2. Cross-section classification candidate model performance results.

Classifier Model | Accuracy (Mean ± Std)
Logistic Regression | 0.641 ± 0.006
Random Forest | 0.912 ± 0.011
Gradient Boosting | 0.875 ± 0.014
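The mean ± std accuracies in Table 2 suggest a cross-validated comparison of cross-section classifiers. A hedged sketch of that evaluation, using synthetic six-channel features and three cross-section classes in place of the paper's measured data (the fold count and exact models are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: six-channel intensity features, three cross-section classes.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)

scores = {}
for name, clf in [
    ("Logistic Regression", LogisticRegression(max_iter=1000)),
    ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0)),
]:
    # 5-fold cross-validation; report mean and spread as in Table 2.
    cv = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    scores[name] = (cv.mean(), cv.std())
    print(f"{name}: {cv.mean():.3f} ± {cv.std():.3f}")
```

Reporting the fold spread alongside the mean, as the table does, makes it visible when a model's advantage is within cross-validation noise.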
Table 3. Height regression candidate model performance results.

Regressor Model | R² | MAE (cm) | RMSE (cm) | Success (%) 1
Ridge | 0.098 | 2.58 | 3.00 | 40
Multi-Output Random Forest | 0.880 | 0.78 | 1.09 | 94
Multi-Output Gradient Boosting | 0.762 | 1.16 | 1.54 | 85
RF + GBR (Avg. Ensemble) | 0.840 | 0.94 | 1.26 | 90

1 Percentage of predictions with error ≤ 2 cm.
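The "RF + GBR (Avg. Ensemble)" row in Table 3 implies averaging the predictions of a random forest and a gradient boosting regressor. A minimal sketch under that assumption, with synthetic features and a toy height target standing in for the measured data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in: six-channel features, scalar height target (roughly 5-9 cm).
rng = np.random.default_rng(7)
X = rng.random((1000, 6))
h = 5 + 4 * X.mean(axis=1) + rng.normal(0, 0.2, 1000)  # toy height mapping

X_tr, X_te, h_tr, h_te = train_test_split(X, h, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, h_tr)
gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, h_tr)

# Equal-weight averaged ensemble, as suggested by the table's "Avg. Ensemble" row.
pred = (rf.predict(X_te) + gbr.predict(X_te)) / 2

mae = float(np.abs(pred - h_te).mean())
success = float((np.abs(pred - h_te) <= 2.0).mean())  # Table 3 success criterion
print(f"MAE = {mae:.2f} cm, success (<=2 cm) = {success:.0%}")
```

Averaging two decorrelated regressors often lands between them on MAE while smoothing each model's worst errors, consistent with the ensemble row sitting between the RF and GBR rows in Table 3.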

Share and Cite

Bülbül, E. Multi-Emitter Infrared Sensor System for Reliable Near-Field Object Positioning. Eng. Proc. 2025, 118, 98. https://doi.org/10.3390/ECSA-12-26549
