Proceeding Paper

IoT-Based Sensor Technologies for Object Detection in Low-Visibility Environments: Development and Validation of a Functional Prototype †

by Pedro Escudero-Villa 1,* and Cristian Escudero 2

1 Faculty of Engineering, Universidad Nacional de Chimborazo, Riobamba 060108, Ecuador
2 Center for Educational Technologies, Universidad Nacional de Chimborazo, Riobamba 060108, Ecuador
* Author to whom correspondence should be addressed.
Presented at the 6th International Electronic Conference on Applied Sciences, 9–11 December 2025; Available online: https://sciforum.net/event/ASEC2025.
Eng. Proc. 2026, 124(1), 28; https://doi.org/10.3390/engproc2026124028
Published: 12 February 2026
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)

Abstract

In emergency scenarios where visibility is compromised, rapid and accurate object detection becomes critical. This study presents the development and implementation of an IoT-based sensorized robotic system designed to detect objects in low-visibility environments, with a focus on supporting search and rescue missions through autonomous sensing and real-time data communication. The system aims to enhance search and rescue operations by identifying potential human presence in areas with limited access due to smoke, darkness, or hazardous conditions. The platform integrates distance sensors, a thermal camera (AMG8833), a PIR motion sensor, and wireless communication through the Arduino MKR1000 and ESP32-CAM boards. The mobile robot is equipped with obstacle avoidance, person detection, and IoT communication modules, allowing data to be sent to the cloud via ThingSpeak and enabling remote commands through TalkBack. A structured methodology was followed, including technology selection, hardware/software design, and testing under various lighting and opacity conditions. Experimental results showed the effectiveness of the system in identifying obstacles and detecting heat signatures representing a human body, with optimal performance observed at a 15 cm detection threshold. The system demonstrated robust operation in simulated rescue environments, providing real-time data transmission and remote-control capabilities.

1. Introduction

Object detection in low-visibility environments remains a major challenge in applications where safety, reliability, and rapid response are critical. Scenarios such as smoke-filled spaces, poorly illuminated areas, fog, dust, or adverse weather conditions significantly degrade human perception and limit the effectiveness of conventional vision-based systems. These conditions are frequently encountered in real-world contexts including search and rescue operations, surveillance of critical infrastructure, industrial safety, and emergency response. Consequently, there is a growing need for technological solutions capable of reliably detecting objects and human presence when visual information is severely constrained.
The emergence of the Internet of Things (IoT) has enabled the development of intelligent sensor-based systems that combine connectivity, real-time data acquisition, and distributed processing [1,2,3]. IoT-based architectures allow heterogeneous sensors to be integrated into unified platforms, facilitating remote monitoring, rapid information exchange, and coordinated responses to dynamic events. In low-visibility environments, these capabilities are particularly valuable, as they enable continuous situational awareness and timely decision-making even under degraded sensing conditions [1,4,5]. As a result, IoT-supported sensor technologies have become a promising approach for enhancing object detection performance in challenging environments [6,7].
Recent research in this field has focused on the use of complementary sensing modalities to overcome the limitations of individual sensors [8,9]. Thermal imaging sensors have demonstrated effectiveness in detecting human presence based on heat signatures, especially in darkness or smoke-filled environments [10,11,12]. Ultrasonic sensors are widely employed for obstacle detection due to their independence from lighting conditions, while radar-based systems provide robust localization capabilities in fog, rain, or dust [13]. These sensing technologies are often deployed within robotic platforms—including ground robots, aerial vehicles, underwater systems, and humanoid robots—selected according to terrain characteristics and mission requirements [14,15,16]. Additionally, IoT frameworks and wireless sensor networks have been integrated to improve data fusion, communication, and coordination among detection and response systems [17,18,19].
Despite these advances, significant challenges remain. Each sensing technology presents inherent limitations related to detection range, environmental interference, and resolution. Optical sensors are strongly affected by lighting conditions, while the performance of radar and LiDAR systems depends on target distance, particle density, and transmission power [20,21,22]. Furthermore, many existing studies focus on individual sensors or isolated components rather than on integrated systems, and the majority of reported works rely on single sensing modalities or simulation-based evaluations conducted under idealized conditions. Only a limited number of studies provide system-level experimental validation of integrated IoT-based robotic platforms operating in controlled low-visibility environments with quantitative performance metrics [23]. In particular, comparative data on obstacle-avoidance thresholds, route-completion times, collision rates, and human-detection indicators across different visibility and opacity levels remain scarce [23]. This lack of experimentally validated benchmarks hinders the practical assessment, comparison, and replication of proposed solutions, emphasizing the need for functional prototypes evaluated under reproducible conditions [9,23].
In this context, the present work addresses these gaps by developing and validating an IoT-based functional prototype for object detection in low-visibility environments. The proposed system integrates multiple sensors within an IoT architecture to enable reliable detection, real-time data transmission, and remote monitoring. The research objectives include investigating suitable sensor technologies for low-visibility scenarios, designing and implementing a sensorized IoT-based system, characterizing its performance through functional testing, and evaluating its effectiveness under varying lighting conditions. Unlike many existing approaches that rely on single sensing modalities or simulation-based validation, this work introduces a lightweight multi-sensor strategy combining thermal triggering, motion verification, and ultrasonic navigation. The proposed approach is experimentally validated using quantitative metrics, demonstrating improved robustness and reliable performance under degraded visibility conditions.

2. Methodology

2.1. System Design Rationale and Sensor Selection

The design of the proposed system was guided by the requirement to ensure reliable operation under low-visibility conditions while maintaining simplicity, robustness, and low computational cost (Figure 1 shows the system architecture). For this reason, the sensing strategy prioritizes modalities that are minimally affected by illumination changes and environmental opacity. Ultrasonic sensing was selected for obstacle detection and avoidance due to its proven effectiveness in short-range navigation tasks and its independence from lighting conditions. The HC-SR04 sensor provides reliable distance measurements within a limited range, making it suitable for confined and cluttered environments typically encountered in rescue scenarios.
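For illustration, a minimal Arduino-style sketch of the HC-SR04 ranging principle is shown below; the pin assignments and timing constants are assumptions for the example, not the prototype's documented wiring.

```cpp
// Minimal HC-SR04 distance read (Arduino C++).
// TRIG/ECHO pins are illustrative assumptions.
const int TRIG_PIN = 6;
const int ECHO_PIN = 7;

// Returns distance in cm: the echo duration (us) times the speed of
// sound (~0.0343 cm/us), halved for the round trip to the obstacle.
float readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);   // 10 us trigger pulse
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  unsigned long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // 30 ms timeout
  return (duration * 0.0343f) / 2.0f;
}

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  Serial.println(readDistanceCm());
  delay(100);
}
```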
Thermal imaging was incorporated as the primary sensing modality for human detection. Unlike RGB cameras, thermal sensors rely on temperature contrast rather than visible light, enabling the detection of objects with thermal characteristics similar to the human body in darkness, smoke, or dust. The selected AMG8833 thermal sensor offers sufficient spatial resolution and temperature sensitivity to distinguish human presence from background elements under controlled conditions. To reduce false positives caused by static heat sources, a passive infrared HC-SR501 motion sensor was integrated as a complementary modality, allowing the system to confirm dynamic targets before triggering a rescue alert.
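A minimal sketch of this complementary sensing pair is given below, using the Adafruit AMG88xx library to read the 8 × 8 thermal grid and a digital input for the HC-SR501 output; the pin assignment and sampling interval are illustrative assumptions rather than the prototype's configuration.

```cpp
// Read the AMG8833 8x8 thermal grid and gate it with the PIR output.
#include <Adafruit_AMG88xx.h>

Adafruit_AMG88xx amg;
float pixels[AMG88xx_PIXEL_ARRAY_SIZE];  // 64 temperatures in degrees C
const int PIR_PIN = 2;                   // assumed pin for HC-SR501

void setup() {
  Serial.begin(9600);
  pinMode(PIR_PIN, INPUT);
  amg.begin();  // I2C, default address 0x69
}

void loop() {
  amg.readPixels(pixels);
  // Track the hottest pixel as a simple human-presence indicator.
  float maxTemp = pixels[0];
  for (int i = 1; i < AMG88xx_PIXEL_ARRAY_SIZE; i++) {
    if (pixels[i] > maxTemp) maxTemp = pixels[i];
  }
  bool motion = (digitalRead(PIR_PIN) == HIGH);
  Serial.print(maxTemp);
  Serial.println(motion ? " C, motion" : " C, still");
  delay(100);
}
```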

2.2. Sensor Integration and Decision Logic

The system integrates the ultrasonic, thermal, and motion sensors through a rule-based decision logic designed to ensure reliable detection while minimizing computational complexity. Obstacle avoidance is continuously managed using ultrasonic distance measurements, with a threshold distance of 15 cm selected based on preliminary testing. This value represents a trade-off between collision prevention and navigation efficiency, as shorter thresholds increased collision probability due to signal reflection effects, while longer thresholds limited maneuverability within confined spaces.
Human detection is activated when the thermal sensor registers a significant increase in temperature relative to the background, followed by confirmation from the motion sensor. This sequential validation reduces false detections and improves confidence in identifying a potential victim. Once detection criteria are met, an alarm event is generated and transmitted through the IoT platform.
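The following sketch outlines one possible form of this rule-based logic. The 15 cm avoidance threshold is the value selected above, and the 30 °C background bound follows the observations reported in Section 3.3; the temperature-rise margin and the motor/alarm helper functions are hypothetical placeholders, not the implemented firmware.

```cpp
// Rule-based decision step: ultrasonic avoidance takes priority;
// human detection requires a thermal trigger followed by PIR
// confirmation (sequential validation to reduce false positives).
const float OBSTACLE_CM    = 15.0f;  // avoidance threshold (Section 2.2)
const float BACKGROUND_C   = 30.0f;  // ambient upper bound (Section 3.3)
const float RISE_MARGIN_C  = 3.0f;   // assumed significance margin

// Hypothetical actuation/alarm helpers, stubbed for the sketch.
void avoidObstacle() { /* stop and turn; motor-driver details omitted */ }
void raiseAlarm()    { /* latch alarm event for the IoT update */ }
void driveForward()  { /* continue on the current heading */ }

void stepDecisionLogic(float distanceCm, float maxPixelC, bool pirMotion) {
  // Obstacle avoidance runs continuously and overrides detection.
  if (distanceCm > 0.0f && distanceCm < OBSTACLE_CM) {
    avoidObstacle();
    return;
  }
  // Thermal rise relative to background, then motion confirmation.
  if (maxPixelC > BACKGROUND_C + RISE_MARGIN_C && pirMotion) {
    raiseAlarm();
  } else {
    driveForward();
  }
}
```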

2.3. IoT Architecture and Data Transmission

IoT connectivity was incorporated to enable real-time monitoring, data logging, and post-experimental analysis. Sensor data, including temperature readings, obstacle counts, navigation time, and alarm status, are transmitted to a cloud-based platform using periodic updates sent by the Arduino MKR1000 and ESP32-CAM to the ThingSpeak platform. The communication architecture was designed to support monitoring without interfering with real-time navigation decisions, which are executed locally on the platform to avoid latency-related issues. This hybrid local–remote approach enables autonomous operation while ensuring the reproducibility of experimental results.
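A minimal sketch of such a periodic update from the MKR1000 is shown below, using the WiFi101 and official ThingSpeak Arduino libraries; the network credentials, channel ID, write key, field mapping, and sample values are placeholders rather than the project's actual configuration.

```cpp
// Periodic telemetry to ThingSpeak from the MKR1000.
#include <WiFi101.h>
#include <ThingSpeak.h>

const char WIFI_SSID[] = "network";    // placeholder
const char WIFI_PASS[] = "password";   // placeholder
const unsigned long CHANNEL_ID = 0;    // placeholder channel
const char WRITE_KEY[] = "APIKEY";     // placeholder write key

WiFiClient client;

void setup() {
  while (WiFi.begin(WIFI_SSID, WIFI_PASS) != WL_CONNECTED) {
    delay(5000);  // retry until the network accepts the connection
  }
  ThingSpeak.begin(client);
}

void loop() {
  // One multi-field update per interval; navigation decisions stay
  // local, so a slow upload never blocks obstacle avoidance.
  ThingSpeak.setField(1, 27.5f);  // max thermal reading, degrees C
  ThingSpeak.setField(2, 3);      // obstacles detected so far
  ThingSpeak.setField(3, 0);      // alarm status (0/1)
  ThingSpeak.writeFields(CHANNEL_ID, WRITE_KEY);
  delay(20000);  // respect ThingSpeak's free-tier rate limit
}
```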

3. Results and Discussion

This section presents the experimental results obtained from the evaluation of the proposed IoT-based sensorized platform under different visibility and opacity conditions. The analysis focuses on navigation behavior, obstacle avoidance, human detection performance, and system robustness, based on the metrics extracted from the experimental trials.

3.1. Navigation Performance Under Different Visibility Conditions

The trajectories followed by the mobile platform under light, shadow, and darkness conditions are shown in Figure 2. The results indicate that the overall navigation patterns are highly consistent across the three environments. The platform was able to traverse most of the test area without becoming trapped in loops or deadlock situations, demonstrating stable autonomous behavior.
Minor variations in the trajectories were observed, particularly during obstacle avoidance maneuvers. These deviations are attributed to changes in ultrasonic signal reflection angles rather than to visibility constraints, as the navigation strategy does not rely on optical information. Complete darkness did not prevent the platform from completing its route or reaching the target area, indicating that navigation performance is independent of ambient illumination under the tested conditions.

3.2. Obstacle Detection and Avoidance Analysis

Obstacle detection and avoidance results for the different experimental cases in low-opacity and deep-opacity environments are summarized in Figure 3. In all trials, the system successfully detected and avoided multiple obstacles, maintaining continuous motion throughout the test duration.
The selected obstacle-avoidance threshold of 15 cm proved to be effective, enabling the platform to prevent collisions while preserving route efficiency. In low-opacity conditions, no collisions were recorded across all cases. In deep-opacity conditions, only one collision occurred, which was caused by the irregular geometry of an obstacle rather than a failure in sensor detection. The platform recovered autonomously and continued its trajectory without further incidents.
Comparative analysis between opacity levels shows that the number of detected obstacles increased with longer route durations, while collision rates remained minimal, indicating reliable short-range sensing and decision logic.

3.3. Human Detection and Alarm Activation

Human detection performance was evaluated using thermal and motion sensing data, as illustrated in Figure 3 and Figure 4. Compared with vision-only or single-sensor approaches reported in prior studies, the proposed system maintains detection and navigation performance under zero-visibility conditions without requiring computationally intensive processing. During navigation, temperature readings remained below 30 °C when only environmental obstacles were present. Once the platform approached the target representing a human subject, both the maximum and minimum temperature readings increased noticeably, exceeding the background values.
This temperature change, combined with confirmation from the motion sensor, triggered the alarm system. In all cases evaluated, the alarm was activated immediately after detection, with no false positives observed during the trials. These results demonstrate that the combined use of thermal imaging and motion sensing provides reliable identification of objects with human-like thermal characteristics, even under zero-visibility conditions.

3.4. Comparative Evaluation Under Low and Deep Opacity

Figure 5 presents a comparison of route-completion time, number of obstacles detected, and collision events for low-opacity and deep-opacity environments. Although variations in completion time were observed between cases, the platform consistently completed its search task in both environments. In some deep-opacity trials, shorter detection times were achieved due to trajectory changes induced by sharper turning angles. This behavior demonstrates that the proposed sensing and navigation logic not only ensures robustness under visibility degradation but also enables efficient target acquisition, as reflected by reduced detection times and consistent alarm activation across all experimental cases. Overall, the results confirm that sensor performance and system operation were not significantly affected by severe visibility degradation. Ultrasonic, thermal, and motion sensors maintained stable functionality, and IoT-based data transmission remained reliable throughout all experiments.

3.5. Comparison with Existing Approaches

Compared to vision-based or single-sensor systems commonly reported in the literature, which often degrade significantly under smoke or darkness, the proposed platform maintains stable navigation and detection performance due to its complementary sensing strategy. Thermal-based detection enables reliable identification of human-like targets, while ultrasonic ranging ensures obstacle avoidance independent of illumination. Unlike simulation-oriented studies, this work provides system-level experimental validation, demonstrating zero false positives in human detection and minimal collision rates across different opacity levels. These results highlight the practical advantages of combining thermal triggering with motion verification in real-world rescue-oriented scenarios.

4. Conclusions

This work demonstrates the feasibility of an IoT-based mobile robotic platform for object detection and human identification in low-visibility environments. By integrating ultrasonic sensing, thermal imaging, motion detection, and cloud-based communication, the system achieved reliable obstacle avoidance, consistent detection of human-related thermal patterns, and real-time data transmission under different lighting and opacity conditions. The experimental validation of navigation stability, detection thresholds, and IoT connectivity provides quantitative evidence of system performance beyond simulation-based evaluation. While further improvements in sensor calibration, mechanical robustness, and communication latency are possible, the presented prototype contributes practical design and validation insights for IoT-enabled rescue robotics operating in restricted-visibility scenarios.

Author Contributions

Conceptualization, C.E.; methodology, C.E. and P.E.-V.; software, C.E.; validation, C.E.; formal analysis, C.E. and P.E.-V.; investigation, C.E. and P.E.-V.; resources, C.E.; data curation, C.E.; writing—original draft preparation, P.E.-V.; writing—review and editing, P.E.-V. and C.E.; visualization, P.E.-V.; supervision, P.E.-V.; project administration, P.E.-V.; funding acquisition, P.E.-V. and C.E. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by funding from Vicerrectorado de Investigación, Vinculación y Posgrado (VIVP-Project No. 358-DIR.INV-GI-UNACH-2025), Universidad Nacional de Chimborazo, Ecuador.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

The experimental evaluation involved the author acting as the target for human detection. No personal, medical, or sensitive data were collected, and no physical or psychological risks were involved. Therefore, informed consent was implicitly provided by the participant, and formal approval from a bioethics committee was not required according to institutional guidelines for minimal-risk research.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Ikram, S.; Bajwa, I.S.; Ikram, A.; de la Torre Díez, I.; Ríos, C.E.U.; Castilla, Á.K. Obstacle Detection and Warning System for Visually Impaired Using IoT Sensors. IEEE Access 2025, 13, 35309–35321.
2. Rehman, A.; Qureshi, M.A.; Ali, T.; Irfan, M.; Abdullah, S.; Yasin, S.; Draz, U.; Glowacz, A.; Nowakowski, G.; Alghamdi, A.; et al. Smart Fire Detection and Deterrent System for Human Savior by Using Internet of Things (IoT). Energies 2021, 14, 5500.
3. Varela-Aldás, J.; Escudero, P.; Casa, S. IoT-Based System for Web Monitoring of Thermal Processes. In Proceedings of the HCI International 2023 Posters; Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 549–553.
4. Guo, H.; Huang, Z.; Ho, Q.; Ang, M.; Rus, D. Autonomous Navigation in Dynamic Environments with Multi-Modal Perception Uncertainties. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 9255–9261.
5. Ayala-Chauvin, M.; Escudero, P.; Lara-Alvarez, P.; Domènech-Mestres, C. IoT Monitoring for Real-Time Control of Industrial Processes. In Proceedings of the Technologies and Innovation; Valencia-García, R., Bucaram-Leverone, M., Del Cioppo-Morstadt, J., Vera-Lucio, N., Jácome-Murillo, E., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 203–213.
6. Tahir, N.U.A.; Zhang, Z.; Asim, M.; Chen, J.; ELAffendi, M. Object Detection in Autonomous Vehicles under Adverse Weather: A Review of Traditional and Deep Learning Approaches. Algorithms 2024, 17, 103.
7. Essien, E.; Frimpong, S. Enhancing Autonomous Truck Navigation in Underground Mines: A Review of 3D Object Detection Systems, Challenges, and Future Trends. Drones 2025, 9, 433.
8. Masalskyi, V.; Dzedzickis, A.; Korobiichuk, I.; Bučinskas, V. Hybrid Mode Sensor Fusion for Accurate Robot Positioning. Sensors 2025, 25, 3008.
9. Rodríguez-Martínez, E.A.; Flores-Fuentes, W.; Achakir, F.; Sergiyenko, O.; Murrieta-Rico, F.N. Vision-Based Navigation and Perception for Autonomous Robots: Sensors, SLAM, Control Strategies, and Cross-Domain Applications—A Review. Eng 2025, 6, 153.
10. Cruz Ulloa, C.; Prieto Sánchez, G.; Barrientos, A.; Del Cerro, J. Autonomous Thermal Vision Robotic System for Victims Recognition in Search and Rescue Missions. Sensors 2021, 21, 7346.
11. Çakmak, F.; Uslu, E.; Amasyalı, M.F.; Yavuz, S. Thermal Based Exploration for Search and Rescue Robots. In Proceedings of the 2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), Gdynia, Poland, 3–5 July 2017; pp. 113–118.
12. Nguyen, T.X.B.; Rosser, K.; Chahl, J. A Review of Modern Thermal Imaging Sensor Technology and Applications for Autonomous Aerial Navigation. J. Imaging 2021, 7, 217.
13. Goll, S.; Maximova, J. Ultrasonic Rangefinder with Resolution in Hundredths of the Probing Signal’s Wavelength for the Mobile Rescue Robot. Acta IMEKO 2019, 8, 41–46.
14. Schroth, C.A.; Eckrich, C.; Kakouche, I.; Fabian, S.; von Stryk, O.; Zoubir, A.M.; Muma, M. Emergency Response Person Localization and Vital Sign Estimation Using a Semi-Autonomous Robot Mounted SFCW Radar. IEEE Trans. Biomed. Eng. 2024, 71, 1756–1769.
15. Drew, D.S. Multi-Agent Systems for Search and Rescue Applications. Curr. Robot. Rep. 2021, 2, 189–200.
16. Chen, J.; Li, S.; Liu, D.; Li, X. AiRobSim: Simulating a Multisensor Aerial Robot for Urban Search and Rescue Operation and Training. Sensors 2020, 20, 5223.
17. Alam, M.N.; Saiam, M.; Mamun, A.A.; Rahman, M.M.; Hany, U. A Prototype of Multi Functional Rescue Robot Using Wireless Communication. In Proceedings of the 2021 5th International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, Bangladesh, 18–20 November 2021; pp. 1–5.
18. Gulati, H.; Vaishya, S.; Veeramachaneni, S. Bluetooth and Wi-Fi Controlled Rescue Robots. In Proceedings of the 2011 Annual IEEE India Conference, Hyderabad, India, 16–18 December 2011; pp. 1–5.
19. Freeman, J.D.; Omanan, V.; Ramesh, M.V. Wireless Integrated Robots for Effective Search and Guidance of Rescue Teams. In Proceedings of the 2011 Eighth International Conference on Wireless and Optical Communications Networks, Paris, France, 24–26 May 2011; pp. 1–5.
20. Fritsche, P.; Zeise, B.; Hemme, P.; Wagner, B. Fusion of Radar, LiDAR and Thermal Information for Hazard Detection in Low Visibility Environments. In Proceedings of the 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Shanghai, China, 11–13 October 2017; pp. 96–101.
21. Ronen, A.; Agassi, E.; Yaron, O. Sensing with Polarized LIDAR in Degraded Visibility Conditions Due to Fog and Low Clouds. Sensors 2021, 21, 2510.
22. Zang, X.; Zhang, C.; Liu, Y.; Zhao, J. Environment Feature Recognition Algorithm for Rescue Robot Based on a 2D Laser Radar. In Proceedings of the 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), Angkor Wat, Cambodia, 6–10 June 2016; pp. 357–361.
23. Surmann, H.; Daun, K.; Schnaubelt, M.; von Stryk, O.; Patchou, M.; Böcker, S.; Wietfeld, C.; Quenzel, J.; Schleich, D.; Behnke, S.; et al. Lessons from Robot-Assisted Disaster Response Deployments by the German Rescue Robotics Center Task Force. J. Field Robot. 2024, 41, 782–797.
Figure 1. Hybrid local–remote approach and connectivity architecture linking the on-board sensors to the cloud.
Figure 2. Platform trajectories at distances of 20, 15 and 10 cm between the robot system and the obstacle, where the red square indicates the route starting point and the red dashed line represents the route finish line.
Figure 3. Schematic trajectory to locate the target under three visibility conditions (low, middle, and deep opacity), where the red square indicates the route starting point and the red dashed line represents the route finish line.
Figure 4. Different target configurations used to evaluate the robotic system under low-opacity (top) and deep-opacity (bottom) conditions: (a) Case 1, (b) Case 2, and (c) Case 3.
Figure 5. Comparison of (a) time required to reach the target, (b) obstacles detected, and (c) collisions.