Article

GICEDCam: A Geospatial Internet of Things Framework for Complex Event Detection in Camera Streams

1 Department of Geomatics Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
2 Department of Electrical and Software Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
* Authors to whom correspondence should be addressed.
Sensors 2025, 25(17), 5331; https://doi.org/10.3390/s25175331
Submission received: 6 July 2025 / Revised: 24 August 2025 / Accepted: 25 August 2025 / Published: 27 August 2025
(This article belongs to the Special Issue Intelligent Multi-Sensor Fusion for IoT Applications)

Abstract

Complex event detection (CED) adds value to camera stream data in applications such as workplace safety, task monitoring, security, and health. Recent CED frameworks address the scarcity of spatiotemporal labels and the cost of training by decomposing CED into low-level feature extraction followed by spatial and temporal relationship extraction. However, these frameworks suffer from high resource costs, low scalability, and elevated rates of false positives and false negatives. This paper proposes GICEDCam, which distributes CED across edge, stateless, and stateful layers to improve scalability and reduce computational cost. In addition, we introduce a Spatial Event Corrector component that leverages geospatial data analysis to minimize false negatives and false positives in spatial event detection. We evaluate GICEDCam on 16 camera streams covering four complex events. Relative to a strong open-source baseline configured for our setting, GICEDCam reduces end-to-end latency by 36% and total computational cost by 45%, with the advantage widening as the number of objects per frame increases. Among the corrector variants, the Bayesian Network (BN) yields the lowest latency, the Long Short-Term Memory (LSTM) network achieves the highest accuracy, and trajectory analysis offers the best accuracy–latency trade-off for this architecture.
Keywords: complex event detection; video processing; internet of things; cloud computing; spatial relationships detection; trajectory analysis; object tracking; object detection; computer vision
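The layered decomposition described in the abstract can be pictured with a minimal sketch: an edge layer emits low-level detections, a stateless layer evaluates per-frame spatial relationships, and a stateful layer aggregates them over time into complex events. The Python below is an illustrative assumption only; the class and function names (Detection, edge_layer, stateless_layer, stateful_layer, _near) and the simple "NEAR sustained over a window" rule are not the authors' implementation.

```python
# Illustrative sketch of a three-layer CED pipeline (assumed structure,
# not the GICEDCam codebase): edge -> stateless -> stateful.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str          # object class, e.g. "person"
    bbox: tuple         # (x, y, w, h) in pixel coordinates
    frame_id: int

def edge_layer(frame) -> List[Detection]:
    """Lightweight per-frame object detection on the edge device (stub)."""
    raise NotImplementedError("plug in an object detector of your choice")

def _near(b1, b2, thresh=50.0):
    """Toy spatial predicate: centroid distance below a pixel threshold."""
    c1 = (b1[0] + b1[2] / 2, b1[1] + b1[3] / 2)
    c2 = (b2[0] + b2[2] / 2, b2[1] + b2[3] / 2)
    return ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5 < thresh

def stateless_layer(dets: List[Detection]) -> List[str]:
    """Per-frame spatial events, e.g. 'person NEAR forklift' (no memory)."""
    events = []
    for a in dets:
        for b in dets:
            if a is not b and _near(a.bbox, b.bbox):
                events.append(f"{a.label} NEAR {b.label}")
    return events

def stateful_layer(event_history: List[List[str]], window: int = 30) -> List[str]:
    """Flag a complex event when a spatial event persists across a window."""
    recent = [set(e) for e in event_history[-window:]]
    persistent = set.intersection(*recent) if len(recent) == window else set()
    return [f"COMPLEX: {e} sustained over {window} frames" for e in sorted(persistent)]
```

In this sketch the stateless layer can be replicated freely because it holds no history, while only the stateful layer keeps a sliding window of events, which is one way to read the scalability argument made in the abstract.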

Share and Cite

MDPI and ACS Style

Honarparvar, S.; Honarparvar, Y.; Ashena, Z.; Liang, S.; Saeedi, S. GICEDCam: A Geospatial Internet of Things Framework for Complex Event Detection in Camera Streams. Sensors 2025, 25, 5331. https://doi.org/10.3390/s25175331

