Abstract
The Internet of Things (IoT) has transformed industries, healthcare, and smart environments, but it also introduces severe security threats due to resource-constrained devices, weak protocols, and heterogeneous infrastructures. Traditional Intrusion Detection Systems (IDS) fail to address critical challenges, including scalability across billions of devices, interoperability among diverse protocols, real-time responsiveness under strict latency budgets, data privacy in distributed edge networks, and high false-positive rates on imbalanced traffic. This study presents a systematic comparative evaluation of three representative AI models (CNN-BiLSTM, Random Forest, and XGBoost) for IoT intrusion detection on the NSL-KDD and UNSW-NB15 datasets. The analysis quantifies the detection performance and inference latency achievable with each approach, revealing a clear accuracy–latency trade-off that can guide practical model selection: CNN-BiLSTM offers the highest detection capability (F1 up to 0.986) at the cost of greater computational overhead, whereas XGBoost and Random Forest deliver competitive accuracy with significantly lower inference latency (sub-millisecond on conventional hardware). These empirical insights support informed deployment decisions in heterogeneous IoT environments where accuracy-critical gateways and latency-critical sensors coexist.