Abstract
This paper presents a markerless augmented reality (AR) navigation system that guides users across a university campus without internet or wireless connectivity, integrating machine learning and deep learning techniques. The system uses computer vision to detect existing campus signage ("Meeting Point" and "Directory" signs) and classifies it in two stages: a binary classifier (BC) followed by convolutional neural networks (CNNs). The BC distinguishes between the two sign types from RGB values using algorithms such as the Perceptron, Bayesian classification, and k-Nearest Neighbors (KNN), while a CNN identifies the specific sign ID and links it to a campus location. Navigation routes are generated with the Floyd–Warshall algorithm, which computes the shortest paths between nodes on a digital campus map. Directional arrows are then overlaid in AR on the user's device via ARCore and updated every 200 milliseconds from sensor data and direction vectors. The prototype, developed in Android Studio, achieved over 99.5% accuracy with the CNNs and 100% accuracy with the BC, even when signs were worn or partially occluded. In a usability study with 27 participants, 85.2% successfully reached their destinations, and more than half rated the system as easy or very easy to use. Participants also expressed strong interest in extending the application to other environments, such as shopping malls or airports. Overall, the solution is lightweight, scalable, and sustainable, requiring no infrastructure beyond existing campus signage.
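As a minimal illustration of the routing step summarized above, the sketch below runs the Floyd–Warshall algorithm on a small hypothetical campus graph. The node names, edge weights, and function are illustrative assumptions, not the authors' implementation; the paper's actual map, node IDs, and code are not reproduced here.

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest path distances for an undirected weighted graph.

    n: number of nodes; edges: iterable of (u, v, weight) tuples.
    Returns an n x n matrix of shortest distances.
    """
    # Initialize: 0 on the diagonal, edge weights where known, INF elsewhere.
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)  # undirected campus paths

    # Relax all pairs through every intermediate node k.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Hypothetical 4-node map: 0=Entrance, 1=Library, 2=Cafeteria, 3=Lab
edges = [(0, 1, 5), (1, 2, 2), (0, 3, 12), (2, 3, 3)]
dist = floyd_warshall(4, edges)
# Entrance -> Lab: the route via Library and Cafeteria (5+2+3=10)
# beats the direct 12-unit path.
```

In the deployed system, each recognized sign ID would map to one such graph node, and the precomputed distance matrix lets the app choose the next arrow direction without any network access.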