Abstract
Engineering education relies on experiential learning, yet providing real-time, personalized feedback in laboratory settings remains difficult. This paper introduces the Adaptive Lab Mentor (ALM), a novel laboratory framework that combines Artificial Intelligence (AI), the Internet of Things (IoT), and sensor technology to create an intelligent, personalized laboratory environment. ALM is built on a new real-time multimodal sensor-fusion model: a sensor-instrumented laboratory captures live electrical measurements (voltage and current), which are fused with a symbolic component measurement (the target resistance) by a lightweight, dual-input one-dimensional Convolutional Neural Network (1D-CNN) running on an edge device. In this initial validation, visual context is represented as a symbolic target value, establishing a pathway for the future integration of full computer vision. The architecture enables monitoring of student progress, rapid error diagnosis, and adaptive, context-aware feedback. To evaluate the approach, a high-fidelity model of an Ohm's Law laboratory was developed, and LTspice was used to generate a large dataset of current and voltage time series covering various circuit states. The trained model achieved 93.3% test accuracy, demonstrating the feasibility of the proposed system. Unlike existing Intelligent Tutoring Systems, ALM grounds its feedback in physical sensing, real-time edge AI inference, and adaptive, safety-aware guidance throughout hands-on engineering experiments. The ALM framework thus serves as a blueprint for a new generation of smart laboratory assistants.
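To make the dual-input fusion architecture concrete, the sketch below shows one plausible way to combine a two-channel voltage/current time series with a scalar target-resistance input in a lightweight 1D-CNN classifier. It is a minimal illustration, not the authors' implementation: the window length, layer widths, and the four circuit-state classes are all assumptions introduced here for demonstration.

```python
# Illustrative sketch (not the paper's code) of a dual-input 1D-CNN that
# fuses a two-channel electrical time series (voltage, current) with a
# scalar symbolic input (target resistance) to classify the circuit state.
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN = 256   # assumed samples per measurement window
N_CLASSES = 4   # assumed states, e.g. correct / open / short / wrong resistor

# Branch 1: time-series input with two channels (voltage, current)
ts_in = layers.Input(shape=(SEQ_LEN, 2), name="voltage_current")
x = layers.Conv1D(16, kernel_size=5, activation="relu")(ts_in)
x = layers.MaxPooling1D(pool_size=2)(x)
x = layers.Conv1D(32, kernel_size=5, activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)

# Branch 2: symbolic scalar input (the target resistance value)
sym_in = layers.Input(shape=(1,), name="target_resistance")
s = layers.Dense(8, activation="relu")(sym_in)

# Fusion head: concatenate both modalities and predict the circuit state
fused = layers.Concatenate()([x, s])
fused = layers.Dense(32, activation="relu")(fused)
out = layers.Dense(N_CLASSES, activation="softmax", name="circuit_state")(fused)

model = Model(inputs=[ts_in, sym_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A model of this scale (tens of thousands of parameters) is small enough to run inference on a typical edge device, which is consistent with the edge-deployment claim in the abstract.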