AI for Academic Integrity: GPU-Free Pose Estimation Framework for Automated Invigilation
Abstract
1. Introduction
- Novel Dual-Triangle Pose Estimation Approach: Introduced a unique method to estimate head pose using two geometric triangles (nose-eyes and nose-ears), allowing accurate detection of suspicious head movements during exams.
- GPU-Free Real-Time Operation: Developed an efficient system that runs on low-power hardware such as the Raspberry Pi 5 without requiring GPU acceleration, making it accessible and cost-effective for institutions.
- Dynamic Angle Thresholding: Designed a dynamic thresholding mechanism (centered around 60°) that adjusts per student and scene context, significantly reducing false positives and negatives in pose-based cheating detection.
- Integration with YOLOv8 for Accurate Student and Keypoint Detection: Leveraged YOLOv8 for high-precision detection and tracking of students and key facial landmarks, achieving over 96% accuracy at real-time rates (~6 FPS).
- Automated Logging and Cloud-Based Reporting: Incorporated a fully automated logging system that records cheating instances with time, duration, and screenshots in an Excel sheet saved directly to Google Drive.
- Validated in Real-World Exam Settings: Tested the model over six consecutive examination days and in controlled experiments, achieving 96.18% accuracy and 96.2% precision in detecting various forms of cheating.
- Benchmarking Against the State of the Art: Demonstrated superior accuracy and computational efficiency compared to existing cheating-detection systems, setting a new standard for low-resource exam monitoring.
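The paper's exact triangle geometry is not reproduced in this outline, so the following is only a minimal sketch of the dual-triangle idea, assuming COCO-style 2D facial keypoints (nose, eyes, ears); the function names, the keypoint dictionary layout, and the `margin` parameter are hypothetical illustrations, not the authors' implementation:

```python
import math

def triangle_angle(apex, p1, p2):
    """Interior angle (degrees) at `apex` of the triangle (apex, p1, p2)."""
    v1 = (p1[0] - apex[0], p1[1] - apex[1])
    v2 = (p2[0] - apex[0], p2[1] - apex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def head_pose_angles(kp):
    """Angles of the two triangles (nose-eyes and nose-ears).
    `kp` maps keypoint names to (x, y) pixel coordinates from a 2D pose model."""
    eye_angle = triangle_angle(kp["nose"], kp["left_eye"], kp["right_eye"])
    ear_angle = triangle_angle(kp["nose"], kp["left_ear"], kp["right_ear"])
    return eye_angle, ear_angle

def is_suspicious(angle, baseline=60.0, margin=10.0):
    """Flag a head turn when the angle drifts past a threshold centred near
    60 degrees; `baseline` and `margin` would be tuned per student and scene,
    which is the role of the paper's dynamic thresholding."""
    return abs(angle - baseline) > margin
```

A frontal face yields a stable nose-eyes angle; a lateral head turn compresses one side of each triangle and shifts the apex angle away from the per-student baseline, which is what the dynamic threshold detects.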
2. Literature Review
3. Methodology
4. Results and Discussion
- A. Comparison of Static and Dynamic Thresholds
- B. Real-Time Performance
- C. Data Logging and Communication
- D. Cheating Behavior Classification and Accuracy
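The logging pipeline records each incident with its time, duration, and a screenshot reference. The paper writes an Excel sheet synced to Google Drive; the sketch below substitutes a plain CSV file so it stays dependency-free, and all field names are illustrative assumptions:

```python
import csv
from datetime import datetime
from pathlib import Path

# Hypothetical schema for one incident row
LOG_FIELDS = ["student_id", "start_time", "duration_s", "behavior", "screenshot"]

def log_incident(log_path, student_id, start, end, behavior, screenshot):
    """Append one cheating incident to the log, writing the header on first use."""
    path = Path(log_path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "student_id": student_id,
            "start_time": start.isoformat(timespec="seconds"),
            "duration_s": round((end - start).total_seconds(), 1),
            "behavior": behavior,
            "screenshot": screenshot,
        })
```

In the actual system the equivalent of this append step would target an Excel workbook (e.g., via openpyxl) whose file lives in a Drive-synced folder, so no separate upload step is needed.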
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Guerrero-Dib, J.G.; Portales, L.; Heredia-Escorza, Y.; Prieto, I.M.; García, S.P.G.; León, N. Impact of academic integrity on workplace ethical behaviour. Int. J. Educ. Integr. 2020, 16, 2. [Google Scholar] [CrossRef]
- Mahmood, F.; Arshad, J.; Ben Othman, M.T.; Hayat, M.F.; Bhatti, N.; Jaffery, M.H.; Rehman, A.U.; Hamam, H. Implementation of an intelligent exam supervision system using deep learning algorithms. Sensors 2022, 22, 6389. [Google Scholar] [CrossRef] [PubMed]
- Fogelberg, K. Educational Principles and Practice in Veterinary Medicine; John Wiley & Sons: Hoboken, NJ, USA, 2024; p. 568. Available online: https://books.google.com/books/about/Educational_Principles_and_Practice_in_V.html?id=CnnsEAAAQBAJ (accessed on 14 June 2025).
- Zubair, M.; Waleed, A.; Rehman, A.; Ahmad, F.; Islam, M.; Javed, S. Machine Learning Insights into Retail Sales Prediction: A Comparative Analysis of Algorithms. In Proceedings of the 2024 Horizons of Information Technology and Engineering (HITE), Lahore, Pakistan, 15–16 October 2024; Available online: https://ieeexplore.ieee.org/abstract/document/10777132/ (accessed on 14 June 2025).
- Zubair, M.; Waleed, A.; Rehman, A.; Ahmad, F.; Islam, M.; Javed, S. Next-Generation Healthcare: Design and Implementation of a Smart Medicine Vending System. In Proceedings of the 2024 Horizons of Information Technology and Engineering (HITE), Lahore, Pakistan, 15–16 October 2024; Available online: https://ieeexplore.ieee.org/abstract/document/10777201/ (accessed on 30 March 2025).
- Xu, Q.; Wei, Y.; Gao, J.; Yao, H.; Liu, Q. ICAPD framework and simAM-YOLOv8n for student cognitive engagement detection in classroom. IEEE Access 2023, 11, 136063–136076. [Google Scholar] [CrossRef]
- Elkhatat, A.M.; Elsaid, K.; Almeer, S. Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. Int. J. Educ. Integr. 2023, 19, 17. [Google Scholar] [CrossRef]
- Singh, T.; Nair, R.R.; Babu, T.; Duraisamy, P. Enhancing academic integrity in online assessments: Introducing an effective online exam proctoring model using yolo. Procedia Comput. Sci. 2024, 235, 1399–1408. [Google Scholar] [CrossRef]
- Wang, Q.; Hou, L.; Hong, J.C.; Yang, X.; Zhang, M. Impact of Face-Recognition-Based Access Control System on College Students’ Sense of School Identity and Belonging During COVID-19 Pandemic. Front. Psychol. 2022, 13, 808189. [Google Scholar] [CrossRef]
- Ke, F.; Liu, R.; Sokolikj, Z.; Dahlstrom-Hakki, I. Using eye-tracking in education: Review of empirical research and technology. Educ. Technol. Res. Dev. 2024, 72, 1383–1418. [Google Scholar] [CrossRef]
- Ethics of American Youth: 2010|Office of Justice Programs. Available online: https://www.ojp.gov/ncjrs/virtual-library/abstracts/ethics-american-youth-2010 (accessed on 30 March 2025).
- Hamlin, A.; Barczyk, C.; Powell, G.; Frost, J. A comparison of university efforts to contain academic dishonesty. J. Leg. Ethical Regul. Issues 2013, 16, 35. [Google Scholar]
- Southerland, J. Engagement of Adult Undergraduates: Insights from the National Survey of Student Engagement. 2010. Available online: https://search.proquest.com/openview/885020f967747052f313f5d6d8732c34/1?pq-origsite=gscholar&cbl=18750 (accessed on 30 March 2025).
- Mak-van der Vossen, M.; van Mook, W.; van der Burgt, S.; Kors, J.; Ket, J.C.F.; Croiset, G.; Kusurkar, R. Descriptors for unprofessional behaviours of medical students: A systematic review and categorisation. BMC Med. Educ. 2017, 17, 164. [Google Scholar] [CrossRef]
- Padhiyar, P.; Parmar, K.; Parmar, N.; Degadwala, S. Visual Distance Fraudulent Detection in Exam Hall using YOLO Detector. In Proceedings of the 2023 International Conference on Inventive Computation Technologies (ICICT), Lalitpur, Nepal, 26–28 April 2023; Available online: https://ieeexplore.ieee.org/abstract/document/10134271/ (accessed on 30 March 2025).
- Radwan, T.M.; Alabachi, S.; Al-Araji, A.S. In-class exams auto proctoring by using deep learning on students’ behaviors. J. Optoelectron. Laser 2022, 41, 969–980. [Google Scholar]
- Wan, Z.; Li, X.; Xia, B.; Luo, Z. Recognition of Cheating Behavior in Examination Room Based on Deep Learning. In Proceedings of the 2021 International Conference on Computer Engineering and Application (ICCEA), Kunming, China, 25–27 June 2021; Available online: https://ieeexplore.ieee.org/abstract/document/9581122/ (accessed on 30 March 2025).
- Malhotra, M.; Chhabra, I. Automatic invigilation using computer vision. In Proceedings of the 3rd International Conference on Integrated Intelligent Computing Communication & Security (ICIIC 2021), Bangalore, India, 4–5 June 2021; Available online: https://www.atlantis-press.com/proceedings/iciic-21/125960815 (accessed on 30 March 2025).
- Alsabhan, W. Student Cheating Detection in Higher Education by Implementing Machine Learning and LSTM Techniques. Sensors 2023, 23, 4149. [Google Scholar] [CrossRef]
- Genemo, M.D. Suspicious activity recognition for monitoring cheating in exams. Proc. Indian Natl. Sci. Acad. 2022, 88, 1–10. [Google Scholar] [CrossRef]
- Roa’a, M.; Aljazaery, I.A.; Alaidi, A.H.M. Automated cheating detection based on video surveillance in the examination classes. Int. J. Interact. Mob. Technol. 2022, 16, 125. [Google Scholar]
- Menanno, M.; Riccio, C.; Benedetto, V.; Gissi, F.; Savino, M.M.; Troiano, L. An Ergonomic Risk Assessment System Based on 3D Human Pose Estimation and Collaborative Robot. Appl. Sci. 2024, 14, 4823. [Google Scholar] [CrossRef]
- Salisu, S.; Danyaro, K.U.; Nasser, M.; Hayder, I.M.; Younis, H.A. Review of models for estimating 3D human pose using deep learning. PeerJ Comput. Sci. 2025, 11, e2574. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. Available online: https://www.academis.eu/machine_learning/_downloads/51a67e9194f116abefff5192f683e3d8/yolo.pdf (accessed on 30 March 2025).
- Annannaidu, P.; Gayatri, M.; Sreeja, P.; Kumar, V.R.; Tharun, P.S.; Divakar, B. Computer Vision–Based Malpractice Detection System. In Proceedings of the Accelerating Discoveries in Data Science and Artificial Intelligence II, Vizianagaram, India, 24–25 April 2024; pp. 95–107. [Google Scholar] [CrossRef]
- Alkentar, S.M.; Alsahwa, B.; Assalem, A.; Karakolla, D. Practical comparation of the accuracy and speed of YOLO, SSD and Faster RCNN for drone detection. J. Eng. 2021, 27, 19–31. [Google Scholar] [CrossRef]
- Maji, D.; Nagori, S.; Mathew, M.; Poddar, D. YOLO-Pose: Enhancing YOLO for multi person pose estimation using object keypoint similarity loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; Available online: http://openaccess.thecvf.com/content/CVPR2022W/ECV/html/Maji_YOLO-Pose_Enhancing_YOLO_for_Multi_Person_Pose_Estimation_Using_Object_CVPRW_2022_paper.html (accessed on 30 March 2025).
- Pose-Ultralytics YOLO Docs. Available online: https://docs.ultralytics.com/tasks/pose/ (accessed on 30 March 2025).
- Ultralytics YOLO11-Ultralytics YOLO Docs. Available online: https://docs.ultralytics.com/models/yolo11/ (accessed on 30 March 2025).
- Explore Ultralytics YOLOv8-Ultralytics YOLO Docs. Available online: https://docs.ultralytics.com/models/yolov8/ (accessed on 30 March 2025).
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep High-Resolution Representation Learning for Human Pose Estimation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; Volume 2019, pp. 5686–5696. [Google Scholar] [CrossRef]
- Bazarevsky, V.; Grishchenko, I.; Raveendran, K.; Zhu, T.; Zhang, F.; Grundmann, M. BlazePose: On-Device Real-Time Body Pose Tracking. Available online: https://arxiv.org/pdf/2006.10204 (accessed on 17 October 2025).
- Yuan, Y.; Fu, R.; Huang, L.; Lin, W.; Zhang, C.; Chen, X.; Wang, J. HRFormer: High-Resolution Transformer for Dense Prediction. Adv. Neural Inf. Process. Syst. 2021, 9, 7281–7293. [Google Scholar]
- Leon, J.M.; Fernandez, F.; Ortiz, M. Real-time embedded human pose estimation using Intel Myriad X VPU on low-power platforms. IEEE Access 2024, 12, 45792–45805. [Google Scholar]
- Tran, N.; Nguyen, M.; Le, T.; Huynh, T.; Nguyen, T.; Nguyen, T. Exploring the potential of skeleton and machine learning in classroom cheating detection. Indones. J. Electr. Eng. Comput. Sci. 2023, 32, 1533–1544. [Google Scholar] [CrossRef]
- Alkalbani, A. Cheating Detection in Online Exams Based on Captured Video Using Deep Learning. Master’s Thesis, United Arab Emirates University, Al Ain, United Arab Emirates, 2023. Available online: https://scholarworks.uaeu.ac.ae/all_theses/1060 (accessed on 30 March 2025).
- Hossain, M.N.; Long, Z.A.; Seid, N. Emotion Detection Through Facial Expressions for Determining Students’ Concentration Level in E-Learning Platform. Lecture Notes in Networks and Systems. In Proceedings of the International Congress on Information and Communication Technology, London, UK, 19–22 February 2024; Volume 1012, pp. 517–530. [Google Scholar] [CrossRef]
- Liu, Y.; Ren, J.; Xu, J.; Bai, X.; Kaur, R.; Xia, F. Multiple Instance Learning for Cheating Detection and Localization in Online Examinations. IEEE Trans. Cogn. Dev. Syst. 2024, 16, 1315–1326. [Google Scholar] [CrossRef]
- Zhen, Y.; Zhu, X. An Ensemble Learning Approach Based on TabNet and Machine Learning Models for Cheating Detection in Educational Tests. Educ. Psychol. Meas. 2024, 84, 780–809. [Google Scholar] [CrossRef] [PubMed]







| Feature | Two-Stage Detectors | Single-Stage Detectors |
|---|---|---|
| Detection Process | Two-step process: region proposal generation, followed by object detection and classification. | Single-step process: Directly predicts bounding boxes and class scores from input. |
| Accuracy | Higher accuracy, especially for small objects and overlapping objects. | Generally lower accuracy, especially for small and overlapping objects. Higher accuracy for large objects. |
| Speed | Slower processing speed. | Faster processing speed. |
| Computational Resources | Requires high computational resources for training and deployment. | Requires fewer computational resources. |
| Handling Small Objects | Performs better in detecting small objects. | Performs less accurately in detecting small objects. |
| Handling Overlapping Objects | Performs better in detecting overlapping objects. | Performs less accurately in detecting overlapping objects. |
| Performance on Large Objects | Less efficient than single-stage detectors for large objects. | Highly efficient, especially with large objects. |
| Examples of Algorithms | Faster R-CNN, Mask R-CNN, R-FCN, Cascade R-CNN, Libra R-CNN. | YOLOv8, SSD, RetinaNet, EfficientNet, CenterNet. |
| Region Proposal Network (RPN) | Uses an RPN to generate region proposals, improving accuracy and reducing redundant computation. | Does not use an RPN; instead divides the frame into grid cells for detection. |
| Usage Scenario | Preferred when accuracy is prioritized, particularly in dense scenes. | Preferred when speed is prioritized, particularly in real-time applications. |
| Model | Size (Pixels) | mAPpose 50–95 | mAPpose 50 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | Params (M) | FLOPs (B) |
|---|---|---|---|---|---|---|---|
| YOLOv8n-pose | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 |
| YOLOv8s-pose | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 |
| YOLOv8m-pose | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
| YOLOv8l-pose | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 |
| YOLOv8x-pose | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| YOLOv8x-pose-p6 | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
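The CPU ONNX column above implies a throughput ceiling per model; a quick conversion (a sketch, not from the paper) shows why only the nano variant is viable GPU-free and why the reported end-to-end rate lands near 6 FPS once detection, tracking, and logging overheads are added:

```python
def fps_from_latency(latency_ms):
    """Frames per second implied by a per-frame inference latency."""
    return 1000.0 / latency_ms

# CPU ONNX latencies (ms/frame) from the table above
latency_ms = {"n": 131.8, "s": 233.2, "m": 456.3, "l": 784.5, "x": 1607.1}
throughput = {k: round(fps_from_latency(v), 2) for k, v in latency_ms.items()}
# The nano variant tops out near 7.6 FPS on CPU before any per-frame
# bookkeeping; the x variant falls below 1 FPS, ruling it out for invigilation.
```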
| Aspect | 2D Pose Estimation | 3D Pose Estimation |
|---|---|---|
| Output | Detects key points in 2D space (x, y coordinates) | Detects key points in 3D space (x, y, depth coordinates) |
| Complexity | Less complex | More complex |
| Computation Requirements | Lower computational requirements | Higher computational requirements |
| Accuracy | Sufficient when depth information is not required | More accurate, especially when depth information is required |
| Application | Preferred when speed is prioritized, particularly in real-time applications | Preferred when accuracy in terms of depth information is prioritized |
| Example Algorithms | OpenPose, HRNet, PoseNet, YOLO | Human 3.6M, DeepLabCut, VoxelPose |
| Aspects | Top-Down Approach | Bottom-Up Approach |
|---|---|---|
| Process | Detects objects first, then estimates poses for each detected object. | Estimates key points for all objects first, then groups them into individual poses |
| Object Detection | Requires an additional object detection step. | Estimates keypoints directly, without an object detection step. |
| Speed | Slower, since two steps are involved. | Faster, since only one step is involved. |
| Accuracy | More accurate, as each object is processed separately. | Less accurate due to potential errors in keypoint grouping. |
| Handling Occlusion | Better at handling occlusion in crowded scenes, as it focuses on each object individually. | More error-prone under occlusion, as keypoints from different people may be grouped incorrectly. |
| Training | Requires training for both object detection and pose estimation. | Requires training only for pose estimation, simplifying the pipeline. |
| Example Algorithm | YOLO-Pose, Mask R-CNN, Alpha Pose, HRNet | OpenPose, DeepCut, PAF, HigherHRNet |
| Day | Total Students | Detected Students | TP (True Positives) | FP (False Positives) | FN (False Negatives) | Accuracy | Precision |
|---|---|---|---|---|---|---|---|
| Departmental Examination | |||||||
| Day 1 | 60 | 60 | 58 | 2 | 2 | 96.66 | 96.66 |
| Day 2 | 50 | 52 | 49 | 3 | 1 | 98 | 94.23 |
| Day 3 | 55 | 56 | 54 | 2 | 1 | 98.18 | 96.42 |
| Day 4 | 60 | 59 | 58 | 1 | 2 | 96.66 | 98.30 |
| Day 5 | 56 | 57 | 54 | 3 | 2 | 96.42 | 94.74 |
| Day 6 | 58 | 55 | 54 | 1 | 4 | 93.10 | 96.18 |
| Average | | | | | | 96.50 | 96.08 |
| Dedicated Test | |||||||
| Day 1 | 200 | 200 | 197 | 3 | 3 | 98.5 | 98.5 |
| Day 2 | 35 | 35 | 35 | 0 | 0 | 100 | 100 |
| Day 3 | 40 | 40 | 40 | 0 | 0 | 100 | 100 |
| Day 4 | 33 | 33 | 33 | 0 | 0 | 100 | 100 |
| Day 5 | 30 | 30 | 30 | 0 | 0 | 100 | 100 |
| Day 6 | 35 | 35 | 35 | 0 | 0 | 100 | 100 |
| Average | | | | | | 99.75 | 99.75 |
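The Accuracy and Precision columns in these tables are consistent with accuracy = TP / (TP + FN) and precision = TP / (TP + FP); e.g., departmental Day 2 gives 49/50 = 98% and 49/52 ≈ 94.23%. A small sketch of that bookkeeping (the function names are illustrative, not from the paper):

```python
def accuracy_pct(tp, fn):
    """Share of actual instances that were recovered: TP / (TP + FN), in %."""
    return 100.0 * tp / (tp + fn)

def precision_pct(tp, fp):
    """Share of detections that were correct: TP / (TP + FP), in %."""
    return 100.0 * tp / (tp + fp)
```

Under this reading, "accuracy" plays the role usually called recall: FNs (missed students) lower accuracy, while FPs (spurious detections) lower precision.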
| Day | Total Key points (Students × Key Points) | Detected Key Points | TP (True Positives) | FP (False Positives) | FN (False Negatives) | Accuracy | Precision |
|---|---|---|---|---|---|---|---|
| Departmental Examination | |||||||
| Day 1 | 60 × 5 = 300 | 305 | 296 | 9 | 4 | 98.66 | 97.04 |
| Day 2 | 50 × 5 = 250 | 245 | 240 | 5 | 10 | 97.95 | 98 |
| Day 3 | 55 × 5 = 275 | 265 | 259 | 6 | 16 | 94.18 | 97.73 |
| Day 4 | 60 × 5 = 300 | 300 | 287 | 13 | 13 | 95.66 | 95.66 |
| Day 5 | 56 × 5 = 280 | 285 | 270 | 15 | 10 | 94.73 | 97.50 |
| Day 6 | 58 × 5 = 295 | 296 | 285 | 11 | 19 | 96.61 | 96.28 |
| Average | | | | | | 96.29 | 97.03 |
| Dedicated Test | |||||||
| Day 1 | 40 × 5 = 200 | 200 | 197 | 3 | 3 | 98.5 | 98.5 |
| Day 2 | 35 × 5 = 175 | 176 | 174 | 2 | 1 | 99.42 | 98.86 |
| Day 3 | 40 × 5 = 200 | 198 | 196 | 2 | 4 | 98.98 | 99 |
| Day 4 | 33 × 5 = 165 | 163 | 161 | 2 | 4 | 97.57 | 98.77 |
| Day 5 | 30 × 5 = 150 | 149 | 149 | 1 | 1 | 99.33 | 99.33 |
| Day 6 | 35 × 5 = 175 | 178 | 174 | 4 | 1 | 97.75 | 99.42 |
| Average | | | | | | 98.59 | 98.98 |
| Day | Data | LL (Looking Left) | LR (Looking Right) | LB (Looking Backward) | TC (Total Cheating Instances) | TP (True Positives) | FP (False Positives) | FN (False Negatives) | Accuracy | Precision |
|---|---|---|---|---|---|---|---|---|---|---|
| Departmental Examination | ||||||||||
| 1 | Actual Data | 20 | 15 | 17 | 52 | - | - | - | - | - |
| 1 | Detected Data | 18 | 15 | 16 | 49 | 49 | 3 | 3 | 94.23 | 94.23 |
| 2 | Actual Data | 16 | 18 | 17 | 51 | - | - | - | - | - |
| 2 | Detected Data | 15 | 17 | 17 | 49 | 49 | 1 | 2 | 96.07 | 98 |
| 3 | Actual Data | 23 | 19 | 14 | 56 | - | - | - | - | - |
| 3 | Detected Data | 22 | 18 | 14 | 55 | 55 | 1 | 2 | 98.18 | 96.42 |
| Average | | | | | | | | | 96.18 | 96.2 |
| Dedicated Test | ||||||||||
| 1 | Actual Data | 20 | 20 | 11 | 51 | - | - | - | - | - |
| 1 | Detected Data | 19 | 20 | 11 | 51 | 50 | 1 | 1 | 98 | 98 |
| 2 | Actual Data | 15 | 15 | 15 | 45 | - | - | - | - | - |
| 2 | Detected Data | 15 | 15 | 14 | 44 | 44 | 0 | 1 | 97.77 | 100 |
| 3 | Actual Data | 14 | 13 | 15 | 42 | - | - | - | - | - |
| 3 | Detected Data | 14 | 13 | 15 | 43 | 42 | 1 | 0 | 100 | 97.7 |
| Average | | | | | | | | | 98.59 | 98.56 |
| Actual/Predicted | Left | Right | Backward | Normal |
|---|---|---|---|---|
| Left | 95.6% | 2.2% | 0.8% | 1.4% |
| Right | 2.9% | 95.2% | 0.5% | 1.4% |
| Backward | 1.8% | 0.9% | 96.0% | 1.3% |
| Normal | 0.6% | 0.5% | 0.7% | 98.2% |
| Model | Year | Computational Power | Accuracy | Key Features |
|---|---|---|---|---|
| HRNet (High-Resolution Network) [31] | 2019 | High; GPU required (~63.6 M params, ~32.9 GFLOPs) | 75.5% (COCO AP) | Maintains high-resolution feature maps throughout; state-of-the-art keypoint accuracy, but large model size and high compute make it unsuitable for real-time CPU use. |
| MediaPipe BlazePose (Pose) [32] | 2020 | Low; mobile CPU (Pixel 5 ≈ 32 FPS) | Not reported (33-point pose) | Lightweight mobile pipeline with 33-body-landmark estimation; runs in real time on smartphones (e.g., ~30 FPS on a Pixel 5). |
| HRFormer (Transformer Pose) [33] | 2021 | High; GPU required (~26 GFLOPs, ~43 M parameters) | 76% (COCO mAP) | Transformer-based high-resolution pose estimator; achieves higher keypoint accuracy than CNN models but requires powerful hardware. (Not GPU-free.) |
| ViTPose (Vision Transformer Pose) [34] | 2022 | Very High; GPU required (>18 GFLOPs, ≥85 M parameters) | 81.1% (COCO mAP) | State-of-the-art Vision Transformer pose model with ~81% keypoint AP; extremely large model unsuitable for CPU deployment. (Not GPU-free.) |
| OpenPose with XGBoost [35] | 2023 | Nvidia Jetson Nano (real-time, ~10 FPS) | 90% | Combines pose estimation and machine learning to detect suspicious behaviors during exams. |
| LSTM with Adam Optimizer [19] | 2023 | Moderate; requires GPU for deep learning processing | 90% | Uses a behavior dataset to analyze and predict cheating based on sequential patterns in online exams. |
| YOLOv4 for Object Detection [36] | 2023 | Requires GPU for real-time detection | 95% | Detects specific cheating behaviors and objects (e.g., phones) during exams. |
| Facial Emotion-Based Model [37] | 2024 | Moderate (GPU optimized) | 92.40% | Detects cheating using facial emotions in real-time. |
| Multiple Instance Learning (MIL) with GCN [38] | 2024 | Requires GPU for real-time detection | 90% | Combines MIL with Graph Convolutional Networks (GCN) to analyze multi-modal features like head posture and gaze. |
| Ensemble of CNN and RNN [39] | 2024 | High; GPU required for video analysis and classification | 92% | Employs a combination of convolutional and recurrent layers to analyze video streams for anomalies. |
| VideoPose3D (dilated temporal CNN for 3D pose) [22] | 2024 | Not specified (presumably GPU-accelerated inference) | Achieved a 13% reduction in overall ergonomic stress (OES) and a 33% increase in production capacity in a real-case study | 3D pose estimation via VideoPose3D (17-joint skeleton); semi-supervised skeleton detection and joint-angle computation; RULA-based fuzzy-logic criticality index for ergonomic risk; parallel frame processing. |
| Proposed Approach | 2025 | Low (GPU-free) | 96.18% | Real-time detection using YOLOv8 and pose estimation; flags cheating based on facial angles; logs incidents efficiently. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Haider, S.M.S.; Zubair, M.; Waleed, A.; Shahid, M.; Asghar, F.; Khan, M.O. AI for Academic Integrity: GPU-Free Pose Estimation Framework for Automated Invigilation. Automation 2025, 6, 82. https://doi.org/10.3390/automation6040082

