From Signal to Semantics: The Multimodal Haptic Informatics Index for Triangulating Haptic Intent at the Edge
Abstract
1. Introduction
1.1. The Challenge of Intent Recognition in Modern HCI
The Midas Touch Issue: Misalignment Between Signals and Intentions
1.3. The Privacy–Utility Paradox of Edge Computing
2. Literature Review
2.1. Haptic Sensing and Intention Recognition Technology
2.2. Edge Artificial Intelligence and Lightweight Computing
2.3. Multimodal Fusion and Context Understanding
2.4. Interdisciplinary Application of Triangular Verification Methodology
3. Method and Materials
3.1. Experimental Structure
3.2. Stage One: Edge Data Acquisition
3.2.1. Scene Channel (Camera Setup)
3.2.2. Action Channel (Wearable Sensor)
3.2.3. Trigger Channel (TAP Setup)
3.3. Stage Two: Local Feature Extraction
3.3.1. Entropy Computer
3.3.2. Skeleton Visualizer
3.3.3. Transcript Recorder
4. Results
4.1. Stage Three: Informatics State Derivation
Algorithm 1. Multimodal haptic data alignment.
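The alignment step named in Algorithm 1 can be read as event-triggered windowing: timestamps from the Trigger channel anchor a short window into which Scene frames and Action samples are gathered. A minimal sketch under that reading, assuming timestamp-sorted `(time, payload)` streams and a symmetric one-second window (function and parameter names are illustrative, not the paper's):

```python
from bisect import bisect_left

def align_channels(trigger_events, scene_frames, action_samples, window=1.0):
    """Align Scene and Action data to each Trigger event.

    Each input is a list of (timestamp, payload) tuples sorted by timestamp.
    For every trigger event, collect the scene frames and action samples
    whose timestamps fall inside a +/- `window`-second interval around it.
    """
    def in_window(stream, t):
        lo = bisect_left(stream, (t - window,))
        hi = bisect_left(stream, (t + window,))
        return [payload for _, payload in stream[lo:hi]]

    aligned = []
    for t, label in trigger_events:
        aligned.append({
            "time": t,
            "trigger": label,
            "scene": in_window(scene_frames, t),
            "action": in_window(action_samples, t),
        })
    return aligned
```

Because only windows around trigger events are retained, continuous streams outside those windows never leave the buffer, which matches the event-triggered privacy posture described in Section 5.2.2.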
4.1.1. Action Channel (Event-Triggered Hand State)
4.1.2. Scene Channel (Event-Triggered Scene Window)
4.1.3. Trigger Channel (Event-Triggered Audio Window)
- Confirming
- Planning
- Confusion
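How an audio window maps onto these three labels is not reproduced in this version. Purely as an illustration, a keyword-cue heuristic over the TAP transcript might look like the following; the cue lists are hypothetical and are not the study's actual coding scheme:

```python
# Hypothetical keyword cues; the real mapping comes from the study's
# Think Aloud Protocol annotation, which is not reproduced here.
CUES = {
    "Confirming": ("yes", "okay", "that works", "done"),
    "Planning": ("next", "i will", "let me", "first"),
    "Confusion": ("why", "hmm", "not sure", "what is"),
}

def label_utterance(text):
    """Assign a coarse cognitive label to one transcript utterance."""
    lowered = text.lower()
    for label, cues in CUES.items():
        if any(cue in lowered for cue in cues):
            return label
    return "Unlabeled"
```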
4.2. Stage Four: MHI Triangulation
- Scene (S): the spatial displacement (Speed, Smoothness, Stability).
- Action (A): the kinematic classification (Clear Action, Hesitation, and Transition).
- Trigger (T): the cognitive label from the Think Aloud Protocol.

4.2.1. Single Modal Scoring
4.2.2. Score Weighting
4.2.3. Hand State Mapping
- The Baseline: the standard informatic state at time t, before real-time sensor deviations are added.
- The Scaling Factor: how much the physical sensors are allowed to shift the baseline; a higher scaling factor makes the system more responsive to sudden movements.

The deviations are:

- Action deviation: the difference between the current “Action certainty” and the average certainty.
- Scene deviation: the difference between the current “Scene stability” and the average stability.
- Trigger deviation: the difference between the current “Trigger similarity” and the average vocal pattern.
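With the exact symbols elided in this version, the mapping described above reduces to a baseline shifted by scaled per-channel deviations. A minimal sketch under that reading; the function names and the additive, equal-weight combination are assumptions (Section 4.2.2 covers how scores are actually weighted):

```python
def channel_deviation(current, average):
    """Deviation of a channel's current score from its running average."""
    return current - average

def mhi_index(baseline, scale, deviations):
    """Event-triggered MHI update: baseline plus scaled sensor deviations.

    baseline   -- informatic state at time t before sensor input.
    scale      -- scaling factor controlling sensor responsiveness.
    deviations -- per-channel deviations from their channel averages.
    """
    return baseline + scale * sum(deviations.values())
```

For example, if Action certainty is above its average by exactly as much as Scene stability is below its own, and Trigger similarity sits at its average, the deviations cancel and the index stays at the baseline.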
5. Discussion
5.1. MHI’s Potential for Resolving the “Midas Touch” Dilemma
5.2. The Structural Necessity of the SAT Framework
5.2.1. Orthogonality Is a Prerequisite for Accuracy
5.2.2. Event-Triggered Privacy and Efficiency
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| AI | Artificial Intelligence |
| BLE | Bluetooth Low Energy |
| CNN | Convolutional Neural Network |
| CV | Computer Vision |
| DPFCM | Differentially Private Fractional Coverage Model |
| EMG | Electromyography |
| HCI | Human–Computer Interaction |
| IMU | Inertial Measurement Unit |
| IoT | Internet of Things |
| LLM | Large Language Model |
| LSTM | Long Short-Term Memory |
| MHI | Multimodal Haptic Informatics |
| MT | Midas Touch |
| SAT | Scene, Action, Trigger |
| SENS | Sensible Energy System |
| TAP | Think Aloud Protocol |
| TinyML | Tiny Machine Learning |
References
- Thottempudi, P.; Acharya, B.; Moreira, F. High-Performance Real-Time Human Activity Recognition Using Machine Learning. Mathematics 2024, 12, 3622.
- Namiki, A.; Yokosawa, S. Origami Folding by Multifingered Hands with Motion Primitives. Cyborg Bionic Syst. 2021, 2021, 9851834.
- Smith, A.; Anderson, B.R.; Otto, J.T.; Karth, I.; Sun, Y.; Chung, J.J.Y.; Roemmele, M.; Kreminski, M. Fuzzy Linkography: Automatic Graphical Summarization of Creative Activity Traces. In Proceedings of the 2025 Conference on Creativity and Cognition, Virtual, UK, 23–25 June 2025; pp. 637–650.
- Smith, A.; Anderson, B.R.; Otto, J.T.; Karth, I.; Sun, Y.; Chung, J.J.Y.; Roemmele, M.; Kreminski, M. Scaling Analysis of Creative Activity Traces via Fuzzy Linkography. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 26 April–1 May 2025; pp. 1–10.
- Kelly, N.; Greentree, J.; Sosa, R.; Evans, R. Automating Useful Representations of the Design Process from Design Protocols. In Proceedings of the International Conference on Design Computing and Cognition, Glasgow, UK, 4–6 July 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 3–20.
- Chang, T.W.; Huang, H.Y.; Hung, C.W.; Datta, S.; McMinn, T. A Network Sensor Fusion Approach for a Behaviour-Based Smart Energy Environment for Co-making Spaces. Sensors 2020, 20, 5507.
- Chang, T.W.; Huang, H.Y.; Hong, C.C.; Datta, S.; Nakapan, W. SENS+: A Co-Existing Fabrication System for a Smart DFA Environment Based on Energy Fusion Information. Sensors 2023, 23, 2890.
- Chang, C.C.; Chang, T.W.; Huang, H.Y.; Tsai, S.T. Discovering Semantic and Visual Hints with Machine Learning of Real Design Templates to Support Insight Exploration in Informatics. Adv. Eng. Inform. 2024, 59, 102244.
- Gambo, A.A.; Ali, E.Y.; Arungbemi, D.A.; Hanif, M.; Anefu, P.N.; Ali, N.O.; Thomas, S.; Chinda, F.E.; May, Z.; Qureshi, S.; et al. An End to End Wearable Device and System for Indefinite, Continuous, Real Time Gesture Recognition of Directional and Shape-Based Arm Gestures. IEEE Access 2025, 13, 153436–153463.
- Kim, M.; Cho, J.; Lee, S.; Jung, Y. IMU Sensor-Based Hand Gesture Recognition for Human-Machine Interfaces. Sensors 2019, 19, 3827.
- Xia, S.; Chu, L.; Pei, L.; Zhang, Z.; Yu, W.; Qiu, R.C. Learning Disentangled Representation for Mixed-Reality Human Activity Recognition with a Single IMU Sensor. IEEE Trans. Instrum. Meas. 2021, 70, 2514314.
- Kaichi, T.; Maruyama, T.; Tada, M.; Saito, H. Resolving Position Ambiguity of IMU-based Human Pose with a Single RGB Camera. Sensors 2020, 20, 5453.
- Zhang, D.; Liao, Z.; Xie, W.; Wu, X.; Xie, H.; Xiao, J.; Jiang, L. Fine-Grained and Real-Time Gesture Recognition by Using IMU Sensors. IEEE Trans. Mob. Comput. 2023, 22, 2177–2189.
- Xu, S.; Fan, K.K. A Silent Revolution: From Sketching to Coding—A Case Study on Code-Based Design Tool Learning. EURASIA J. Math. Sci. Technol. Educ. 2017, 13, 2959–2977.
- Chen, K.C.; Lee, C.F.; Chang, T.W.; Wang, C.G.; Li, J.R. From Viewing to Structure: A Computational Framework for Modeling and Visualizing Visual Exploration. Appl. Sci. 2025, 15, 7900.
- Cao, J.; Zhao, W.; Hu, H.; Liu, Y.; Guo, X. Using Linkography and Situated FBS Co-Design Model to Explore User Participatory Conceptual Design Process. Processes 2022, 10, 713.
- Siirtola, P.; Röning, J. Context-Aware Incremental Learning-Based Method for Personalized Human Activity Recognition. J. Ambient Intell. Humaniz. Comput. 2021, 12, 10499–10513.
- Theodoridou, E.; Cinque, L.; Mignosi, F.; Placidi, G.; Polsinelli, M.; Tavares, J.M.R.S.; Spezialetti, M. Hand Tracking and Gesture Recognition by Multiple Contactless Sensors: A Survey. IEEE Trans. Hum. Mach. Syst. 2023, 53, 35–43.
- Öztürk Kösenciĝ, K.; Özbayraktar, M. Unveiling Interactions among Architectural Sketching, Parametric Design, and Digital Fabrication Using Linkography. Int. J. Des. Creat. Innov. 2025, 13, 1–22.
- Freitas, A.; Santos, D.; Lima, R.; Santos, C.G.; Meiguins, B. Pactolo Bar: An Approach to Mitigate the Midas Touch Problem in Non-Conventional Interaction. Sensors 2023, 23, 2110.
- Zeng, L.; Weber, G. User Interfaces for Pin-Array Tactile Displays. In Advancements in Pin-Array Tactile Displays; Springer Nature: Cham, Switzerland, 2025; pp. 29–45.
- Mummadi, C.; Leo, F.; Verma, K.; Kasireddy, S.; Scholl, P.; Kempfle, J.; Laerhoven, K. Real-Time and Embedded Detection of Hand Gestures with an IMU-based Glove. Informatics 2018, 5, 28.
- Dong, D.; Zhu, N.; Wang, J.; Li, Y. Lattice-Based Sensor Data Acquisition Strategy to Solve Sensor Position Drift in Human Gait Phase Recognition System with a Single Inertia Measurement Unit. Eng. Appl. Artif. Intell. 2025, 147, 110286.
- Colli Alfaro, J.G.; Trejos, A.L. User-Independent Hand Gesture Recognition Classification Models Using Sensor Fusion. Sensors 2022, 22, 1321.
- Li, C.; Lee, C.F.; Xu, S. Stigma Threat in Design for Older Adults: Exploring Design Factors That Induce Stigma Perception. Int. J. Des. 2020, 14, 51–64.
- Kil, Y.S.; Lee, Y.J.; Jeon, S.E.; Oh, Y.S.; Lee, I.G. Optimization of Privacy-Utility Trade-off for Efficient Feature Selection of Secure Internet of Things. IEEE Access 2024, 12, 142582–142591.
- Lee, S.; Lim, Y.; Lim, K. Multimodal Sensor Fusion Models for Real-Time Exercise Repetition Counting with IMU Sensors and Respiration Data. Inf. Fusion 2024, 104, 102153.
- Rivadeneira, J.E.; Borges, G.A.; Rodrigues, A.; Boavida, F.; Sá Silva, J. A Unified Privacy Preserving Model with AI at the Edge for Human-in-the-Loop Cyber-Physical Systems. Internet Things 2024, 25, 101034.
- Yemata, S.; Lemma, D.; Moussa, S.M. Exploring Factors and Features Impacting Data Privacy and Utility Trade-Offs in IoT-based Healthcare Systems: A Systematic Literature Review. Inf. Commun. Soc. 2025, 28, 3145–3174.
- Lamaakal, I.; Essahraui, S.; Maleh, Y.; Makkaoui, K.E.; Ouahbi, I.; Bouami, M.F.; El-Latif, A.A.A.; Almousa, M.; Peng, J.; Niyato, D. A Comprehensive Survey on Tiny Machine Learning for Human Behavior Analysis. IEEE Internet Things J. 2025, 12, 32419–32443.
- Daniel, N.; Klein, I. INIM: Inertial Images Construction with Applications to Activity Recognition. Sensors 2021, 21, 4787.
- Wang, H.; Qiu, C.; Zhang, C.; Xu, J.; Su, C. P-CA: Privacy-preserving Convolutional Autoencoder-Based Edge–Cloud Collaborative Computing for Human Behavior Recognition. Mathematics 2024, 12, 2587.
- Kim, J.; Cho, S.H. A Differential Privacy Framework with Adjustable Efficiency–Utility Trade-Offs for Data Collection. Mathematics 2025, 13, 812.
- Zhou, H.; Zhang, X.; Feng, Y.; Zhang, T.; Xiong, L. Efficient Human Activity Recognition on Edge Devices Using DeepConv LSTM Architectures. Sci. Rep. 2025, 15, 13830.
- Li, J.; Liu, X.; Wang, Z.; Zhao, H.; Zhang, T.; Qiu, S.; Zhou, X.; Cai, H.; Ni, R.; Cangelosi, A. Real-Time Human Motion Capture Based on Wearable Inertial Sensor Networks. IEEE Internet Things J. 2022, 9, 8953–8966.
- Ahmed, M.A.; Zaidan, B.B.; Zaidan, A.A.; Alamoodi, A.H.; Albahri, O.S.; Al-Qaysi, Z.T.; Albahri, A.S.; Salih, M.M. Real-Time Sign Language Framework Based on Wearable Device: Analysis of MSL, DataGlove, and Gesture Recognition. Soft Comput. 2021, 25, 11101–11122.
- Pan, T.Y.; Chang, C.Y.; Tsai, W.L.; Hu, M.C. Multisensor-Based 3D Gesture Recognition for a Decision-Making Training System. IEEE Sens. J. 2021, 21, 706–716.
- Li, J.; Huang, L.; Shah, S.; Jones, S.J.; Jin, Y.; Wang, D.; Russell, A.; Choi, S.; Gao, Y.; Yuan, J.; et al. SignRing: Continuous American Sign Language Recognition Using IMU Rings and Virtual IMU Data. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2023, 7, 1–29.
- Qureshi, T.S.; Shahid, M.H.; Farhan, A.A.; Alamri, S. A Systematic Literature Review on Human Activity Recognition Using Smart Devices: Advances, Challenges, and Future Directions. Artif. Intell. Rev. 2025, 58, 276.
- He, Z.; Sun, Y.; Zhang, Z. Human Activity Recognition Based on Deep Learning Regardless of Sensor Orientation. Appl. Sci. 2024, 14, 3637.
- Shavit, Y.; Klein, I. Boosting Inertial-Based Human Activity Recognition with Transformers. IEEE Access 2021, 9, 53540–53547.
- Hoang, M.L. A Review of Developments and Metrology in Machine Learning and Deep Learning for Wearable IoT Devices. IEEE Access 2025, 13, 106035–106054.
- Müller, P.N.; Müller, A.J.; Achenbach, P.; Göbel, S. IMU-based Fitness Activity Recognition Using CNNs for Time Series Classification. Sensors 2024, 24, 742.
- Hashi, A.O.; Hashim, S.Z.M.; Asamah, A.B. A Systematic Review of Hand Gesture Recognition: An Update from 2018 to 2024. IEEE Access 2024, 12, 143599–143626.
- Just, F.; Ghinami, C.; Zbinden, J.; Ortiz-Catalan, M. Deployment of Machine Learning Algorithms on Resource-Constrained Hardware Platforms for Prosthetics. IEEE Access 2024, 12, 40439–40449.
- Liu, H.I.; Galindo, M.; Xie, H.; Wong, L.K.; Shuai, H.H.; Li, Y.H.; Cheng, W.H. Lightweight Deep Learning for Resource-Constrained Environments: A Survey. ACM Comput. Surv. 2024, 56, 267.
- Rahman, S.; Pal, S.; Yearwood, J.; Karmakar, C. Analysing Performances of DL-based ECG Noise Classification Models Deployed in Memory-Constraint IoT-enabled Devices. IEEE Trans. Consum. Electron. 2024, 70, 704–714.
- Gill, S.S.; Golec, M.; Hu, J.; Xu, M.; Du, J.; Wu, H.; Walia, G.K.; Murugesan, S.S.; Ali, B.; Kumar, M.; et al. Edge AI: A Taxonomy, Systematic Review and Future Directions. Clust. Comput. 2025, 28, 18–61.
- Pawłowski, M.; Wróblewska, A.; Sysko-Romańczuk, S. Effective Techniques for Multimodal Data Fusion: A Comparative Analysis. Sensors 2023, 23, 2381.
- Malinverni, L.; Schaper, M.M.; Pares, N. Multimodal Methodological Approach for Participatory Design of Full-Body Interaction Learning Environments. Qual. Res. 2019, 19, 71–89.
- Badidi, E.; Moumane, K.; Ghazi, F.E. Opportunities, Applications, and Challenges of Edge-AI Enabled Video Analytics in Smart Cities: A Systematic Review. IEEE Access 2023, 11, 80543–80572.
- Xaviar, S.; Yang, X.; Ardakanian, O. Centaur: Robust Multimodal Fusion for Human Activity Recognition. IEEE Sens. J. 2024, 24, 18578–18591.
- Tzimos, N.; Parafestas, E.; Voutsakelis, G.; Kontogiannis, S.; Kokkonis, G. Multimodal Interaction with Haptic Interfaces on 3D Objects in Virtual Reality. Electronics 2025, 14, 4035.
- Chang, T.W. Supporting Design Learning with Design Puzzles: Some Observations of On-Line Learning with Design Puzzles. In Recent Advances in Design and Decision Support Systems in Architecture and Urban Planning; Van Leeuwen, J.P., Timmermans, H.J.P., Eds.; Springer: Dordrecht, The Netherlands, 2004; pp. 293–307.






| Data Channel | Data Type | Movement Scale | Body Location | AI Technology |
|---|---|---|---|---|
| Scene | CV | Macro | Full body | PoseNet |
| Action | IMU | Micro | Wrist | TinyML |
| Trigger | Sound wave | Middle | Oral | LLM |
| Hand State | Clear Action | Hesitation | Transition |
|---|---|---|---|
| Constraint Rule 1 | N/A | | |
| Constraint Rule 2 | | | |
| Action Meaning | A decisive, intentional movement. | The user is paused, confused, or holding the object still. | Nuanced micro-adjustments and non-gestural movements. |
| abs_Time | P_Up | P_Down | P_Left | P_Right | Certainty | Entropy | Hand_State |
|---|---|---|---|---|---|---|---|
| 14:44:35 | 0.000022 | 0.955280 | 0 | 0.044698 | 95.53% | 0.182854 | Clear Action |
| 14:44:39 | 0.000250 | 0.980874 | 0 | 0.018876 | 98.09% | 0.095951 | Clear Action |
| 14:45:24 | 0.000009 | 0.954141 | 0 | 0.045850 | 95.41% | 0.186223 | Clear Action |
| 14:45:32 | 0.000069 | 0.968338 | 0 | 0.031592 | 96.83% | 0.140962 | Clear Action |
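The logged Certainty and Entropy columns are consistent with, respectively, the maximum class probability and the Shannon entropy in nats over the four direction probabilities. A sketch of that computation (the function names are ours, not the paper's):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in nats; zero-probability terms contribute nothing."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def certainty(probs):
    """Certainty of the dominant direction: the maximum class probability."""
    return max(probs)
```

For the first table row, `[0.000022, 0.955280, 0, 0.044698]`, this yields an entropy of about 0.18285 and a certainty of 95.53%, matching the logged values.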
| Hand State | P1 Tutorial | P1 Creative | P2 Tutorial | P2 Creative | Average |
|---|---|---|---|---|---|
| Clear Action | 23.91% | 21.21% | 6.06% | 18.18% | 17.34% |
| Hesitation | 28.26% | 16.67% | 30.30% | 36.36% | 27.90% |
| Transition | 47.83% | 62.12% | 63.64% | 45.45% | 54.76% |
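The Average column above is the unweighted mean of the four session columns, which can be verified directly:

```python
def row_average(values):
    """Average hand-state share across the four session columns (percent)."""
    return round(sum(values) / len(values), 2)
```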
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Xu, S.; Li, C.; Li, J.-R.; Chang, T.-W. From Signal to Semantics: The Multimodal Haptic Informatics Index for Triangulating Haptic Intent at the Edge. Electronics 2026, 15, 832. https://doi.org/10.3390/electronics15040832
Xu S, Li C, Li J-R, Chang T-W. From Signal to Semantics: The Multimodal Haptic Informatics Index for Triangulating Haptic Intent at the Edge. Electronics. 2026; 15(4):832. https://doi.org/10.3390/electronics15040832
Chicago/Turabian Style: Xu, Song, Chen Li, Jia-Rong Li, and Teng-Wen Chang. 2026. "From Signal to Semantics: The Multimodal Haptic Informatics Index for Triangulating Haptic Intent at the Edge" Electronics 15, no. 4: 832. https://doi.org/10.3390/electronics15040832
APA Style: Xu, S., Li, C., Li, J.-R., & Chang, T.-W. (2026). From Signal to Semantics: The Multimodal Haptic Informatics Index for Triangulating Haptic Intent at the Edge. Electronics, 15(4), 832. https://doi.org/10.3390/electronics15040832

