1. Introduction
According to traditional Chinese medicine (TCM) theory, meridians are pathways through which vital energy and blood circulate in the human body. These channels connect the internal organs to the five sensory organs, limbs, skin, and hair. Acupoints are specific points located along the meridians, serving as gateways through which energy flows between the body’s surface and internal organs, thereby helping to regulate physiological functions.
In 1989, the World Health Organization (WHO) convened the Scientific Group on Standard Acupuncture Point Locations, identifying a total of 361 standard acupoints and 48 extra points on the human body. Acupuncture or acupressure stimulation of these points induces neurophysiological reflex responses that help relieve local or systemic pain and discomfort. Additionally, regular stimulation of acupoints in the absence of illness can improve the circulation of qi and blood, achieving the TCM principle of preventive treatment of disease (
Figure 1).
Acupressure, similar to acupuncture, stimulates acupoints non-invasively using manual pressure, facilitating the flow of energy to improve various symptoms [2]. Accurate stimulation of the correct acupoints can enhance body function, reduce stress, and promote general wellness [3,4,5]. For example, a licensed acupuncturist and skincare expert has demonstrated that a 20 min acupressure routine can rejuvenate facial muscles, reduce wrinkles, and restore a youthful appearance, highlighting the healthcare potential of facial acupressure.
Currently, most people rely on online resources to learn about acupoints. However, the unstructured nature of such information often leads to misidentification and incorrect stimulation, which not only reduces treatment effectiveness but may also cause physical harm. Therefore, developing a system that enables precise, standardized acupoint localization is crucial. With recent advances in AI, the integration of facial feature detection and fingertip tracking can address the issue of inaccurate acupoint identification. Such a system also supports personalized healthcare management and interactive learning, promoting the dissemination and practical application of TCM.
In TCM theory, facial acupoints are closely related to the meridian system of the human body. Proper acupoint massage can improve facial blood circulation and enhance skin health. However, for non-experts, accurately locating these acupoints and understanding their functions is difficult. Traditional learning methods rely heavily on static diagrams and textual descriptions, which lack intuitiveness and interactivity, and are often undermined by inaccurate labeling (
Figure 2). With the advancement of AI technology, facial feature models now offer a solution for acupoint detection and interactive learning. This study pursues the following objectives:
Enhancing accuracy in acupoint recognition: TCM approaches often depend on practitioner experience and static diagrams, which can be influenced by individual expertise and labeling inconsistencies, leading to errors in identification. By integrating AI technology, acupoints can be localized with greater precision, minimizing deviations caused by personal differences or learning mistakes.
Improving interactivity in TCM learning: Conventional learning methods, such as diagrams and text-based instructions, provide limited engagement. Through the combination of AI with hand-tracking and marking technologies, learners can receive interactive guidance during acupoint pressing. This offers a more intuitive and immersive learning experience, thereby strengthening both participation and educational outcomes.
Promoting accessibility and global dissemination of TCM: Traditional acupressure typically requires professional instruction or extensive information searching, which restricts its accessibility to the general public. AI-driven healthcare knowledge retrieval, supported by real-time voice interaction, can deliver clear and user-friendly explanations. This facilitates the spread of TCM knowledge across diverse linguistic and cultural contexts, contributing to its broader adoption and international development.
2. Literature Review
Acupuncture points are anatomically localized relative to specific markers on the body; however, accurately mapping them onto individuals is a major challenge for inexperienced practitioners of Chinese medicine. Zhang et al. [6] proposed a system to locate and visualize acupoints on an individual's face in an augmented reality (AR) environment. The system combines a face alignment model with a hair segmentation model to provide highly accurate acupoint reference points at 60 frames per second (FPS). Conventional acupoint localization relies on B-cun measurements performed by specialized physicians, so in practice the physician's experience and skill level affect measurement accuracy and can introduce errors. With this system, even users with no relevant professional skills can accurately locate facial acupoints during self-training or self-care.
With the rapid development of AI, many researchers have begun extending deep learning to images. Zhang et al. [7] proposed an image-based facial acupoint detection method that uses the high-resolution network (HRNet) deep learning model to maintain high-resolution feature maps throughout the network and to enhance the representation capability and accuracy of acupoint localization through multi-scale feature fusion. Yuan et al. [8] adopted the YOLOv8-pose model and an improved version, YOLOv8-ACU, specifically designed for efficient recognition of facial acupoints. Several optimizations were made to the YOLOv8 model: an efficient channel attention mechanism was introduced to enhance the extraction of acupoint features, the original neck module was replaced with a lightweight Slim-neck module to reduce parameter counts and computational burden, and the loss function was changed to generalized intersection over union to improve localization accuracy. The experimental results show that YOLOv8-ACU achieves 97.5% mean average precision at IoU threshold 0.5 (mAP@0.5) and 76.9% mAP@0.5–0.95 on a custom facial acupoint dataset. In addition, the model parameters are reduced by 0.44 M, the model size by 0.82 MB, and giga floating-point operations by 9.3%, demonstrating excellent computational performance.
To address facial acupoint localization, Zhang et al. [1] proposed the FAcupoint dataset, a dense facial acupoint annotation set aimed at advancing automatic recognition in TCM. Five certified physicians manually annotated 654 face images, labeling the locations of 43 facial acupoints. The dataset provides rich resources for training and evaluating deep learning models, offers valuable material for future research on the fusion of Chinese medicine and artificial intelligence, and fills a long-standing gap in annotated data for Chinese medicine.
The accumulation of these research results has expanded the application of acupoints beyond traditional acupuncture into self-health management and healthcare massage. While acupoint identification traditionally depended on professional experience and manual measurement, it is now moving toward standardization and intelligence through AI-based image processing. Moreover, because the finger-pressure target area of a facial acupoint is much wider than that of an acupuncture needle point, existing techniques are sufficient for acupoint labeling. Academic research on specific acupoints, summarized in Table 1, demonstrates the important correlation between acupoints and human health. With the continuous optimization of AI models and the further accumulation of data, intelligent devices and recognition technologies that adapt accurately to individual differences can promote the popularization and advancement of TCM healthcare. Through user-friendly operation, acupoint recognition can support personalized massage, healthcare, and learning, which represents a key direction for future development.
3. Research Methods
In this study, we established a correspondence between TCM acupoint mapping and facial feature data to identify the respective acupoints. These acupoints were labeled by capturing a dynamic human face through a webcam. Hand recognition was used to highlight the index finger's feature point, assisting users in accurately obtaining facial acupoint coordinates; integrated with hand tracking, this allows users to intuitively locate and press the correct acupoints. The model demonstrates good adaptability to varying lighting conditions, angles, and facial expressions, ensuring stable and accurate acupoint recognition. An additional audio-guided teaching module enables users to learn and experience health benefits interactively (
Figure 3).
3.1. Face Recognition Model
With the rapid development of artificial intelligence and computer vision, facial recognition models have been widely applied in biometrics, medical image analysis, and human–computer interaction. Current facial recognition techniques can be broadly categorized into traditional computer vision methods and deep learning models. Traditional methods, such as OpenCV and Dlib, rely on feature engineering for face detection. Although they are fast and suitable for basic recognition and feature point calibration, they are less adaptable to complex environments and support a limited set of features, which makes them insufficient for high-precision applications. In contrast, deep learning models such as FaceNet and DeepFace use deep neural networks to convert face images into high-dimensional vectors, enabling accurate identity recognition and face matching while remaining stable under varied lighting and angle conditions. However, these models are primarily designed for identity recognition rather than fine-scale facial feature annotation, making them less suitable for acupoint recognition in TCM applications. Some also require substantial computational resources and may suffer latency in real-time applications, further degrading the user experience. Therefore, among the many face recognition technologies, MediaPipe is the best choice for this study owing to its light weight, efficiency, and real-time operation.
MediaPipe Face Mesh is an open-source machine learning framework introduced by Google in 2020 that focuses on real-time image processing to quickly and accurately detect facial features. In this study, we utilize the 468 facial landmarks provided by MediaPipe Face Mesh (
Figure 4), which cover key areas such as the eyes, nose, mouth, zygomatic bones, and chin, as the basis for recognition, and can cope with different lighting conditions, changes in face angle, facial gestures, and individual face size differences.
Since the landmarks provided by MediaPipe are not specifically designed for identifying TCM acupoints, some points cannot be directly mapped, resulting in inaccuracies in acupoint labeling. In this study, we focused on adjusting the points that could not be directly labeled and performed additional dynamic mathematical calculations on the feature points to achieve more accurate point positioning. For example, neighboring landmarks were used to calculate coordinates based on specific XYZ axis ratios and spatial proportions. This approach not only compensates for the limitations of MediaPipe in acupoint recognition but also adjusts point positions based on individual facial shapes, ensuring both stability and accuracy in the recognition process. Compared with static labeling of TCM acupoints, this dynamic calculation method is unaffected by face size or camera distance, and it automatically adjusts acupoint coordinates, thereby improving recognition accuracy. A total of 45 acupoints were labeled in this study (
Figure 5).
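The dynamic calculation described above can be sketched as a simple blend of neighboring landmarks. The landmark indices (1, 168) and the 0.3 ratio below are illustrative placeholders, not the actual mapping used in the system:

```python
# Sketch of the dynamic acupoint calculation: an acupoint with no
# directly corresponding Face Mesh landmark is derived by blending
# neighboring landmarks at a fixed ratio. Indices and ratio here are
# hypothetical examples, not the system's real mapping.

def interpolate_acupoint(landmarks, idx_a, idx_b, ratio):
    """Blend two normalized (x, y, z) landmarks at the given ratio.

    Because MediaPipe landmarks are normalized to the frame, the result
    scales automatically with face size and camera distance.
    """
    ax, ay, az = landmarks[idx_a]
    bx, by, bz = landmarks[idx_b]
    return (ax + (bx - ax) * ratio,
            ay + (by - ay) * ratio,
            az + (bz - az) * ratio)

# Two hypothetical normalized landmarks standing in for Face Mesh output.
landmarks = {1: (0.50, 0.60, -0.02), 168: (0.50, 0.45, -0.05)}
x, y, z = interpolate_acupoint(landmarks, 1, 168, 0.3)  # 30% toward 168
```

Because the inputs are normalized coordinates rather than pixels, the interpolated acupoint moves with the face as the user approaches or turns away from the camera, matching the distance-invariance described above.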
3.2. Hand Recognition Model
The MediaPipe Hand Gesture Recognizer detects the positions of 21 finger joint landmarks within the detected hand region (
Figure 6). The model was trained on approximately 30,000 real-world images, supplemented with synthesized hand poses rendered over various backgrounds, so that it can accurately detect hand keypoints in real-time video. In this study, the MediaPipe hand recognition function tracks the position of the user's index finger and marks its tip with a yellow dot (
Figure 7). With the aid of this visual marking, the user can understand the spatial relationship between their hand and the facial acupoints and ensure that the index fingertip is accurately matched with the acupoints defined in Chinese medicine, improving the accuracy and effectiveness of the operation and avoiding wrong presses caused by angular errors or hand movement.
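The fingertip-marking step reduces to reading one landmark from the MediaPipe Hands output. MediaPipe's hand model returns 21 normalized landmarks, with index 8 corresponding to the index fingertip; the minimal sketch below (with dummy landmark values in place of live detector output) converts that landmark to pixel coordinates for drawing the yellow dot:

```python
# Sketch: locating the index fingertip from MediaPipe Hands output.
# Landmark index 8 is the index fingertip
# (mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP).

INDEX_FINGER_TIP = 8

def fingertip_pixel(hand_landmarks, frame_w, frame_h):
    """Convert the normalized fingertip landmark to pixel coordinates
    so a yellow dot can be drawn on the webcam frame."""
    x, y = hand_landmarks[INDEX_FINGER_TIP]
    return int(x * frame_w), int(y * frame_h)

# 21 dummy normalized landmarks; only index 8 matters for the marker.
hand = [(0.0, 0.0)] * 21
hand[INDEX_FINGER_TIP] = (0.25, 0.75)
px, py = fingertip_pixel(hand, 640, 480)  # → (160, 360)
```

In the live system the same conversion would be applied per frame to the detector's output before drawing the marker with OpenCV.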
3.3. Face Integration Hand Recognition
During integration, when the user’s index finger approaches a specific facial acupoint, the system detects the overlap between the finger position and the acupoint on the face, and the name of the acupoint and a brief description of its function are displayed in the upper-left corner of the interface (
Figure 8). This real-time visual feedback helps the user to confirm the correctness of the press position and avoid misplacement problems caused by personal inexperience or mapping errors. Compared with the traditional way of learning acupoints, AI’s interactive recognition method can greatly improve the learning efficiency and accuracy, and make the operation of acupoint pressing more convenient.
To ensure the execution efficiency of the system, the acupoint description data are built directly into the system's code (
Figure 9), eliminating the need to query an external AI server and enabling rapid information display, which enhances real-time performance and the user experience even when operating without an Internet connection.
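A minimal sketch of this overlap-and-lookup logic is shown below. The acupoint names, pixel positions, descriptions, and the 15-pixel hit radius are all illustrative assumptions, not values taken from the system:

```python
import math

# Sketch: matching the fingertip to the nearest acupoint marker and
# looking up its built-in description. All names, positions,
# descriptions, and the hit radius below are hypothetical examples.

ACUPOINT_INFO = {
    "Yintang (EX-HN3)": "Midpoint between the eyebrows.",
    "Yingxiang (LI20)": "Beside the nostril.",
}
ACUPOINT_POS = {
    "Yintang (EX-HN3)": (320, 180),
    "Yingxiang (LI20)": (355, 260),
}

def match_acupoint(finger_xy, radius=15):
    """Return (name, description) of the acupoint whose marker overlaps
    the fingertip, or None if the fingertip is not on any marker."""
    for name, (ax, ay) in ACUPOINT_POS.items():
        if math.hypot(finger_xy[0] - ax, finger_xy[1] - ay) <= radius:
            return name, ACUPOINT_INFO[name]
    return None

hit = match_acupoint((322, 185))   # fingertip overlaps Yintang's marker
miss = match_acupoint((100, 100))  # no marker nearby
```

Keeping the description dictionary in code, as the system does, means this lookup is a constant-time dictionary access with no network round trip.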
3.4. 3D Mesh Model Calibration
In webcam-based views, acupuncture points are displayed in 2D over the facial image, which makes it difficult to observe height differences. To address this issue, the system converts the facial landmark data into a 3D mesh model using Open3D; the model supports zooming and rotation, providing an understanding of the location and function of the acupoints from different perspectives. However, the
Z-axis data extracted from 2D images via MediaPipe lacks sufficient depth variation to represent true 3D facial structure after conversion to a 3D model (
Figure 10).
Therefore, in this study, the
Z-axis data were magnified 1200 times to effectively enhance the stereoscopic appearance of the face (
Figure 11). The appropriate magnification varies with the specifications of the webcam lens used, so developers need to adjust it to their own setup. The equipment used in this study was an Adesso CyberTrack K4 4K ultra-high-definition webcam.
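The Z-axis amplification step can be sketched as a simple per-axis scaling applied before the points are handed to Open3D. The factor of 1200 matches the value reported above for the authors' webcam; the mapping of X/Y to pixel units is an assumption about the preprocessing:

```python
import numpy as np

# Sketch: amplifying the Z axis of MediaPipe landmarks before building
# the Open3D mesh, so the flattened depth becomes visibly stereoscopic.
# 1200 is the factor reported for the study's webcam; other lenses
# will need a different value.

Z_SCALE = 1200.0

def to_3d_points(landmarks, frame_w, frame_h):
    """Map normalized (x, y, z) landmarks to points whose X/Y are in
    pixels and whose Z is exaggerated for visualization.

    The resulting array can be wrapped in an Open3D point cloud or
    triangle mesh for interactive rotation and zooming.
    """
    pts = np.asarray(landmarks, dtype=float)  # copies the input list
    pts[:, 0] *= frame_w
    pts[:, 1] *= frame_h
    pts[:, 2] *= Z_SCALE
    return pts

pts = to_3d_points([(0.5, 0.5, -0.01), (0.6, 0.4, -0.03)], 640, 480)
```

Without the scaling, MediaPipe's Z values (here on the order of 0.01–0.03) would span only a few hundredths of a unit against pixel-scale X/Y, producing the nearly flat mesh shown in Figure 10.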
4. System Operation
This AI interactive voice healthcare teaching system for facial acupoints was developed in the Python v3.11 programming language. The system interface is divided into three sections: the face recognition and acupoint labeling area on the left, the acupoint information area on the top right, and the voice-assisted teaching area at the bottom.
- 1.
Left face recognition area: The system automatically detects the presence of a face using the MediaPipe model and quickly locates its contours and key features. It recognizes TCM acupoints (displayed as red dots) and displays their names based on facial features. The system supports face rotation of up to 90° left and right, as well as limited up-and-down angle changes. Regardless of face size or whether the user is wearing glasses, the system can track the face in real time and maintain stable recognition performance (
Figure 12).
- 2.
In the left face recognition area, the MediaPipe hand model detects the position of the user’s index finger, and the tip of the index finger is marked with a yellow dot. When the yellow dot overlaps with a red acupoint marker, the system automatically displays the corresponding acupoint description in the upper-right corner (
Figure 13), which makes it easy to learn the correct location of the acupoints.
- 3.
Because the acupoint markers occupy a relatively large area in the 2D view, learners can press the 3D diagram button in the system interface to further understand the exact location of the points, and the system marks the most recent touch point with a small red dot on the 3D facial mesh (
Figure 14). This feature allows learners to freely rotate and zoom the 3D model to observe acupoint positions from different perspectives. For example, facial estheticians, who typically massage from the top of the head down to the tip of the nose, can tilt the model up and down to familiarize themselves with the acupoint locations.
- 4.
The right half of the interface serves as an interactive teaching area, combining voice guidance with hands-on practice to help learners master the knowledge of face acupoint health care and the correct pressing method. Nine acupoints are built in, and by clicking the relevant buttons and using voice guidance, users are guided step-by-step to perform correct acupressure techniques, helping them achieve effective health outcomes (
Figure 15).
5. Conclusions
We integrated MediaPipe face and hand recognition technologies to develop an intelligent, standardized, and personalized AI-based facial acupoint identification and healthcare consultation system. The system addresses the limitations of TCM acupoint identification, which traditionally relies on practitioner experience and lacks standardization and interactivity. AI image processing accurately locates facial acupoints from facial feature points, while index finger tracking provides intuitive guidance for pressing the correct points. This ensures that users can precisely stimulate the appropriate acupoints, aiding Chinese medicine students in their practical learning and promoting the advancement of TCM. As prevention is better than cure, the use of acupoints for self-regulation may reduce the frequency of medical visits, particularly benefiting individuals in remote or underserved areas with limited access to healthcare resources.
The findings of this study have broad applicability in TCM healthcare, health education, and intelligent health management. In the future, this system may be extended to identify acupoints on the ears [
15], hands, feet, and back. Furthermore, the enhancement of a related healthcare database supports the development of more comprehensive and personalized health management plans. This system also promotes the popularization of acupressure knowledge, enabling the general public to alleviate discomfort before seeking medical attention. Consequently, TCM healthcare is no longer confined to professionals; it can be disseminated to a broader audience in a simple, scientific, and intelligent manner, thereby achieving standardized and digital TCM assistance.
Author Contributions
Conceptualization, W.-C.C. and Y.-H.C. (Yu-Hsuan Chen); methodology, W.-C.C.; software, W.-C.C. and Y.-H.C. (Yu-Hsuan Chen); validation, J.-W.W., H.-J.C. and J.-W.T.; formal analysis, W.-C.C. and Y.-H.C. (Yu-Hsuan Chen); investigation, W.-C.C. and Y.-H.C. (Yu-Hsuan Chen); resources, H.-J.C. and J.-W.T.; data curation, Y.-H.C. (Yu-Hsing Chen) and J.-W.W.; writing—original draft preparation, W.-C.C. and Y.-H.C. (Yu-Hsing Chen); writing—review and editing, J.-W.W., H.-J.C. and J.-W.T.; visualization, W.-C.C. and Y.-H.C. (Yu-Hsing Chen); supervision, H.-J.C. and J.-W.T.; project administration, W.-C.C.; funding acquisition, none. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Ethical review and approval were waived for this study because the research involved non-invasive facial image analysis conducted solely for system development and demonstration purposes. All participants were members of the research team and provided informed consent prior to participation.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.
Data Availability Statement
The data presented in this study are available on request from the corresponding author due to privacy considerations.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Zhang, T.; Liu, C.; Zhou, J.; Yang, H.; Lin, Y. FAcupoint: The first dense facial acupoint localization dataset and baselines. Expert Syst. Appl. 2025, 272, 126683. [Google Scholar] [CrossRef]
- Lee, E.J.; Frazier, S.K. The efficacy of acupressure for symptom management: A systematic review. J. Pain Symptom Manag. 2011, 42, 589–603. [Google Scholar] [CrossRef] [PubMed]
- Chen, K.M.; Lee, Y.H. Effects of head and face massage on heart rate variability and anxiety in adult women. Evid.-Based Nat. Med. 2018, 2, 17–27. [Google Scholar]
- Yeung, W.F.; Ho, F.Y.Y.; Chung, K.F.; Zhang, Z.J.; Yu, B.Y.M.; Suen, L.K.P.; Lao, L.X. Self-administered acupressure for insomnia disorder: A pilot randomized controlled trial. J. Sleep Res. 2018, 27, 220–231. [Google Scholar] [CrossRef] [PubMed]
- Goldstein, S. Your Best Face Now: Look Younger in 20 Days With the Do-It-Yourself Acupressure Facelift; Penguin: New York, NY, USA, 2012. [Google Scholar]
- Zhang, M.; Schulze, J.; Zhang, D. FaceAtlasAR: Atlas of facial acupuncture points in augmented reality. arXiv 2021, arXiv:2111.14755. [Google Scholar]
- Zhang, T.; Yang, H.; Ge, W.; Lin, Y. An image-based facial acupoint detection approach using high-resolution network and attention fusion. IET Biom. 2023, 12, 146–158. [Google Scholar] [CrossRef]
- Yuan, Z.; Shao, P.; Li, J.; Wang, Y.; Zhu, Z.; Qiu, W.; Han, A. YOLOv8-ACU: Improved YOLOv8-pose for facial acupoint detection. Front. Neurorobot. 2024, 18, 1355857. [Google Scholar] [CrossRef] [PubMed]
- Jiang, Q.; Zheng, Y.; Davis, T. Clinical study on the pattern of acupuncture point selection for treating juvenile myopia. Young Think. Rev. 2025, 1, 20–28. [Google Scholar] [CrossRef]
- Liu, Y.; Lee, D.H.; Kosowicz, E.; Li, J.; Ma, L.; Yao, S.; Kong, J. Targeting mental health: A scoping review of acupoints selection in acupressure for depression, anxiety, and stress. Brain Behav. Immun. Integr. 2025, 10, 100111. [Google Scholar] [CrossRef]
- Leedasawat, P.; Sangvatanakul, P.; Tungsukruthai, P.; Kamalashiran, C.; Phetkate, P.; Patarajierapun, P.; Sriyakul, K. The efficacy and safety of Chinese eye exercise of acupoints in dry eye patients: A randomized controlled trial. Complement. Med. Res. 2024, 31, 149–159. [Google Scholar] [CrossRef] [PubMed]
- Chou, H.-J.; Tsai, H.-Y.; Sun, T.-C.; Lin, M.-F. Effectiveness of acupressure in relieving cancer-related fatigue: A systematic literature review and analysis. J. Nurs. 2022, 69, 75–87. [Google Scholar]
- Lu, L.; Wen, Q.; Hao, X.; Zheng, Q.; Li, Y.; Li, N. Acupoints for tension-type headache: A literature study based on data mining technology. Evid.-Based Complement. Altern. Med. 2021, 2021, 5567697. [Google Scholar] [CrossRef]
- Chaochao, Y.; Li, W.; Lihong, K.; Feng, S.; Chaoyang, M.; Yanjun, D.; Hua, Z. Acupoint combinations used for treatment of Alzheimer’s disease: A data mining analysis. J. Tradit. Chin. Med. 2018, 38, 943–952. [Google Scholar] [CrossRef]
- Zhang, M.; Schulze, J.P.; Zhang, D. E-faceatlasAR: Extend atlas of facial acupuncture points with auricular maps in augmented reality for self-acupressure. Virtual Real. 2022, 26, 1763–1776. [Google Scholar]
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.