Article

Automatic Puncture Timing Detection for Multi-Camera Injection Motion Analysis

by Zhe Li, Aya Kanazuka, Atsushi Hojo, Takane Suzuki, Kazuyo Yamauchi, Shoichi Ito, Yukihiro Nomura and Toshiya Nakaguchi
1 Department of Medical Engineering, Graduate School of Science and Engineering, Chiba University, Chiba 263-8522, Japan
2 Department of Orthopedic Surgery, Chiba University, Chiba 260-0856, Japan
3 Department of Bioenvironmental Medicine, Graduate School of Medicine, Chiba University, Chiba 260-0856, Japan
4 Department of Community-Oriented Medical Education, Graduate School of Medicine, Chiba University, Chiba 260-0856, Japan
5 Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba 260-0856, Japan
6 Center for Frontier Medical Engineering, Chiba University, Chiba 263-8522, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(12), 7120; https://doi.org/10.3390/app13127120
Submission received: 18 April 2023 / Revised: 31 May 2023 / Accepted: 12 June 2023 / Published: 14 June 2023
(This article belongs to the Special Issue Advances of Intelligent Imaging Technology)

Abstract

Precisely detecting puncture times has long posed a challenge in medical education. This challenge is attributable not only to the subjective nature of human evaluation but also to the insufficiency of effective detection techniques, resulting in many medical students lacking full proficiency in injection skills upon entering clinical practice. To address this issue, we propose a novel detection method that enables automatic detection of puncture times during injection without needing wearable devices. In this study, we utilized a hardware system and the YOLOv7 algorithm to detect critical features of injection motion, including puncture time and injection depth parameters. We constructed a sample of 126 medical injection training videos of medical students, and skilled observers were employed to determine accurate puncture times. Our experimental results demonstrated that the mean puncture time of medical students was 2.264 s and the mean identification error was 0.330 s. Moreover, we confirmed that there was no significant difference (p = 0.25 with a significance level of α = 0.05) between the predicted value of the system and the ground truth, which provides a basis for the validity and reliability of the system. These results show our system’s ability to automatically detect puncture times and provide a novel approach for training healthcare professionals. At the same time, it provides a key technology for the future development of injection skill assessment systems.

1. Introduction

Medical injection is a commonly utilized technique for delivering medications and vaccines to patients across various healthcare settings [1]. This method of administration offers several advantages, including rapid relief and accurate dosing, making it a primary choice for millions of individuals in need of treatment [2]. However, injections come with inherent risks such as discomfort, infection, and adverse drug reactions [3], making the accuracy of this technique crucial for ensuring positive patient outcomes. Despite its importance, achieving accuracy in medical injections is a complex process that requires careful consideration of multiple factors, and this challenge has persisted over time [4,5,6,7]. Therefore, it is essential to explore the factors that contribute to accurate injections, as well as potential strategies to improve injection accuracy and mitigate risks [8].
  • Intricacy. Vascular puncture is an intricate procedure that demands a high level of precision and dexterity [9]. The healthcare practitioner must identify the appropriate vein and skillfully insert the needle at the precise depth, ensuring its firm anchorage within the vein.
  • Limited visibility. Veins are frequently inconspicuous, particularly in patients with dark skin or dehydration. This may pose a challenge for novice practitioners in terms of vein localization and manipulation.
  • Patient apprehension. Many patients experience anxiety or trepidation when it comes to needle injections, which may impede the efficacy of inexperienced practitioners. Novices may inadvertently aggravate patient anxiety by lacking self-confidence or spending too much time on the procedure.
  • Inexperience. Vascular puncture, like any other skill, necessitates practice and proficiency [10]. Novice professionals may have inadequate exposure to the procedure during their training and limited opportunities to perform it on patients.
In view of these challenges, vascular puncture has emerged as a complex skill for novice nurses to acquire [1]. Therefore, it is crucial that extensive training in vascular puncture is given. Nonetheless, the current disruption caused by the COVID-19 pandemic has led medical institutions, including those involved in medical education, to suspend in-person lectures [11,12]. To surmount the constraints imposed by social distancing, an innovative approach to the crisis is necessary, involving distance learning and online evaluation techniques [13,14].
At this stage, numerous hospitals employ simulation training for nursing skills that involve physical procedures, in conjunction with theoretical learning, thereby offering a secure means of delivering medical education [15,16,17,18]. The advantage of this approach is that it allows novice professionals to practice in a secure and controlled environment, as well as to receive constructive feedback on their skills. Majima et al. [5,6] utilized a high-speed magnetic hand-motion capture device, Hand-MoCap, to establish a measurement system customized for measuring phlebotomy techniques. This technique enables novice medical students to acquire injection skills by mimicking the demonstration of an experienced practitioner and validating the instructor’s and student’s techniques through precise measurements. However, the comparison is subjective and does not offer an objective or quantitative evaluation of the injection technique. Saito et al. [7] developed a learning support system for blood collection techniques. The system calculates the angle of the syringe relative to the arm model and displays the difference between the instructor’s and the student’s angles in real time, thereby providing quantitative feedback. However, it has the limitation of requiring individual calibration for each student. To further replicate nursing decision making in a realistic environment, virtual reality (VR) can also be utilized. Loukas et al. [16] conducted a study to evaluate the effectiveness of simulation training on medical students’ intravenous (IV) cannulation skills. Studies have shown that novices’ injection skills improve significantly after simulator training, and these findings have been replicated in other medical student populations as well [4,19]. Furthermore, ongoing education could also help to build confidence and proficiency in novice healthcare professionals [20]. However, these studies did not demonstrate a significant difference in the accuracy of intravenous injections.
YOLOv7 [21] is an advanced object detection model that can be trained on various types of images. With its fast and accurate object detection capabilities, YOLOv7 can be helpful in many medical applications. For example, Sapitri et al. [22] devised a deep learning model based on the YOLOv7 framework to detect fetal hearts (measuring approximately 4–8 cm) in ultrasound videos in real time. The effectiveness of YOLO models in detecting small objects has been confirmed in validation studies [23,24], and such models have the potential to aid medical professionals in making accurate diagnoses. Durve et al. [25] evaluated the performance of the YOLOv5 and YOLOv7 models, along with DeepSORT, for droplet identification and tracking in microfluidics. Their results showed that YOLOv7 is faster, but lighter YOLO models can achieve real-time tracking. Oka et al. [26] developed an image recognition system for detecting dental instruments during treatment to prevent injuries and leftover instruments. YOLOv4 and YOLOv7 were used, with mean detection accuracy ranging from 85.3% to 89.7%. These studies not only demonstrate the detection speed and accuracy of YOLOv7, but also highlight its efficacy as a detection method for identifying instrument components.
This study aims to address the issues highlighted in the aforementioned research by proposing an automatic puncture time detection method based on the YOLOv7 algorithm. The proposed method detects time-related parameters in simulated injection training and derives characteristic parameters such as puncture depth through post-processing. The major contributions of this study can be outlined as follows:
  • Develop a novel multi-camera data acquisition system capable of capturing video data directly from medical injection education alongside relevant parameters, including operation time and reverse blood detection.
  • Manually demarcate the region of significance for the injection operation, then apply image processing techniques such as image rotation and segmentation to train an automated needle detection model based on YOLOv7.
  • Utilize the trained model to calculate the original needle length and puncture timing of injection, thereby providing a novel evaluation metric for medical injection training feedback and a crucial technique for establishing a subsequent evaluation system.
Section 2 provides an overview of the materials and data collection, including the design and implementation of the multi-camera data acquisition system. Section 3 details the YOLOv7 algorithm for clinical injection needle detection and the image processing techniques employed. Section 4 presents the experimental results, analyzes the errors, and verifies the robustness of the study. Section 5 provides a comprehensive discussion of the conclusions and implications of the study, including potential future research directions.

2. Materials and Data Collection

2.1. Multi-Camera Injection Analysis System

The system configuration is illustrated in Figure 1a; it comprises a timing device, a multi-camera system, a VICON verification system (a 3D motion analysis system), an arm model, and a reverse blood detection device. The precise parameters are enumerated in Table 1.
The camera configuration of the VICON verification system is depicted in Figure 1b, which includes 11 Vero cameras and one Vue HD full-synchronous high-speed camera. In this experiment, the Vero cameras were sampled at 100 Hz. The multi-camera system, as shown in Figure 1c, comprises four CMOS cameras that are directly connected to a personal computer (PC) and the VICON system to construct the operator’s hand injection model separately. Figure 1d displays the general camera for puncture detection, which is employed to monitor the puncture time during the injection. The timing device and the reverse blood detection device are shown in Figure 1e. The PC is linked to the timing device via a USB Type-C interface. Moreover, the photo sensor in the reverse blood detection device is linked to the Arduino main board of the timing device. The PC is responsible for power supply and data transmission, while the arm model is powered independently. The data acquisition software system was developed using Microsoft Visual C# 2022.

2.1.1. Timing Mechanism

The system’s timing button is fashioned using an Arduino development board, as illustrated in Figure 2. To initiate the operation, the operator needs to press the white button, which triggers the start signal to be sent to both the multi-camera system and the VICON system. Upon completing the injection simulation training, the operator can stop the timing by pressing the white button again, which transmits the end signal. This simple process enables accurate timekeeping of the entire operation duration.
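The acquisition software itself was written in C# (Section 2.1). Purely as an illustrative sketch of the signal flow, the Python snippet below assumes the Arduino reports hypothetical "START"/"STOP" messages over a pyserial link; the port name, baud rate, and message format are assumptions, not taken from the paper.

```python
# Sketch only: turn the two button presses into an operation duration.
import time
import serial  # pyserial

ser = serial.Serial("COM3", 9600, timeout=1)  # hypothetical port and baud rate

start_time = None
while True:
    msg = ser.readline().decode(errors="ignore").strip()
    if msg == "START":                 # first button press: start timing
        start_time = time.time()
    elif msg == "STOP" and start_time is not None:
        duration = time.time() - start_time   # total operation duration
        print(f"Operation time: {duration:.3f} s")
        break
```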

2.1.2. Multi-Camera System

The system utilized a total of five cameras for analyzing hand injection actions: four CMOS cameras for recording these actions and one general camera for detecting the moment of needle puncture. It is important to note that, in addition to the multi-camera system discussed in this paper, the VICON system, which contains 12 cameras, was also incorporated for the testing experiments. As this study does not consider the hand injection model, only the camera used for detecting the moment of puncture is discussed here, as illustrated in Figure 3.

2.1.3. Reverse Blood Detection Devices

‘Reverse blood’, also referred to as ‘backflow’ or ‘reflux’, is a phenomenon frequently encountered in medical procedures such as blood withdrawal or IV initiation [27]. Figure 4 shows the reverse blood during an injection.
This phenomenon results from pressure fluctuations within the blood vessel upon puncture, which temporarily leads to the opposite direction of blood flow [28]. The occurrence of reverse blood is typically regarded by healthcare professionals as indicative of a successful operation. In this experiment, we aim to detect reverse blood to evaluate the success of the injection, for which we have developed a hardware device employing a photo sensor. The operational principle of this device is illustrated in Figure 5.
A working principle diagram of the reverse blood detection device is shown in Figure 5a. When either no injection training is being conducted or the needle has not yet punctured the skin, no blood flows within the blood vessel, and, hence, the photo sensor reliably outputs a stable signal by sensing the LED located on the opposite end of the needle tube, as shown in Figure 6 from 0 s to 17 s. Nevertheless, the figure reveals the presence of extraneous disturbances, stemming from ambient illumination and interference originating from the VICON system’s IR camera.
Once the needle is inserted into the vein, however, reverse blood takes place: blood flows back into the syringe via the needle, producing a discernible alteration in the signal transmitted by the photo sensor, as depicted in Figure 6 over the interval from 17 s to 22 s. The oscillations observed during this phase result from simulating a regular pulsation of the human body. Notably, numerous preliminary experiments established that the saturation change at this transition is so rapid that it always exceeds a threshold of 100, as delineated in Table 2. Therefore, we were able to determine the exact point in time when the reverse blood occurred and record the exact time of detection.
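A minimal sketch of this decision rule, assuming the sensor signal has been logged as a per-sample saturation array; the array layout, sampling rate, and function name are illustrative:

```python
import numpy as np

JUMP_THRESHOLD = 100  # empirical saturation jump at reverse blood onset (Table 2)

def detect_reverse_blood_time(saturation, sample_rate=30.0):
    """Return the time (s) of the first sample-to-sample saturation jump
    above JUMP_THRESHOLD, or None if reverse blood never occurs."""
    diffs = np.abs(np.diff(saturation.astype(int)))
    candidates = np.flatnonzero(diffs > JUMP_THRESHOLD)
    if candidates.size == 0:
        return None
    return (candidates[0] + 1) / sample_rate
```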
As illustrated in Figure 5b, we have designed the sensor hardware keeping in mind the need to avoid some disturbance to the operator’s posture during the injection process. Hence, we have employed 3D printing technology to miniaturize the photo sensor and fix it onto the injection needle. Furthermore, to counteract occasional noise interference, we have wrapped the sensor with black tape to mitigate any possible light interference.

2.2. Experimental Subjects

The primary aim of this study was to automate the detection of the puncture time during medical injection training. To accomplish this aim, a total of 126 medical students in grades 4–6 from Chiba University School of Medicine participated in the study. These students had completed a basic nursing skills course and had learned injection techniques through instructional videos but lacked practical experience in a clinical setting. Figure 7 displays the images captured during the data collection process using a multi-camera system.

3. Experiment Method

The YOLO model has gained popularity as a powerful deep learning tool for target detection [29]. In this experiment, we have chosen YOLOv7 as the baseline model, as it strikes a good balance between detection accuracy and speed. However, YOLOv7 is a general-purpose model and is not directly applicable to complex field scenarios [21]. Thus, in order to achieve target detection of the injection needle, we must optimize the data and model parameters accordingly. The specific flowchart for the automatic detection of puncture time in clinical injections using the YOLOv7 algorithm in this paper is shown in Figure 8.
As depicted in Figure 8, we began by capturing one frame from the camera and rectifying the distortion caused by the camera’s wide-angle lens. During image preprocessing, we encountered a challenge in calibrating the bounding box because some operators held the needle almost horizontally. To overcome this issue, we rotated the image to increase the height of the bounding box and improve recognition accuracy. We then cropped the frame images to exclude unnecessary recognition ranges. Once preprocessing was complete, we fed the frames one by one into our trained YOLOv7 model, which identified the needle and output the bounding box information. To calculate the original length of the needle, it was necessary to ensure that the needle had not yet punctured the skin. To enhance the applicability of the system, we determined the minimum time from the start of the operator’s timing to skin puncture; since frames are processed sequentially, we experimentally determined this minimum to correspond to 20 frames. In our system, frames with i < 20 were treated as if the needle had not punctured the skin, the frame with i = 20 was used to calculate the original needle length, and frames with i > 20 were treated as if the needle had already punctured the skin during the injection.
We determined the original needle length from the diagonal length of the recognition frame provided by YOLOv7. Subsequently, we used the diagonal length of the bounding box during the puncture process to calculate the ratio of the needle remaining outside the skin. We then corrected anomalous values of this needle-outside ratio caused by hand occlusion and applied a moving average to its time series, selecting the best window parameters to obtain smoother information on the needle puncture. Finally, we automated the detection of the puncture time by evaluating the ratio of the needle remaining outside the skin. Through comparison tests, we determined the optimal puncture threshold to be 0.96. If the ratio was >0.96, indicating that the skin had not been punctured, we proceeded to read the next frame; if the ratio was ≤0.96, the needle had started to puncture the skin, and we output the puncture time, achieving automatic detection.
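The following Python sketch condenses this loop under stated assumptions: detect_needle stands in for the trained YOLOv7 model and is assumed to return a bounding box width and height (or None on a missed detection), and the ratio correction and smoothing of Sections 3.3.2 and 3.3.3 are omitted for brevity.

```python
import math

PUNCTURE_THRESHOLD = 0.96   # ratio at or below which the skin is considered punctured
CALIBRATION_FRAME = 20      # frame used to measure the original needle length

def find_puncture_time(frames, detect_needle, fps=30):
    """Return the puncture time in seconds, or None if never detected."""
    original_len = None
    for i, frame in enumerate(frames):
        box = detect_needle(frame)          # (width, height) of bounding box
        if box is None:
            continue                        # missed detection; handled in 3.3.2
        diag = math.hypot(box[0], box[1])   # diagonal approximates needle length
        if i < CALIBRATION_FRAME:
            continue                        # needle assumed not yet at the skin
        if i == CALIBRATION_FRAME:
            original_len = diag             # original needle length
            continue
        ratio = diag / original_len         # needle-outside ratio
        if ratio <= PUNCTURE_THRESHOLD:
            return i / fps                  # puncture time in seconds
    return None
```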

3.1. Image Preprocessing

In the initial stage of the experiment, the acquired data underwent preprocessing to enhance its quality. Calibration was imperative because a wide-angle camera was used, with frames of 1920 × 1080 pixels. To achieve calibration, we utilized the camera calibration approach suggested by Zhang [30], owing to its effectiveness and simplicity compared with conventional methods. The calibration plate was configured with a 9 × 6 grid of inner corners, each square measuring 10 × 10 mm, as demonstrated in Figure 9.
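A minimal OpenCV sketch of this calibration step; the calib/ image folder and file names are hypothetical, and at least one usable board image is assumed.

```python
import glob
import cv2
import numpy as np

# Zhang's method with a 9 x 6 inner-corner board and 10 mm squares.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 10.0  # mm

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):                  # hypothetical folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics and distortion coefficients, then rectify one frame.
_, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                       gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("frame.png"), K, dist)
```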
After the image rectification process, we opted to rotate the image by 30° to address the issue of the operator holding the injection needle at an overly flat angle, as illustrated in Figure 10.
This maneuver significantly enhanced the precision of subsequent experiments since a flat needle can cause a bounding box’s height to become too small. Rotating the image amplified the height of the bounding box, leading to more accurate needle height labeling and facilitating needle detection.
As the needle is minuscule, direct detection posed a challenge. To overcome this obstacle, we manually determined a region of interest (ROI) in the original image based on the location of the injection operation, with the camera and hand model positions fixed. The resulting image size was 534 × 534 pixels; the preprocessing results are demonstrated in Figure 11.
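A brief sketch of the rotation and cropping steps using OpenCV; the ROI origin below is a placeholder, since the actual coordinates depend on the fixed camera and hand model positions.

```python
import cv2

ANGLE = 30                 # rotation angle in degrees (Section 3.1)
ROI_SIZE = 534             # side length of the manually chosen ROI
ROI_ORIGIN = (600, 300)    # hypothetical top-left corner of the ROI

def preprocess(frame):
    """Rotate the rectified frame and crop the fixed region of interest."""
    h, w = frame.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), ANGLE, 1.0)
    rotated = cv2.warpAffine(frame, M, (w, h))
    x, y = ROI_ORIGIN
    return rotated[y:y + ROI_SIZE, x:x + ROI_SIZE]   # 534 x 534 crop
```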

3.2. YOLOv7 Model Training

The PyTorch 1.7 deep learning framework and the Python 3.8 programming language were utilized in the experiment, along with CUDA version 11.1. The training data were divided into training, validation, and test sets at a 7:2:1 ratio. To ensure differences between adjacent frames, one frame was extracted every 30 frames during the calibration process, as the general camera records at 30 FPS. The input image size remained consistent with the image preprocessing process, using an ROI of 534 × 534. To achieve superior training results and reduce the training time, a learning rate of 0.001, a weight decay coefficient of 0.0005, and the Adam optimizer were employed; the training batch size was set to 8, and the number of iterations to 200. The weight files were saved after training for subsequent testing. The model output comprises three parts: the label ID, the needle size, and location information.
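As an illustration of the frame extraction and 7:2:1 split described above, under the assumption that frames are sampled once per second from 30 FPS video; the video file name and helper function are hypothetical.

```python
import random
import cv2

def extract_frames(video_path, step=30):
    """Grab every 30th frame (1 frame/s at 30 FPS) so adjacent
    training images differ sufficiently."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames

# 7:2:1 split into training / validation / test sets.
images = extract_frames("injection.mp4")      # hypothetical file name
random.shuffle(images)
n = len(images)
train = images[:int(0.7 * n)]
val = images[int(0.7 * n):int(0.9 * n)]
test = images[int(0.9 * n):]
```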
As depicted in Figure 12, syringes used in clinical injections are intentionally small to reduce the likelihood of harming the patient, but this design also makes detecting such small targets challenging. In our study, we not only manually defined the region of interest (ROI) to enhance the accuracy of injection needle detection, but also discovered that detecting the needle site alone was inadequate despite multiple training runs. Hence, we redefined the detection site so that the detection process would not be disturbed by the hand grip. Figure 12 displays the precise location of the detection needle. The YOLOv7 test results ultimately exhibited a 99% accuracy rate in detecting the needle.

3.3. Post-Processing

3.3.1. Needle Original Length

In the experiment, we employ the YOLOv7 algorithm to train the needle recognition model and utilize the diagonal length of the recognition frame to estimate the length of the needle. The green line represents the length of the injection needle in Figure 13.
To ensure the precision of the original length detection, it is imperative to verify the operator’s injection process at this stage. Thus, four experimental parameters were defined to obtain precise information at this point, namely:
$$T_s: \text{start time}, \quad T_n: \text{needle skin-puncture time}, \quad T_r: \text{reverse blood generation time}, \quad T_e: \text{end time} \quad (1)$$
By utilizing these parameters, the operator’s injection training can be effectively categorized into three distinct phases, preparation time, operation time, and processing time.
$$\text{Preparation time} = T_n - T_s, \quad \text{Operation time} = T_r - T_n, \quad \text{Processing time} = T_e - T_r \quad (2)$$
After establishing the preparation time, our approach is to determine the original length of the needle by utilizing the bounding box information extracted by YOLOv7 during the preparation time. In order to establish the minimum required operating time for medical students with limited clinical experience, we relied on data obtained from 17 experts, each with at least three years of clinical injection experience. Ultimately, we were able to gather reliable results from 40 injections. The data collected from these experts was analyzed and the findings are presented in Figure 14.
As illustrated in Figure 14, optimizing the system’s practicality entails using the operators’ minimal time to puncture and ensuring that the length calculation stays within the preparation stage, thereby guaranteeing a precise calculation of the original needle length. Without such a minimum, some operators might already have begun skin puncture and moved into the second stage (operation time), leading to inaccurate length calculations. To address this issue, the minimum observed time of 0.694 s was adopted in this study. Moreover, considering that the camera used in this experiment recorded at 30 frames per second (FPS), we chose the bounding box at the 20th frame (20/30 ≈ 0.667 s, safely before the earliest observed puncture) to compute the original needle length.

3.3.2. Exception Handling

Once the original length of the needle has been obtained, we can compute the ratio of the needle section that remains outside the body, calculated as follows:
$$\text{Needle outside ratio} = \frac{\text{Actual length}}{\text{Original length}} \times 100\% \quad (3)$$
We can track the change in needle depth during the injection process by monitoring this percentage of change. However, in practical applications we have observed the following two situations:
  • A single or multiple frames are not recognized.
    During data processing, it is possible that certain frames may not be recognized, as depicted in Figure 15a. In such cases, we adopted an approach to compute the average value of the previous and subsequent frames of the unrecognizable frame, which was used as the output for further processing. The resultant processed data is presented in Figure 15b.
    $$\text{output} = \frac{\text{previous frame} + \text{next frame}}{2} \quad (4)$$
  • The calculated percentage is over 100%.
    As a result of the YOLOv7 recognition and post-processing calculations, the size of the bounding box may differ from the true size depending on the angle at which the operator holds the needle, resulting in a ratio exceeding 100%, as depicted in Figure 15c. To align with practical conditions, regions that surpass the 100% limit require exception handling. Since this situation essentially occurs in the stage before skin puncture, we assign randomized values between the skin puncture threshold and 100%, preventing the threshold from being crossed spuriously and affecting the determination of skin puncture, as illustrated in Figure 15d; both correction cases are summarized in the sketch after this list. The specific skin puncture threshold is expounded upon in Section 3.3.4.
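A compact sketch covering both correction cases, assuming failed detections are stored as NaN and that unrecognized frames occur in isolation with valid neighbors:

```python
import numpy as np

PUNCTURE_THRESHOLD = 0.96  # skin-puncture threshold from Section 3.3.4

def handle_exceptions(ratios):
    """ratios: per-frame needle-outside ratio, NaN where detection failed."""
    r = np.asarray(ratios, dtype=float)
    # Case 1: unrecognized frames -> mean of the previous and next frames
    # (Equation (4)); assumes isolated gaps with valid neighbors.
    for i in np.flatnonzero(np.isnan(r)):
        r[i] = (r[i - 1] + r[i + 1]) / 2
    # Case 2: ratios above 100% -> random value between the threshold and
    # 100%, so the pre-puncture phase cannot cross the threshold spuriously.
    over = r > 1.0
    r[over] = np.random.uniform(PUNCTURE_THRESHOLD, 1.0, size=over.sum())
    return r
```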

3.3.3. Moving Average

In the previous section, we resolved some anomalous data by exception handling. However, the actual data exhibited fluctuations, making it necessary to utilize a moving average technique to ensure precise determination in subsequent automatic detection of the puncture time. A moving average is a statistical method used to reduce fluctuations in data over time by computing the mean of a set of values over a defined time period and then sliding the time window forward to compute a new average for the following time period [31]. The final result is shown in Figure 16.
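A minimal sketch of this smoothing step; the window size below is an assumed placeholder, since the chosen value is not reported in the text.

```python
import numpy as np

def moving_average(ratios, window=5):  # window size is an assumption
    """Smooth the needle-outside ratio with a simple sliding mean."""
    kernel = np.ones(window) / window
    return np.convolve(ratios, kernel, mode="valid")
```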

3.3.4. Judgment of the Puncture Moment

In order to automatically detect the moment of puncture, it is crucial to determine an appropriate threshold value. Ideally, the needle length should gradually decrease from 100% when puncturing occurs. However, in the actual experimental setup, accurately calculating this decline from 100% proves challenging due to variations in operator technique and the varying angles of the needle resulting from different postures. Therefore, it becomes necessary to determine a suitable threshold value for this experiment. To achieve this, a comparative analysis was conducted by smoothing the needle puncture data described in Section 3.3.3, and the results were evaluated using four threshold values: 95%, 96%, 97%, and 98%. The outcomes are depicted in Figure 17.
The primary objective was to strike a balance between minimizing false positives, where a puncture is incorrectly detected when the needle has not actually punctured the skin, and minimizing false negatives, where a puncture goes undetected despite the needle successfully puncturing the skin. To determine the optimal threshold, we compared the standard deviation (SD) and mean absolute error (MAE) of four different data sets, as shown in Table 3.
Comparing the standard deviation (SD) of the predicted puncture times and the mean absolute error (MAE), both computed from the differences against the ground truth, quantifies the disparity between the predicted and true values. When the threshold is set to 96%, the SD reaches its minimum, indicating a relatively concentrated distribution of predicted values with minimal deviation from the true values, and thus superior model stability. Likewise, the MAE reaches its minimum at 96%, underscoring the highest average accuracy of the model. Therefore, this study adopted a threshold of 96% to achieve the greatest stability and accuracy.
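For reference, the SD and MAE in Table 3 can be computed as below; we read the SD as being taken over the prediction errors, which is one plausible interpretation of the text.

```python
import numpy as np

def threshold_scores(predicted, ground_truth):
    """SD and MAE of predicted puncture times vs. ground truth (Table 3)."""
    errors = np.asarray(predicted) - np.asarray(ground_truth)
    return np.std(errors), np.mean(np.abs(errors))
```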

4. Results and Analysis

4.1. Results

In our experiment, a total of 126 operators participated. However, data for 17 of these participants were unavailable due to self-occlusion, wherein 9 participants fully obstructed the view with their arm while 8 others partially obstructed the view with their finger during the injection process. Therefore, we analyzed data from a total of 109 participants. The results of our analysis are presented in Figure 18.
The prediction performance of the proposed YOLOv7-based method was evaluated by comparing the predicted values with the ground truth. The ground truth data were analyzed frame by frame by an expert observer to ascertain the moment of skin puncture. In the Bland–Altman plot in Figure 18a, the red dashed lines indicate the upper and lower limits of agreement (LOA) between the two methods and the average difference between their measurements. The LOA range from 1.1406 [95% CI: 0.9828 to 1.2984] s to −0.7600 [95% CI: −0.9178 to −0.6022] s. The mean difference between the methods was 0.1903 [95% CI: 0.09829 to 0.2824] s, indicating a fixed bias (one-sample t-test against a population mean of zero, p = 0.0001). Figure 18a shows that 104 of the 109 data points fall within the limits of agreement, indicating strong agreement between the two methods. Furthermore, the average preparation time is 2.264 s and the mean error is 0.330 s. Figure 18b shows no statistically significant difference (p = 0.25) between the predicted values and the ground truth at a significance level of α = 0.05. These results support the validity and reliability of the proposed method.
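For reference, the limits of agreement in a Bland–Altman analysis are conventionally the mean difference ± 1.96 standard deviations of the differences, which reproduces the reported interval; a minimal sketch:

```python
import numpy as np

def bland_altman(predicted, ground_truth):
    """Mean difference and 95% limits of agreement (mean +/- 1.96 SD)."""
    d = np.asarray(predicted) - np.asarray(ground_truth)
    mean_diff = d.mean()
    sd = d.std(ddof=1)
    return mean_diff, mean_diff + 1.96 * sd, mean_diff - 1.96 * sd
```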

4.2. Analysis

Upon analyzing the obtained results, we identified the reasons behind the considerable gap between the predicted values and the ground truth. It was discovered that the swift motion of the hand during the preparation phase of the injection led to the needle becoming blurred, while the intricate background caused the YOLOv7 model to misidentify objects, ultimately resulting in an incorrect identification of the key frame (frame 20, 0.5 s), as depicted in Figure 19.

4.3. Validation Experiment

To verify the robustness of the system, we designed two experiments.

4.3.1. Environmental Factors

We experimented with varying the image contrast and brightness of the original data to investigate the effect of the environment on the system’s performance. Specifically, we used data from the same person and conducted a single-factor comparison analysis by defining parameters for brightness and contrast variation. The following steps were taken:
  • We normalized the input image $f(x, y)$ from the range [0, 255] to [0, 1];
  • The brightness and contrast parameters were defined by the following equation:
    $$g(x, y) = a \times f(x, y), \quad 0 < a < 2 \quad (5)$$
    where $g(x, y)$ represents the pixel value of the output image, $a$ represents the changed brightness or contrast value, and $f(x, y)$ represents the pixel value of the original image;
  • Finally, we normalized the resulting image $g(x, y)$ from the range [0, 1] back to [0, 255]. A minimal sketch of this transform follows the list.
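A minimal sketch of Equation (5) with the normalization steps; clipping keeps scaled values within the valid range.

```python
import numpy as np

def adjust(image, a):
    """Scale pixel values by a factor a (Equation (5)), 0 < a < 2."""
    f = image.astype(np.float32) / 255.0   # normalize [0, 255] -> [0, 1]
    g = np.clip(a * f, 0.0, 1.0)           # g(x, y) = a * f(x, y)
    return (g * 255.0).astype(np.uint8)    # back to [0, 255]
```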
As depicted in Figure 20, we compared the accuracy of YOLOv7 in recognizing each frame and predicting the puncturing time by altering the brightness of the original video.
The evaluation method employed for this purpose is as follows:
$$\text{Recognition of frames (\%)} = \frac{\text{Successfully recognized frames}}{\text{Total frames}} \times 100\% \quad (6)$$
$$\text{Accuracy of time (\%)} = \left(1 - \frac{\lvert \text{Ground truth} - \text{Predicted} \rvert}{\text{Ground truth}}\right) \times 100\% \quad (7)$$
The results in Figure 21 show that the needle recognition rate rose sharply once the brightness factor reached 0.4 and remained high up to 1.4. Moreover, the accuracy of the predicted puncture time reached 100% for brightness factors in the range 0.7–1.0.
Next, the same Equation (5) was used to vary the contrast of the original data, as shown in Figure 22.
Using the same evaluation metrics, the final outcomes were compared, as depicted in Figure 23. The proposed system responds well to contrast variation: it maintains recognition of the needle at contrast factors as low as 0.4 and reaches 100% time accuracy at factors of 0.9–1.1. Notably, the system’s accuracy remained consistently high across the various contrast levels.

4.3.2. Complex Gestural Factors

In order to simulate real-world scenarios as much as possible, we added data from three additional left-handed operators and three operators who held the syringe incorrectly using their index and middle fingers to the original data from 126 experimenters. This additional data was used to further evaluate and test the robustness of our system, as illustrated in Figure 24.
As illustrated in Figure 25, the system’s robustness is verified by its ability to detect the moment of needle piercing automatically, even when the operator uses their left hand or incorrectly operates the syringe with their middle finger.

5. Discussion and Conclusions

5.1. Discussion

In this study, we introduce an innovative system for automatic detection of puncture timing based on clinical injection training. This system exploits image processing and YOLOv7 techniques to precisely detect the puncture time of trainees throughout the procedure. The proposed approach was assessed on a dataset comprising 109 experimenters, and it exhibited an average detection error of merely 0.330 s. It is worth noting that this value is considerably lower than the average total operation time of 11.00 s for each participant.
The system employed image processing techniques, such as region of interest (ROI) extraction and image rotation, to facilitate the extraction of pertinent information from frames. Furthermore, the YOLOv7 algorithm was employed to detect the injection needle, and the appropriate threshold for puncture time was determined through post-processing and experiments to achieve the final automatic detection of the puncture timing.
Our results illustrate the effectiveness and robustness of the proposed system, even in challenging scenarios where the operator is left-handed or holds the syringe incorrectly. These findings imply that the system is potentially applicable in a clinical environment, where it can facilitate the provision of objective and precise feedback during medical training for novices, given the current social limitations imposed by COVID-19.
However, our study is not without limitations that could be addressed in future research. One limitation resides in the YOLOv7 model’s incapacity to discern the state of the punctured vessel. Nevertheless, we hold the belief that ongoing research endeavors and advancements in computer vision technology possess the capacity to redress this limitation. This technological breakthrough could potentially be achieved by formulating more sophisticated algorithms or integrating supplementary modalities, such as ultrasound guidance.
Another aspect worthy of consideration is the exploration of additional variables, such as patient pain perception, which would yield significant value in future clinical trials. Not only would this yield a more comprehensive assessment to further comprehend patients’ sensations and responses throughout the injection process, but it would also contribute to the enhancement of comfort and patient satisfaction during the injection procedure. Furthermore, it would offer guidance for refining injection techniques and training, thereby augmenting its practical applicability in real-world scenarios.

5.2. Conclusions

In conclusion, our study introduces a novel approach that automatically detects puncture times during needle insertion using image processing and YOLOv7 techniques. Our system has demonstrated high accuracy and robustness, providing a valuable reference for the future development of a comprehensive clinical injection skills training system.

Author Contributions

Conceptualization, Z.L. and T.N.; methodology, A.K.; software, Z.L. and T.N.; validation, A.K., A.H. and K.Y.; formal analysis, Z.L. and A.H.; investigation, A.K.; resources, S.I.; data curation, S.I.; writing—original draft preparation, Z.L.; writing—review and editing, T.N. and Y.N.; visualization, K.Y.; supervision, K.Y.; project administration, T.S.; funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Review Committee of the Graduate School of Medicine, Chiba University (protocol code 3425).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boykin, G.L., Sr. Measuring intravenous cannulation skills of practical nursing students using rubber mannequin intravenous training arms. Mil. Med. 2014, 179, 1361. [Google Scholar]
  2. Kermode, M. Unsafe injections in low-income country health settings: Need for injection safety promotion to prevent the spread of blood-borne viruses. Health Promot. Int. 2004, 19, 95–103. [Google Scholar] [CrossRef] [PubMed]
  3. da Silva, G.A.; Priebe, S.; Dias, F.N. Benefits of establishing an intravenous team and the standardization of peripheral intravenous catheters. J. Infus. Nurs. 2010, 33, 156–160. [Google Scholar] [CrossRef] [PubMed]
  4. Lee, J.S. Implementation and evaluation of a virtual reality simulation: Intravenous injection training system. Int. J. Environ. Res. Public Health 2022, 19, 5439. [Google Scholar] [CrossRef] [PubMed]
  5. Majima, Y.; Maekawa, Y.; Masato, S. Learning support system reproducing finger movements in practicing nursing techniques. In Proceedings of the NI 2012: 11th International Congress on Nursing Informatics, Montreal, QC, Canada, 23–27 June 2012; Volume 2012. [Google Scholar]
  6. Majima, Y.; Masuda, S.; Matsuda, T. Development of Augmented Reality in Learning for Nursing Skills. In MEDINFO 2019: Health and Wellbeing e-Networks for All; IOS Press: Amsterdam, The Netherlands, 2019; pp. 1720–1721. [Google Scholar]
  7. Saito, M.; Kikuchi, Y.; Kudo, Y.; Sasaki, M.; Mitobe, K. Development of a Learning Support System for Blood Sampling Techniques Using a Magnetic Motion Capture System. IEEJ Trans. Electr. Electron. Eng. 2022, 17, 757–759. [Google Scholar] [CrossRef]
  8. Tariq, R.A.; Vashisht, R.; Sinha, A.; Scherbak, Y. Medication Dispensing Errors and Prevention; StatPearls Publishing: Treasure Island, FL, USA, 2018. [Google Scholar]
  9. Leipheimer, J.M.; Balter, M.L.; Chen, A.I.; Pantin, E.J.; Davidovich, A.E.; Labazzo, K.S.; Yarmush, M.L. First-in-human evaluation of a hand-held automated venipuncture device for rapid venous blood draws. Technology 2019, 7, 98–107. [Google Scholar] [CrossRef]
  10. Wilcox, T.; Oyler, J.; Harada, C.; Utset, T. Musculoskeletal exam and joint injection training for internal medicine residents. J. Gen. Intern. Med. 2006, 21, 521–523. [Google Scholar] [CrossRef] [Green Version]
  11. Goh, P.S.; Sandars, J. A vision of the use of technology in medical education after the COVID-19 pandemic. MedEdPublish 2020, 9, 49. [Google Scholar] [CrossRef] [Green Version]
  12. Gaur, U.; Majumder, M.A.A.; Sa, B.; Sarkar, S.; Williams, A.; Singh, K. Challenges and opportunities of preclinical medical education: COVID-19 crisis and beyond. SN Compr. Clin. Med. 2020, 2, 1992–1997. [Google Scholar] [CrossRef]
  13. Gallagher, T.H.; Schleyer, A.M. “We signed up for this!”—Student and trainee responses to the Covid-19 pandemic. N. Engl. J. Med. 2020, 382, e96. [Google Scholar] [CrossRef]
  14. Liang, Z.C.; Ooi, S.B.S.; Wang, W. Pandemics and their impact on medical training: Lessons from Singapore. Acad. Med. 2020, 95, 1359–1361. [Google Scholar] [CrossRef]
  15. Schiavenato, M. Reevaluating simulation in nursing education: Beyond the human patient simulator. J. Nurs. Educ. 2009, 48, 388–394. [Google Scholar] [CrossRef] [PubMed]
  16. Loukas, C.; Nikiteas, N.; Kanakis, M.; Georgiou, E. Evaluating the effectiveness of virtual reality simulation training in intravenous cannulation. Simul. Healthc. 2011, 6, 213–217. [Google Scholar] [CrossRef] [PubMed]
  17. Reinhardt, A.C.; Mullins, I.L.; De Blieck, C.; Schultz, P. IV insertion simulation: Confidence, skill, and performance. Clin. Simul. Nurs. 2012, 8, e157–e167. [Google Scholar] [CrossRef]
  18. Wilfong, D.N.; Falsetti, D.J.; McKinnon, J.L.; Daniel, L.H. The effects of virtual intravenous and patient simulator training compared to the traditional approach of teaching nurses: A research project on peripheral iv catheter insertion. J. Infus. Nurs. 2011, 34, 55–62. [Google Scholar] [CrossRef]
  19. Lund, F.; Schultz, J.H.; Maatouk, I.; Krautter, M.; Möltner, A.; Werner, A.; Weyrich, P.; Jünger, J.; Nikendei, C. Effectiveness of IV cannulation skills laboratory training and its transfer into clinical practice: A randomized, controlled trial. PLoS ONE 2012, 7, e32831. [Google Scholar] [CrossRef]
  20. Keleekai, N.L.; Schuster, C.A.; Murray, C.L.; King, M.A.; Stahl, B.R.; Labrozzi, L.J.; Gallucci, S.; LeClair, M.W.; Glover, K.R. Improving nurses’ peripheral intravenous catheter insertion knowledge, confidence, and skills using a simulation-based blended learning program: A randomized trial. Simul. Healthc. 2016, 11, 376. [Google Scholar] [CrossRef] [Green Version]
  21. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.026. [Google Scholar]
  22. Sapitri, A.I.; Nurmaini, S.; Rachmatullah, M.N.; Tutuko, B.; Darmawahyuni, A.; Firdaus, F.; Rini, D.P.; Islami, A. Deep learning-based real time detection for cardiac objects with fetal ultrasound video. Inform. Med. Unlocked 2023, 36, 101150. [Google Scholar] [CrossRef]
  23. Du, Z.; Yin, J.; Yang, J. Expanding receptive field yolo for small object detection. J. Phys. Conf. Ser. 2019, 1314, 012202. [Google Scholar] [CrossRef] [Green Version]
  24. Liu, C.; Hu, S.C.; Wang, C.; Lafata, K.; Yin, F.F. Automatic detection of pulmonary nodules on CT images with YOLOv3: Development and evaluation using simulated and patient data. Quant. Imaging Med. Surg. 2020, 10, 1917. [Google Scholar] [CrossRef]
  25. Durve, M.; Orsini, S.; Tiribocchi, A.; Montessori, A.; Tucny, J.M.; Lauricella, M.; Camposeo, A.; Pisignano, D.; Succi, S. Benchmarking YOLOv5 and YOLOv7 models with DeepSORT for droplet tracking applications. arXiv 2023, arXiv:2301.081. [Google Scholar] [CrossRef] [PubMed]
  26. Oka, S.; Nozaki, K.; Hayashi, M. An efficient annotation method for image recognition of dental instruments. Sci. Rep. 2023, 13, 169. [Google Scholar] [CrossRef]
  27. Dang, T.; Annaswamy, T.M.; Srinivasan, M.A. Development and evaluation of an epidural injection simulator with force feedback for medical training. In Medicine Meets Virtual Reality 2001; IOS Press: Amsterdam, The Netherlands, 2001; pp. 97–102. [Google Scholar]
  28. Tsai, S.L.; Tsai, W.W.; Chai, S.K.; Sung, W.H.; Doong, J.L.; Fung, C.P. Evaluation of computer-assisted multimedia instruction in intravenous injection. Int. J. Nurs. Stud. 2004, 41, 191–198. [Google Scholar] [CrossRef] [PubMed]
  29. Lai, Y.; Ma, R.; Chen, Y.; Wan, T.; Jiao, R.; He, H. A Pineapple Target Detection Method in a Field Environment Based on Improved YOLOv7. Appl. Sci. 2023, 13, 2691. [Google Scholar] [CrossRef]
  30. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  31. Petrie, A.; Sabin, C. Medical Statistics at a Glance; John Wiley & Sons: Hoboken, NJ, USA, 2019. [Google Scholar]
Figure 1. The multi-camera injection analysis system. (a) System configuration, indicating the location of the multi-camera system with red boxes and the location of the general camera with blue boxes. (b) The shooting range of the VICON system. (c) Four multi-cameras using CMOS. (d) The general camera. (e) Injection arm model, timing device, and reverse blood detection device equipment.
Figure 2. Timing device to record operation time.
Figure 3. General camera for detecting the moment of injection stabbing.
Figure 4. Reverse blood in the injection process. After puncturing the blood vessel, some blood flows back into the needle.
Figure 5. Reverse blood detection devices.
Figure 6. The signal of the photo sensor.
Figure 7. Injection experience with a subject.
Figure 8. Flow chart for detecting puncture time.
Figure 9. The calibration plate used in the experiment.
Figure 10. Image rotation.
Figure 11. Image preprocessing.
Figure 12. Definition of the needle length—from the needle site of the syringe to the front of the blue-winged handle.
Figure 13. Calculation of the length of the needle.
Figure 14. Preparation time data for 17 experts. The results of the preparation time are presented using the ground truth.
Figure 15. Exception handling. (a) The needle was not detected in some frames, so the percentage could not be calculated. (b) After exception handling, the result of frame four was corrected. (c) The initial frames’ results exceed 100%. (d) After exception handling, the results exceeding 100% were corrected.
Figure 16. Moving average processing.
Figure 17. Relation between predicted puncture time (from our system) and ground truth for each threshold.
Figure 18. Results of statistical analysis. (a) Bland–Altman plot between ground truth measurements and predictions. (b) Significance analysis.
Figure 19. Examples of YOLOv7 misjudgment. (a) The presence of a complex background can result in inaccurate identification of needles. (b) The presence of a finger in the background causes recognition errors.
Figure 20. The change in brightness.
Figure 21. The result of brightness changes.
Figure 22. The change in contrast.
Figure 23. The result of contrast changes.
Figure 24. Complex gestures.
Figure 25. Complex gestures.
Table 1. VICON system and multi-camera system specifications.

| System | Camera Type | Model | Resolution | Frame Rate | Company |
| VICON verification system | High-speed cameras | Vero | 2048 × 1088 | 330 FPS | VICON Company |
| | HD full-synchronous high-speed camera | Vue | 1920 × 1080 | 30 FPS | VICON Company |
| Multi-camera system | CMOS cameras | DFK 33UX290 | 1920 × 1080 | 40 FPS | IMAGINGSOURCE |
| | General camera | - | 1920 × 1080 | 30 FPS | - |
Table 2. Comparison of the saturation alteration upon the occurrence of the reverse blood phenomenon.

| Count | Saturation Value of Previous Frame | Saturation Value of Subsequent Frame | Difference |
| 1 | 29 | 247 | 218 |
| 2 | 47 | 179 | 132 |
| 3 | 18 | 178 | 160 |
| 4 | 15 | 203 | 188 |
| 5 | 24 | 224 | 200 |
| 6 | 20 | 258 | 238 |
| 7 | 95 | 225 | 130 |
| 8 | 92 | 212 | 120 |
| 9 | 53 | 264 | 211 |
| 10 | 30 | 171 | 141 |
Table 3. Comparing the results of different thresholds.

| Threshold (%) | 95 | 96 | 97 | 98 |
| Standard Deviation (SD) | 0.507 | 0.484 | 0.506 | 0.544 |
| Mean Absolute Error (MAE) | 0.358 | 0.330 | 0.406 | 0.539 |
