Article

A Unified Framework for Automatic Detection of Wound Infection with Artificial Intelligence

1 Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
2 Department of Surgery, National Taiwan University Hospital Hsin-Chu Biomedical Science Park Branch, Hsin-Chu County 300, Taiwan
3 Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei 106, Taiwan
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(15), 5353; https://doi.org/10.3390/app10155353
Submission received: 25 March 2020 / Revised: 8 June 2020 / Accepted: 12 June 2020 / Published: 3 August 2020

Abstract

Background: The surgical wound is a unique problem requiring continuous postoperative care, and mobile health technology has been implemented to bridge the care gap. The aim of our study was to design an integrated framework to support the diagnosis of wound infection. Methods: We used a computer-vision approach based on supervised learning techniques and machine learning algorithms to detect the wound region of interest (ROI) and classify wound infection features. The intersection-union test (IUT) was used to evaluate the accuracy of detection of the color card and the wound ROI. The area under the receiver operating characteristic curve (AUC) of our model was compared with that of different machine learning approaches. Results: In total, 480 wound photographs were taken from 100 patients for analysis. The average IUT on the validation set with fivefold stratification for detection of the wound ROI was 0.775. For prediction of wound infection, our model achieved a significantly higher AUC (83.3%) than the other three methods (kernel support vector machines, 44.4%; random forest, 67.1%; gradient boosting classifier, 66.9%). Conclusions: Our evaluation of a prospectively collected wound database demonstrates the effectiveness and reliability of the proposed system, which was developed for automatic detection of wound infection in patients undergoing surgical procedures.

1. Introduction

Surgical site infection (SSI) is one of the most common infectious complications of surgery and can increase the length of hospital stay, costs, the number of readmissions, and the need for further wound debridement [1,2]. The risk factors for SSI are multifactorial and include wound cleanliness as well as microbial, patient-related, and procedure-related factors [3]. With advances in surgical technique and perioperative care, the rate of SSI may be expected to decrease. However, the rate of SSI after hospital discharge is increasing, because postoperative lengths of stay have decreased dramatically under enhanced recovery after surgery programs in developed countries [4]. Furthermore, with worldwide population aging, more elderly people are undergoing surgery. Surgical patients require novel solutions that enable them to take a more active role in the daily self-care of their wounds. If the early signs of SSI are not detected by patients or caregivers, delays in visits to the clinic can result in severe wound complications that require hospital readmission, with or without surgical intervention [5].
Traditionally, the surgical staff call patients to ask about their general recovery and the condition of the wound, but the wound is difficult to evaluate remotely through verbal communication alone. To bridge the gap between home and hospital, mobile health (mHealth) systems have been created to help patients upload wound images and obtain immediate responses regarding their wound condition, and this approach has proved feasible in several studies [6,7,8]. Furthermore, patients feel reassured because the medical staff continuously and seamlessly monitor their recovery. However, these novel programs depend on professional staff to review the wound pictures and to provide responses, which may add extra workload to already burdened staff [5].

Objectives

To reduce the workload on professional staff, we developed an automatic monitoring protocol for surgical wounds, using smartphone images of patients undergoing general surgery at an Asian academic center. These patients undergo diverse surgical procedures, such as open laparotomy, hernia repair, and minimally invasive surgery. They account for a high proportion of surgical patients and are a challenge for artificial-intelligence-supported wound monitoring. At present, medical professionals diagnose wound infection based on the early signs and symptoms of skin redness, swelling, pain, and warmth; the first two can be assessed on visual examination. With the improved quality of digital images taken by smartphones, previous work has shown that such images are a reliable tool and may play an integral role in making diagnostic and therapeutic decisions comparable to those made by in-person evaluation [9]. The aim of this study was to develop an expert system for automatic identification of wound types and detection of wound infection.

2. Materials and Methods

2.1. Application for Collection of Wound Images

In order to collect wound images during the perioperative period, we developed an Android-compatible smartphone application (APP) for surgeons to upload these pictures. The study protocol was approved by the Institutional Review Board of the National Taiwan University Hospital (201705076RINC). The APP was written in Java, and the integrated development environment was Android Studio (ver. 3.0.1) with the Java Development Kit (ver. 1.8.0_152). Immediately after the operation, during hospitalization, and at outpatient clinic follow-up, the surgeon photographed the wound area (from the subcostal area to the inguinal area), accompanied by a CameraTrax® color card as a reference, with the same smartphone in non-flash mode, and the images were transmitted to the server for further storage and analysis.

2.2. Interface of Wound Annotation

The surgical professionals annotated the images on the website and classified them according to the type of procedure (laparotomy, minimally invasive surgery, or hernia repair), the wound area of interest, and abnormal wound features. We developed a drawing tool, implemented with a tablet personal computer and stylus, to precisely mark the wound area and wound features with corresponding colors. The annotation process was as follows:
First, the surgical professional chose the type of procedure and ranked the interpretation according to the degree of difficulty. If some pictures were transmitted in error, they were classified as exceptions.
Second, the actual wound was circled in purple, the peri-wound area was circled in red, the region of interest (ROI; any area of abnormal features near the wound) was circled in orange, the color card was drawn in green, and the drainage tube was drawn in black (Figure 1). These features train the system to identify the location of the wound and focus on the significant wound area.
Third, the features of wound infection were marked by painting them in corresponding colors. Nine colors were used to describe the characteristics of wound infection: bleeding, erythema, necrosis, pus, bruise, granulation, swelling, allergy, and pigmentation (Figure 2). These patterns were then stored in the database as Base64-encoded strings for analysis, as sketched below.
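As a minimal sketch of this storage step (written in Python for illustration; the helper names are hypothetical and not part of the original system), an annotation mask exported as a PNG file can be encoded to and decoded from a Base64 string as follows:

```python
# Minimal sketch of storing an annotation mask as a Base64 string.
# The helper names are hypothetical; the original system's implementation may differ.
import base64

def encode_annotation(png_path: str) -> str:
    """Read an annotation mask (PNG) and return it as a Base64-encoded string."""
    with open(png_path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def decode_annotation(b64_string: str) -> bytes:
    """Recover the original PNG bytes from the stored Base64 string."""
    return base64.b64decode(b64_string)
```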

2.3. Model Development

We adopted a deep convolutional neural network (CNN), a pretrained 50-layer residual network (ResNet) [10], which we modified for our algorithm by replacing both the Global Average Pooling and Fully Connected layers with a Conv6 layer (kernel size 1 × 1, 21 predicted classes) and a Bilinear Up-sampling 2D layer (resizing the feature maps to the input shape, 320 × 320) (Figure 3). This modified network was pretrained on the PASCAL VOC 2012 database to initialize its weights [11].
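The following sketch illustrates this architectural modification, assuming a TensorFlow/Keras environment (the framework itself was implemented in MATLAB); the layer name conv6, the ImageNet weight initialization, and the up-sampling factor shown here are illustrative assumptions.

```python
# Illustrative sketch only: ResNet-50 backbone with its Global Average Pooling and
# Fully Connected head replaced by a 1 x 1 "Conv6" layer (21 classes) and bilinear
# up-sampling back to the 320 x 320 input size. Assumes TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_segmentation_model(input_shape=(320, 320, 3), num_classes=21):
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = backbone.output                                                 # 10 x 10 x 2048 for a 320 x 320 input
    x = layers.Conv2D(num_classes, kernel_size=1, name="conv6")(x)      # per-class score maps
    x = layers.UpSampling2D(size=(32, 32), interpolation="bilinear")(x) # back to 320 x 320
    return Model(backbone.input, x)

model = build_segmentation_model()
model.summary()
```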

2.4. Detection of Color Card and ROI

The function of the color card is to adjust the color and to help describe the characteristics of wound infection in each image. To identify the color card and the ROI, the number of predicted classes in the Conv6 layer was set to 2 (with or without the feature). We adopted Speeded-Up Robust Features (SURF) [12] to extract the four key points (corners) of the photographed color card and refined the matching of key points between the actual and the photographed color card with the Fast Library for Approximate Nearest Neighbors (FLANN) [13].
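A minimal sketch of this key-point matching step is given below, assuming OpenCV with the contrib (non-free) modules available, since SURF is not included in default builds; the Hessian threshold, ratio-test cut-off, and RANSAC parameters are illustrative choices rather than the values used in our system.

```python
# Illustrative sketch: locate the color card in a wound photograph by matching SURF
# key points between a reference card image and the photograph via FLANN, then
# projecting the card corners with a RANSAC-estimated homography.
import cv2
import numpy as np

def locate_color_card(reference_gray, photo_gray):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_ref, des_ref = surf.detectAndCompute(reference_gray, None)
    kp_img, des_img = surf.detectAndCompute(photo_gray, None)

    # FLANN-based approximate nearest-neighbour matching with Lowe's ratio test
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(des_ref, des_img, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_img[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = reference_gray.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H)  # the four card corners in the photo
```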
Next, we used the intersection-union test (IUT) as a metric to evaluate the accuracy of detection of the color card and the wound ROI [14]. The IUT equals the area of overlap between the color card region detected by the deep learning algorithm and the actual color card in the wound photograph, divided by the area of the union of the two regions.
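For two binary masks (the detected region and the annotated region), the IUT can be computed as in the following sketch, assuming NumPy boolean arrays; this mirrors the standard intersection-over-union definition.

```python
# Minimal sketch of the intersection-union test (IUT) for two binary masks.
import numpy as np

def intersection_union_test(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Area of overlap divided by area of union; returns 0 when both masks are empty."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(intersection) / float(union) if union > 0 else 0.0
```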

2.5. Image Processing and Color Correction

To correct for uneven brightness of the wound images for data analysis, we applied a log transform, $s = c \cdot \log_v(1 + v \cdot r)$, where $s$ is the output value and $r$ is the input value, followed by a gamma transform, $s = c \cdot r^{\gamma}$, over all RGB pixel values in the image for brightness correction. After the adjustment, we normalized the values to the range 0 to 255, to convert the result back to the regular range of RGB intensities. With the color card identified in the picture, we used eight pairs of colors between the actual and the photographed color card to adjust the pixels with the correction matrix described below.
$$\begin{bmatrix} x_1 & x_2 & x_3 & x_4 \\ y_1 & y_2 & y_3 & y_4 \\ z_1 & z_2 & z_3 & z_4 \end{bmatrix} \begin{bmatrix} r \\ g \\ b \\ 1 \end{bmatrix} = \begin{bmatrix} r' \\ g' \\ b' \end{bmatrix}$$
The wound length was calculated with reference to the known dimensions of the color card.
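The following sketch illustrates these steps, assuming NumPy arrays; the default constants c, v, and γ, the min-max normalization, and the least-squares fit of the 3 × 4 correction matrix from the eight color pairs are illustrative assumptions rather than the exact implementation.

```python
# Illustrative sketch of brightness correction (log then gamma transform) and of
# fitting/applying the 3 x 4 color-correction matrix from eight color-patch pairs.
import numpy as np

def brightness_correction(img, c=1.0, v=np.e, gamma=1.0):
    r = img.astype(np.float64) / 255.0
    s = c * np.log(1.0 + v * r) / np.log(v)      # log transform: s = c * log_v(1 + v*r)
    s = c * np.power(s, gamma)                   # gamma transform applied to the log-corrected values
    s = (s - s.min()) / (s.max() - s.min())      # normalize, then rescale to 0-255
    return (s * 255.0).astype(np.uint8)

def fit_color_correction(photo_patches, reference_patches):
    # photo_patches, reference_patches: (8, 3) mean RGB values of the eight color pairs
    A = np.hstack([photo_patches, np.ones((len(photo_patches), 1))])   # (8, 4)
    M, *_ = np.linalg.lstsq(A, reference_patches, rcond=None)          # (4, 3) least-squares solution
    return M.T                                                         # the 3 x 4 correction matrix

def apply_color_correction(img, M):
    h, w, _ = img.shape
    flat = img.reshape(-1, 3).astype(np.float64)
    flat = np.hstack([flat, np.ones((len(flat), 1))]) @ M.T            # [r, g, b, 1] -> [r', g', b']
    return np.clip(flat, 0, 255).reshape(h, w, 3).astype(np.uint8)
```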

2.6. Detection of Wound Infection

Using the ROI images together with the corresponding diagnoses of abnormal wound features, we used Xception (a deep CNN) [15] to predict whether an image showed features of wound infection (a binary classification task). Wound infection was defined as the presence of redness, pus, or necrosis. Kernel support vector machine classifiers [16], random forest [17], and gradient boosting classifiers [18] were trained to predict wound infection for comparison. The classification accuracy, recall, precision, F1 score, and area under the receiver operating characteristic curve (AUC) of these four methods were calculated. The discriminatory power of the four models was analyzed using the AUC to assess each model's ability to identify images with wound infection. Furthermore, we trained and tested the models with fivefold cross-validation.
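A minimal sketch of the binary infection classifier and the fivefold cross-validation loop is shown below, assuming a TensorFlow/Keras and scikit-learn environment; the input size, optimizer, and number of epochs are placeholders, not the settings used in the study.

```python
# Illustrative sketch: Xception-based binary classifier for wound infection, evaluated
# with stratified fivefold cross-validation. Hyperparameters are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.model_selection import StratifiedKFold

def build_infection_classifier(input_shape=(299, 299, 3)):
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")
    out = layers.Dense(1, activation="sigmoid")(base.output)   # infected vs. not infected
    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

def cross_validate(images, labels, n_splits=5, epochs=10):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    aucs = []
    for train_idx, val_idx in skf.split(images, labels):
        model = build_infection_classifier()
        model.fit(images[train_idx], labels[train_idx],
                  validation_data=(images[val_idx], labels[val_idx]),
                  epochs=epochs, verbose=0)
        _, auc = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
        aucs.append(auc)
    return np.mean(aucs), np.std(aucs)
```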

2.7. Experimental Setup

The proposed framework was executed with MATLAB R2018b (MathWorks, Inc., Natick, MA, USA) on a Windows 10 operating system. All experiments were performed on a server with an Intel® Xeon E5-2650 central processing unit (CPU), 16 GB of random access memory (RAM), and an NVIDIA (Santa Clara, CA, USA) GeForce GTX 1080 Ti GPU. The graphics card provided sufficient processing capability for both model training and validation.

3. Results

During the study period, 480 wound photographs were taken from 100 patients for analysis. The numbers of cases of hernia repair, minimally invasive surgery, and laparotomy were 54, 28, and 18, respectively.

3.1. Detection of Color Card and Wound ROI

Of the 480 images, 384 were used as a training set and 96 as a validation set. We applied fivefold stratified cross-validation with the same division between the training and validation sets [19]. The average IUT on the validation set with fivefold stratification for prediction of the color card area was 0.978 (standard deviation, 0.003).
With regard to the wound ROI, the average IUT on the validation set with fivefold stratification was 0.775 (standard deviation, 0.020). Figure 4 shows the process and automatic output of our model.

3.2. Detection of Wound Infection

For the detection of wound infection, 380 images (including 108 images of wound infection) were used for training, while we validated the performance on 100 images (including 28 images of wound infection). We also adopted fivefold cross-validation to assess the outcomes of the four models.
Table 1 shows the classification accuracy, recall, precision, F1 score, and AUC for each method. Our algorithm achieved an accuracy of 79.5%, recall of 77.1%, precision of 82.7%, F1 score of 79.4%, and AUC of 83.3%, whereas the other three methods achieved lower AUC scores (kernel support vector machines, 44.4%; random forest, 67.1%; gradient boosting classifier, 66.9%). For detailed comparison among the four methods, the receiver operating characteristic (ROC) curves are shown in Figure 5. The ROC curve is a probability curve, and the AUC characterizes the degree of separability; a model with a higher AUC is better at detecting the target outcome.

4. Discussion

We have designed a novel wound image analysis system for surgical patients undergoing elective procedures, including laparotomy, minimally invasive surgery, and hernia repair. This artificial-intelligence-based algorithm recognizes not only wound borders but also potential features of wound infection, and it calculates the length of the wound by comparison with a reference color card. We chose a CNN as the model because CNNs have become dominant in various computer-vision tasks, and pretrained models (such as Inception or ResNets) perform well as feature extractors. With a pretrained model, useful features can be extracted from the trained weights, and the CNN can be fine-tuned at each level by feeding in data for the new task. This study demonstrates the feasibility of using expert-based machine learning methods to predict wound infection, with acceptable results. To the best of our knowledge, this is one of the first systems focusing on automatic detection of the wound area and diagnosis of wound infection.
Traditionally, patients discharged after surgery receive little care until they report for a routine clinic follow-up [5]. During this period, self-care is a problem unique to surgical patients rather than medical patients. Among surgical patients in early recovery and discharge programs, wound complications are more likely to occur after discharge than before discharge [20]. Without appropriate management and early detection of wound complications, patients may be particularly vulnerable after discharge, given their lack of clinical experience, the increased medical costs, and the likelihood of hospital readmission.
In summary, mHealth is a scalable and flexible platform that supports medical and perioperative care through mobile devices [8,21,22,23]; such platforms improve the quality of perioperative care and have proliferated [24,25]. Although some academic centers have developed mHealth systems for postoperative wound care with positive results, these systems place additional strain and workload on the provider teams, even when the teams are enthusiastic about them [5,7,25,26,27]. The advantage of our method is that it provides automatic detection of wound infection, which may alleviate the providers' workload to a certain degree.
In this study, we used expert-based annotation of wound features to improve performance because of the limited number of wound pictures. Traditionally, inpatient wounds are assessed periodically, but photographs are not taken or documented in the medical records unless complications occur. Furthermore, almost no data on wounds after discharge are available for analysis. It is therefore necessary for surgical professionals to systematically collect perioperative wound photographs. Because of the shortage of available wound photographs, our initial task was to construct a cloud-based server to store wound data obtained with mobile phones. Next, we performed expert-based annotation of wound features to improve the prediction outcomes of deep learning. With this infrastructure, we can continue to collect wound data, which may further improve the accuracy of detection of wound complications [28,29].
One recent report questioned the accuracy of detection of wound complications by using wound photographs and clinical information from surgical professionals [25]. With such a simulation, the use of wound images increased the specificity of detection of wound complications and surgeon confidence, but worsened the sensitivity of detection of wound complications. Given the importance of image quality and experience with remote evaluation, some strategies should be implemented, such as participant training, standardization of patient photographic techniques, and synchronization of definitions of wound features [30]. In our study, the photographs were taken by surgeons during direct outpatient visits, which eliminated misdiagnosis and provided accurate data for further analysis.
Our next goal is to design an application that will provide knowledge about self-care and real-time feedback from surgical professionals. Every uploaded wound photograph will be evaluated by the algorithms implemented on the server, and outputs with abnormal wound features will be assessed by surgical professionals before alarm messages are sent to the patient or caregiver. Although previous studies used the CPU or GPU of mobile phones to perform wound analysis, the image resolution had to be compromised to fit the limited performance of mobile phones [31,32]. Moreover, it is difficult for mobile phones to run deep-learning algorithms. Therefore, to allow a more complex and computationally demanding analysis of the high-resolution photographs taken by modern smartphone cameras, we chose to perform the analysis on the server, given the growing availability of fast mobile internet.
Although the recent improvements in deep learning have been significant, challenges remain in its application to medical and wound images. A large, well-annotated medical dataset is needed and is considered a key requirement for clinical application, because most of the notable achievements of deep learning depend largely on large datasets. In the future, surgical academic communities should continuously collect wound images during the perioperative period, which would foster research on the detection of wound infection.
This study has some limitations. It was performed on Asian subjects, which may limit its generalizability to patients of other nationalities or ethnicities. In addition, as more patients participate, the quality of the wound photographs becomes more variable, with inter-person variation arising because different mobile phone cameras have different color-space characteristics and noise levels; this may interfere with our algorithm. To mitigate these limitations, color calibration against the reference color card is routinely performed to minimize these confounding factors; however, this may create extra work for patients or caregivers, who must take photographs with a color card. Finally, a separate test set, in conjunction with the validation dataset, should be used to evaluate the performance of the final model; we will collect more wound images for this purpose.

5. Conclusions

We introduced a unified framework for automatic analysis of wound condition using supervised machine learning, to overcome the shortage of data on wound images. Our system provides acceptable performance and is efficient enough to process a wound image within five seconds. The methodology and results presented in this study may have important implications for the field of surgical wound evaluation and diagnosis. The proposed system provides complementary information that can partially relieve the workload of medical professionals in charge of telecare. Furthermore, this methodology can be directly extrapolated to other similar surgical procedures, such as cardiovascular and orthopedic operations.

Author Contributions

Conceptualization, J.-M.W. and T.-W.H.; data curation, J.-M.W. and M.-T.L.; formal analysis, T.-W.H. and C.-J.T.; funding acquisition, H.-C.T.; methodology, J.-M.W., C.-J.T., and F.L.; resources, F.L.; software, C.-J.T., T.-W.H., and F.L.; supervision, F.L., H.-C.T., and M.-T.L.; validation, T.-W.H. and H.-C.T.; writing—original draft, J.-M.W., T.-W.H., H.-C.T., and M.-T.L. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank all members of the Department of Surgery at the National Taiwan University Hospital for their efforts and contributions. This study was supported by a grant (number 106-2628-E-002-004-MY3) from the Ministry of Science and Technology and a grant (109-S4629) from National Taiwan University Hospital, Taiwan, which had no role in the study design, data collection, or analysis; decision to publish; or preparation of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lewis, S.S.; Moehring, R.W.; Chen, L.F.; Sexton, D.J.; Anderson, D.J. Assessing the relative burden of hospital-acquired infections in a network of community hospitals. Infect. Control. Hosp. Epidemiol. 2013, 34, 1229–1230. [Google Scholar] [CrossRef] [Green Version]
  2. Wiseman, J.T.; Guzman, A.M.; Fernandes-Taylor, S.; Engelbert, T.L.; Saunders, R.S.; Kent, K.C. General and vascular surgery readmissions: A systematic review. J. Am. Coll. Surg. 2014, 219, 552–569. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Young, P.Y.; Khadaroo, R.G. Surgical site infections. Surg. Clin. North Am. 2014, 94, 1245–1264. [Google Scholar] [CrossRef] [PubMed]
  4. Ljungqvist, O.; Scott, M.; Fearon, K.C. Enhanced Recovery After Surgery: A Review. JAMA Surg. 2017, 152, 292–298. [Google Scholar] [CrossRef] [PubMed]
  5. Gunter, R.L.; Fernandes-Taylor, S.; Rahman, S.; Awoyinka, L.; Bennett, K.M.; Weber, S.M.; Greenberg, C.C.; Kent, K.C. Feasibility of an Image-Based Mobile Health Protocol for Postoperative Wound Monitoring. J. Am. Coll. Surg. 2018, 226, 277–286. [Google Scholar] [CrossRef] [PubMed]
  6. Symer, M.M.; Abelson, J.S.; Milsom, J.; McClure, B.; Yeo, H.L. A Mobile Health Application to Track Patients After Gastrointestinal Surgery: Results from a Pilot Study. J. Gastrointest. Surg. 2017, 21, 1500–1505. [Google Scholar] [CrossRef]
  7. Fernandes-Taylor, S.; Gunter, R.L.; Bennett, K.M.; Awoyinka, L.; Rahman, S.; Greenberg, C.C.; Kent, K.C. Feasibility of Implementing a Patient-Centered Postoperative Wound Monitoring Program Using Smartphone Images: A Pilot Protocol. JMIR Res. Protoc. 2017, 6, e26. [Google Scholar] [CrossRef]
  8. Wu, J.M.; Ho, T.W.; Chang, Y.T.; Hsu, C.; Tsai, C.J.; Lai, F.; Lin, M.T. Wearable-Based Mobile Health App in Gastric Cancer Patients for Postoperative Physical Activity Monitoring: Focus Group Study. JMIR mHealth uHealth 2019, 7, e11989. [Google Scholar] [CrossRef]
  9. Wiseman, J.T.; Fernandes-Taylor, S.; Gunter, R.; Barnes, M.L.; Saunders, R.S.; Rathouz, P.J.; Yamanouchi, D.; Kent, K.C. Inter-rater agreement and checklist validation for postoperative wound assessment using smartphone images in vascular surgery. J. Vasc. Surg. Venous Lymphat. Disord. 2016, 4, 320–328. [Google Scholar] [CrossRef] [Green Version]
  10. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
  11. Everingham, M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The Pascal Visual Object Classes (VOC) Challenge. Intern. J. Comput. V. 2010, 88, 303–338. [Google Scholar] [CrossRef] [Green Version]
  12. Bay, H.; Tuyteelars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Proceedings of the 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Leonardis, A., Bischof, H., Pinz, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3951. [Google Scholar]
  13. Lowe, D.G.; Muja, M. Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration. In Proceedings of the Fourth International Conference on Computer Vision Theory and Applications, Lisbon, Portugal, 5–8 February 2009; pp. 331–340. [Google Scholar]
  14. Berger, R.L.; Hsu, J.C.; Hauck, W.W.; Anderson, S.; Meredith, M.P.; Heise, M.A.; Liu, J.-P.; Chow, S.-C.; Schuirmann, D.J.; Hwang, J.T.G.; et al. Bioequivalence trials, intersection-union tests and equivalence confidence: Comment. Stat. Sci. 1996, 11, 283–319. [Google Scholar] [CrossRef]
  15. Chollet, F. Xception: Deep learning with depthwise separable convolutions. arXiv 2016, arXiv:1610.02357v3. [Google Scholar]
  16. Chang, C.C.; Lin, C.J. Training nu-support vector classifiers: Theory and algorithms. Neural Comput. 2001, 13, 2119–2147. [Google Scholar] [CrossRef] [PubMed]
  17. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  18. Natekin, A.; Knoll, A. Gradient boosting machines, a tutorial. Front. Neurorobot. 2013, 7, 21. [Google Scholar] [CrossRef] [Green Version]
  19. Wang, C.; Yan, X.; Smith, M.; Kochhar, K.; Rubin, M.; Warren, S.M.; Wrobel, J.; Lee, H. A Unified Framework for Automatic Wound Segmentation and Analysis with Deep Convolutional Neural Networks. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Milan, Italy, 25–29 August 2015; pp. 2415–2418. [Google Scholar] [CrossRef]
  20. Zhang, J.Q.; Curran, T.; McCallum, J.C.; Wang, L.; Wyers, M.C.; Hamdan, A.D.; Guzman, R.J.; Schermerhorn, M.L. Risk factors for readmission after lower extremity bypass in the American College of Surgeons National Surgery Quality Improvement Program. J. Vasc. Surg. 2014, 59, 1331–1339. [Google Scholar] [CrossRef] [Green Version]
  21. Fiordelli, M.; Diviani, N.; Schulz, P.J. Mapping mHealth research: A decade of evolution. J. Med. Internet Res. 2013, 15, e95. [Google Scholar] [CrossRef]
  22. Wayne, N.; Ritvo, P. Smartphone-Enabled Health Coach Intervention for People With Diabetes From a Modest Socioeconomic Strata Community: Single-Arm Longitudinal Feasibility Study. J. Med. Internet Res. 2014, 16, e149. [Google Scholar] [CrossRef]
  23. Kim, B.Y.; Lee, J. Smart Devices for Older Adults Managing Chronic Disease: A Scoping Review. JMIR Mhealth Uhealth 2017, 5, e69. [Google Scholar] [CrossRef] [Green Version]
  24. Gunter, R.L.; Chouinard, S.; Fernandes-Taylor, S.; Wiseman, J.T.; Clarkson, S.; Bennett, K.; Greenberg, C.C.; Kent, K.C. Current Use of Telemedicine for Post-Discharge Surgical Care: A Systematic Review. J. Am. Coll. Surg. 2016, 222, 915–927. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Kummerow Broman, K.; Gaskill, C.E.; Faqih, A.; Feng, M.; Phillips, S.E.; Lober, W.B.; Pierce, R.A.; Holzman, M.D.; Evans, H.L.; Poulose, B.K. Evaluation of Wound Photography for Remote Postoperative Assessment of Surgical Site Infections. JAMA Surg. 2018, 154, 117–124. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Gunter, R.; Fernandes-Taylor, S.; Mahnke, A.; Awoyinka, L.; Schroeder, C.; Wiseman, J.; Sullivan, S.; Bennett, K.; Greenberg, C.; Kent, K.C. Evaluating Patient Usability of an Image-Based Mobile Health Platform for Postoperative Wound Monitoring. JMIR mHealth uHealth 2016, 4, e113. [Google Scholar] [CrossRef] [PubMed]
  27. Wu, J.M.; Yu, H.J.; Ho, T.W.; Su, X.Y.; Lin, M.T.; Lai, F. Tablet PC-enabled application intervention for patients with gastric cancer undergoing gastrectomy. Comput. Methods Progr. Biomed. 2015, 119, 101–109. [Google Scholar] [CrossRef] [PubMed]
  28. Luo, G.; Stone, B.L.; Johnson, M.D.; Tarczy-Hornoch, P.; Wilcox, A.B.; Mooney, S.D.; Sheng, X.; Haug, P.J.; Nkoy, F.L. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods. JMIR Res. Protoc. 2017, 6, e175. [Google Scholar] [CrossRef]
  29. Philbrick, K.A.; Weston, A.D.; Akkus, Z.; Kline, T.L.; Korfiatis, P.; Sakinis, T.; Kostandy, P.; Boonrod, A.; Zeinoddini, A.; Takahashi, N.; et al. RIL-Contour: A Medical Imaging Dataset Annotation Tool for and with Deep Learning. J. Digit. Imaging 2019, 32, 571–581. [Google Scholar] [CrossRef] [Green Version]
  30. Trajman, A.; Luiz, R.R. McNemar χ2 test revisited: Comparing sensitivity and specificity of diagnostic examinations. Scand. J. Clin. Lab. Investig. 2008, 68, 77–80. [Google Scholar] [CrossRef]
  31. Wang, L.; Pedersen, P.C.; Strong, D.M.; Tulu, B.; Agu, E.; Ignotz, R. Smartphone-based wound assessment system for patients with diabetes. IEEE Trans. Biomed. Eng. 2015, 62, 477–488. [Google Scholar] [CrossRef]
  32. Wang, L.; Pedersen, P.C.; Agu, E.; Strong, D.M.; Tulu, B. Area Determination of Diabetic Foot Ulcer Images Using a Cascaded Two-Stage SVM-Based Classification. IEEE Trans. Biomed. Eng 2017, 64, 2098–2109. [Google Scholar] [CrossRef]
Figure 1. Annotation page for the wound region of interest (ROI) (left: without annotation; right: with annotation).
Figure 2. Annotation page for abnormal wound features. Images with abnormal wound features are shown before and after annotation (left: erythema; middle: necrosis; right: bruises).
Figure 3. Comparison between original residual network (ResNet) 50 architecture and our model architecture.
Figure 4. Step-by-step illustration of the automatic process and output of our model: prediction of wound type, identification of color card, color correction, and detection of abnormal region of interest (ROI), wound ROI, and seam ROI.
Figure 5. Receiver operating characteristic (ROC) curves for the detection of wound infection for the four models (SVM: blue; RF: yellow; GBC: green; our model: red). The ROC curve is a probability curve, and the area under the curve (AUC) characterizes the degree of separability; a model with a higher AUC is better at detecting the target outcome. AUC values for the four models range from 0.444 to 0.833; our model achieves the best performance.
Table 1. Comparison of performance of our model, kernel support vector machines, random forest, and gradient-boosting classifier for detection of wound infection.
Model | Accuracy | Recall | Precision | F1 Score | Specificity | False Positive Rate | False Negative Rate | AUC
Our model | 79.5% ± 4.2% | 77.1% ± 6.1% | 82.7% ± 7.8% | 79.4% ± 3.9% | 82.2% ± 3.5% | 17.8% ± 6.5% | 22.9% ± 6.0% | 83.3% ± 2.8%
Kernel support vector machines | 41.1% ± 2.1% | 40.8% ± 8.7% | 48.2% ± 3.3% | 38.3% ± 3.5% | 56.2% ± 2.6% | 43.8% ± 7.3% | 59.2% ± 8.8% | 44.4% ± 4.5%
Random forest | 58.6% ± 5.9% | 64.8% ± 12.4% | 44.5% ± 13.6% | 43.1% ± 11.4% | 19.2% ± 7.5% | 80.8% ± 2.5% | 35.2% ± 12.3% | 67.1% ± 7.3%
Gradient boosting classifier | 63.4% ± 4.2% | 65.2% ± 7.4% | 54.5% ± 6.4% | 58.7% ± 3.3% | 65.2% ± 5.3% | 34.8% ± 4.6% | 34.8% ± 7.4% | 66.9% ± 4.9%
AUC: area under the receiver operating characteristic curve.
