Article

Accurate AI-Based Characterization of Wound Size and Tissue Composition in Hard-to-Heal Wounds

Karl Lindborg, Matilda Karlsson, Ana Kotorri, Folke Sjöberg, Mats Fredrikson, Axel Haglind, Zacharias Sjöberg and Moustafa Elmasry

1 Department of Hand Surgery, Plastic Surgery and Burns, Linköping University Hospital, 581 85 Linköping, Sweden
2 Department of Biomedical and Clinical Sciences, Linköping University, 581 85 Linköping, Sweden
3 Dermacut AB, 112 35 Stockholm, Sweden
4 Department of Emergency Medicine, Linköping University, 581 85 Linköping, Sweden
* Author to whom correspondence should be addressed.
J. Clin. Med. 2025, 14(16), 5838; https://doi.org/10.3390/jcm14165838
Submission received: 8 July 2025 / Revised: 6 August 2025 / Accepted: 12 August 2025 / Published: 18 August 2025
(This article belongs to the Section General Surgery)

Abstract

Background: Detailed assessment, documentation, and evaluation of wound characteristics in hard-to-heal wounds are essential for optimizing and individualizing wound care. However, clinical care still lacks high-accuracy, high-precision tools for automated assessment of wound size (surface area and depth) and of the wound bed, i.e., qualitative and quantitative assessment of slough and necrosis. Objective/Methods: This study evaluates the accuracy and precision of the AI-powered technique SeeWound© 2 in "in vitro" models and in patients with hard-to-heal wounds, including diabetic foot ulcers, venous ulcers, pressure ulcers, and ischemic ulcers. Wound surface area and wound bed characterization (slough and necrosis) were compared to digital planimetry, and wound depth to probe measurements. Results: For wound size, accuracy was 96.28% and precision 90.00% (CV 5.56%); for wound depth, accuracy was 90.75% and precision 89.55% (CV 3.07%); for the wound bed, accuracy was 80.30% (slough) and 84.73% (necrosis), and precision was 93.51% (slough; CV 4.15%) and 82.35% (necrosis; CV 8.34%). The precision of digital planimetry was 88.61% (CV 7.00%) for slough and 85.74% (CV 7.54%) for necrosis. Conclusions: The overall accuracy and precision of the AI model in identifying wound size and depth were close to 90%, while accuracy and precision for slough and necrosis were around 80% when compared to digital planimetry. The findings for the wound surface area and depth assessments, together with the quantification of slough and necrosis, suggest that the SeeWound© 2 model can offer significant clinical benefits by improving documentation and supporting decision-making in wound management.

1. Introduction

A hard-to-heal wound is usually caused by an underlying disease, where trauma might or might not be involved; examples are diabetic foot, venous leg, and pressure ulcers. These wounds heal by secondary intention and are classified as hard-to-heal when healing is delayed past certain rate thresholds [1]. The prevalence of hard-to-heal wounds is growing and is currently estimated at 1–2% across developed countries [2]. The costs and resources required to care for these wounds have been studied in various healthcare systems. Even in wounds that do not require hospitalization, there are significant costs for primary care visits, nurse working time, drugs, and other interventions, as well as dressing materials [3]. One study estimated the US Medicare costs for wound care to be at least $32 billion annually [4]. Preventive measures that target the etiology of hard-to-heal wounds are carried out by various disciplines of medicine, yet the prevalence is growing, and therefore so is the need for more efficient wound management strategies. To refine treatments, an individualized treatment strategy is needed, and this necessitates an advanced wound bed evaluation.

Clinical Problem Addressed and Innovation

Healthcare professionals need to assess the wound accurately to evaluate whether the healing process is adequate and whether implemented treatment strategies are working. This includes identifying wound size, i.e., the wound surface area and depth, together with an adequate assessment of the wound bed characteristics, most commonly involving slough and necrosis.
The conventional clinical approach to measuring wound dimensions typically involves disposable plastic rulers for surface area assessment. The evaluation of wound depth is comparatively less prevalent and less well documented [5,6,7,8,9]. Wound depth measurement encompasses either (a) a subjective appraisal conducted without the aid of instruments, or (b) a method wherein healthcare practitioners approximate depth by measuring the penetration of a Q-tip into the wound with a disposable ruler. Measurements of wound surface area and depth acquired through these methods are well known to be imprecise and subjective [10]. Accurate estimation of wound surface area and depth is central to evaluating wound progression. Due to the absence of easy and practical clinical methods for assessing especially wound depth, such measurements have low reliability in, or are even absent from, today's electronic health records (EHRs). Wound bed characteristics, such as slough and necrosis, are evaluated by ocular inspection, relying on the skills of the observer. This evaluation then underlies the subsequent debridement, an intervention of great importance for the outcome of the healing process.
This study evaluates the accuracy and precision of a mobile, automated, point-of-care device, SeeWound© 2, which uses AI methodology to provide image-based wound surface area and depth assessments, together with a quantitative characterization of slough and necrosis in the wound bed. A comparison to current conventional techniques, such as digital planimetry and best practice, is also undertaken.

2. Materials and Methods

SeeWound© 2 (Dermacut AB, Norr Mälarstrand 60, 112 35 Stockholm, Sweden) uses a LiDAR sensor to assess wound dimensions; see Figure 1 below. The LiDAR sensor emits light, and the distance between the camera and the wound is estimated from the time it takes for the light to travel from the sensor to the wound and back. A previous publication examined the influence of light interference, such as the need to use flash and the light settings in the room, and evaluated the optimal distance between the wound and the camera [11]. LiDAR technology (iPad 11 Pro) is used together with a previously developed algorithm (SeeWound© 1, Dermacut AB, Norr Mälarstrand 60, 112 35 Stockholm, Sweden), which also incorporates an angle adjustment function that ensures control over angular distortion during image capture [11]. That publication shows that, if the wound can be seen in the images, the wound area can be properly depicted, and that, under normal ambient light, the flash function is not needed [11]. The development of the surface area measurement algorithm has been described previously (SeeWound© 1); in short, it involved a training, a validation, and a test set according to general model development practices, and it includes an angle adjustment function [11]. For the wound surface area estimation in the present study, SeeWound© 2 uses an algorithm identical to that of SeeWound© 1. For a comparison of wound size measurements using SeeWound© 1 and 2 on the same wound and on different mobile platforms, see Table 1. The application (SeeWound© 2) is currently available for research through both TestFlight and Google Play upon written request and regulatory approval.
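As an illustration of the time-of-flight principle only (not the device's proprietary pipeline), the camera-to-wound distance follows directly from the round-trip time of the emitted light; the round-trip time used below is an assumed example value:

```python
# Time-of-flight distance estimate: a minimal illustration of the LiDAR
# principle described above, not the SeeWound implementation.
C = 299_792_458.0  # speed of light (m/s)

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance = (speed of light * round-trip time) / 2 (out and back)."""
    return C * round_trip_time_s / 2.0

# A ~2 ns round trip corresponds to roughly 0.3 m, a plausible working distance.
print(tof_distance_m(2e-9))  # ~0.2998 m
```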
Segmentation Process and Wound Depth Estimation
SeeWound© 2 uses two sequential convolutional neural networks based on the U-Net architecture to perform wound analysis. U-Net 1 is responsible for segmenting the wound bed; its output is subsequently used for wound area calculation and depth estimation, and serves as the input mask for the second U-Net model. U-Net 2 performs wound tissue segmentation, specifically identifying slough and necrotic tissue within the wound area previously delineated by U-Net 1. An illustration of this process is provided in Figure 2.
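A minimal sketch (in PyTorch) of how such a two-stage pipeline can be wired together is shown below. The toy network, the 0.5 threshold, and all identifiers are illustrative assumptions; the actual SeeWound© 2 architectures and post-processing are not reproduced here:

```python
# Two sequential U-Net-style models: model 1 masks the wound bed, model 2
# classifies slough/necrosis only inside that mask (a sketch, not SeeWound code).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy stand-in for a U-Net: one down/up level with a skip connection."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                        # contracting path
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # expanding path
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, out_ch, 1))

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.pool(e))
        u = self.up(m)
        return self.dec(torch.cat([u, e], dim=1))          # skip connection

unet1 = TinyUNet(in_ch=3, out_ch=1)  # U-Net 1: wound-bed mask
unet2 = TinyUNet(in_ch=4, out_ch=2)  # U-Net 2: slough vs. necrosis in the bed

def analyze(image: torch.Tensor):
    """image: (1, 3, H, W) with H and W divisible by 2."""
    wound_mask = (torch.sigmoid(unet1(image)) > 0.5).float()
    # U-Net 2 sees the image plus the mask and only scores wound-bed pixels.
    tissue_logits = unet2(torch.cat([image, wound_mask], dim=1))
    tissue_probs = torch.softmax(tissue_logits, dim=1) * wound_mask
    return wound_mask, tissue_probs

mask, tissue = analyze(torch.rand(1, 3, 64, 64))
```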

2.1. Wound Surface Area Determination

At the outpatient wound clinic of the National Burn Centre of Linköping University Hospital, wound specialist nurses examined the wound size in 11 patients with hard-to-heal wounds using SeeWound© 2. For the accuracy estimate, wound images were captured in all 11 patients (Table 2). These measurements were then compared to wound sizes obtained through manual pixel-by-pixel segmentation of a wound image performed by a wound specialist using digital planimetry, which served as the true size for the accuracy evaluation. Repeated images were also taken during the patient visit for the precision estimate in eight patients (Table 3). These paired measurements were used to assess the device's reproducibility and precision.

2.2. Wound Depth Assessment

Wound depth assessment is conducted in two steps. First, the AI model is used to segment the wound area. In the second step, a reference surface is calculated from the boundary between healthy and wounded tissue, representing the estimated level of intact skin prior to injury. From this reconstructed surface, the vertical distance down to the bottom of the wound bed is measured by the LiDAR system. This method enables estimation of wound depth by calculating the difference between the hypothetical healthy skin surface and the actual wound base (Figure 2), where D1 and D2 serve as clarification positions for the flip of the wound into portrait view, and where DF is the calculated depth between the reconstructed wound surface and the actual wound bed.
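The idea can be sketched as follows, assuming a per-pixel depth map (camera-to-surface distance in mm) and the binary wound mask from U-Net 1; the plane fit over the surrounding skin and the boundary-ring extraction are illustrative choices, not the published SeeWound© 2 implementation:

```python
# Depth estimate: reconstruct the intact-skin reference surface from the
# skin ring around the wound, then take the largest distance below it.
import numpy as np
from scipy import ndimage

def estimate_wound_depth(depth_map: np.ndarray, wound_mask: np.ndarray) -> float:
    # Boundary ring: healthy-skin pixels immediately surrounding the wound.
    ring = ndimage.binary_dilation(wound_mask, iterations=2) & ~wound_mask.astype(bool)
    ys, xs = np.nonzero(ring)
    # Fit a plane z = ax + by + c to the surrounding skin: the reconstructed
    # "intact skin prior to injury" reference surface.
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    a, b, c = np.linalg.lstsq(A, depth_map[ys, xs], rcond=None)[0]
    wy, wx = np.nonzero(wound_mask)
    reference = a * wx + b * wy + c
    # Wound pixels lie farther from the camera than the reference plane;
    # the deepest point gives the reported wound depth.
    return float(np.max(depth_map[wy, wx] - reference))

# Toy example: flat skin at 300 mm with a 10 mm deep wound.
depth = np.full((64, 64), 300.0)
mask = np.zeros((64, 64), dtype=np.uint8)
mask[24:40, 24:40] = 1
depth[24:40, 24:40] = 310.0  # wound base is 10 mm farther from the camera
print(estimate_wound_depth(depth, mask))  # ~10.0
```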
The wound depth assessment functionality of SeeWound© 2 was evaluated in an "in vitro" model (Figure 3 and Figure 4) (Part A) and in a clinical setting (Part B), the latter similar to the clinical wound surface evaluation described above. Issues thought to affect the accuracy and precision of the wound depth estimate, such as the specifics of the wound border and the absolute depth, were examined.
The in vitro model (Part A) was constructed to assess the effects of wound aperture and wound border on the wound depth estimate. The model includes a wound aperture that is adjustable in relation to the wound depth (see Figure 4). It is built on a stationary structure with two adjustable cubes, whereupon the in vitro wound model [12] (see Figure 3) is placed in between. The structure allows the two adjustable objects to be moved towards each other, decreasing the wound aperture while increasing the depth of the wound. This makes it possible to examine the effects of wound aperture and wound edge angle on the wound depth assessment, with the two blocks adjusted manually in accordance with the study protocol. It was realized early on that, given the resolution of the SeeWound© 2 depth measurement (±3 mm), the shallowest wounds could not be differentiated from each other (see the Discussion section).
In all, 51 wounds of different sizes, depths, and forms were evaluated (Part A). Each wound was examined 8–10 times. The results were compared to the true depth of each wound (probe and ruler). Furthermore, technical constraints on the depth estimate were identified, and possible constraints related to the specific dimensions of the wound (for example, wound size and depth, particularly in relation to the size of the wound opening, and the angle of the wound edges) were examined.
Clinical evaluation (Part B). In the outpatient clinic of the Burn Centre of Linköping University Hospital, wound nurse specialists first measured wound depth using a sterile plastic probe and a ruler. This manual measurement was documented, after which two images of the patient's wound were captured using the SeeWound© 2 device. The images were captured with the camera held perpendicular to the wound surface, in a patient room with standard lighting. SeeWound© 2 automatically detects the wound contours and calculates the deepest point based on its segmentation (see Figure 5). The automatic depth measurements from SeeWound© 2 were then compared to the manual probe measurements, which served as the ground truth. Two images and measurements were taken per wound to determine the device's accuracy against the probe and its reproducibility in repeated measurements.

2.3. Development of AI Model for the Identification of Slough and Necrosis

The U-Net architecture is well suited for identifying small objects and local differences in images, making it ideal for segmenting slough and necrosis in a wound bed image. As is common for medical AI applications, limited data are available for training, which makes the U-Net a good choice due to its ability to train efficiently on smaller datasets [13,14]. The U-Net is divided into two parts: the left side (contracting path) and the right side (expanding path). In the contracting path, the image's height and width decrease while the number of filters increases. This is performed through pooling operations, which reduce the spatial dimensions of the image. The goal of the contracting path is to locate and extract higher-level features, such as colors, edges, and textures, through convolutions. Once the image has been reduced in size and the key features have been extracted, the expanding path performs the opposite task by upsampling the image back to its original size. The expanding path uses the features identified in the contracting path (via skip connections) to recreate the full-sized image and make pixel-by-pixel predictions of the classes slough and necrosis [13,14].
Skip connections are links between the contracting and expanding paths, shown by the gray arrows in Figure 6 below. These connections help the network use high-resolution information from the contracting path to improve upsampling and produce a more accurate output. This prevents the model from losing spatial information during the shrinking process when rebuilding the image to its original size [13,14].
The data used for the estimation of slough and necrosis consist of images of hard-to-heal wounds collected from clinics using the SeeWound© 1 device. In these images (n = 3383), ground-truth segmentations of slough and necrotic tissue were made by an experienced wound physician. Thereafter, this data set was divided into three parts: a training set (70%), a validation set (20%), and a test set (10%). The training data are entirely separated from the test set. In the training process, the partitioning of the training data into training and validation subsets was randomized, as sketched below.
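The following sketch illustrates such a randomized 70/20/10 split; the file names and the fixed seed are illustrative assumptions:

```python
# Randomized partitioning of the annotated images into training,
# validation, and held-out test sets (illustrative file names and seed).
import random

images = [f"wound_{i:04d}.png" for i in range(3383)]  # n = 3383 annotated images
random.Random(42).shuffle(images)

n = len(images)
train = images[: int(0.7 * n)]             # 70% training
val = images[int(0.7 * n): int(0.9 * n)]   # 20% validation
test = images[int(0.9 * n):]               # 10% test, never seen in training
print(len(train), len(val), len(test))     # 2368 676 339
```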

2.4. Model Training Performance on the Training and Validation Sets for Slough and Necrosis

The developed model demonstrated a consistent decrease in training loss over the epochs, indicating that it successfully learned the task of segmenting slough and necrosis in the wound bed. The training accuracy, Dice coefficient, and intersection-over-union (IoU) increased steadily throughout the training process, reflecting the model's capability to classify pixels into the image's designated classes; the IoU reflects how closely the predicted segmentation resembles the ground truth. Validation accuracy, Dice coefficient, and IoU remained high and stable throughout the training, indicating stable performance on the validation data set and continuous improvement during training. See Figure 7a,b below for a visual representation of the training metrics per epoch and the validation performance per epoch, respectively. Training was stopped with a patience of 10 epochs, as seen in the figure: when the model did not improve over 10 epochs, the training session was terminated to avoid over-fitting.
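For reference, the two overlap metrics tracked above can be defined as follows for binary masks; this is the generic textbook formulation, not code from the SeeWound training pipeline:

```python
# Dice coefficient and intersection-over-union (IoU) for binary masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (inter + eps) / (union + eps)

p = np.array([[1, 1], [0, 0]], dtype=bool)
t = np.array([[1, 0], [0, 0]], dtype=bool)
print(dice(p, t), iou(p, t))  # ~0.667, ~0.5
```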

2.5. Clinical Evaluation

The final clinical evaluation of the accuracy and precision of the model was performed on two patient cohorts, comprising 466 and 25 patients, respectively, who were not included in the original training data. The patients included in this evaluation came from vascular, dermatology, endocrinology, primary care, and municipal care units. This yields a heterogeneous data set including, but not limited to, diabetic foot ulcers, venous ulcers, pressure ulcers, and ischemic ulcers. The output was evaluated by a wound care physician against ground-truth segmentations (planimetry) to determine whether slough and necrosis were correctly identified in the image. A binary comparison was also made, identifying whether slough and necrosis were present or not in the wound as compared to the ground truth (n = 466).

2.6. Ethics

Ethical approval was obtained from the relevant ethics committee (The National Swedish Ethics Committee, Gothenburg, Dnr. 2021-04913). The ethics committee waived the need for informed consent from the patients, as the images examined did not contain any identifiable personal data and were considered de-identified/anonymized. All patients were informed prior to image acquisition about the intended use of the images for research purposes.

2.7. Statistics

Data are presented as mean and standard deviation, standard error of the mean, or confidence intervals. Precision is reported together with the coefficient of variation (CV). p < 0.05 was considered statistically significant.
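As a worked example of how these dispersion measures relate, the snippet below computes the SD, the standard error of the mean (SEM), and the CV for the two repeated area measurements of patient 1 in Table 3:

```python
# SD, SEM, and coefficient of variation (CV = SD / mean) for repeated
# measurements; values taken from Table 3, patient 1.
import statistics

measurements = [7.98, 8.58]  # two repeated area measurements (cm2)
mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)      # sample standard deviation
sem = sd / len(measurements) ** 0.5      # standard error of the mean
cv = sd / mean                           # coefficient of variation
print(f"mean={mean:.2f}, SD={sd:.3f}, SEM={sem:.3f}, CV={cv:.1%}")
# mean=8.28, SD=0.424, SEM=0.300, CV=5.1% (cf. CV 0.051 in Table 3)
```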

3. Results

3.1. Wound Area Measurements (SeeWound© 2) (Clinical)

Accuracy and precision for the SeeWound© 2 device were evaluated in 11 patients (see Table 2 and Table 3). Wound size ranged from 0.27 to 28.41 cm2 (true size). Accuracy and precision were found to be 96.53% and 92.26% (CV 5.57%). In Table 2, True Size and True Size 2 are two measurements conducted with planimetry, and in Table 3 Measure 1 and Measure 2 are repeated measurements with the subject device.

3.2. SeeWound 2’s Wound Depth Measurements

Effects of Wound Opening Size and Wound Edge Angle (In Vitro)

SeeWound© 2 demonstrates high operability and performance when evaluated in an "in vitro" wound model with known depths under optimal settings, i.e., with the camera perpendicular to the wound and a wound edge steepness greater than 40 degrees (Appendix A). For the 51 wounds (487 images) examined in the "in vitro" wound model, with depths ranging from 0 mm to 35 mm, SeeWound© 2 shows an average error from true depth of 1.13 mm (Figure 8). The largest deviation occurs when the adjustable wound module is set to a depth of 35 mm; in this instance, SeeWound© 2 measures a depth of 27 mm, resulting in an 8 mm error (22.9% maximum error) and an average precision of 88.42%. Figure 8 below is a visual representation of all 487 measurements over the 51 wounds (x-axis) (Appendix B), showing the robustness of the solution and the high reproducibility and precision (measured depths: circles; true depth: triangles).

3.3. Clinical Evaluation

Wound depths were assessed in 10 patients having wounds with depths ranging from 4 mm to a maximum of 20 mm (measured by the probe technique); see Table 4. The corresponding wound depths registered by the SeeWound© 2 device ranged from 6 to 20 mm. A negative depth, seen in patients 9 and 10 (Table 4), reflects hyper-granulation, i.e., tissue growth above the skin level rather than depth. SeeWound© 2 readings of +/−3 mm occurred five out of five times when the wound bed depth was smaller than 3 mm (see the Methods section). The accuracy and precision for depth assessment registered for SeeWound© 2 were 96.28% and 90.00%, respectively, with a corresponding coefficient of variation of 5.56% for the precision estimate (Table 4).

3.4. Artificial Intelligence Model for Tissue Segmentation

Classification for the Presence/Absence of Slough and/or Necrosis (Binary Outcome)

The structure of the first patient data set (466 patients, 495 images) and the performance of the model in predicting the presence or absence of slough and necrosis are shown in Table 5 below. Out of the 495 images, 302 showed slough and 38 showed necrosis in the wound bed (SeeWound© 2); the remaining 155 images showed neither.
In the binary evaluation (slough/necrosis present or not), the AI model correctly identified slough in the wound bed in 89.4% of cases (accuracy) and necrosis in 86.8% of cases (accuracy). Furthermore, the AI model classified 170 images as containing neither slough nor necrosis (actual number: 155). In total, the AI model correctly identified necrotic and slough tissue in 89.12% of the patient/wound images.
Tissue segmentation (slough and necrosis) in clinical practice was also compared to manual digital tracing of each wound in a subset of the above cohort (n = 25).
The AI model's estimation of slough and necrosis was conducted in 25 patients with wounds ranging from 0.3 to 35.42 cm2 in size (see Table 6). Slough and necrosis were found in ranges from 7.22 to 83.46% and 0 to 80.31%, respectively (physician planimetry). Overall accuracy was 80.30% (slough) and 84.73% (necrosis); precision was 93.51% (slough; CV 4.15%) and 82.35% (necrosis; CV 8.34%) (see Table 7). The precision for digital planimetry was 88.61% (CV 7.00%) for slough and 85.74% (CV 7.54%) for necrosis.

4. Discussion

4.1. Summary of Findings

This study aimed to evaluate the accuracy and precision of the AI-powered tool SeeWound© 2 for assessing the wound surface area, the wound depth, and the wound bed characteristics (slough and necrosis). The results show that SeeWound© 2, for which MDR certification is currently being sought, demonstrated robust performance: for the wound surface area, the accuracy was 96.28% and the precision 90.00%, with a coefficient of variation (CV) of 5.56%; for the wound depth, the accuracy was 90.75% and the precision 89.55%, with a CV of 3.07%. For the wound bed characteristics, the AI model achieved accuracy values of 80.30% for slough and 84.73% for necrosis, with corresponding precision values of 88.61% (slough) and 85.74% (necrosis). These results indicate that SeeWound© 2 is a useful tool for wound assessment, documentation, and clinical decision support, particularly for wound size and characterization, and comparisons with traditional methods, such as digital planimetry, indicate that it can serve as decision support in clinical practice. The SeeWound© 2 device automatically exports the wound metrics to the major Swedish clinical documentation system (COSMIC) through a facilitated capture algorithm, which can also be adapted to other clinical documentation systems.

4.2. Wound Surface Area

Among the parameters evaluated, the accuracy and precision for estimating the wound surface area were the highest. This was expected, given that the technical complexity of surface area estimation is relatively low compared to depth and wound bed characterization. The algorithm underlying the surface area model was developed in 2019 and has undergone continuous refinement, including adjustments for the camera-to-wound angle, which significantly improved precision. The model, which performs consistently across different mobile devices (see Table 1 above), produces results comparable to traditional planimetry. SeeWound© 2 also compares favorably with the ASURA AI model [15] and outperforms conventional ruler-based methods, which often overestimate the wound area due to simplified length × width calculations, with error margins ranging from 12% to 87% [8,16,17,18,19]. SeeWound© 2 achieved an accuracy of 96% and a precision of 90%, surpassing the performance of both ASURA and traditional methods. Notably, ASURA demonstrated an accuracy of 87.9% when measuring objects of known size [15], whereas SeeWound© 2 outperformed it in wound surface area estimation, confirming its high accuracy and precision.

4.3. Wound Depth

Assessing wound depth presents a greater challenge than surface measurements, primarily due to the inherent complexity of wound structures. Although high accuracy has been achieved with other systems, such as the WoundVue system (with 12.9% precision) and the method proposed by Virginia et al. (with 2% accuracy), these systems require additional hardware [20,21]. In contrast, SeeWound© 2 leverages the LiDAR technology in modern smartphones, which eliminates the need for external equipment and simplifies its use in clinical settings. However, certain limitations remain. The LiDAR sensor's depth map resolution (192 × 256 pixels) limits the capture of small depth variations, particularly for shallow wounds with depths under 5 mm. Additionally, the performance of the system weakens when the wound edges are steep or when the wound opening is less than 40 degrees, because the light from the LiDAR sensor has difficulty reaching and reflecting off the respective wound surfaces. Despite these limitations, the system remains a valuable tool in clinical practice, where precise depth measurements for very shallow wounds hold limited clinical value.
A review by Lasschuit et al. [10] examined available devices and methods for measuring wound depth, concluding that, despite the availability of several 3D imaging systems, depth measurements are still most reliably obtained using a sterile probe. This underscores the challenge in accurately assessing wound depth, as manual methods, such as ruler-based estimates, have demonstrated significant inaccuracies, with overestimations ranging from 12% to 87% [16,17,18,19], and inter-rater variability of estimating the wound size from −44% up to +75% [7]. These limitations are particularly pronounced in complex wound geometries or cases involving undermining [10].

4.4. Wound Characteristics (Slough and Necrosis)

Assessing wound bed characteristics such as slough and necrosis is particularly challenging due to the subjective nature of tissue identification, especially at the borders between tissue types. In binary evaluations, the AI model demonstrated high accuracy and precision in identifying the presence or absence of slough and necrosis. However, pixel-by-pixel evaluations of slough and necrosis were less accurate, with slough accuracy at 80.30% and necrosis accuracy at 84.73%. These results are consistent with the limitations inherent in the task of defining slough and necrosis, especially at their borders, where observer subjectivity plays a significant role. The precision for digital planimetry in slough and necrosis was 88.61% for slough and 85.74% for necrosis, with relatively high CV values (7.00% for slough and 7.54% for necrosis), suggesting that inter- and intra-rater variability, rather than the AI algorithm, is largely responsible for discrepancies in accuracy.
Mukherjee et al. [22] trained a Support Vector Machine (SVM) model using clinician-annotated images as the ground truth during the training phase. The model achieved an overall tissue segmentation accuracy of 86.13% across granulation, slough, and necrosis classes. Similarly, Morgado et al. [23] utilized a DeepLabV3-R50 segmentation model to classify wound tissue types, achieving mean absolute errors of 14.31% for slough and 8.84% for eschar. Zoppo et al. [24] evaluated several wound assessment tools and reported mean relative errors of approximately 14% for WoundViewer, Silhouette, and Visitrack, while the MOWA system exhibited a higher relative error of 23%. Fauzi et al. [25] evaluated a color-based segmentation approach using hue, saturation, and value (HSV) color space, reporting an overall accuracy of 75%, with class-specific accuracies of 63% for slough and 75.1% for eschar. They also reported inter-expert agreement scores of 67.4% and 84.3%, respectively. These studies demonstrate the ongoing challenges and variability in accurately segmenting wound tissue types, especially for slough and necrosis, underscoring the potential of AI in reducing inter-expert variability and improving wound assessment.
Interestingly, a study by Mohammed et al. [26] found that clinicians achieved 84% and 87% agreement for the presence of slough and necrosis, respectively, in nine wound images. This finding aligns with the current study, where the AI model showed a higher degree of consistency compared to clinician-based estimates. Further comparative studies by Ramachandram et al. [27] and Mukherjee et al. [22] also reported high variability among clinicians in quantifying tissue proportions, with Ramachandram et al.’s deep learning model achieving a mean intersection over union (IoU) of 86.44% for wound segmentation and 71.92% for tissue segmentation across a dataset of 58 wound images. Similarly, SeeWound© 2 demonstrated better precision and accuracy than digital planimetry, outperforming manual tracing and segmentations, which have average intra-rater ICCs of 81.02% and 78.16% for slough and necrosis respectively [27].
In clinical practice, ocular methods for estimating slough and necrosis, such as the quarter-based percentage estimation method, result in significant estimation errors. A typical absolute error in tissue estimations is ±12.5%, based on the 25% range used for proportion estimates of slough and necrosis (Bates–Jensen Wound Assessment Tool, BWAT) [28]. In contrast, SeeWound© 2 provides a more objective and consistent method for classifying tissue types, outperforming ocular assessments, which exhibit poor inter-rater agreement (ICC of 0.37–0.38) [26].

4.5. Limitations

4.5.1. Wound Surface Area

The primary challenge is that traditional methods, such as ruler-based techniques, often lead to overestimations, with error margins ranging from 12% to 87% and inter-rater variation between −44% and +75% on the same wounds [7,8,16,17,18], particularly when wound areas are estimated with simple length × width calculations. These issues are mitigated with SeeWound© 2, which provides accurate estimations without the limitations associated with traditional methods.

4.5.2. Wound Depth

The most significant and obvious limitation arises when parts of the wound cavity are not visible to the LiDAR camera. Another limitation for wound depth measurement lies in the resolution of the LiDAR depth map and the angle of image capture: the depth map provided by the LiDAR technology presently has a resolution of 192 × 256 pixels. Combined with its sensitivity to the distance between the camera and the object (requiring more than 30 cm for optimal performance), this results in a limited ability to accurately capture steep edges. Even when the depth map is morphed to a higher resolution (e.g., 512 × 683 pixels), the system still lacks the sensitivity needed to detect small surface variations or subtle depth differences, as illustrated below. Additionally, steep wound edges and non-visible wound portions in undermined or deeply cavernous wounds result in underestimated depth measurements; the light from the LiDAR sensor has difficulty reaching and reflecting off such wounds, reducing performance in these clinical scenarios (see Figure 9 for a visual explanation of the challenges). Future work to facilitate a 3D wound bed reconstruction is currently being planned.
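The resolution issue can be made concrete with a small sketch; bilinear interpolation via SciPy is an assumed stand-in for whatever resampling the device applies, and, as noted above, it cannot add detail that the native 192 × 256 sensor grid never captured:

```python
# Upsampling ("morphing") a coarse LiDAR depth map to a higher resolution.
import numpy as np
from scipy import ndimage

coarse = np.random.rand(192, 256).astype(np.float32)  # native depth map
zoom_factors = (512 / 192, 683 / 256)
fine = ndimage.zoom(coarse, zoom_factors, order=1)    # bilinear resampling
print(fine.shape)  # (512, 683): more pixels, but no new depth information
```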

4.5.3. Wound Bed Characteristics

Evaluating wound bed characteristics such as slough and necrosis remains challenging due to the subjective nature of tissue identification, especially at the borders between tissue types. However, SeeWound© 2 has demonstrated better consistency and higher accuracy than manual, ocular assessments, which are subject to significant observer variability. Furthermore, studies comparing AI-based segmentation with clinician-annotated images have shown that the AI model outperforms clinicians in both consistency and accuracy, highlighting the potential of AI-powered tools in wound care. We therefore suggest that the present accuracy is sufficient for clinical use; this, however, needs to be further documented in future studies.

5. Conclusions

The results of this study confirm that SeeWound© 2 is a highly accurate and precise tool for assessing the wound surface area, the depth, and the wound bed characteristics. The use of AI and LiDAR technology enables reliable and reproducible measurements without requiring additional external hardware, making it a practical and scalable solution for clinical practice. While there are limitations, particularly in measuring shallow and deeply undermined wounds, SeeWound© 2 outperforms traditional methods, providing clinicians with a more objective and consistent tool for wound assessment. Future advancements in algorithm training and sensor technology are expected to further improve the clinical utility of SeeWound© 2.

Author Contributions

Conceptualization: K.L. and Z.S.; methodology: K.L., F.S., Z.S., and M.K.; software: K.L.; validation: K.L.; formal analysis: K.L. and F.S.; investigation: K.L., A.K. and Z.S.; resources: K.L.; data curation: K.L. and Z.S.; writing—original draft: K.L., A.H., and F.S.; writing—review and editing: K.L. and F.S.; visualization: K.L.; supervision: M.K., M.E., F.S., and M.F.; project administration: K.L. and Z.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Vinnova, the Swedish innovation agency (grant nos. 2021-00179 and 2022-00383), and by FORSS, the Medical Research Council of South East Sweden (grant no. FORSS-977651).

Institutional Review Board Statement

Ethical approval was obtained from the relevant ethics committee (The National Swedish Ethics Committee, Gothenburg, Dnr. 2021-04913, date: 19 April 2021).

Informed Consent Statement

The ethics committee waived the need for informed consent from the patients, as the images examined did not contain any identifiable personal data and were considered de-identified/anonymized. All patients were informed prior to image acquisition about the intended use of the images for research purposes.

Data Availability Statement

Data are available upon reasonable request to the authors.

Acknowledgments

The authors would like to thank Linköping University and Linköping University Hospital for their support in developing, implementing, and utilizing the SeeWound© device.

Conflicts of Interest

Karl Lindborg, Zacharias Sjöberg, and Folke Sjöberg are shareholders in DermaCut AB, the company responsible for developing the SeeWound© technique, including this artificial intelligence model. The other authors have no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Spatial dimensions: The height and width of the image or feature map.
Filters: Small matrices used to detect specific patterns in the image (e.g., edges, textures).
Convolutions: A mathematical operation that applies filters to an image to extract features.
Feature map: The output produced by applying filters to an image, showing where certain features are present.
Upsampling: The process of increasing the size of the image or feature map to restore the original dimensions.

Appendix A

Table A1. Cavity Opening.
Image Number | Test | Actual Depth [mm] | Cavity Opening [mm] | Measured Depth [mm]
1 | 1 | 23 | 20 | 17
2 | 1 | 23 | 20 | 15
3 | 1 | 23 | 20 | 15
4 | 1 | 23 | 20 | 21
5 | 1 | 23 | 20 | 17
6 | 1 | 23 | 20 | 20
7 | 1 | 23 | 20 | 17
8 | 1 | 23 | 20 | 14
9 | 1 | 23 | 20 | 18
10 | 1 | 23 | 20 | 12
11 | 1 | 23 | 20 | 20
12 | 1 | 23 | 20 | 20
13 | 1 | 23 | 20 | 17
14 | 2 | 30 | 12 | 21
15 | 2 | 30 | 12 | 18
16 | 2 | 30 | 12 | 21
17 | 2 | 30 | 12 | 21
18 | 2 | 30 | 12 | 24
19 | 2 | 30 | 12 | 23
20 | 2 | 30 | 12 | 26
21 | 2 | 30 | 12 | 30
22 | 2 | 30 | 12 | 24
23 | 2 | 30 | 12 | 26
24 | 2 | 30 | 12 | 22
25 | 3 | 30 | 5 | 26
26 | 3 | 30 | 5 | 30
27 | 3 | 30 | 5 | 26
28 | 3 | 30 | 5 | 31
29 | 3 | 30 | 5 | 29
30 | 3 | 30 | 5 | 29
31 | 3 | 30 | 5 | 26
32 | 3 | 30 | 5 | 31
33 | 3 | 30 | 5 | 17
34 | 3 | 30 | 5 | 29
35 | 3 | 30 | 5 | 16
36 | 3 | 30 | 5 | 26
37 | 4 | 30 | 2 | 26
38 | 4 | 30 | 2 | 18
39 | 4 | 30 | 2 | 13
40 | 4 | 30 | 2 | 14
41 | 4 | 30 | 2 | 17
42 | 4 | 30 | 2 | 22
43 | 4 | 30 | 2 | 20
44 | 4 | 30 | 2 | 27
45 | 4 | 30 | 2 | 16
46 | 4 | 30 | 2 | 25

Appendix B

Table A2. Performance Across Different Depths Within Known Limitations.
Image Number | Actual (mm) | Predicted (mm) | Diff (mm)
1 | −10 | −10 | 0
2 | −10 | −10 | 0
3 | −10 | −12 | 2
4 | −10 | −9 | 1
5 | −10 | −9 | 1
6 | −10 | −10 | 0
7 | −10 | −10 | 0
8 | −10 | −11 | 1
9 | −10 | −11 | 1
10 | −10 | −9 | 1
11 | 0 | 0 | 0
12 | 0 | 0 | 0
13 | 0 | 1 | 1
14 | 0 | 1 | 1
15 | 0 | 1 | 1
16 | 0 | 1 | 1
17 | 0 | 1 | 1
18 | 0 | 0 | 0
19 | 0 | 1 | 1
20 | 0 | −2 | 2
21 | 0 | −1 | 1
22 | 0 | 1 | 1
23 | 0 | 1 | 1
24 | 0 | −1 | 1
25 | 0 | 1 | 1
26 | 0 | −2 | 2
27 | 0 | 1 | 1
28 | 0 | 0 | 0
29 | 0 | 1 | 1
30 | 0 | −2 | 2
31 | 0 | 0 | 0
32 | 0 | 1 | 1
33 | 0 | 1 | 1
34 | 0 | 2 | 2
35 | 0 | 2 | 2
36 | 0 | 2 | 2
37 | 0 | 2 | 2
38 | 0 | 1 | 1
39 | 0 | 3 | 3
40 | 0 | 2 | 2
41 | 0 | 1 | 1
42 | 0 | 1 | 1
43 | 0 | 1 | 1
44 | 0 | 1 | 1
45 | 0 | 2 | 2
46 | 0 | 2 | 2
47 | 0 | 2 | 2
48 | 0 | 1 | 1
49 | 0 | 1 | 1
50 | 0 | 2 | 2
51 | 0 | 2 | 2
52 | 0 | 1 | 1
53 | 0 | 1 | 1
54 | 0 | 1 | 1
55 | 0 | 1 | 1
56 | 0 | 1 | 1
57 | 0 | 1 | 1
58 | 0 | 1 | 1
59 | 0 | 1 | 1
60 | 0 | 2 | 2
61 | 0 | 0 | 0
62 | 0 | 0 | 0
63 | 0 | 1 | 1
64 | 0 | 1 | 1
65 | 0 | 1 | 1
66 | 0 | 1 | 1
67 | 0 | 1 | 1
68 | 0 | 1 | 1
69 | 0 | 1 | 1
70 | 0 | 1 | 1
71 | 0 | 1 | 1
72 | 0 | 0 | 0
73 | 0 | 1 | 1
74 | 0 | 1 | 1
75 | 0 | 1 | 1
76 | 0 | 0 | 0
77 | 0 | −1 | 1
78 | 0 | 0 | 0
79 | 0 | 2 | 2
80 | 0 | 3 | 3
81 | 1 | 1 | 0
82 | 1 | 3 | 2
83 | 1 | 3 | 2
84 | 1 | 1 | 0
85 | 1 | 1 | 0
86 | 1 | 1 | 0
87 | 1 | 1 | 0
88 | 1 | 1 | 0
89 | 1 | 1 | 0
90 | 1 | 1 | 0
91 | 1 | 1 | 0
92 | 1 | 1 | 0
93 | 1 | 1 | 0
94 | 1 | 1 | 0
95 | 1 | 1 | 0
96 | 1 | 1 | 0
97 | 1 | 2 | 1
98 | 1 | 1 | 0
99 | 1 | 2 | 1
100 | 1 | 1 | 0
101 | 1 | 2 | 1
102 | 1 | 2 | 1
103 | 1 | 2 | 1
104 | 1 | 1 | 0
105 | 1 | 2 | 1
106 | 1 | 2 | 1
107 | 1 | 1 | 0
108 | 1 | 1 | 0
109 | 1 | 1 | 0
110 | 1 | 1 | 0
111 | 1 | 1 | 0
112 | 1 | 1 | 0
113 | 1 | 1 | 0
114 | 1 | 1 | 0
115 | 1 | 1 | 0
116 | 1 | 1 | 0
117 | 1 | 1 | 0
118 | 1 | 2 | 1
119 | 1 | 1 | 0
120 | 1 | 2 | 1
121 | 1 | 3 | 2
122 | 1 | 2 | 1
123 | 1 | 2 | 1
124 | 1 | 2 | 1
125 | 1 | 2 | 1
126 | 1 | 1 | 0
127 | 1 | 1 | 0
128 | 1 | 1 | 0
129 | 1 | 2 | 1
130 | 1 | 1 | 0
131 | 2 | 2 | 0
132 | 2 | 2 | 0
133 | 2 | 2 | 0
134 | 2 | 2 | 0
135 | 2 | 1 | 1
136 | 2 | 1 | 1
137 | 2 | 2 | 0
138 | 2 | 3 | 1
139 | 2 | 1 | 1
140 | 2 | 1 | 1
141 | 2 | 2 | 0
142 | 2 | 2 | 0
143 | 2 | 1 | 1
144 | 2 | 1 | 1
145 | 2 | 1 | 1
146 | 2 | 1 | 1
147 | 2 | 1 | 1
148 | 2 | 1 | 1
149 | 2 | 2 | 0
150 | 2 | 1 | 1
151 | 2 | 1 | 1
152 | 2 | 2 | 0
153 | 2 | 2 | 0
154 | 2 | 2 | 0
155 | 2 | 2 | 0
156 | 2 | 2 | 0
157 | 2 | 2 | 0
158 | 2 | 2 | 0
159 | 2 | 2 | 0
160 | 2 | 2 | 0
161 | 2 | 2 | 0
162 | 2 | 2 | 0
163 | 2 | 2 | 0
164 | 2 | 2 | 0
165 | 2 | 2 | 0
166 | 2 | 1 | 1
167 | 2 | 2 | 0
168 | 2 | 2 | 0
169 | 2 | 1 | 1
170 | 2 | 2 | 0
171 | 5 | 6 | 1
172 | 5 | 5 | 0
173 | 5 | 4 | 1
174 | 5 | 6 | 1
175 | 5 | 4 | 1
176 | 5 | 4 | 1
177 | 5 | 5 | 0
178 | 5 | 4 | 1
179 | 5 | 4 | 1
180 | 5 | 5 | 0
181 | 5 | 6 | 1
182 | 5 | 5 | 0
183 | 5 | 6 | 1
184 | 5 | 6 | 1
185 | 5 | 6 | 1
186 | 5 | 5 | 0
187 | 5 | 6 | 1
188 | 5 | 5 | 0
189 | 5 | 6 | 1
190 | 5 | 6 | 1
191 | 5 | 6 | 1
192 | 5 | 5 | 0
193 | 5 | 7 | 2
194 | 5 | 4 | 1
195 | 5 | 5 | 0
196 | 5 | 4 | 1
197 | 5 | 5 | 0
198 | 5 | 7 | 2
199 | 5 | 6 | 1
200 | 5 | 6 | 1
201 | 5 | 4 | 1
202 | 5 | 4 | 1
203 | 5 | 5 | 0
204 | 5 | 5 | 0
205 | 5 | 4 | 1
206 | 5 | 4 | 1
207 | 5 | 4 | 1
208 | 5 | 4 | 1
209 | 5 | 5 | 0
210 | 5 | 5 | 0
211 | 5 | 6 | 1
212 | 5 | 7 | 2
213 | 5 | 5 | 0
214 | 5 | 5 | 0
215 | 5 | 7 | 2
216 | 5 | 6 | 1
217 | 5 | 4 | 1
218 | 5 | 4 | 1
219 | 5 | 6 | 1
220 | 5 | 7 | 2
221 | 10 | 10 | 0
222 | 10 | 13 | 3
223 | 10 | 11 | 1
224 | 10 | 10 | 0
225 | 10 | 11 | 1
226 | 10 | 9 | 1
227 | 10 | 13 | 3
228 | 10 | 12 | 2
229 | 10 | 10 | 0
230 | 10 | 8 | 2
231 | 10 | 10 | 0
232 | 10 | 8 | 2
233 | 10 | 8 | 2
234 | 10 | 10 | 0
235 | 10 | 11 | 1
236 | 10 | 12 | 2
237 | 10 | 10 | 0
238 | 10 | 9 | 1
239 | 10 | 10 | 0
240 | 10 | 8 | 2
241 | 12 | 12 | 0
242 | 12 | 11 | 1
243 | 12 | 13 | 1
244 | 12 | 13 | 1
245 | 12 | 12 | 0
246 | 12 | 13 | 1
247 | 12 | 13 | 1
248 | 12 | 13 | 1
249 | 12 | 11 | 1
250 | 12 | 13 | 1
251 | 12 | 13 | 1
252 | 12 | 13 | 1
253 | 12 | 12 | 0
254 | 12 | 12 | 0
255 | 12 | 12 | 0
256 | 12 | 12 | 0
257 | 12 | 12 | 0
258 | 12 | 12 | 0
259 | 12 | 11 | 1
260 | 12 | 13 | 1
261 | 12 | 12 | 0
262 | 12 | 10 | 2
263 | 12 | 10 | 2
264 | 12 | 11 | 1
265 | 12 | 10 | 2
266 | 12 | 12 | 0
267 | 12 | 12 | 0
268 | 12 | 12 | 0
269 | 12 | 11 | 1
270 | 12 | 13 | 1
271 | 15 | 15 | 0
272 | 15 | 14 | 1
273 | 15 | 15 | 0
274 | 15 | 14 | 1
275 | 15 | 14 | 1
276 | 15 | 14 | 1
277 | 15 | 15 | 0
278 | 15 | 14 | 1
279 | 15 | 14 | 1
280 | 15 | 14 | 1
281 | 15 | 13 | 2
282 | 15 | 15 | 0
283 | 15 | 15 | 0
284 | 15 | 15 | 0
285 | 15 | 14 | 1
286 | 15 | 13 | 2
287 | 15 | 15 | 0
288 | 15 | 14 | 1
289 | 15 | 15 | 0
290 | 15 | 15 | 0
291 | 15 | 14 | 1
292 | 15 | 14 | 1
293 | 15 | 12 | 3
294 | 15 | 14 | 1
295 | 15 | 14 | 1
296 | 15 | 13 | 2
297 | 15 | 14 | 1
298 | 15 | 15 | 0
299 | 15 | 12 | 3
300 | 15 | 13 | 2
301 | 15 | 14 | 1
302 | 15 | 16 | 1
303 | 15 | 13 | 2
304 | 15 | 16 | 1
305 | 15 | 15 | 0
306 | 15 | 16 | 1
307 | 15 | 16 | 1
308 | 15 | 15 | 0
309 | 15 | 16 | 1
310 | 15 | 15 | 0
311 | 15 | 16 | 1
312 | 15 | 14 | 1
313 | 15 | 13 | 2
314 | 15 | 14 | 1
315 | 15 | 16 | 1
316 | 15 | 17 | 2
317 | 15 | 15 | 0
318 | 15 | 17 | 2
319 | 15 | 12 | 3
320 | 15 | 15 | 0
321 | 16 | 15 | 1
322 | 16 | 15 | 1
323 | 16 | 16 | 0
324 | 16 | 16 | 0
325 | 16 | 12 | 4
326 | 16 | 17 | 1
327 | 16 | 16 | 0
328 | 16 | 13 | 3
329 | 16 | 14 | 2
330 | 16 | 12 | 4
331 | 16 | 16 | 0
332 | 16 | 15 | 1
333 | 16 | 17 | 1
334 | 16 | 13 | 3
335 | 16 | 17 | 1
336 | 16 | 14 | 2
337 | 16 | 16 | 0
338 | 16 | 16 | 0
339 | 16 | 17 | 1
340 | 16 | 14 | 2
341 | 16 | 17 | 1
342 | 16 | 15 | 1
343 | 16 | 14 | 2
344 | 16 | 16 | 0
345 | 16 | 16 | 0
346 | 16 | 17 | 1
347 | 16 | 16 | 0
348 | 16 | 14 | 2
349 | 16 | 14 | 2
350 | 16 | 16 | 0
351 | 18 | 20 | 2
352 | 18 | 18 | 0
353 | 18 | 17 | 1
354 | 18 | 20 | 2
355 | 18 | 16 | 2
356 | 18 | 19 | 1
357 | 18 | 20 | 2
358 | 18 | 20 | 2
359 | 18 | 18 | 0
360 | 18 | 20 | 2
361 | 18 | 16 | 2
362 | 18 | 19 | 1
363 | 18 | 16 | 2
364 | 18 | 16 | 2
365 | 18 | 17 | 1
366 | 18 | 20 | 2
367 | 18 | 20 | 2
368 | 18 | 18 | 0
369 | 18 | 19 | 1
370 | 18 | 18 | 0
371 | 18 | 17 | 1
372 | 18 | 18 | 0
373 | 18 | 19 | 1
374 | 18 | 16 | 2
375 | 18 | 19 | 1
376 | 18 | 19 | 1
377 | 18 | 19 | 1
378 | 18 | 16 | 2
379 | 18 | 18 | 0
380 | 18 | 19 | 1
381 | 20 | 21 | 1
382 | 20 | 20 | 0
383 | 20 | 21 | 1
384 | 20 | 21 | 1
385 | 20 | 21 | 1
386 | 20 | 22 | 2
387 | 20 | 20 | 0
388 | 20 | 21 | 1
389 | 20 | 21 | 1
390 | 20 | 21 | 1
391 | 20 | 20 | 0
392 | 20 | 20 | 0
393 | 20 | 18 | 2
394 | 20 | 19 | 1
395 | 20 | 19 | 1
396 | 20 | 19 | 1
397 | 20 | 22 | 2
398 | 20 | 19 | 1
399 | 20 | 18 | 2
400 | 20 | 17 | 3
401 | 20 | 17 | 3
402 | 20 | 17 | 3
403 | 20 | 17 | 3
404 | 20 | 18 | 2
405 | 20 | 17 | 3
406 | 20 | 17 | 3
407 | 20 | 17 | 3
408 | 28 | 27 | 1
409 | 28 | 26 | 2
410 | 28 | 28 | 0
411 | 28 | 27 | 1
412 | 28 | 31 | 3
413 | 28 | 29 | 1
414 | 28 | 30 | 2
415 | 28 | 26 | 2
416 | 28 | 27 | 1
417 | 28 | 29 | 1
418 | 30 | 29 | 1
419 | 30 | 29 | 1
420 | 30 | 29 | 1
421 | 30 | 28 | 2
422 | 30 | 29 | 1
423 | 30 | 26 | 4
424 | 30 | 30 | 0
425 | 30 | 29 | 1
426 | 30 | 28 | 2
427 | 30 | 29 | 1
428 | 30 | 29 | 1
429 | 30 | 29 | 1
430 | 30 | 28 | 2
431 | 30 | 26 | 4
432 | 30 | 28 | 2
433 | 30 | 29 | 1
434 | 30 | 28 | 2
435 | 30 | 28 | 2
436 | 30 | 28 | 2
437 | 30 | 26 | 4
438 | 30 | 26 | 4
439 | 30 | 28 | 2
440 | 30 | 28 | 2
441 | 30 | 27 | 3
442 | 30 | 28 | 2
443 | 30 | 26 | 4
444 | 30 | 27 | 3
445 | 30 | 27 | 3
446 | 30 | 28 | 2
447 | 30 | 28 | 2
448 | 30 | 29 | 1
449 | 30 | 27 | 3
450 | 30 | 28 | 2
451 | 30 | 28 | 2
452 | 30 | 28 | 2
453 | 30 | 26 | 4
454 | 30 | 28 | 2
455 | 30 | 28 | 2
456 | 30 | 27 | 3
457 | 30 | 27 | 3
458 | 35 | 33 | 2
459 | 35 | 35 | 0
460 | 35 | 32 | 3
461 | 35 | 33 | 2
462 | 35 | 32 | 3
463 | 35 | 31 | 4
464 | 35 | 33 | 2
465 | 35 | 34 | 1
466 | 35 | 33 | 2
467 | 35 | 32 | 3
468 | 35 | 34 | 1
469 | 35 | 34 | 1
470 | 35 | 33 | 2
471 | 35 | 32 | 3
472 | 35 | 32 | 3
473 | 35 | 34 | 1
474 | 35 | 29 | 6
475 | 35 | 33 | 2
476 | 35 | 33 | 2
477 | 35 | 33 | 2
478 | 35 | 32 | 3
479 | 35 | 31 | 4
480 | 35 | 31 | 4
481 | 35 | 32 | 3
482 | 35 | 32 | 3
483 | 35 | 27 | 8
484 | 35 | 33 | 2
485 | 35 | 35 | 0
486 | 35 | 35 | 0
487 | 35 | 34 | 1
488 | 35 | 34 | 1
489 | 35 | 28 | 7
490 | 35 | 33 | 2
491 | 35 | 31 | 4
492 | 35 | 33 | 2
493 | 35 | 34 | 1
494 | 35 | 32 | 3
495 | 35 | 33 | 2
496 | 35 | 34 | 1
497 | 35 | 35 | 0

References

  1. Atkin, L.; Bućko, Z.; Conde Montero, E.; Cutting, K.; Moffatt, C.; Probst, A.; Romanelli, M.; Schultz, G.S.; Tettelbach, W. Implementing TIMERS: The race against hard-to-heal wounds. J. Wound Care 2019, 28, S1–S50.
  2. Heyer, K.; Herberger, K.; Protz, K.; Glaeske, G.; Augustin, M. Epidemiology of chronic wounds in Germany: Analysis of statutory health insurance data. Wound Repair Regen. 2016, 24, 434–442.
  3. Olsson, M.; Järbrink, K.; Divakar, U.; Bajpai, R.; Upton, Z.; Schmidtchen, A.; Car, J. The humanistic and economic burden of chronic wounds: A systematic review. Wound Repair Regen. 2019, 27, 114–125.
  4. Nussbaum, S.R.; Carter, M.J.; Fife, C.E.; DaVanzo, J.; Haught, R.; Nusgart, M.; Cartwright, D. An Economic Evaluation of the Impact, Cost, and Medicare Policy Implications of Chronic Nonhealing Wounds. Value Health 2018, 21, 27–32.
  5. Shah, A.; Wollak, C.; Shah, J.B. Wound Measurement Techniques: Comparing the Use of Ruler Method, 2D Imaging and 3D Scanner. J. Am. Coll. Clin. Wound Spec. 2015, 3, 52–57.
  6. Au, Y.; Beland, B.; Anderson, J.A.E.; Sasseville, D.; Wang, S.C. Time-Saving Comparison of Wound Measurement Between the Ruler Method and the Swift Skin and Wound App. J. Cutan. Med. Surg. 2019, 23, 226–228.
  7. Peterson, N.; Stevenson, H.; Sahni, V. Size matters: How accurate is clinical estimation of traumatic wound size? Injury 2014, 45, 232–236.
  8. Rogers, L.C.; Bevilacqua, N.J.; Armstrong, D.G.; Andros, G. Digital Planimetry Results in More Accurate Wound Measurements: A Comparison to Standard Ruler Measurements. J. Diabetes Sci. Technol. 2010, 4, 799–802.
  9. Shetty, R.; Sreekar, H.; Lamba, S.; Gupta, A.K. A novel and accurate technique of photographic wound measurement. Indian J. Plast. Surg. 2012, 45, 425–429.
  10. Lasschuit, J.W.J.; Featherston, J.; Tonks, K.T.T. Reliability of a Three-Dimensional Wound Camera and Correlation With Routine Ruler Measurement in Diabetes-Related Foot Ulceration. J. Diabetes Sci. Technol. 2020, 15, 1361–1367.
  11. Lindborg, L.; Karlsson, M.; Kotorri, A.; Sjöberg, F.; Fredrikson, M.; Haglind, A.; Sjöberg, Z.; Elmasry, M. Enhancing Precision and Usability in Wound Size Determination Using Artificial Intelligence with Angular Adjustment. J. Wound Care. (Under review).
  12. Wound Moulage Venous Leg Ulcer, Small, Granulation Phase | PW002B. Erler-Zimmer. Available online: https://erler-zimmer.de/en/Wound-moulage-venous-leg-ulcer-small-granulation-phase/PW002B (accessed on 17 September 2024).
  13. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597.
  14. Lu, H.; She, U.; Tie, J.; Xu, S. Half-UNet: A Simplified U-Net Architecture for Medical Image Segmentation. Front. Neuroinform. 2022, 16, 911679.
  15. Chino, D.Y.T.; Scabora, L.C.; Cazzolato, M.T.; Jorge, A.E.S.; Traina, C., Jr.; Traina, A.J.M. Segmenting skin ulcers and measuring the wound area using deep convolutional networks. Comput. Methods Programs Biomed. 2020, 191, 105376.
  16. Lucas, C.; Classen, J.; Harrison, D.; De, H. Pressure ulcer surface area measurement using instant full-scale photography and transparency tracings. Adv. Skin Wound Care 2002, 15, 17–23.
  17. Kantor, J.; Margolis, D.J. A multicentre study of percentage change in venous leg ulcer area as a prognostic index of healing at 24 weeks. Br. J. Dermatol. 2000, 142, 960–964.
  18. Sánchez-Jiménez, D.; Buchón-Moragues, F.F.; Escutia-Muñoz, B.; Botella-Estrada, R. SfM-3DULC: Reliability of a new 3D wound measurement procedure and its accuracy in projected area. Int. Wound J. 2022, 19, 44–51.
  19. Sen, C.K. Human Wound and Its Burden: Updated 2020 Compendium of Estimates. Adv. Wound Care 2021, 10, 281–292.
  20. Wu, Y.; Wu, L.; Yu, M. The clinical value of intelligent wound measurement devices in patients with chronic wounds: A scoping review. Int. Wound J. 2024, 21, e14843.
  21. Virginia, M. Monitoring Wound Healing With Contactless Measurements and Augmented Reality. IEEE J. Transl. Eng. Health Med. 2020, 8, 2700412.
  22. Mukherjee, R.; Manohar, D.D.; Das, D.K.; Achar, A.; Mitra, A.; Chakraborty, C. Automated Tissue Classification Framework for Reproducible Chronic Wound Assessment. BioMed Res. Int. 2014, 2014, 851582.
  23. Morgado, A.C.; Carvalho, R.; Sampaio, A.F.; Vasconcelos, M.J.M. Enhancing chronic wound assessment through agreement analysis and tissue segmentation. Sci. Rep. 2025, 15, 22244.
  24. Zoppo, G.; Marrone, F.; Pittarello, M.; Farina, M.; Uberti, A.; Demarchi, D.; Secco, J.; Corinto, F.; Ricci, E. AI technology for remote clinical assessment and monitoring. J. Wound Care 2020, 29, 692–706.
  25. Fauzi, M.F.A.; Khansa, I.; Catignani, K.; Gordillo, G.; Sen, C.K.; Gurcan, M.N. Computerized segmentation and measurement of chronic wound images. Comput. Biol. Med. 2015, 60, 74–85.
  26. Mohammed, H.T.; Bestavros, S.; Mohsen, S.; Liu, Z.; Wang, S.; Allport, J.; Cassata, A.; Fraser, R.D.J. Assessing Clinician Consistency in Wound Tissue Classification and the Value of AI-Assisted Quantification: A Cross-Sectional Study. Int. Wound J. 2025, 22, e70691.
  27. Ramachandram, D.; Ramirez-GarciaLuna, J.L.; Fraser, R.D.J.; Martínez-Jiménez, M.A.; Arriaga-Caballero, J.E.; Allport, J. Fully Automated Wound Tissue Segmentation Using Deep Learning on Mobile Devices: Cohort Study. JMIR mHealth uHealth 2022, 10, e36977.
  28. Bates-Jensen, B.M.; McCreath, H.; Patlan, A.; Harputlu, D. Reliability of the Bates-Jensen Wound Assessment Tool (BWAT) for Pressure Injury Assessment: The Pressure Ulcer Detection Study. Wound Repair Regen. 2019, 27, 386–395.
Figure 1. Display of a wound image in the SeeWound©2 application.
Figure 2. U-Net Segmentation 1 and 2 with depth calculation.
Figure 3. In vitro model. Artificial flat wound model used between adjustable cubes (see Figure 4).
Figure 4. Schematic of the in vitro model: two cubes that change the wound opening and wound edge angle. The flat wound model (see Figure 3) is placed in the wound opening above.
Figure 5. Schematic figure of detected wound contours and calculation of deepest point by SeeWound 2. Example made on in vitro model; see above for clarity.
Figure 6. Schematic illustration of a U-Net architecture.
Figure 7. Classification of tissue, pixel by pixel on test set (training metrics over epochs) (a) and validation metrics over epochs (b).
Figure 8. SeeWound’s accuracy and precision in depth measurements on wounds with 40 degrees or greater wound edge steepness.
Figure 9. Conceptual wound edge challenges, for wound depth determinations, i.e., hidden wound parts and narrow wound edges (schematic drawing).
Table 1. Measurements per device, divided equally between SeeWound© 1 and 2.
Device | Mean Size (cm2) | Standard Deviation | Delta from Total Average | Mean Size SeeWound© 1 (cm2) | SeeWound© 1 Std (cm2) | Mean Size SeeWound© 2 (cm2) | SeeWound© 2 Std (cm2)
Samsung Galaxy Note 9 | 17.05 | 0.235324117 | 98.9% | 17.00 | 0.20 | 17.10 | 0.26
Samsung Galaxy Xcover 4S | 17.15 | 0.307064098 | 99.5% | 16.95 | 0.24 | 17.36 | 0.22
Samsung Galaxy A14 | 17.46 | 0.253832149 | 101.2% | 17.40 | 0.28 | 17.51 | 0.21
iPhone 12 Pro | 17.53 | 0.235730015 | 101.6% | 17.51 | 0.32 | 17.54 | 0.31
iPhone 13 Pro | 17.12 | 0.318174543 | 99.3% | 17.16 | 0.33 | 17.07 | 0.25
iPad Pro (2gen) | 17.16 | 0.281355576 | 99.5% | 17.08 | 0.23 | 17.25 | 0.30
Std: standard deviation.
Table 2. Accuracy assessment.
Patient | SeeWound 2 (cm2) | True Size (cm2) | True Size 2 (cm2) | Absolute Error | Absolute Error in %
1 | 5.45 | 5.66 | 5.68 | 0.21 | 0.039
2 | 6.51 | 6.37 | 6.48 | 0.14 | 0.022
3 | 8.5 | 8.66 | 8.53 | 0.16 | 0.019
4 | 3.83 | 3.85 | 4.14 | 0.02 | 0.005
5 | 7.91 | 8.21 | 8.6 | 0.3 | 0.038
6 | 14.5 | 14.33 | 14.52 | 0.17 | 0.012
7 | 0.31 | 0.27 | 0.33 | 0.04 | 0.129
8 | 13.65 | 14.58 | 14.29 | 0.93 | 0.068
9 | 1.95 | 1.99 | 2.13 | 0.04 | 0.021
10 | 27.25 | 27.79 | 28.41 | 0.54 | 0.020
11 | 12.5 | 12.37 | 12.79 | 0.13 | 0.010
Accuracy: 96.53%
Std: 0.036
Standard error of the mean: 0.010819869
Table 3. Precision assessment.
Patient | SeeWound 2 Measure 1 (cm2) | SeeWound 2 Measure 2 (cm2) | Diff (cm2) | Reproducibility | CV
1 | 7.98 | 8.58 | 0.6 | 0.075 | 0.051
2 | 1.79 | 2.16 | 0.37 | 0.207 | 0.132
3 | 4.66 | 4.6 | 0.06 | 0.013 | 0.009
4 | 15.72 | 15.32 | 0.4 | 0.026 | 0.018
5 | 11.51 | 11.25 | 0.26 | 0.023 | 0.016
6 | 1.78 | 1.51 | 0.27 | 0.179 | 0.116
7 | 11.85 | 12.11 | 0.26 | 0.022 | 0.015
8 | 1.99 | 2.25 | 0.26 | 0.131 | 0.087
Precision: 91.56% | CV: 5.57%
Std: 0.078
Standard error of the mean: 0.027465
Table 4. Wound depth assessed by the probe and the SeeWound© 2 device.
Patient | SeeWound 2 Depth, Measure 1 (mm) | Measure 2 (mm) | CV SeeWound 2 Depth (%) | Average SeeWound 2 Depth (mm) | Probe Depth (mm) | Absolute Error (mm) | Relative Error (%) | Diff SeeWound 2 Measurements | Precision
1 | 4 | 4 | 0 | 4 | 3 | 1 | 0.25 | 0.00 | 0.00
2 | +/−3 | +/−3 | “True” | “True” | 2 | 2 | 0.00 | 0.00 | 0.00
3 | 5 | 4 | 0.157 | 4.5 | 3 | 1.5 | 0.33 | 1.00 | 0.25
4 | +/−3 | +/−3 | “True” | “True” | 0.2 | 0.2 | 0.00 | 0.00 | 0.00
6 | +/−3 | +/−3 | “True” | “True” | 1.2 | 1.2 | 0.00 | 0.00 | 0.00
7 | +/−3 | +/−3 | “True” | “True” | 2 | 2 | 0.00 | 0.00 | 0.00
8 | +/−3 | +/−3 | “True” | “True” | 0 | 0 | 0.00 | 0.00 | 0.00
9 | −5 | −4 | −0.157 | −4.5 | −4 | 0.5 | −0.11 | 1.00 | 0.25
10 | −4 | −6 | −0.283 | −5 | −4 | 1 | −0.20 | 2.00 | 0.50
11 | 20 | 20 | 0 | 20 | 22 | 2 | 0.10 | 0.00 | 0.00
Accuracy: 96.28% | Precision: 90.00%
CV: 5.56%
Table 5. Binary evaluation of slough and necrosis.
Number of Images | Tissue Type | Correct Predictions | Accuracy [%] | Total Predictions to Class
155 | Other | 138 | 89.0% | 170
302 | Slough | 270 | 89.4% | 287
38 | Necrosis | 33 | 86.8% | 37
Total images: 495
Table 6. Accuracy (SeeWound© 2) on slough and necrosis.
Patient | AI Prediction (SeeWound 2) Slough % | AI Necrosis % | Digital Planimetry (Physician) Slough % | Planimetry Necrosis % | Accuracy (Slough) | Accuracy (Necrosis)
1 | 32.15 | 28.99 | 39.17 | 51.14 | 0.82 | 0.57
2 | 59.23 | 18.85 | 70.09 | 20.77 | 0.85 | 0.91
3 | 41.28 | 1.43 | 50.95 | 2.69 | 0.81 | 0.53
5 | 66.64 | 0.00 | 83.46 | 0.00 | 0.80 | 1.00
7 | 34.94 | 0.00 | 39.68 | 0.00 | 0.88 | 1.00
8 | 52.52 | 0.03 | 46.28 | 8.17 | 0.87 | 0.00
9 | 32.34 | 0.00 | 33.60 | 0.00 | 0.96 | 1.00
10 | 58.36 | 16.86 | 62.77 | 27.10 | 0.93 | 0.62
11 | 19.48 | 0.62 | 31.96 | 0.00 | 0.61 | 1.00
12 | 39.85 | 52.61 | 14.20 | 66.50 | 0.81 | 0.79
13 | 30.03 | 0.00 | 24.07 | 0.00 | 0.75 | 1.00
14 | 38.03 | 32.79 | 36.10 | 42.01 | 0.95 | 0.78
15 | 51.67 | 0.35 | 50.81 | 3.14 | 0.98 | 0.11
17 | 20.20 | 21.63 | 28.01 | 46.42 | 0.72 | 0.47
18 | 6.07 | 62.16 | 7.22 | 74.70 | 0.84 | 0.83
19 | 31.58 | 7.61 | 43.57 | 11.12 | 0.72 | 0.68
20 | 68.39 | 0.00 | 77.79 | 0.00 | 0.88 | 1.00
22 | 60.07 | 15.99 | 76.19 | 15.78 | 0.79 | 0.99
24 | 20.47 | 23.21 | 71.73 | 4.51 | 0.29 | 3.15
25 | 47.98 | 41.48 | 17.07 | 80.31 | 0.81 | 0.52
Accuracy: 80.30% (slough) | 84.73% (necrosis)
Table 7. Precision (SeeWound© 2) on slough and necrosis.
Patient | AI Slough | AI Necrosis | Digital Planimetry Slough | Digital Planimetry Necrosis
1 | 59.23 | 18.85 | 70.09 | 20.77
1 | 58.36 | 16.86 | 62.77 | 27.10
1 | 60.07 | 15.99 | 76.19 | 15.78
2 | 70.08 | 0.37 | 80.35 | 3.61
2 | 69.56 | 0.45 | 91.10 | 3.15
3 | 66.64 | 0.00 | 83.46 | 0.00
3 | 68.08 | 0.00 | 87.73 | 0.00
4 | 32.15 | 28.99 | 39.17 | 51.14
4 | 30.55 | 28.57 | 54.35 | 37.73
5 | 10.73 | 0.00 | 23.65 | 0.00
5 | 10.72 | 0.00 | 27.49 | 0.00
6 | 33.29 | 0 | 33.6 | 0
6 | 32.34 | 0 | 32.41 | 0
7 | 39.48 | 31.65 | 36.1 | 42.01
7 | 38.03 | 32.79 | 32.88 | 45.94
8 | 51.87 | 0.87 | 50.81 | 3.14
8 | 51.67 | 0.35 | 49.89 | 3.38
9 | 4.46 | 64.4 | 7.22 | 74.7
9 | 6.07 | 62.16 | 6 | 67.09
10 | 30.19 | 0 | 55.58 | 0
10 | 27.54 | 0 | 53.74 | 0
11 | 0 | 0 | 0 | 0
11 | 0 | 0 | 0 | 0
12 | 20.44 | 0 | 35.44 | 0
12 | 18.99 | 0 | 36.36 | 0
13 | 18.61 | 11.29 | 51.64 | 17.26
13 | 20.98 | 8.51 | 46.27 | 23.33
Precision: 93.51% | 82.35% | 88.61% | 85.74%
Average CV: 4.15% | 8.34% | 7.00% | 7.54%
