Article

Automatic 3D Postoperative Evaluation of Complex Orthopaedic Interventions

by Joëlle Ackermann 1,2,*, Armando Hoch 3, Jess Gerrit Snedeker 2,3, Patrick Oliver Zingg 3, Hooman Esfandiari 1 and Philipp Fürnstahl 1
1 Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
2 Laboratory for Orthopaedic Biomechanics, ETH Zurich, 8093 Zurich, Switzerland
3 Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
* Author to whom correspondence should be addressed.
J. Imaging 2023, 9(9), 180; https://doi.org/10.3390/jimaging9090180
Submission received: 19 July 2023 / Revised: 21 August 2023 / Accepted: 27 August 2023 / Published: 31 August 2023
(This article belongs to the Section Medical Imaging)

Abstract:
In clinical practice, image-based postoperative evaluation is still performed without state-of-the-art computer methods, as these are not sufficiently automated. In this study, we propose a fully automatic 3D postoperative outcome quantification method for the relevant steps of orthopaedic interventions, using the Periacetabular Osteotomy of Ganz (PAO) as an example. A typical orthopaedic intervention involves cutting bone, anatomy manipulation and repositioning, as well as implant placement. Our method includes a segmentation-based deep learning approach for the detection and quantification of the cuts. Furthermore, anatomy repositioning was quantified through a multi-step registration method, which entailed a coarse alignment of the pre- and postoperative CT images followed by a fine fragment alignment of the repositioned anatomy. Implant (i.e., screw) position was identified by a 3D Hough transform for line detection combined with fast voxel traversal based on ray tracing. The feasibility of our approach was investigated on 27 interventions and compared against manually performed 3D outcome evaluations. The results show that our method can accurately assess the quality and accuracy of the surgery. Our evaluation of the fragment repositioning showed a cumulative error for the coarse and fine alignment of 2.1 mm. Our evaluation of screw placement accuracy resulted in a distance error of 1.32 mm for the screw head location and an angular deviation of 1.1° for the screw axis. As a next step, we will explore the generalisation capabilities of the method by applying it to different interventions.

1. Introduction

The rapid technological advancements in recent years and their increased adoption in medical fields such as orthopaedic surgery have led to numerous innovations in areas such as diagnosis [1,2], surgical robotics [3] and intraoperative navigation [4,5,6,7,8,9,10,11,12,13,14]. Since three-dimensional (3D) preoperative planning is a fundamental requirement for surgical navigation systems and surgical robotics, a significant amount of research has been dedicated to automating the planning process [15,16,17,18,19,20,21,22]. Analysing whether the preoperative plan was implemented successfully during the intervention is of equal importance. However, due to the lack of automated methods, 3D postoperative outcome evaluation is not yet used in clinical practice. State-of-the-art postoperative evaluation is based on 2D imaging and patient-related outcome measures. Although 3D outcome evaluation is a particularly powerful tool, it is technically demanding and can take up to 6 h due to the lack of adequate automatic methods [23].
We propose a fully automatic method for 3D postoperative quantification of the most common steps of an orthopaedic intervention: A: Cutting Bone (i.e., performing an osteotomy), B: Anatomy Manipulation and Repositioning, and C: Implant Placement (e.g., screws, plates, prostheses). To investigate the feasibility of the proposed approach, the Periacetabular Osteotomy of Ganz (PAO) [24], one of the most complex orthopaedic interventions, was used as the target intervention in this study, as it includes all of the aforementioned surgical steps (Figure 1).
PAO is a hip surgery typically performed in young patients who suffer from residual hip dysplasia. Residual hip dysplasia is characterised by insufficient acetabular coverage of the femoral head, causing hip pain and possibly an early onset of osteoarthritis. The PAO involves four pelvic osteotomies, namely the supra- (1), retroacetabular (2), ischial (3) and pubic (4) osteotomy, which separate the acetabular fragment from the remaining pelvic bone (see Figure 1(A1–A4), respectively). The mobile fragment (Figure 1, in blue) is then rotated to its new position and fixated using 3 to 4 screws. Postoperative evaluation after PAO therefore entails quantifying all four osteotomy planes, each represented by a 3D point and a 3D normal vector (Figure 1A); determining the 4 × 4 transformation matrix encoding the 3D orientation and position of the fragment (Figure 1B); and, lastly, determining the screw positions, each represented as a 3D point and a 3D normal vector (Figure 1C). Although PAO is a complex intervention, conventional postoperative outcome evaluation is mainly limited to two radiographic (2D) parameters: the lateral center-edge angle (LCEA) of Wiberg and the acetabular index (AI) angle of Tönnis [25]. An LCEA of 23°–33° and an AI angle of 2°–14° are typically considered healthy [26,27]. Figure 2 shows a postoperative X-ray after PAO (left hip), with an LCEA of 26.6° and an AI angle of 13.4° on the healthy side.
Besides Hoch et al. [23], different 3D approaches for postoperative outcome evaluation have been presented, but none were sufficiently comprehensive and automated to a level where they could be used in clinical practice. Manual 3D outcome evaluation was extensively used in studies on computer-assisted deformity correction to assess bone, implant and osteotomy parameters in the wrist [28,29], forearm [30,31,32], shoulder [33,34], knee [35,36,37,38,39] and foot [40,41,42]. Besides these manual approaches, automatic methods have also been developed. Murphy et al. [4] published a clinical evaluation of a biomechanical guidance system for PAO in 2016, in which they report preoperative planning, intraoperative navigation as well as the postoperative evaluation of the 3D acetabular realignment. To evaluate their system, the authors aligned pre- and postoperative CT scans through image registration using normalised mutual information (NMI) as the metric. Kyo et al. [43] evaluated implant orientation in postoperative CT images after total hip arthroplasty by overlaying a 3D model of the implant on a postoperative CT image of the implant. In 2022, Gubian et al. [44] evaluated CT-navigated pedicle screw placement by comparing the preoperative trajectory plan with the corresponding postoperative screw position, determined by manual segmentation of the postoperative CT. Uozumi et al. [45] proposed an automatic 3D evaluation of screw placement after anterior cruciate ligament reconstruction using multidetector CT images. Their method consisted of thresholding the images to isolate the screw voxels, based on which the centers of mass of the screws were found. The corresponding screw center lines were detected through Principal Component Analysis (PCA) [46]. In 2018, Esfandiari et al. [47] reported a deep learning based technique for screw position assessment on intraoperative X-rays of the spine anatomy. First, a convolutional neural network was utilised to classify every pixel into three distinct classes, namely screw head, screw shaft and background. Subsequently, a skeletonisation algorithm was applied to extract the central axis. To date, no studies have been reported in the literature regarding the quantification of osteotomies, but the field of automatic fracture detection attempts to solve a similar problem. Several studies have investigated fracture detection using deep learning [48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65]. Studies on fracture detection have mainly focused on the classification task (fracture/no fracture). In 2018, Tomita et al. [62] published a pre-screening system to improve the diagnosis of osteoporotic vertebral fractures (OVFs) on chest, abdomen and pelvis CTs, flagging suspicious cases prior to review by radiologists. Fractures were localised by predicting bounding boxes around the fracture area. Lindsey et al. [59] developed a deep learning approach for fracture detection and localisation on wrist radiographs. The model had two outputs, a binary classification for fracture detection and a probability map (heat map) showing the confidence at each pixel location for it to be part of a given fracture. They found that the detection of wrist fractures by clinicians improved significantly when provided with the assistance of the trained model [59]. The localisation of fractures was not quantified, but served as an assistance for clinicians to find a potential fracture. In 2022, Joshi et al. [48] presented the first fracture detection and localisation method on wrist radiographs which, in addition to detection and localisation, also provided a segmentation mask of the fractures using instance segmentation with a modified version of the Mask R-CNN [66] architecture.
To the best of our knowledge, the approach presented in this study is the first fully automatic method for 3D postoperative outcome evaluation. Our technical contributions are:
  • The first fully automatic 3D measurement method of bone cut accuracy is presented.
  • Thanks to our cut detection method, our combined segmentation and registration approach measures anatomy manipulation and repositioning automatically and accurately, even in the presence of bone in-growth and callus.
  • Lastly, an accurate and fully automatic 3D screw placement quantification method is presented.
Our approach was evaluated on 27 PAO interventions and compared against a manually performed 3D outcome evaluation method [23].

2. Materials and Methods

In the following, we first provide details on our data collection protocol in Section 2.1, describe the manual 3D postoperative evaluation in Section 2.2 and then present the proposed automatic outcome evaluation method in Section 2.3. Section 2.3 is organised into three subsections: osteotomy detection and quantification is discussed in Section 2.3.1, quantification of anatomy repositioning in Section 2.3.2 and implant quantification in Section 2.3.3. The following mathematical notations are used throughout this article: the global coordinate systems of the pre- and postoperative CT images are denoted as $CT_{pre}$ and $CT_{post}$, respectively. The local coordinate systems of the cropped pre- and postoperative CT images are denoted as $F_{pre}$ and $F_{post}$, respectively. A transformation from $CT_{pre}$ to $CT_{post}$ is denoted as $^{CT_{post}}T_{CT_{pre}}$. The relative repositioning of the fragment is described by the transformation from $F_{pre}$ to $F_{post}$ and denoted as $^{F_{post}}T_{F_{pre}}$.

2.1. Patient Selection and Imaging

This study included 27 patients (9 male, 18 female) who underwent PAO in our institution between March 2018 and May 2020. The study was approved by the responsible ethics committee (approval number: BASEC-Nr. 2018-01921). The mean age of the subjects was 25 years (range, 14–33 years). Fourteen patients underwent surgery of the right hip joint and 13 of the left. Three patients had already undergone PAO on the contralateral side prior to this study. For two patients, both hips were included, since their PAO interventions on both sides were performed during the mentioned time frame. Exclusion criteria were previous hip surgery other than PAO (i.e., total hip replacement) and a data set that was incomplete or did not comply with the CT imaging protocol. For all patients, pre- and postoperative computed tomography (CT) scans of the pelvis were acquired according to a standard protocol of the radiology department of Balgrist University Hospital. The radiographic assessment was performed preoperatively and 15.2 ± 3.4 weeks postoperatively using a 64-detector row Somatom Edge CT® device (Siemens, Erlangen, Germany). The slice thickness was 1.0 mm and the in-plane (x–y) resolution was 0.4 × 0.4 mm. The images were resampled to a shape of [128,128,128] for all steps that involved deep learning. After resampling, the full pelvis CT images had a voxel size of 2.86 mm and the cropped images had a voxel size of 1 mm. 3D models of the pelves were extracted using the global thresholding and region growing functionalities of a commercial segmentation software (Mimics Medical, Materialise NV, Leuven, Belgium) [8,34].
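For reference, this kind of fixed-size resampling can be reproduced with SimpleITK; the snippet below is a minimal sketch assuming linear interpolation and an unchanged physical extent (our own preprocessing code may differ in these details).

```python
import SimpleITK as sitk

def resample_to_shape(image, target_shape=(128, 128, 128)):
    """Resample a CT volume to a fixed voxel grid while keeping its physical extent."""
    new_spacing = [sz * sp / t for sz, sp, t in
                   zip(image.GetSize(), image.GetSpacing(), target_shape)]
    return sitk.Resample(
        image,
        target_shape,
        sitk.Transform(),        # identity transform
        sitk.sitkLinear,         # linear interpolation of intensities
        image.GetOrigin(),
        new_spacing,
        image.GetDirection(),
        0,                       # default value for voxels outside the input
        image.GetPixelID(),
    )

# Example: ct128 = resample_to_shape(sitk.ReadImage("postop_ct.nii.gz"))
```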

2.2. Manual 3D Postoperative Evaluation

Manual 3D postoperative evaluation is based on a pipeline that involves numerous manual steps. Using PAO as an example, the overall process can be summarised as follows. The segmentation of the bone models was performed for both the pre- and postoperative CT images. Segmentation of the postoperative CT images still relies on extensive manual refinement, as the presence of metal artifacts makes automatic segmentation methods less effective [67]. The 3D assessment of the osteotomies is the most challenging and inaccurate part due to the formation of callus resulting from the bone healing process. Callus is a visible irregularity on postoperative bone, similar to scarring in soft tissue. For each of the four osteotomies, a plane object was placed on the postoperative bone model at the location of the callus. Its position on the bone model was verified by viewing the model from multiple perspectives. Once the position of each osteotomy plane was determined, the cuts were simulated on the 3D postoperative bone model to free the mobile fragment in silico. To identify the intraoperatively performed anatomy repositioning (Figure 1B), the mobile fragment was aligned with the preoperative bone model using surface registration (ICP) [68] followed by manual fine-tuning. Finally, to determine the positions of the screws, two points were manually selected on each segmented screw model, one at the head and one at the tip of the screw.

2.3. Computer-Assisted 3D Postoperative Evaluation

Figure 3 provides a high-level overview of our method, where A describes osteotomy detection and quantification, B shows the quantification of anatomy repositioning and C illustrates the three steps towards implant quantification. In A, we trained a network to specifically identify all voxels belonging to the cut region of each osteotomy (Section 2.3.1). A 3D plane was then fitted to each segmented cut region to quantify the osteotomy. In B, anatomy repositioning is quantified by two consecutive registrations: first a coarse alignment between the pre- and postoperative CT, followed by a registration of the fragment from the post- to the preoperative position. We used two registration masks to specify the region of interest. The first registration mask, for the pre–post alignment, was inferred by a bone segmentation network that segments the full pelvis (A.1). Based on the plane positions predicted in A, the second registration mask, for the fragment alignment, was created (A.2) by isolating the area of the acetabular fragment in the full pelvis segmentation (A.1). The screw locations were determined in three steps, illustrated in C. First, the postoperative CT was thresholded to obtain point clouds of the screws. Second, the screw center lines were determined by applying the Hough transform [69]. The last step, finding the screw head locations, was achieved by fast voxel traversal for ray tracing [70]. In the following sections, the implementation details are reported.

2.3.1. Osteotomy Detection and Quantification

The basis for calculating the planes was a pixel-wise identification of the osteotomy areas. For this purpose, a 3D multi-label segmentation network was used. The input to the network were the postoperative CT images, which were pre-processed by normalisation, cropping around the hip joint such that the full fragment was included, and resampling to [128,128,128]. The output tensor was of shape [128,128,128,5], consisting of the background channel ($i = 0$) and one channel per osteotomy ($i \in [1,4]$), where channel $i$ corresponds to the $i$th osteotomy plane of Figure 1A. The network had a 3D U-Net [71] structure consisting of five blocks of convolutional layers with 3 × 3 × 3 filters, each followed by a max-pooling layer; the first convolutional block used 16 filters and the number of filters was doubled for each subsequent block. The network was trained for 40 epochs using Adam [72] as the optimizer. The activation function for all convolutional layers was leaky ReLU, except for the last layer, which used softmax. The training data were manually annotated using pixel-wise segmentation and included the bony areas identified as part of the cut, based on the callus formation. For epochs 0 to 10, the learning rate was $1 \times 10^{-4}$; afterwards, it was changed to $1 \times 10^{-5}$. The 27 postoperative images were augmented offline, resulting in 405 input images. Augmentation consisted of at least one or a combination of vertical flipping, translation and rotation. To train this network, a weighted categorical cross-entropy loss $L_{WCCE}$ was used:
$$L_{WCCE} = -\sum_{i=1}^{N} t_i \cdot \log(p_i) \cdot w_i$$
where $N$ denotes the number of classes, $t_i$ the one-hot ground-truth label, $p_i$ the predicted probability and $w_i$ the weight of class $i$. The weights were empirically determined to be $w = [10, 270, 260, 270, 260]$.
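For clarity, the loss can be written compactly in code. The NumPy sketch below assumes one-hot ground-truth tensors and softmax network outputs; it mirrors the formula above rather than our exact training implementation.

```python
import numpy as np

def weighted_categorical_crossentropy(y_true, y_pred, weights, eps=1e-7):
    """Weighted categorical cross-entropy L_WCCE.
    y_true: one-hot labels (..., N); y_pred: softmax probabilities (..., N);
    weights: per-class weights w of shape (N,)."""
    y_pred = np.clip(y_pred, eps, 1.0)                       # avoid log(0)
    per_voxel = -np.sum(y_true * np.log(y_pred) * weights, axis=-1)
    return per_voxel.mean()

# Class weights used in this study (background, osteotomies 1-4):
w = np.array([10, 270, 260, 270, 260], dtype=float)
```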
The segmentation obtained from the network served as the basis for plane fitting, which was performed for each identified cut region using principal component analysis (PCA) [46], taking the eigenvector with the smallest eigenvalue as the plane normal and the center of mass as the plane center.
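A minimal sketch of this PCA plane fit is given below; it assumes the voxels of one segmented cut region are available as an (N, 3) coordinate array.

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to a 3D point cloud.
    Returns the center of mass and the unit normal, i.e. the eigenvector of the
    covariance matrix belonging to the smallest eigenvalue."""
    center = points.mean(axis=0)
    cov = np.cov((points - center).T)          # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    normal = eigvecs[:, 0]                     # direction of least variance
    return center, normal / np.linalg.norm(normal)
```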

2.3.2. Quantification of Anatomy Repositioning

To quantify anatomy repositioning, we propose a masked multi-step registration approach which entailed a coarse alignment of the pre- and postoperative CT images followed by a fine fragment alignment of the repositioned anatomy.
Coarse alignment: The first registration calculates the transformation $^{CT_{post}}T_{CT_{pre}}$ used to superimpose the pre- with the postoperative CT images (Figure 3B, Pre-Post Alignment). To this end, the pelvis bone was utilised as a common reference between the pre- and postoperative CT images and used as the registration mask for the coarse alignment (Figure 3(A.1)). The mask $M_{coarse}$ was obtained by applying deep learning segmentation using the same network architecture as for the osteotomy detection described in Section 2.3.1, although the learning rates, input/output images, activation function and loss function differed. For epochs 1 to 20, the learning rate was set to $1 \times 10^{-4}$, for epochs 20 to 30 it was $1 \times 10^{-5}$ and between 30 and 40 it was $1 \times 10^{-6}$. Input and output size were both [128,128,128] and sigmoid was used as the activation function for the final layer. The network was trained for 40 epochs. To make the network robust against the presence of implants and metal artefacts, the dataset consisted not only of 25 preoperative CTs but also of 27 postoperative CTs with implants. We augmented them offline to a total of 520 images by randomly applying either one or a combination of vertical flipping, rotation and translation to the images. The Dice-CE loss [73] $L_{DCE}$ was used as the loss function and was defined as:
$$L_{DCE} = (1 - \alpha) \cdot L_{CE} + \alpha \cdot L_{Dice}$$
where $\alpha = 0.5$, $L_{CE}$ is the cross-entropy loss [74] and $L_{Dice}$ is the Dice loss [73]. We used the ITK toolbox [75,76] for implementing the coarse and fine registration algorithms to obtain $^{CT_{post}}T_{CT_{pre}}$ and $^{F_{post}}T_{F_{pre}}$, respectively. We used normalised correlation [77] as the image similarity metric, Regular Step Gradient Descent [78] as the optimizer, and assumed a rigid-body transformation with 6 degrees of freedom as the underlying transformation. The hyperparameters for the coarse registration were the following: number of iterations = 200, translation scale = 1/2000, rotation scale = 1, maximum step length = 1 and minimum step length = 0.001.
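The coarse registration setup can be approximated with SimpleITK as sketched below. This is a simplified illustration, not our exact ITK configuration: the mapping of the maximum step length to the optimizer learning rate and the use of per-parameter optimizer scales for the rotation/translation scales are assumptions.

```python
import SimpleITK as sitk

def coarse_rigid_registration(fixed_ct, moving_ct, pelvis_mask):
    """Rigid (6-DOF) registration of the postoperative onto the preoperative CT,
    with the metric evaluated only inside the pelvis mask M_coarse."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsCorrelation()                       # normalised correlation metric
    reg.SetMetricFixedMask(pelvis_mask)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0,                              # maximum step length = 1
        minStep=0.001,                                 # minimum step length = 0.001
        numberOfIterations=200)
    # Rotation scale = 1, translation scale = 1/2000 (Euler3DTransform parameters:
    # three rotations followed by three translations).
    reg.SetOptimizerScales([1.0, 1.0, 1.0, 1 / 2000, 1 / 2000, 1 / 2000])
    initial = sitk.CenteredTransformInitializer(
        fixed_ct, moving_ct, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed_ct, moving_ct)            # transform aligning moving to fixed
```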
Fine alignment: The goal of the fine registration was to obtain the transformation $^{F_{post}}T_{F_{pre}}$ of the isolated bone fragment between the pre- and postoperative position. A second registration mask was used to isolate the fragment and constrain the registration process (Figure 3(A.2)). This was achieved by first finding an approximation of the joint center $C$ by computing the mean of all four plane centers $P_j$, $j \in [1,4]$, and then ensuring that the plane normals $N_j$ were pointing towards the approximated joint center $C$. We defined a voxel $V_i$ in $M_{coarse}$ as part of the fragment if, for all planes, the dot product $\overrightarrow{P_j V_i} \cdot N_j > 0$, where $\overrightarrow{P_j V_i}$ is the vector pointing from the plane center $P_j$ to the voxel $V_i$. The final fragment alignment is expressed by the 3 degrees of freedom (DOF) rotation and 3 DOF translation encoded in the transformation $^{F_{post}}T_{F_{pre}}$ obtained from the fragment alignment registration (Figure 3B, Fragment Alignment). The hyperparameters for the fine registration were the following: number of iterations = 200, translation scale = 1/500, rotation scale = 1, maximum step length = 0.7 and minimum step length = 0.0001.
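The fragment isolation rule translates directly into code; the sketch below assumes the foreground voxel coordinates of $M_{coarse}$, the four plane centers $P_j$ and the joint-center-oriented normals $N_j$ are given as NumPy arrays.

```python
import numpy as np

def fragment_mask(voxel_coords, plane_centers, plane_normals):
    """Keep voxels lying on the acetabular side of all four osteotomy planes.
    voxel_coords: (V, 3) foreground voxel coordinates of M_coarse;
    plane_centers, plane_normals: (4, 3), normals oriented towards the joint center."""
    keep = np.ones(len(voxel_coords), dtype=bool)
    for P, N in zip(plane_centers, plane_normals):
        keep &= (voxel_coords - P) @ N > 0     # dot(vector P_j -> V_i, N_j) > 0
    return keep

# Orienting the normals: C = plane_centers.mean(axis=0);
# flip N_j whenever (C - P_j) @ N_j < 0, so that all normals point towards C.
```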

2.3.3. Implant Quantification

In PAO, multiple screws are implanted in close proximity to each other. Isolating each individual screw in the segmented screw mask obtained from the CT images is therefore particularly challenging. Our method of implant quantification is based on the identification of simple geometric shape features of the implants, which allow us to subsequently determine their position. In our case, the shape features are lines corresponding to the screw threads. For other implants such as osteosynthesis plates or prostheses, the shape features would be circles or spheres corresponding to the plate holes or prosthesis heads, respectively. Breaking down implant quantification to simple shape features makes the Hough transform the algorithm of choice. To quantify the screw location, the center line and entry point for each screw were determined in three steps (Figure 4). For center line detection, we followed the Hough transform implementation for 3D line detection on 3D point clouds published by Dalitz et al. [69] (Figure 4(2a)). Our input were the point clouds of the screws, which we obtained by thresholding the postoperative CTs at Hounsfield units (HU) > 2500 to find the region corresponding to metal implants (Figure 4(1)). The method was applied for each patient individually. Each point cloud included 4 to 6 screws, depending on the patient case (Figure 4(2b)). In the following, we briefly summarise their method: the 3D Hough transform is applied to transform each point cloud into a voting array in the parameter space [79]. A common approach to find the object in this voting array, in our case a 3D line, is to search for local maxima, also known as “non-maximum suppression” [80], which can lead to the prediction of many nearby lines. To avoid this, the Hough transform is applied iteratively, while the points of detected lines are removed after each iteration. The algorithm can be adjusted for best predictions by setting the following three parameters: (1) $n_{lines}$, the maximum number of lines to be detected; (2) $min_{votes}$, the minimum vote count for a line to be detected; and (3) $dx$, the xy step width. Optimal results for our data set were found using the following experimentally determined parameters: $n_{lines} = 6$, $min_{votes} = 50$ and $dx = 3$. With these settings, we achieved an accurate line prediction per screw (Figure 4(2a), directions and center points in pink). The algorithm returns the detected lines in a list including the following information: the number of points that have been assigned to the line, their center of mass and the line direction. In the third step, the screw entry points were identified using a fast voxel traversal method based on ray tracing [70] (Figure 4(3)). The center of mass per screw and the corresponding line direction, found through the Hough transform, served as the starting point and direction for the ray tracing algorithm within the thresholded segmentation masks. The anatomical coordinate system of the CT was used to ensure that the line direction was consistently pointing towards the screw head for all screws and patients. The ray travels through all foreground voxels along the predefined direction vector until it reaches the end of the segmentation mask, where the switch from foreground to background voxels occurs, which in our case corresponds to the screw entry point.
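As an illustration, the sketch below combines the HU thresholding used to obtain the screw point clouds with a simplified entry-point search that steps along the detected screw axis until the segmentation mask is left. The original implementation used the Amanatides–Woo voxel traversal [70]; the nearest-voxel stepping shown here only approximates it.

```python
import numpy as np

def screw_point_cloud(ct_volume, hu_threshold=2500):
    """Voxel indices of metal implants (HU > 2500) as an (N, 3) array."""
    return np.argwhere(ct_volume > hu_threshold)

def entry_point(mask, center, direction, step=0.5, max_steps=2000):
    """Walk from the screw's center of mass along its axis (oriented towards the
    screw head) and return the last position that is still inside the mask."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    pos = np.asarray(center, dtype=float)
    last_inside = pos.copy()
    for _ in range(max_steps):
        pos = pos + step * direction
        idx = np.round(pos).astype(int)
        outside = (idx < 0).any() or (idx >= np.array(mask.shape)).any()
        if outside or not mask[tuple(idx)]:
            break                               # foreground ends here: entry point reached
        last_inside = pos.copy()
    return last_inside
```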

3. Results

In the following sections, we report our results and compare them to the manual gold standard approach reported in Hoch et al. [23].

3.1. Osteotomy Location

To quantify the segmentation results for osteotomy detection, we conducted a comparison against manually defined planes. Please note that the clinical gold standard for the manual definition of the osteotomy planes cannot be regarded as a ground truth, as it involves many manual processes that can potentially dilute the accuracy of the assessment. The following metrics were introduced for this comparison: Figure 5A shows the pelvic 3D model with the starting points of the osteotomies $P_{M1}$–$P_{M5}$ (manual) and $P_{A1}$–$P_{A5}$ (automatic), which were defined as follows:
  • $P_{M1}$, $P_{A1}$: most superior point on the intersection between the supraacetabular osteotomy plane and the pelvic 3D model
  • $P_{M2}$, $P_{A2}$: most medial point on the intersecting line between the supraacetabular and retroacetabular osteotomy planes
  • $P_{M3}$, $P_{A3}$: most medial point on the intersecting line between the retroacetabular and ischial osteotomy planes
  • $P_{M4}$, $P_{A4}$: most anterior point on the intersection between the ischial osteotomy plane and the pelvic 3D model
  • $P_{M5}$, $P_{A5}$: most posterior point on the intersection between the pubic osteotomy plane and the pelvic 3D model
These points were projected to a plane $P_L$ defined by the best least-squares fit of $P_{M1}$–$P_{M4}$.
Afterwards, the connecting vectors $V_{M1}$–$V_{M3}$ and $V_{A1}$–$V_{A3}$ between the projected starting points were calculated, as well as the angles between them (Figure 5B). $SR_M$ (manual) and $SR_A$ (automatic) were formed by $V_{M1}$, $V_{M2}$ and $V_{A1}$, $V_{A2}$, respectively. $RI_M$ and $RI_A$ were formed by $V_{M2}$, $V_{M3}$ and $V_{A2}$, $V_{A3}$, respectively. In addition, we report the angle between the manual and automatic normal vectors $N_M$ and $N_A$ of the pubic osteotomy plane (Figure 5C). Table 1 reports the mean results over all patients, whereas individual results per patient can be found in Table A1. In Figure 6, a visualisation of the best (case 16) and worst (case 14) cut detection outcome is presented. The best and worst cases were determined based on the deviation from the manual planning across all measures reported in Table A1.
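The projection and angle measurements can be computed as sketched below; the least-squares plane itself can be obtained with the same PCA fit used for the osteotomy planes (a sketch, with point sets assumed to be NumPy arrays).

```python
import numpy as np

def project_to_plane(points, plane_center, plane_normal):
    """Orthogonally project 3D points onto the plane (plane_center, plane_normal)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_center) @ n            # signed distances to the plane
    return points - np.outer(d, n)

def angle_deg(v1, v2):
    """Angle in degrees between two vectors (e.g. V_M1 and V_A1, or N_M and N_A)."""
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```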

3.2. Fragment Reorientation

We evaluated both registration processes, the pre–post alignment $^{CT_{post}}T_{CT_{pre}}$ and the fragment alignment $^{F_{post}}T_{F_{pre}}$, for the manual and the automatic solution by calculating the mean absolute error (MAE) between corresponding mesh points in the end position. For the first stage of registration, the coarse alignment, we found an error $err_{T1}$ of 1.01 ± 0.46 mm. In the final stage of measuring the bone fragment repositioning, we found a cumulative error $err_{T12}$ of 2.10 ± 0.97 mm. In addition, we report the Dice coefficient of the postoperative CT (cropped around the acetabulum) with the preoperative CT in its final position, after applying both transformations $^{CT_{post}}T_{CT_{pre}}$ and $^{F_{post}}T_{F_{pre}}$. The mean Dice coefficient across all patients was $DC_{a,mean} = 0.62 \pm 0.07$ for the automatic registration and $DC_{m,mean} = 0.60 \pm 0.07$ for the manual registration. In Table A2 we report the registration results for all patients, as well as the difference between the Dice coefficients of the automatic and manual solutions, $DC_{diff} = |DC_a - DC_m|$. As an ablation study, we performed 5-fold cross-validation to evaluate the performance of the pelvic segmentation network. The mean Dice coefficient (DC) across all folds was 0.93 ± 0.02. In Figure 7 we present the complete registration results for an example case (i.e., case 19). Figure 8 shows the 3D overlay visualisation of the registration results for four examples. Case 6 was found to be the best case ($DC_a$ of 0.74) and case 17 the worst case ($DC_a$ of 0.47). In addition, cases 22 and 15 are presented, which were determined to have the most and least similar Dice scores between the automatic and the manual solution.
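The Dice coefficients reported above follow the standard definition over binary volumes; a minimal sketch:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice coefficient 2*|A n B| / (|A| + |B|) between two binary volumes."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```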

3.3. Implant Placement

We found that the mean error between the manually and automatically detected screw head centers was 1.32 ± 0.49 mm. Similarly, the mean 3D angle between the screw center lines derived from the automatic and the manual process was 1.10 ± 0.87°. Figure 9 shows an example of the screw head and center line prediction.

4. Discussion

In this study we proposed a fully automatic method for postoperative quantification of CT images after PAO interventions and compared it to manual state-of-the-art methods.
To the best of our knowledge, the presented method is the first to quantify osteotomy location in CT images. For cut detection, we found a mean error of the projected plane starting points ($|P_{Mi} - P_{Ai}|$, $i = 1 \ldots 4$) for planes 1–3 of 13 ± 3.6 mm and a mean 2D angle difference of 11.9 ± 7°. For the pubic osteotomy (plane 4), we report an 8.7 mm mean error and a 29.3° angular deviation. Kulyk et al. [15] and Tschannen et al. [22] presented methods to detect the articular margin plane (AMP) of the proximal humerus in CT images. Tschannen et al. [22] found a 2.40 mm error in estimating the AMP center and a 6.51° mean angular error for estimating the normal vector compared to the manually annotated ground truth. Kulyk et al. [15] reported a 1.30 ± 0.65 mm mean localisation error and a 4.68 ± 2.84° angular error. Our results are inferior to those found in both Kulyk et al. [15] and Tschannen et al. [22]. However, our results were in line with the ones found by Ackermann et al. [8], who investigated the postoperative outcome compared to the planned osteotomies for PAO. During a typical PAO intervention, only plane 1 is cut with a surgical saw, whereas planes 2 to 4 are performed with a chisel. Furthermore, the crossing between plane 2 and plane 3 (angles $RI_M$ and $RI_A$) is achieved by a controlled fracture, which makes a true plane fit, specifically distinguishing between planes 2 and 3, more complex, since the crossing often presents as a curvature rather than as two individual planes intersecting. Generally, larger errors in cut prediction were found for planes with small surface areas (i.e., the ischial osteotomy, plane 3, and the pubic osteotomy, plane 4). One example is case 14, where the cut surfaces of planes 1 and 3 form a gap and only small areas of the bone fragments touch and form a callus, as can be seen in Figure 6. Moreover, the training data for the segmentation network was not annotated based on the manual plane detection but rather based on the callus formation.
The evaluation of our fragment repositioning showed a 1.01 mm mean point-to-point distance error for the pre- to postoperative CT registration and 2.10 mm in the final position of the acetabular fragment. In 2011, Murphy et al. [81] compared 20 individual registration algorithms on a set of 30 intra-patient thoracic CT image pairs and found a mean error for landmark alignment of 0.83 mm for the best 6 algorithms. Their mean voxel size was 0.7 mm and a registration mask was used. Although the set of images used by Murphy et al. [81] had a larger variation between image pairs (due to respiratory changes), we argue that our results are in a similar range and therefore acceptable. Moreover, $err_{T12}$ [mm] reports the combined error of both registration processes. It compares the starting position of the preoperative bone model to the end position of the acetabular fragment. Interestingly, although the largest deviation between meshes in the end position $err_{T12}$ [mm] was found for case 6, the best Dice score $DC_a$ was achieved for that case, which suggests the best overlap with the preoperative bone mesh. This can also be confirmed visually, as shown in Figure 8. As reported in Table A2, the mean Dice score for the automatic approach $DC_a$ was found to be slightly superior to that of the manual method $DC_m$. Furthermore, no correlation was found between the performance of the segmentation network for the registration mask and the registration result.
For the quantification of the screw positions, we found a 1.32 mm MAE for the screw head prediction and a 1.10° screw axis error. In 2013, Uozumi et al. [45] published an automatic 3D approach for screw placement evaluation in which they first isolated the screw point clouds by thresholding the images and reported the center of mass. They then applied PCA to find the screw axis. They found a 0.14 mm mean distance error and a 0.02° average angular error, which is superior to our results. Uozumi et al. [45] achieved higher accuracy with their method because their screw point clouds were isolated for each screw; therefore, identifying the center of mass and direction was a straightforward process. In our case, however, the screws are very close to each other, which results in point clouds combining multiple screws, a more complex task, hence the lower accuracy. Moreover, our results are superior to the pedicle screw placement accuracies reported by Jiang et al. [82] and Gubian et al. [44], which were both considered clinically acceptable. Jiang et al. [82] evaluated robot-assisted pedicle screw placement and found a mean screw tip accuracy of 3.6 ± 2.3 mm and an angular deviation of 3.6 ± 2.8°. Gubian et al. [44] evaluated CT-navigated pedicle screw placement by comparing the preoperative trajectory with the corresponding postoperative screw position and found a mean displacement of 5.2 ± 2.4 mm for the screw head points and a mean axis deviation of 6.3 ± 3.6°.
Our work has several limitations. A drawback of our method is the radiation exposure during CT acquisition. Moreover, the manual data labelling required for training is very time consuming. However, with the introduction of low-dose CT, less harmful imaging will replace conventional methods in the future [83]. Furthermore, recent AI-based reconstruction algorithms, which are capable of generating accurate 3D models from X-ray or fluoroscopy data, can be leveraged [13,84]. Confounders that are difficult to measure can be introduced at different steps of our pipeline. First of all, we compare our results to a manual approach, which may not correspond to the ground truth measurements. Other potential sources are the manual segmentation and the resampling of the images at several points throughout the pipeline. In addition, no inter-observer bias of the manual approach was investigated. Nevertheless, the large deviation between the automatic cut detection and the manual gold standard shows the need for further validation studies, which will be conducted in future work. An automated method would also not replace the radiology expert, but would instead represent a computer-assisted radiology approach that provides more information while saving time. The postoperative imaging for our data set was acquired approximately 15 weeks after the intervention, when bone healing had already occurred. We plan to investigate whether a more standardised postoperative follow-up that takes place sooner after surgery will result in a higher accuracy of our method. Future work will also include the generalisation of the proposed method to other orthopaedic interventions as well as its expansion to fracture detection.

5. Conclusions

Our method offers several advantages: firstly, no manual input is required, which shortens the evaluation process by multiple hours per patient when compared to manual 3D analysis. Moreover, our method ensures objectivity in the assessment, providing reliable and consistent results, and therefore contributes to enhancing the overall quality of treatment.

Author Contributions

Conceptualization, J.A. and P.F.; methodology, J.A., H.E. and P.F.; software, J.A.; formal analysis, J.A.; investigation, J.A. and A.H.; resources, P.F.; data curation, J.A. and A.H.; writing—original draft preparation, J.A.; writing—review and editing, H.E. and P.F.; visualization, J.A.; supervision, J.G.S., P.O.Z., H.E. and P.F.; project administration, P.F.; funding acquisition, P.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was co-funded by Promedica foundation, Switzerland.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Kantonale Ethikkommission Zürich (BASEC-Nr. 2018-01921, 03.12.2018).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Results for the comparison of manual and automatic cut detection for all patients.
Patient | 3D Distance [mm]: P1, P2, P3, P4, P5 | 2D Angle [°]: V1, V2, V3 | abs. 2D Angle Deviation [°]: SR, RI | 3D Angle [°]: N
(Manual measurements: P_M1–P_M5, V_M1–V_M3, SR_M, RI_M, N_M; automatic measurements: P_A1–P_A5, V_A1–V_A3, SR_A, RI_A, N_A.)
Mean | 17.0, 12.8, 15.0, 7.3, 8.7 | 7.0, 6.9, 21.9 | 9.9, 20.0 | 29.2
σ | 13.0, 7.4, 12.1, 6.3, 6.7 | 5.4, 5.4, 17.7 | 8.5, 15.2 | 11.2
0 | 32.4, 25.8, 9.9, 6.4, 2.0 | 11.6, 10.5, 0.6 | 1.1, 11.1 | 29.5
1 | 3.3, 5.7, 10.9, 8.3, 5.1 | 2.2, 8.9, 1.0 | 11.1, 9.9 | 21.6
2 | 11.2, 7.3, 9.4, 2.1, 6.2 | 8.8, 3.5, 3.2 | 5.2, 0.3 | 31.2
3 | 20.6, 10.2, 6.7, 1.2, 8.2 | 12.1, 2.3, 8.5 | 9.8, 6.2 | 20.0
4 | 12.4, 8.2, 14.2, 8.7, 17.7 | 0.4, 2.0, 14.0 | 1.6, 16.0 | 48.5
5 | 6.6, 10.5, 15.0, 5.9, 21.2 | 2.9, 2.9, 14.7 | 0.0, 11.8 | 49.1
6 | 26.7, 7.8, 17.3, 7.4, 1.3 | 2.4, 2.8, 39.7 | 0.4, 36.9 | 19.2
7 | 14.5, 27.8, 13.1, 5.6, 3.4 | 3.0, 18.9, 19.1 | 15.9, 0.3 | 25.3
8 | 3.3, 18.8, 7.0, 2.6, 16.9 | 7.8, 1.9, 17.6 | 5.9, 15.7 | 54.4
9 | 10.9, 16.5, 11.8, 25.5, 21.1 | 1.2, 7.0, 43.1 | 8.2, 36.0 | 49.3
10 | 14.6, 2.6, 11.4, 5.3, 3.6 | 5.1, 3.9, 26.3 | 9.0, 22.4 | 28.5
11 | 16.8, 26.1, 16.5, 6.7, 10.1 | 7.6, 17.1, 15.9 | 9.5, 1.2 | 36.9
12 | 28.2, 3.2, 20.0, 4.2, 4.3 | 7.6, 8.6, 39.5 | 1.1, 30.9 | 18.4
13 | 14.0, 10.8, 6.0, 1.2, 15.4 | 9.4, 5.1, 12.0 | 14.5, 17.1 | 28.3
14 (worst) | 54.9, 10.9, 71.4, 17.3, 23.5 | 13.1, 23.4, 73.3 | 36.5, 49.9 | 32.9
15 | 2.9, 1.8, 15.7, 6.5, 14.5 | 1.6, 5.7, 45.8 | 7.3, 40.1 | 38.2
16 (best) | 6.1, 5.9, 4.3, 2.4, 1.5 | 10.3, 1.6, 5.1 | 11.9, 6.8 | 9.6
17 | 4.8, 12.2, 12.1, 10.6, 4.0 | 10.5, 7.1, 6.7 | 3.4, 0.4 | 21.0
18 | 14.9, 11.1, 13.1, 2.9, 2.3 | 0.3, 7.9, 8.9 | 8.2, 16.8 | 27.9
19 | 5.8, 10.8, 13.3, 24.0, 11.4 | 1.7, 1.7, 20.4 | 3.4, 18.7 | 35.5
20 | 29.6, 16.1, 14.8, 1.0, 5.9 | 1.8, 9.4, 11.5 | 11.2, 20.9 | 17.6
21 | 5.5, 6.8, 20.6, 6.5, 6.8 | 10.9, 3.7, 43.8 | 7.3, 47.5 | 20.6
22 | 30.9, 11.9, 7.8, 5.2, 3.1 | 11.2, 4.4, 6.6 | 15.6, 11.0 | 22.0
23 | 42.9, 18.3, 10.3, 13.1, 8.6 | 22.2, 8.5, 14.3 | 30.7, 5.8 | 33.2
24 | 23.4, 22.2, 16.9, 9.0, 5.7 | 0.8, 8.2, 20.8 | 7.4, 29.0 | 21.4
25 | 15.6, 12.8, 19.4, 3.7, 7.3 | 13.0, 3.6, 39.8 | 16.6, 43.4 | 22.1
26 | 6.0, 23.3, 16.9, 4.2, 2.4 | 10.1, 4.7, 38.2 | 14.8, 33.6 | 25.8
Table A2. Registration results for all patients.
Case | err_T1 [mm] | err_T12 [mm] | DC_m | DC_a | DC_diff
mean (all) | 1.01 | 2.10 | 0.60 | 0.62 | 0.02
σ | 0.46 | 0.97 | 0.07 | 0.07 | 0.02
0 | 0.38 | 1.4 | 0.48 | 0.48 | 0.003
1 | 0.42 | 2.48 | 0.67 | 0.7 | 0.036
2 | 1.47 | 1.61 | 0.71 | 0.71 | 0.002
3 | 1.43 | 2.07 | 0.71 | 0.74 | 0.027
4 | 0.79 | 1.86 | 0.62 | 0.66 | 0.039
5 | 0.51 | 2.46 | 0.58 | 0.59 | 0.016
6 (DC_a,max) | 0.71 | 5.84 | 0.68 | 0.74 | 0.059
7 | 1.83 | 2.25 | 0.61 | 0.64 | 0.028
8 | 1.25 | 1.76 | 0.62 | 0.67 | 0.048
9 | 0.86 | 1.64 | 0.62 | 0.64 | 0.017
10 | 1.57 | 1.92 | 0.63 | 0.64 | 0.003
11 | 0.66 | 1.24 | 0.49 | 0.51 | 0.011
12 | 0.41 | 0.55 | 0.62 | 0.64 | 0.025
13 | 1.2 | 2.27 | 0.56 | 0.62 | 0.052
14 | 0.54 | 1.72 | 0.57 | 0.6 | 0.034
15 (DC_diff,max) | 1.72 | 2.73 | 0.58 | 0.65 | 0.068
16 | 1.69 | 3.27 | 0.71 | 0.7 | 0.004
17 (DC_a,min) | 0.27 | 0.33 | 0.44 | 0.47 | 0.023
18 | 1.31 | 2.45 | 0.69 | 0.69 | 0.004
19 (example) | 0.88 | 1.6 | 0.53 | 0.55 | 0.017
20 | 0.74 | 2.25 | 0.53 | 0.54 | 0.008
21 | 0.96 | 2 | 0.64 | 0.65 | 0.015
22 (DC_diff,min) | 1.19 | 2.04 | 0.57 | 0.57 | 0.001
23 | 1.61 | 3 | 0.53 | 0.54 | 0.007
24 | 1.3 | 2.44 | 0.6 | 0.61 | 0.003
25 | 1.2 | 1.41 | 0.68 | 0.69 | 0.01
26 | 0.48 | 2.02 | 0.6 | 0.6 | 0.002

References

  1. Pham, D.L.; Xu, C.; Prince, J.L. Current methods in medical image segmentation. Annu. Rev. Biomed. Eng. 2000, 2, 315–337. [Google Scholar] [CrossRef] [PubMed]
  2. Sharma, N.; Aggarwal, L.M. Automated medical image segmentation techniques. J. Med. Phys./Assoc. Med. Phys. India 2010, 35, 3. [Google Scholar] [CrossRef] [PubMed]
  3. Hernandez, D.; Garimella, R.; Eltorai, A.E.; Daniels, A.H. Computer-assisted orthopaedic surgery. Orthop. Surg. 2017, 9, 152–158. [Google Scholar] [CrossRef] [PubMed]
  4. Murphy, R.J.; Armiger, R.S.; Lepistö, J.; Armand, M. Clinical evaluation of a biomechanical guidance system for periacetabular osteotomy. J. Orthop. Surg. Res. 2016, 11, 36. [Google Scholar] [CrossRef] [PubMed]
  5. Navab, N.; Blum, T.; Wang, L.; Okur, A.; Wendler, T. First deployments of augmented reality in operating rooms. Computer 2012, 45, 48–55. [Google Scholar] [CrossRef]
  6. Kok, E.N.; Eppenga, R.; Kuhlmann, K.F.; Groen, H.C.; van Veen, R.; van Dieren, J.M.; de Wijkerslooth, T.R.; van Leerdam, M.; Lambregts, D.M.; Heerink, W.J.; et al. Accurate surgical navigation with real-time tumor tracking in cancer surgery. NPJ Precis. Oncol. 2020, 4, 8. [Google Scholar] [CrossRef]
  7. Liebmann, F.; Roner, S.; von Atzigen, M.; Scaramuzza, D.; Sutter, R.; Snedeker, J.; Farshad, M.; Fürnstahl, P. Pedicle screw navigation using surface digitization on the Microsoft HoloLens. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1157–1165. [Google Scholar]
  8. Ackermann, J.; Liebmann, F.; Hoch, A.; Snedeker, J.G.; Farshad, M.; Rahm, S.; Zingg, P.O.; Fürnstahl, P. Augmented reality based surgical navigation of complex pelvic osteotomies—A feasibility study on cadavers. Appl. Sci. 2021, 11, 1228. [Google Scholar] [CrossRef]
  9. Vlachopoulos, L.; Schweizer, A.; Graf, M.; Nagy, L.; Fürnstahl, P. Three-dimensional postoperative accuracy of extra-articular forearm osteotomies using CT-scan based patient-specific surgical guides. BMC Musculoskelet. Disord. 2015, 16, 336. [Google Scholar] [CrossRef]
  10. Akiyama, H.; Goto, K.; So, K.; Nakamura, T. Computed tomography-based navigation for curved periacetabular osteotomy. J. Orthop. Sci. 2010, 15, 829. [Google Scholar] [CrossRef]
  11. Hsieh, P.H.; Chang, Y.H.; Shih, C.H. Image-guided periacetabular osteotomy: Computer-assisted navigation compared with the conventional technique: A randomized study of 36 patients followed for 2 years. Acta Orthop. 2006, 77, 591–597. [Google Scholar] [CrossRef] [PubMed]
  12. Langlotz, F.; Bächler, R.; Berlemann, U.; Nolte, L.P.; Ganz, R. Computer assistance for pelvic osteotomies. Clin. Orthop. Relat. Res. (1976–2007) 1998, 354, 92–102. [Google Scholar] [CrossRef]
  13. Jecklin, S.; Jancik, C.; Farshad, M.; Fürnstahl, P.; Esfandiari, H. X23D—Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data. J. Imaging 2022, 8, 271. [Google Scholar] [CrossRef] [PubMed]
  14. Langlotz, F.; Stucki, M.; Bächler, R.; Scheer, C.; Ganz, R.; Berlemann, U.; Nolte, L.P. The first twelve cases of computer assisted periacetabular osteotomy. Comput. Aided Surg. Off. J. Int. Soc. Comput. Aided Surg. (ISCAS) 1997, 2, 317–326. [Google Scholar] [CrossRef]
  15. Kulyk, P.; Vlachopoulos, L.; Fürnstahl, P.; Zheng, G. Fully automatic planning of total shoulder arthroplasty without segmentation: A deep learning based approach. In Proceedings of the Computational Methods and Clinical Applications in Musculoskeletal Imaging: 6th International Workshop, MSKI 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16 September 2018; Revised Selected Papers 6. Springer: Berlin/Heidelberg, Germany, 2019; pp. 22–34. [Google Scholar]
  16. Ackermann, J.; Wieland, M.; Hoch, A.; Ganz, R.; Snedeker, J.G.; Oswald, M.R.; Pollefeys, M.; Zingg, P.O.; Esfandiari, H.; Fürnstahl, P. A new approach to orthopedic surgery planning using deep reinforcement learning and simulation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part IV 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 540–549. [Google Scholar]
  17. Shelton, T.J.; Shafagh, M.; Calafi, A.; Leshikar, H.B.; Haus, B.M. Preoperative 3D modeling and printing for guiding periacetabular osteotomy. Orthop. J. Sport. Med. 2021, 9, 2325967121S00026. [Google Scholar] [CrossRef]
  18. Vaishya, R.; Patralekh, M.K.; Vaish, A.; Agarwal, A.K.; Vijay, V. Publication trends and knowledge mapping in 3D printing in orthopaedics. J. Clin. Orthop. Trauma 2018, 9, 194–201. [Google Scholar] [PubMed]
  19. Belei, P.; Schkommodau, E.; Frenkel, A.; Mumme, T.; Radermacher, K. Computer-assisted single-or double-cut oblique osteotomies for the correction of lower limb deformities. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2007, 221, 787–800. [Google Scholar] [CrossRef]
  20. Carrillo, F.; Vlachopoulos, L.; Schweizer, A.; Nagy, L.; Snedeker, J.; Fürnstahl, P. A time saver: Optimization approach for the fully automatic 3D planning of forearm osteotomies. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017; Proceedings, Part II 20. Springer: Berlin/Heidelberg, Germany, 2017; pp. 488–496. [Google Scholar]
  21. Schkommodau, E.; Frenkel, A.; Belei, P.; Recknagel, B.; Wirtz, D.C.; Radermacher, K. Computer-assisted optimization of correction osteotomies on lower extremities. Comput. Aided Surg. 2005, 10, 345–350. [Google Scholar] [CrossRef]
  22. Tschannen, M.; Vlachopoulos, L.; Gerber, C.; Székely, G.; Fürnstahl, P. Regression forest-based automatic estimation of the articular margin plane for shoulder prosthesis planning. Med. Image Anal. 2016, 31, 88–97. [Google Scholar] [CrossRef]
  23. Hoch, A.; Grossenbacher, G.; Jungwirth-Weinberger, A.; Götschi, T.; Fürnstahl, P.; Zingg, P.O. The periacetabular osteotomy: Angulation of the supraacetabular osteotomy for quantification of correction. Hip Int. 2022, 11207000221103079. [Google Scholar] [CrossRef]
  24. Ganz, R.; Klaue, K.; Vinh, T.S.; Mast, J.W. A New Periacetabular Osteotomy for the Treatment of Hip Dysplasias Technique and Preliminary Results. Clin. Orthop. Relat. Res. (1976–2007) 1988, 232, 26–36. [Google Scholar] [CrossRef]
  25. Tönnis, D. Congenital Dysplasia and Dislocation of the Hip in Children and Adults; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  26. Tannast, M.; Hanke, M.S.; Zheng, G.; Steppacher, S.D.; Siebenrock, K.A. What are the radiographic reference values for acetabular under-and overcoverage? Clin. Orthop. Relat. Res. 2015, 473, 1234–1246. [Google Scholar] [CrossRef] [PubMed]
  27. Ibrahim, M.M.; Smit, K. Anatomical description and classification of hip dysplasia. Hip Dysplasia Underst. Treat. Instab. Nativ. Hip 2020, 23–37. [Google Scholar]
  28. Schweizer, A.; Mauler, F.; Vlachopoulos, L.; Nagy, L.; Fürnstahl, P. Computer-assisted 3-dimensional reconstructions of scaphoid fractures and nonunions with and without the use of patient-specific guides: Early clinical outcomes and postoperative assessments of reconstruction accuracy. J. Hand Surg. 2016, 41, 59–69. [Google Scholar] [CrossRef] [PubMed]
  29. Hirsiger, S.; Schweizer, A.; Miyake, J.; Nagy, L.; Fürnstahl, P. Corrective osteotomies of phalangeal and metacarpal malunions using patient-specific guides: CT-based evaluation of the reduction accuracy. Hand 2018, 13, 627–636. [Google Scholar] [CrossRef]
  30. Roner, S.; Schweizer, A.; Da Silva, Y.; Carrillo, F.; Nagy, L.; Fürnstahl, P. Accuracy and early clinical outcome after 3-dimensional correction of distal radius intra-articular malunions using patient-specific instruments. J. Hand Surg. 2020, 45, 918–923. [Google Scholar] [CrossRef] [PubMed]
  31. Roner, S.; Vlachopoulos, L.; Nagy, L.; Schweizer, A.; Fürnstahl, P. Accuracy and early clinical outcome of 3-dimensional planned and guided single-cut osteotomies of malunited forearm bones. J. Hand Surg. 2017, 42, 1031.e1–1031.e8. [Google Scholar] [CrossRef]
  32. Miyake, J.; Murase, T.; Oka, K.; Moritomo, H.; Sugamoto, K.; Yoshikawa, H. Computer-assisted corrective osteotomy for malunited diaphyseal forearm fractures. JBJS 2012, 94, e150. [Google Scholar] [CrossRef]
  33. Vlachopoulos, L.; Schweizer, A.; Meyer, D.C.; Gerber, C.; Fürnstahl, P. Computer-assisted planning and patient-specific guides for the treatment of midshaft clavicle malunions. J. Shoulder Elb. Surg. 2017, 26, 1367–1373. [Google Scholar] [CrossRef]
  34. Vlachopoulos, L.; Schweizer, A.; Meyer, D.C.; Gerber, C.; Fürnstahl, P. Three-dimensional corrective osteotomies of complex malunited humeral fractures using patient-specific guides. J. Shoulder Elb. Surg. 2016, 25, 2040–2047. [Google Scholar] [CrossRef]
  35. Fürnstahl, P.; Vlachopoulos, L.; Schweizer, A.; Fucentese, S.F.; Koch, P.P. Complex osteotomies of tibial plateau malunions using computer-assisted planning and patient-specific surgical guides. J. Orthop. Trauma 2015, 29, e270–e276. [Google Scholar] [CrossRef] [PubMed]
  36. Fucentese, S.F.; Meier, P.; Jud, L.; Köchli, G.L.; Aichmair, A.; Vlachopoulos, L.; Fürnstahl, P. Accuracy of 3D-planned patient specific instrumentation in high tibial open wedge valgisation osteotomy. J. Exp. Orthop. 2020, 7, 7. [Google Scholar] [CrossRef]
  37. Victor, J.; Premanathan, A. Virtual 3D planning and patient specific surgical guides for osteotomies around the knee: A feasibility and proof-of-concept study. Bone Jt. J. 2013, 95, 153–158. [Google Scholar] [CrossRef] [PubMed]
  38. Munier, M.; Donnez, M.; Ollivier, M.; Flecher, X.; Chabrand, P.; Argenson, J.N.; Parratte, S. Can three-dimensional patient-specific cutting guides be used to achieve optimal correction for high tibial osteotomy? Pilot study. Orthop. Traumatol. Surg. Res. 2017, 103, 245–250. [Google Scholar] [CrossRef] [PubMed]
  39. Chaouche, S.; Jacquet, C.; Fabre-Aubrespy, M.; Sharma, A.; Argenson, J.N.; Parratte, S.; Ollivier, M. Patient-specific cutting guides for open-wedge high tibial osteotomy: Safety and accuracy analysis of a hundred patients continuous cohort. Int. Orthop. 2019, 43, 2757–2765. [Google Scholar] [CrossRef] [PubMed]
  40. Viehöfer, A.F.; Wirth, S.H.; Zimmermann, S.M.; Jaberg, L.; Dennler, C.; Fürnstahl, P.; Farshad, M. Augmented reality guided osteotomy in hallux Valgus correction. BMC Musculoskelet. Disord. 2020, 21, 438. [Google Scholar] [CrossRef]
  41. Weigelt, L.; Fürnstahl, P.; Hirsiger, S.; Vlachopoulos, L.; Espinosa, N.; Wirth, S.H. Three-dimensional correction of complex ankle deformities with computer-assisted planning and patient-specific surgical guides. J. Foot Ankle Surg. 2017, 56, 1158–1164. [Google Scholar] [CrossRef]
  42. Wirth, S.; Viehöfer, A.; Laurenz, J.; Zimmermann, S.; Dennler, C.; Fürnstahl, P.; Farshad, M. Augmented Reality Guided Osteotomy in Hallux Valgus Surgery. Foot Ankle Orthop. 2018, 3, 2473011418S00518. [Google Scholar] [CrossRef]
  43. Kyo, T.; Nakahara, I.; Kuroda, Y.; Miki, H. Effects of coordinate-system construction methods on postoperative computed tomography evaluation of implant orientation after total hip arthroplasty. Comput. Aided Surg. 2015, 20, 52–60. [Google Scholar] [CrossRef]
  44. Gubian, A.; Kausch, L.; Neumann, J.O.; Kiening, K.; Ishak, B.; Maier-Hein, K.; Unterberg, A.; Scherer, M. CT-Navigated Spinal Instrumentations–Three-Dimensional Evaluation of Screw Placement Accuracy in Relation to a Screw Trajectory Plan. Medicina 2022, 58, 1200. [Google Scholar] [CrossRef]
  45. Uozumi, Y.; Nagamune, K.; Nishizawa, Y.; Araki, D.; Hoshino, Y.; Matsushita, T.; Kuroda, R.; Kurosaka, M. An automatic three-dimensional evaluation of screw placement after anterior cruciate ligament reconstruction using mdct images. J. Adv. Comput. Intell. Intell. Inform. 2013, 17, 818–827. [Google Scholar] [CrossRef]
  46. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417. [Google Scholar] [CrossRef]
  47. Esfandiari, H.; Newell, R.; Anglin, C.; Street, J.; Hodgson, A.J. A deep learning framework for segmentation and pose estimation of pedicle screw implants based on C-arm fluoroscopy. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1269–1282. [Google Scholar] [CrossRef] [PubMed]
  48. Joshi, D.; Singh, T.P.; Joshi, A.K. Deep learning-based localization and segmentation of wrist fractures on X-ray radiographs. Neural Comput. Appl. 2022, 34, 19061–19077. [Google Scholar] [CrossRef]
  49. Guan, B.; Yao, J.; Wang, S.; Zhang, G.; Zhang, Y.; Wang, X.; Wang, M. Automatic detection and localization of thighbone fractures in X-ray based on improved deep learning method. Comput. Vis. Image Underst. 2022, 216, 103345. [Google Scholar] [CrossRef]
  50. Wei, J.; Yao, J.; Zhanga, G.; Guan, B.; Zhang, Y.; Wang, S. Semi-supervised object detection based on single-stage detector for thighbone fracture localization. arXiv 2022, arXiv:2210.10998. [Google Scholar]
  51. Dupuis, M.; Delbos, L.; Veil, R.; Adamsbaum, C. External validation of a commercially available deep learning algorithm for fracture detection in children. Diagn. Interv. Imaging 2022, 103, 151–159. [Google Scholar] [CrossRef]
  52. Hendrix, N.; Hendrix, W.; van Dijke, K.; Maresch, B.; Maas, M.; Bollen, S.; Scholtens, A.; de Jonge, M.; Ong, L.L.S.; van Ginneken, B.; et al. Musculoskeletal radiologist-level performance by using deep learning for detection of scaphoid fractures on conventional multi-view radiographs of hand and wrist. Eur. Radiol. 2023, 33, 1575–1588. [Google Scholar] [CrossRef]
  53. Yu, J.; Yu, S.; Erdal, B.; Demirer, M.; Gupta, V.; Bigelow, M.; Salvador, A.; Rink, T.; Lenobel, S.; Prevedello, L.; et al. Detection and localisation of hip fractures on anteroposterior radiographs with artificial intelligence: Proof of concept. Clin. Radiol. 2020, 75, 237.e1–237.e9. [Google Scholar] [CrossRef]
  54. Blüthgen, C.; Becker, A.S.; de Martini, I.V.; Meier, A.; Martini, K.; Frauenfelder, T. Detection and localization of distal radius fractures: Deep learning system versus radiologists. Eur. J. Radiol. 2020, 126, 108925. [Google Scholar] [CrossRef]
  55. Brett, A.; Miller, C.G.; Hayes, C.W.; Krasnow, J.; Ozanian, T.; Abrams, K.; Block, J.E.; van Kuijk, C. Development of a clinical workflow tool to enhance the detection of vertebral fractures: Accuracy and precision evaluation. Spine 2009, 34, 2437–2443. [Google Scholar] [CrossRef] [PubMed]
  56. Olczak, J.; Fahlberg, N.; Maki, A.; Razavian, A.S.; Jilert, A.; Stark, A.; Sköldenberg, O.; Gordon, M. Artificial intelligence for analyzing orthopedic trauma radiographs: Deep learning algorithms—Are they on par with humans for diagnosing fractures? Acta Orthop. 2017, 88, 581–586. [Google Scholar] [CrossRef] [PubMed]
  57. Chung, S.W.; Han, S.S.; Lee, J.W.; Oh, K.S.; Kim, N.R.; Yoon, J.P.; Kim, J.Y.; Moon, S.H.; Kwon, J.; Lee, H.J.; et al. Automated detection and classification of the proximal humerus fracture by using deep learning algorithm. Acta Orthop. 2018, 89, 468–473. [Google Scholar] [CrossRef] [PubMed]
  58. Kim, D.; MacKinnon, T. Artificial intelligence in fracture detection: Transfer learning from deep convolutional neural networks. Clin. Radiol. 2018, 73, 439–445. [Google Scholar] [CrossRef]
  59. Lindsey, R.; Daluiski, A.; Chopra, S.; Lachapelle, A.; Mozer, M.; Sicular, S.; Hanel, D.; Gardner, M.; Gupta, A.; Hotchkiss, R.; et al. Deep neural network improves fracture detection by clinicians. Proc. Natl. Acad. Sci. USA 2018, 115, 11591–11596. [Google Scholar] [CrossRef]
  60. Adams, M.; Chen, W.; Holcdorf, D.; McCusker, M.W.; Howe, P.D.; Gaillard, F. Computer vs. human: Deep learning versus perceptual training for the detection of neck of femur fractures. J. Med. Imaging Radiat. Oncol. 2019, 63, 27–32. [Google Scholar] [CrossRef]
  61. Urakawa, T.; Tanaka, Y.; Goto, S.; Matsuzawa, H.; Watanabe, K.; Endo, N. Detecting intertrochanteric hip fractures with orthopedist-level accuracy using a deep convolutional neural network. Skelet. Radiol. 2019, 48, 239–244. [Google Scholar] [CrossRef]
  62. Tomita, N.; Cheung, Y.Y.; Hassanpour, S. Deep neural networks for automatic detection of osteoporotic vertebral fractures on CT scans. Comput. Biol. Med. 2018, 98, 8–15. [Google Scholar] [CrossRef]
  63. Kolanu, N.; Silverstone, E.J.; Ho, B.H.; Pham, H.; Hansen, A.; Pauley, E.; Quirk, A.R.; Sweeney, S.C.; Center, J.R.; Pocock, N.A. Clinical utility of computer-aided diagnosis of vertebral fractures from computed tomography images. J. Bone Miner. Res. 2020, 35, 2307–2312. [Google Scholar] [CrossRef]
  64. Zhou, Q.Q.; Tang, W.; Wang, J.; Hu, Z.C.; Xia, Z.Y.; Zhang, R.; Fan, X.; Yong, W.; Yin, X.; Zhang, B.; et al. Automatic detection and classification of rib fractures based on patients’ CT images and clinical information via convolutional neural network. Eur. Radiol. 2021, 31, 3815–3825. [Google Scholar] [CrossRef]
  65. Warin, K.; Limprasert, W.; Suebnukarn, S.; Paipongna, T.; Jantana, P.; Vicharueang, S. Maxillofacial fracture detection and classification in computed tomography images using convolutional neural network-based models. Sci. Rep. 2023, 13, 3434. [Google Scholar] [CrossRef]
  66. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  67. Lewis, M.; Reid, K.; Toms, A.P. Reducing the effects of metal artefact using high keV monoenergetic reconstruction of dual energy CT (DECT) in hip replacements. Skelet. Radiol. 2013, 42, 275–282. [Google Scholar] [CrossRef]
  68. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  69. Dalitz, C.; Schramke, T.; Jeltsch, M. Iterative Hough transform for line detection in 3D point clouds. Image Process. Line 2017, 7, 184–196. [Google Scholar] [CrossRef]
  70. Amanatides, J.; Woo, A. A fast voxel traversal algorithm for ray tracing. Eurographics 1987, 87, 3–10. [Google Scholar]
  71. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, 17–21 October 2016; Proceedings, Part II 19. Springer: Berlin/Heidelberg, Germany, 2016; pp. 424–432. [Google Scholar]
  72. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  73. Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  74. Zhang, Z.; Sabuncu, M. Generalized cross entropy loss for training deep neural networks with noisy labels. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar]
  75. Yoo, T.S.; Ackerman, M.J.; Lorensen, W.E.; Schroeder, W.; Chalana, V.; Aylward, S.; Metaxas, D.; Whitaker, R. Engineering and algorithm design for an image processing API: A technical report on ITK-the insight toolkit. In Medicine Meets Virtual Reality 02/10; IOS Press: Amsterdam, The Netherlands, 2002; pp. 586–592. [Google Scholar]
  76. Johnson, H.J.; McCormick, M.; Ibanez, L. The ITK Software Guide: Design and Functionality; Kitware Inc.: Clifton Park, NY, USA, 2015. [Google Scholar]
  77. Rao, Y.R.; Prathapani, N.; Nagabhooshanam, E. Application of normalized cross correlation to image registration. Int. J. Res. Eng. Technol. 2014, 3, 12–16. [Google Scholar]
  78. Mambo, S. Optimisation and Performance Evaluation in Image Registration Technique. Ph.D. Thesis, Université Paris-Est, Tshwane University of Technology, Pretoria, South Africa, 2018. [Google Scholar]
  79. Mukhopadhyay, P.; Chaudhuri, B.B. A survey of Hough Transform. Pattern Recognit. 2015, 48, 993–1010. [Google Scholar] [CrossRef]
  80. Burger, W.; Burge, M. Principles of Digital Image Processing: Core Algorithms; Springer: London, UK, 2010. [Google Scholar]
  81. Murphy, K.; Van Ginneken, B.; Reinhardt, J.M.; Kabus, S.; Ding, K.; Deng, X.; Cao, K.; Du, K.; Christensen, G.E.; Garcia, V.; et al. Evaluation of registration methods on thoracic CT: The EMPIRE10 challenge. IEEE Trans. Med Imaging 2011, 30, 1901–1920. [Google Scholar] [CrossRef]
  82. Jiang, B.; Pennington, Z.; Zhu, A.; Matsoukas, S.; Ahmed, A.K.; Ehresman, J.; Mahapatra, S.; Cottrill, E.; Sheppell, H.; Manbachi, A.; et al. Three-dimensional assessment of robot-assisted pedicle screw placement accuracy and instrumentation reliability based on a preplanned trajectory. J. Neurosurg. Spine 2020, 33, 519–528. [Google Scholar] [CrossRef]
  83. Stern, C.; Sommer, S.; Germann, C.; Galley, J.; Pfirrmann, C.W.; Fritz, B.; Sutter, R. Pelvic bone CT: Can tin-filtered ultra-low-dose CT and virtual radiographs be used as alternative for standard CT and digital radiographs? Eur. Radiol. 2021, 31, 6793–6801. [Google Scholar] [CrossRef]
  84. Kasten, Y.; Doktofsky, D.; Kovler, I. End-to-end convolutional neural network for 3D reconstruction of knee bones from bi-planar X-ray images. In Proceedings of the Machine Learning for Medical Image Reconstruction: Third International Workshop, MLMIR 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, 8 October 2020; Proceedings 3. Springer: Berlin/Heidelberg, Germany, 2020; pp. 123–133. [Google Scholar]
Figure 1. Overview of the three common surgical steps in orthopaedic interventions on the example of the Periacetabular Osteotomy (PAO): (A) Cutting Bone: Four osteotomies are performed, namely the ischial (1), pubic (2), supra- (3) and retroacetabular (4) osteotomy, to mobilise the acetabular fragment (in blue). (B) Anatomy Manipulation and Repositioning: Repositioning of the acetabular fragment (in blue) to restore the physiological anatomy. The transformation ${}^{F_{post}}T_{F_{pre}}$ represents the relative repositioning of the fragment from $F_{pre}$ to $F_{post}$, where $F_{pre}$ and $F_{post}$ are the respective local coordinate systems of the cropped pre- and postoperative CT images. (C) Implant Placement: Fixing the acetabular fragment in its new position using screw implants (red). The bottom row shows slices of a postoperative CT: in (A), the supraacetabular (yellow) and pubic (orange) osteotomies are highlighted; (B) shows an overlay of the pre- and postoperative CTs to indicate the acetabular transformation; and in (C), the cross section of a screw is visible in red.
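The relative repositioning ${}^{F_{post}}T_{F_{pre}}$ in Figure 1B is a rigid transform between two local coordinate systems. The following minimal sketch (an illustration, not the implementation used in this work) shows how such a transform can be composed from the poses of $F_{pre}$ and $F_{post}$ in a common CT frame, using 4×4 homogeneous matrices; all function names are hypothetical.

```python
import numpy as np

def compose_repositioning(T_pre_to_world, T_post_to_world):
    """Relative fragment repositioning as a 4x4 homogeneous transform, given the
    poses of the local fragment frames F_pre and F_post in a common (CT/world)
    frame. Each T_* maps local fragment coordinates to world coordinates."""
    # ^(F_post)T_(F_pre) maps points from F_pre into F_post:
    # first F_pre -> world, then world -> F_post.
    return np.linalg.inv(T_post_to_world) @ T_pre_to_world

def rotation_and_translation(T):
    """Split a rigid 4x4 transform into a rotation angle [deg] and a translation norm [mm]."""
    R, t = T[:3, :3], T[:3, 3]
    angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
    return angle, float(np.linalg.norm(t))
```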
Figure 2. Example of a postoperative AP pelvic radiograph after PAO, showing the lateral center-edge angle (LCEA) and acetabular index (AI) measurement. On the left hip joint, the area where the supraacetabular osteotomy has been performed is shown in yellow. The area of the pubic osteotomy is marked in orange.
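For readers unfamiliar with these two radiographic measures, the sketch below computes them from 2D landmark coordinates under their standard definitions (LCEA: angle between the vertical through the femoral head centre and the line to the lateral acetabular rim; AI: angle between the horizontal and the acetabular sourcil). This is an illustrative sketch only; the landmark names and image coordinate convention are assumptions, not taken from this work.

```python
import numpy as np

def _angle_deg(v1, v2):
    """Unsigned angle between two 2D vectors in degrees."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def lcea(head_center, lateral_rim):
    """Lateral center-edge angle: angle between the vertical through the femoral
    head center and the line to the lateral acetabular rim. Points are (x, y)
    in image coordinates with y pointing downwards (cranial = -y)."""
    vertical = np.array([0.0, -1.0])
    to_rim = np.asarray(lateral_rim, float) - np.asarray(head_center, float)
    return _angle_deg(vertical, to_rim)

def acetabular_index(sourcil_medial, sourcil_lateral):
    """Acetabular index (Tönnis angle): angle between the horizontal and the line
    connecting the medial and lateral edges of the acetabular sourcil (unsigned;
    clinical sign conventions are not modelled here)."""
    horizontal = np.array([1.0, 0.0])
    sourcil = np.asarray(sourcil_lateral, float) - np.asarray(sourcil_medial, float)
    return _angle_deg(horizontal, sourcil)
```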
Figure 3. An overview of the proposed pipeline consisting of three main components: (A) Osteotomy Detection and Quantification, where the four osteotomy planes (1–4) are shown in yellow, green, blue and red. (A.1,A.2) represent how the two registration masks, used in (B), were created. (B) Quantification of Anatomy Repositioning and (C) Implant Quantification.
Figure 4. An overview of the three steps towards implant quantification. (1) Thresholding the postoperative CT yields the screw point clouds. (2a) The point clouds from (1) are the input for the Hough transform algorithm, which finds the screw axis and center point. In (2b), the vector directions are verified to point towards the screw entry points using the CT coordinate system. (3) Entry points are determined by fast voxel traversal based on ray tracing. The center point found through the Hough transform (shown in pink) is the starting point for ray tracing along the screw axis (green) until the end of the screw (blue) is reached.
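A minimal sketch of these three steps is given below, assuming a single screw per point cloud. It uses an HU threshold of 2500 (an assumption, not a value from this work), a PCA line fit as a simple stand-in for the iterative 3D Hough transform of Dalitz et al. [69], and uniform stepping along the axis as a simplified substitute for Amanatides–Woo voxel traversal [70]. Function names and parameters are illustrative only.

```python
import numpy as np
import SimpleITK as sitk

METAL_HU = 2500  # assumed metal threshold; not taken from the paper

def screw_points(post_ct):
    """Step 1: threshold the postoperative CT and return metal voxel positions [mm].
    Assumes one screw; in practice, connected components would separate screws."""
    arr = sitk.GetArrayFromImage(post_ct)                   # numpy order (z, y, x)
    idx = np.argwhere(arr > METAL_HU)[:, ::-1]              # -> ITK index order (x, y, z)
    return np.array([post_ct.TransformIndexToPhysicalPoint([int(i) for i in p]) for p in idx])

def fit_axis(points):
    """Step 2: screw center and axis via PCA (stand-in for the 3D Hough transform)."""
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center)
    return center, vt[0] / np.linalg.norm(vt[0])

def trace_end(post_ct, center, direction, step=0.5):
    """Step 3: march along the axis from the center point until leaving the metal,
    a simplified substitute for Amanatides-Woo voxel traversal along a ray."""
    arr = sitk.GetArrayFromImage(post_ct)
    p = np.array(center, float)
    while True:
        p += step * np.asarray(direction, float)
        i = post_ct.TransformPhysicalPointToIndex(p.tolist())
        inside = all(0 <= i[d] < post_ct.GetSize()[d] for d in range(3))
        if not inside or arr[i[2], i[1], i[0]] <= METAL_HU:
            return p  # first point past the screw end along this direction
```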
Figure 5. Illustration of the measures used to evaluate the location of each osteotomy plane. (A) shows the starting points of the osteotomies $P_{M1}$–$P_{M5}$ (manual) and $P_{A1}$–$P_{A5}$ (automatic). (B) shows the projection plane $P_L$ and the connecting vectors $V_{M1}$–$V_{M3}$ and $V_{A1}$–$V_{A3}$ between projected points. (C) shows the most posterior points on plane 4, $P_{M5}$ and $P_{M4}$, as well as the normal vectors of plane 4, $N_M$ and $N_A$.
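The evaluation measures in Figure 5 reduce to point distances and angles between (projected) vectors. A short numpy sketch of these computations follows; variable names mirror the figure labels but are otherwise hypothetical.

```python
import numpy as np

def dist_3d(p_manual, p_auto):
    """3D distance [mm] between corresponding manual and automatic osteotomy points."""
    return float(np.linalg.norm(np.asarray(p_manual, float) - np.asarray(p_auto, float)))

def _angle_deg(v1, v2):
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def project_to_plane(v, plane_normal):
    """Project a vector onto the evaluation plane P_L with the given normal."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    v = np.asarray(v, float)
    return v - np.dot(v, n) * n

def angle_2d(v_manual, v_auto, plane_normal):
    """2D angle [deg] between manual and automatic connecting vectors after
    projection onto P_L."""
    return _angle_deg(project_to_plane(v_manual, plane_normal),
                      project_to_plane(v_auto, plane_normal))

def angle_3d_normals(n_manual, n_auto):
    """3D angle [deg] between the manual and automatic normals of plane 4."""
    return _angle_deg(np.asarray(n_manual, float), np.asarray(n_auto, float))
```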
Figure 6. Visualisation of the best (case 16) and worst (case 14) cut detection outcome. The manual plane placement is shown in red and the automatic solution in blue.
Figure 7. Results of quantifying anatomy repositioning for a typical case (i.e., case 19). In (1a–2b), the preoperative CT (pelvis highlighted in green) and the postoperative CT are superimposed before and after each registration step (i.e., (1a): before pre-post alignment and (1b): after pre-post alignment). (1): Pre-Post Alignment. (1a) shows the overlay of the pre- and postoperative CTs after initialisation, which is the starting position for the pre-post alignment. (1b) shows the end position of the pre-post alignment, where the pre- and postoperative CTs are aligned. (2): For Fragment Alignment, the CT images are cropped around the fragment. (2a) shows the starting position for the second registration, which results from step (1). (2b) shows the final position, where the pre- and postoperative fragments are aligned. (3) shows the end positions of the manual and automatic fragment alignment. The 3D postoperative bone model is shown in violet, the automatic solution in blue and the manual result in red.
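The two registration steps can be prototyped with SimpleITK [75,76], restricting the similarity metric to the pelvis or fragment region via a fixed-image mask. The sketch below is illustrative only: the rigid transform, normalized cross-correlation metric [77] and optimizer settings are plausible choices, not the exact configuration used in this work.

```python
import SimpleITK as sitk

def rigid_register(fixed, moving, fixed_mask=None):
    """One rigid (6-DoF) registration step driven by a normalized cross-correlation
    metric; all parameter values are illustrative, not this study's settings."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsCorrelation()                         # NCC-based similarity
    if fixed_mask is not None:
        reg.SetMetricFixedMask(fixed_mask)               # evaluate metric inside the mask only
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    init = sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform(),
                                             sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    return reg.Execute(fixed, moving)

# Coarse pre-post alignment of the whole pelvis, then fine alignment of the fragment.
# pre_ct, post_ct, pelvis_mask and fragment_mask are assumed sitk.Image inputs.
# t_coarse = rigid_register(post_ct, pre_ct, fixed_mask=pelvis_mask)
# pre_aligned = sitk.Resample(pre_ct, post_ct, t_coarse, sitk.sitkLinear, -1000.0)
# t_fine = rigid_register(post_ct, pre_aligned, fixed_mask=fragment_mask)
```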
Figure 8. Registration results for four example cases: the postoperative bone is shown in violet, the automatically transformed fragment in blue and the manually registered fragment in red. Case 6 has the highest overall Dice score (0.74), case 17 the lowest Dice score for the automatic solution (0.47), case 22 the most similar Dice scores for both solutions (0.57) and case 15 the least similar Dice scores for the automatic and manual solutions (0.65 and 0.58, respectively).
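The Dice score reported in Figure 8 quantifies the volumetric overlap between the transformed fragment and the postoperative fragment. A minimal sketch, assuming both fragments are available as binary numpy masks on the same voxel grid:

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary 3D masks, e.g. the
    automatically transformed fragment versus the postoperative fragment."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```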
Figure 9. Example of a manual (red) and automatic (blue) prediction of the screw head location and center line.
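The screw comparison in Figure 9 reduces to a head-point distance and an axis angle. A short sketch, with the sign ambiguity of the axis direction removed via the absolute dot product (variable names are hypothetical):

```python
import numpy as np

def screw_deviation(head_manual, axis_manual, head_auto, axis_auto):
    """Distance [mm] between manual and automatic screw head locations and
    angular deviation [deg] between the two screw axes (direction-sign invariant)."""
    d = float(np.linalg.norm(np.asarray(head_manual, float) - np.asarray(head_auto, float)))
    a1 = np.asarray(axis_manual, float) / np.linalg.norm(axis_manual)
    a2 = np.asarray(axis_auto, float) / np.linalg.norm(axis_auto)
    ang = float(np.degrees(np.arccos(np.clip(abs(np.dot(a1, a2)), 0.0, 1.0))))
    return d, ang
```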
Table 1. Mean difference between the manual and automatic cut detection across all patients. The measurements were calculated according to Section 2.3.1.
| Measure | Manual | Automatic | Mean | σ | Min | Max |
| 3D distance [mm] | $P_{M1}$ | $P_{A1}$ | 17.0 | 13.0 | 2.9 | 54.9 |
|  | $P_{M2}$ | $P_{A2}$ | 12.8 | 7.4 | 1.8 | 27.8 |
|  | $P_{M3}$ | $P_{A3}$ | 15.0 | 12.1 | 4.3 | 71.4 |
|  | $P_{M4}$ | $P_{A4}$ | 7.3 | 6.3 | 1.0 | 25.5 |
|  | $P_{M5}$ | $P_{A5}$ | 8.7 | 6.7 | 1.3 | 23.5 |
| 2D angle [°] | $V_{M1}$ | $V_{A1}$ | 7.0 | 5.4 | 0.3 | 22.2 |
|  | $V_{M2}$ | $V_{A2}$ | 6.9 | 5.4 | 1.6 | 23.4 |
|  | $V_{M3}$ | $V_{A3}$ | 21.9 | 17.7 | 0.6 | 73.3 |
| Abs. 2D angle deviation [°] | $SR_M$ | $SR_A$ | 9.9 | 8.5 | 0.0 | 36.5 |
|  | $RI_M$ | $RI_A$ | 20.0 | 15.2 | 0.3 | 49.9 |
| 3D angle [°] | $N_M$ | $N_A$ | 29.2 | 11.2 | 9.6 | 54.4 |