Review

A Comprehensive Survey on Bone Segmentation Techniques in Knee Osteoarthritis Research: From Conventional Methods to Deep Learning

by Sozan Mohammed Ahmed 1 and Ramadhan J. Mstafa 1,2,*
1 Department of Computer Science, Faculty of Science, University of Zakho, Duhok 42002, Iraq
2 Department of Computer Science, College of Science, Nawroz University, Duhok 42001, Iraq
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(3), 611; https://doi.org/10.3390/diagnostics12030611
Submission received: 6 January 2022 / Revised: 8 February 2022 / Accepted: 23 February 2022 / Published: 1 March 2022
(This article belongs to the Special Issue Knee Osteoarthritis: Current Challenges in Diagnosis and Management)

Abstract:
Knee osteoarthritis (KOA) is a degenerative joint disease that significantly affects middle-aged and elderly people. On medical images, the assessment of KOA is based primarily on changes in the hyaline cartilage. However, technical bottlenecks such as noise, artifacts, and modality differences pose enormous challenges for an objective and efficient early diagnosis. The correct prediction of arthritis is therefore an essential step for effective diagnosis and the prevention of acute arthritis, since early diagnosis and treatment can help slow the progression of KOA. Predicting the development of KOA remains a difficult and urgent problem that, if addressed, could accelerate the development of disease-modifying drugs and, in turn, help to avoid millions of total joint replacement procedures each year. In knee joint research and clinical practice, segmentation approaches play a significant role in KOA diagnosis and categorization. In this paper, we seek to give an in-depth understanding of a wide range of the most recent methodologies for knee articular bone segmentation, ranging from traditional techniques to deep learning (DL)-based techniques; segmentation makes it possible to estimate the rate of articular cartilage loss, which is used in clinical practice to assess disease progression and morphological change. The purpose of this work is to give researchers a general review of the methodologies currently available in the area. It will therefore help researchers who want to conduct research in the field of KOA, as well as highlight deficiencies and potential pitfalls in clinical application. Finally, we highlight the diagnostic value of deep learning for future computer-aided diagnostic applications to complete this review.

1. Introduction

KOA is the most common form of arthritis and a leading cause of activity limitation and physical disability in older adults [1]. Clinically, KOA is characterized by the gradual wearing down of the protective cartilage that cushions the ends of the bones, and by structural changes in joint tissues, including deformed bone and cartilage [2]. The main structural features of knee OA include osteophyte formation, joint space narrowing (JSN), and subchondral sclerosis. In addition, pain is the main symptom of KOA, which drives patients to seek medical treatment and reduces their quality of life [3].
Moreover, KOA appears mostly in people over 55 years of age, with a higher prevalence in people over 65 years of age [4,5]. Indeed, according to global population research, it is considered one of the leading causes of disability, affecting 3.8 million people worldwide [6]. Furthermore, researchers estimate that, with the rising global number of aging people, at least 130 million people will suffer from KOA by the year 2050 [7]. Nonetheless, early detection and treatment can help reduce KOA progression in the elderly and enhance their quality of life.
Furthermore, based on the literature, there is an urgent need for clinical tools that will be able to diagnose and potentially predict KOA in relation to its recognized clinical and biological heterogeneity. Because of the rising incidence of KOA and its impact on functional limitations, health-related quality of life, health-care consumption, and total joint arthroplasty, clinical and scientific techniques that can accurately identify KOA early in its development are in high demand. The diagnosis of KOA is currently based on reported patient symptoms and X-ray imagery [8]. Additionally, more advanced imaging modalities such as CT and MRI are available, which capture the 3D structure of the knee joint. However, these modalities are only accessible at large medical facilities, and the cost of the scan renders them unsuitable for routine KOA diagnosis. Therefore, radiography remains the gold standard for KOA screening due to its safety, cost-effectiveness, speed, and wide accessibility.
In addition, segmentation techniques in knee joint research and clinical practice play an important role in KOA diagnosis and classification [9]. In particular, machine learning and deep learning approaches have been extensively used in medical imaging to address problems of classification, detection, and related issues without the involvement of a radiologist [10,11]. Figure 1 presents the taxonomy of knee bone segmentation methods.
The contributions of our study are summarized as follows:
  • This paper provides a comprehensive survey and analysis of a wide range of state-of-the-art recent methodologies for knee bone segmentation. Moreover, we present quantitative results and the findings of other studies, in order to evaluate their potential and limitations;
  • We perform an extended analysis of knee bone segmentation methods, taking the analysis to a deeper level by breaking the approaches down into their building blocks and emphasizing the algorithmic aspects;
  • Unlike other studies, we not only investigate the existing methods, but also provide recommendations and future directions to enhance them;
  • Finally, we highlight deep learning’s diagnostic value as the key to future computer-aided diagnosis applications to conclude this review.
The rest of this paper is divided into four sections. Section 2 presents knee bone segmentation techniques, Section 3 presents the approach used to construct this review, Section 4 provides discussion and recommendations, and Section 5 concludes the paper.

2. Knee Bone Segmentation

Osteoarthritis of the knee results in a constant loss of mineralization, making the bone susceptible to structural deformation [13]. Some structural changes can be seen on X-rays, including bone marrow lesions (BMLs), subchondral bone attrition (SBA), and osteophytes, which are among the radiologically apparent indicators of OA used in clinical studies. According to a study by Hunter et al. [14], subchondral BMLs are more evident in knee regions with increased biomechanical loading. In contrast, other studies showed that the development of BMLs was associated with the loss of cartilage [13,15,16]. Bone segmentation is therefore required for the detection and characterization of these biomarkers. Consequently, segmentation of the knee bone serves the following goals: firstly, to compute a bone model to investigate the effect of biomechanical stress at different localized knee sites; secondly, to quantify and monitor the changes in bone shape and surface associated with structural deformations; and finally, to produce a bone–cartilage interface (BCI) in order to extract cartilage tissue from the bone surface [15,17,18,19]. Figure 2 shows an example of a 2D bone structure and segmentation result.
Furthermore, prior knowledge of knee joint anatomy and fully automated segmentation algorithms are required for accurate bone surface detection. In the following subsections, a complete list of knee bone segmentation models is provided, from traditional models to deep learning.

2.1. Deformable Model-Based

Deformable models are semi-automated and extensively utilized approaches in clinical applications [20]. The mathematical foundation of deformable models lies at the intersection of geometry, physics, and approximation theory. Geometry serves to describe the shape of objects; physics constrains the way the shape may vary across time and space; and optimal approximation theory provides the formal basis on which the models adapt to observed data. Moreover, deformable models differ in how they represent evolving curves and surfaces: geometric deformable models represent evolving curves and surfaces implicitly, as level-set functions, while parametric deformable models represent curves and surfaces explicitly in their parametric form, using energy-minimization or dynamic-force formulations [21].
Furthermore, the basic deformable model has been extended using prior shape information. Some notable extensions are the following: active shape models (ASMs) [22], statistical shape models (SSMs) [23], and active appearance models (AAMs) [24]. Concretely, these deformable models generally need training in order to capture the shape variability or appearance of the target object. Manual interaction allows the integration of prior information, such as a collection of landmarks used to build a point distribution model (PDM). However, the most popular deformable model nowadays is commonly called the “snake”, where “deformable contour models”, or snakes, represent a particular instance of the general multidimensional deformable model theory [21,25]. Snakes are planar contours that are useful for several image analysis purposes; e.g., they are commonly used to estimate object boundary positions and shapes in images, based on the plausible assumption that boundaries are piecewise continuous or smooth [26].
For instance, when using an active shape model to segment an image, the following steps are employed [27]:
Step 1: Given a rough initial approximation, a model instance can be fitted to the image. By choosing a set of shape parameters, b, we determine the shape of the object in an object-centered coordinate frame;
Step 2: Using Equation (1), we define the position, orientation, and scale of an instance A of the model in the image frame:
$$A = T_{A_t, Y_t, s, o}(\bar{A} + Pb) \tag{1}$$
where the Euclidean transformation T defines the position $(A_t, Y_t)$, orientation (o), and scale (s) of the model as it appears in the image, $\bar{A}$ is the mean shape, and P holds the modes of shape variation weighted by the parameters b. The following is an iterative approach to improving the fit between an instance, A, and an image:
Active Shape Model Algorithm
  • Find the best nearby match A′i for each model point Ai by examining a region of the image around Ai;
  • Update the parameters (At, Yt, s, o, b) to best fit the newly found points A′;
  • Apply constraints on the parameters, b, to ensure plausible shapes (e.g., limit $|b_i| < 3\sqrt{\lambda_i}$);
  • Repeat until convergence.
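The iterative fitting loop above can be sketched in a few lines of Python. The sketch below is illustrative only: the pose transformation T is omitted, and the image search around each point is replaced by a fixed vector of candidate target points; both are simplifying assumptions, not part of the original ASM formulation.

```python
import numpy as np

def fit_asm(mean_shape, P, eigvals, target, n_iter=10):
    """Toy Active Shape Model fit (pose transform omitted for brevity).

    mean_shape : (2n,) mean shape vector (x1, y1, x2, y2, ...)
    P          : (2n, k) matrix holding the k principal shape modes
    eigvals    : (k,) eigenvalues of the shape model
    target     : (2n,) candidate boundary points "found" in the image
    """
    b = np.zeros(P.shape[1])
    limit = 3.0 * np.sqrt(eigvals)          # plausibility constraint |b_i| < 3*sqrt(lambda_i)
    for _ in range(n_iter):
        shape = mean_shape + P @ b          # current model instance
        # In a full ASM, the image would be searched around `shape` here;
        # we simply treat `target` as the best nearby matches.
        b = P.T @ (target - mean_shape)     # update the shape parameters
        b = np.clip(b, -limit, limit)       # keep the shape plausible
    return mean_shape + P @ b
```

Note how the clipping step enforces the plausibility constraint from the algorithm above: a target far outside the training distribution is pulled back to the nearest plausible shape.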
Figure 3 demonstrates the ASM search for a face in two cases. Moreover, deformable models have been widely utilized for knee bone segmentation because of the benefit of their consistency in shape and size. In addition, these models attempt to recognize low-level image data, such as borders and intensity regions, whereas in MRI these characteristics are not easily recognizable and the resulting segmentation may be defective [27,28]. These deformable models include the following: SSM [17], the active contour model (ACM) [29,30], and AAM [31].
In 2007, Fripp et al. [32] presented a study that represented a significant step towards automated precise cartilage segmentation; specifically, the automatic segmentation of the bones and extraction of the bone–cartilage interface (BCI) in the knee. The segmentation was performed utilizing a 3D ASM, which was initialized through a precise atlas registration. Then, the BCI was extracted from image information and prior knowledge about the probability that each point was part of the interface. Moreover, the patella, tibia, and femur surfaces were taken from a database of manually segmented images using the shrink-wrap surface extraction technique, with 2562, 10,242, and 10,242 points, respectively. Those surfaces were then utilized to create SSMs for each knee bone, and the propagated surface's pose and shape parameters were trained to predict pose and shape variation inside the SSM. On the other hand, in 2010, Vincent et al. [33] proposed a fully automated model-based approach that segmented bone and cartilage in MR images of the knee. The presented segmentation approach was based on an AAM built from hand-selected samples of the Osteoarthritis Initiative data source. Moreover, the model was based on sagittal 3D DESS water-excitation images of the OAI database (available at https://oai.ucsf.edu/ (accessed on 21 July 2021) for public access), using 80 individuals from this dataset. The following are the numbers of correspondence points generated by this process: femur 60,457, tibia 39,239, femoral cartilage 37,249, and tibial cartilage 20,459. The results, obtained without adjustment to the grand-challenge protocol, were highly promising and showed the strength of the system.
A concept similar to that of Fripp et al. [32], automated bone segmentation using a statistical model, was presented in 2010 by Seim et al. [34], where SSMs were produced for the tibia and femur. The SSM can provide a strong bone model, and was often used to extract the BCI from the bone's surface, in addition to forecasting the occurrence of radiological KOA. In addition, in 2011, Bindernagel et al. [35] introduced an articulated statistical shape model (ASSM) of the human knee. The model included the statistical shape variation and an explicit model of the degrees of freedom of the physiological joint movements. Moreover, they presented a knee joint segmentation technique for medical image data based on this model. The capability of the model was evaluated on a collection of 40 clinical MRI data sets with available manual expert segmentations. Furthermore, in 2013, Neogi et al. [15] trained AAMs with 96 knees to capture the changes in the shape and gray-level texture of the femur, tibia, and patella. These data were then encoded as principal components, and the AAM models were built using a total of 69, 66, and 59 principal components for the femur, tibia, and patella, respectively. Typically, deformable models provide robustness to both image noise and boundary gaps, and allow boundary elements to be combined into a coherent and consistent mathematical description. In this context, deformable models, particularly their extensions, are capable of segmenting the knee joint [36].

2.2. Graph-Based Methods

The term “graph-based segmentation methods” refers to a group of algorithms in which the pixels or voxels of an image, as well as the relationships between them, are represented as a weighted undirected graph. Let G = (V, E) be a graph, where each pixel is represented as a node, v ∈ V, and the interaction between two neighboring nodes is represented as an edge, e ∈ E ⊆ V × V [37]. Each edge is assigned a weight [38]; for instance, the edge between the nodes vA and vB has the associated weight w(vA, vB), reflecting a similarity measure between the nodes. The similarity criterion is calculated from color, texture, spatial distribution, intensity, hue, or any other characteristic shared between two vertices [39]. Graph-based segmentation attracted increased attention after the introduction of the normalized cut; in this context, a graph separation is called a “cut”. A typical binary graph segmentation divides the graph into two subsets, Gm and Gn, where Gm ∪ Gn = V and Gm ∩ Gn = ∅, by minimizing the total similarity between Gm and Gn. We calculate this as the total weight of the deleted edges (Equation (2)):
$$\mathrm{Cut}(G_m, G_n) = \sum_{p \in G_m,\; q \in G_n} w(p, q) \tag{2}$$
where Gm and Gn are the vertex sets of the two subgraphs, and the total weight of the removed edges is called the cut. Minimizing this cut makes the subgraphs maximally dissimilar. However, cutting the graph into sub-graphs optimally is not easy; a possible solution is to minimize the cut in Equation (2) through optimization methods. A comprehensive review of graph-based segmentation methods is provided in [40,41].
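To make the cut cost concrete, the following sketch evaluates Equation (2) for a binary partition and finds the minimum non-trivial cut of a tiny graph by brute force. This is purely illustrative: practical graph-cut segmentation uses max-flow/min-cut solvers rather than enumeration, and the weight matrix used here is an assumed toy example.

```python
import numpy as np

def cut_cost(W, labels):
    """Cost of a binary partition: sum of the weights of edges crossing the cut.

    W      : (n, n) symmetric similarity (edge-weight) matrix
    labels : (n,) 0/1 partition assignment for each node
    """
    labels = np.asarray(labels, dtype=bool)
    # Edges with one endpoint in Gm (labels == 0) and one in Gn (labels == 1)
    return W[np.ix_(~labels, labels)].sum()

def best_binary_cut(W):
    """Brute-force the minimum non-trivial cut on a tiny graph (illustration only)."""
    n = W.shape[0]
    best = (np.inf, None)
    for mask in range(1, 2 ** n - 1):          # skip the two trivial partitions
        labels = [(mask >> i) & 1 for i in range(n)]
        best = min(best, (cut_cost(W, labels), labels), key=lambda t: t[0])
    return best
```

On a four-node graph with two tightly connected pairs joined by a single weak edge, the minimum cut correctly separates the two pairs at the cost of that weak edge; it is this tendency to prefer cheap cuts around small node sets that motivated the normalized cut.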
In 1993, Wu and Leahy [42] implemented graph cuts (gcuts) for image segmentation and developed a cost function, i.e., the minimum cut shown in Equation (2), searching over every possible partition to split the graph into two components at the lowest cost and thus achieve the optimal solution. However, the algorithm was biased towards splitting off small sets of nodes, so Shi and Malik [43] suggested the normalized cut to tackle this issue. Boykov and Jolly also proposed graph cuts in 2001. Graph cuts can be used interactively or automatically in image segmentation algorithms. Interactive gcuts are widely used to segment biological images, incorporating prior user information into the local and border characteristics of images. Various studies [19,44,45,46,47] have therefore employed gcuts for extracting the knee bone from MR images. Furthermore, a content-based refinement operation can be used to improve the segmentation output of the GC algorithm; Figure 4 illustrates an example of this operation [45]. Yin et al. [19] introduced a new method for the simultaneous segmentation of multiple interacting surfaces belonging to multiple interacting objects, called LOGISMOS (Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces). The technique was based on the algorithmic incorporation of various spatial interrelationships in a single n-dimensional graph, followed by graph optimization that resulted in a globally optimal solution. In addition, the usefulness and performance of the LOGISMOS technique were proven for the segmentation of bone and cartilage. Although this system was trained on only a few examples, it reached a good performance. The approach for the simultaneous segmentation of the bone and cartilage consists of three steps:
  • Pre-segmentation of bones;
  • Mesh generation and optimization by Gcuts;
  • Co-segmentation of knee bone and cartilage surfaces.
In the LOGISMOS framework, the optimal surface segmentation problem is defined as finding a net surface of a directed graph with minimal total node cost. Furthermore, Park et al. (2009) and Ababneh et al. (2011) suggested automated graph-cut segmentation models in which additional prior information substitutes for manual seeding. Ababneh et al. [45] proposed a novel automated knee bone segmentation system for MRI, which included a content-based, two-pass discrete block discovery mechanism designed to support automation, segmentation initialization, and post-processing. The proposed method worked as follows: blocks were detected by categorizing the content of the image according to its similarity to categories in training information gathered from typical bone structures; the categorized blocks were then utilized to drive an efficient graph-based segmentation algorithm. The results showed that the proposed segmentation approach did not need any user interaction and could differentiate bone from extremely similar surrounding components, such as adipose tissue, with high precision.
Park et al. [44] introduced a fully automated method for the segmentation of bone compartments in MR images of knee joints. The suggested technique efficiently used pre-segmented data for both shape and intensity by applying branch-and-mincut iteratively to a limited subset of shape template configurations. In each iteration, the optimum was computed separately over the whole subset of translation, rotation, and scale parameters, and the move with the least energy was greedily picked. Experimental results showed enhanced accuracy and efficiency compared to applying branch-and-mincut to the whole range of parameters at once and to using shape priors alone, respectively. Sufyan et al. [46] presented a novel graph-cut segmentation method based on efficient content characteristics that requires no user input; experimental findings on real knee MR images showed an average Zijdenbos similarity index of 95% for the scheme. Somasundar et al. [47] presented a graph-cut technique for segmenting the tibia and femur from MRI/CT knee images. The proposed method first used a median filter for noise removal. Then, a 3D model of the tibia and femur was generated from the segmented images for volume rendering, and finally the 3D model was meshed to generate elements. They concluded that this model may be used for analysis and pre-operative knee joint planning. Moreover, conventional semi-automatic graph cutting depends mostly on seeds to initialize and refine the segmentation, leading to considerable manual intervention; in contrast, enhanced automated graph-cut models achieved DSCs of 0.958 [44] and 0.941 [45] and a Zijdenbos similarity index (ZSI) of 0.95 [46].

2.3. Atlas-Based Methods

Atlases act as a regulator of shape, allowing deformations within a reasonable range of variance derived from models of anatomical shape as well as appearance. Atlas-based segmentation includes three essential steps: registration, selection, and propagation. Rohlfing et al. (2005) defined an atlas as one that “incorporates prior anatomical data (i.e., locations and shapes of an anatomical structure), and distinguishes spatial relationships to other anatomical structures”. Atlas techniques are therefore designed to identify anatomical structures by mapping the coordinates of a specific image to a pre-constructed atlas. This step is called the registration process, which assigns the label of each image voxel to the appropriate label in the atlas by looking up the label of its structure. In Equation (3), I denotes the image and A the atlas to which it corresponds, T denotes the coordinate transformation that translates any specified image coordinate, and x ranges over the domain of I, which is mapped onto the atlas A [48]. The mapping is as follows:
$$I(x) \rightarrow A(T(x)) \tag{3}$$
In general, there are four ways to choose an atlas:
  • Single atlas: Utilizes a separate segmented image; also, the selection might be random or based on particular criteria, for example, image quality;
  • Probabilistic atlas (average shape atlas): Plots all of the original individual images on a common reference to produce a median image. Then, the original images are correlated with the first average to produce a new average. The mapping process occurs frequently until convergence;
  • Best atlas: Used to determine the optimal segmentation from the results of the different atlases; one can check the similarity of the image using standardized mutual information and the size of the distortion after registration.
  • Multiple atlases: This method applies various atlases to a raw image; the resulting segmentations are then combined into a final labeling by a “voting rule” decision fusion. This can be implemented via the energy in Equation (4), where the labeling cost C for each label l in {FB, BG, TB} (“FB”, “BG”, and “TB” stand for femur, background, and tibia bone, respectively) is determined by the probability of each label given the image I at voxel site Y:
$$E(u) = \int_D \big( g\,\|\nabla_Y u\| + C\,|\partial_l u| \big)\, \mathrm{d}Y\, \mathrm{d}l, \quad D = \Omega \times \mathcal{L}, \quad u \in [0, 1], \quad u(Y, 0) = 0, \quad u(Y, L) = 1 \tag{4}$$
$$C(Y, l) = -\log\big(P(l \mid I(Y))\big) = -\log\left(\frac{P(I(Y) \mid l)\, P(l)}{P(I(Y))}\right) \tag{5}$$
where u is the multi-label function, $\nabla_Y u = (u_X, u_Y, u_Z)^T$ is its spatial gradient, and $\partial_l u = \partial u / \partial l$ is its gradient in the label direction; g controls the regularization of the boundaries and C sets the labeling cost. Moreover, it is important to note that the background label “BG” is placed in order between the femur label “FB” and the tibia label “TB”, in order to achieve a symmetric formulation. The likelihoods $P(I(Y) \mid \mathrm{FB})$ and $P(I(Y) \mid \mathrm{TB})$ are calculated from the intensity of the image. Since bones appear dark in T1-weighted MR images, we assume the simple model in Equation (6) to estimate the bone likelihoods:
$$P(I(Y) \mid \mathrm{FB}) = P(I(Y) \mid \mathrm{TB}) = \exp(-\beta\, I(Y)) \tag{6}$$
where β is set to 0.02 in the implementation, assuming $I(Y) \in [0, 100]$. To compute the prior terms P(FB) and P(TB) in Equation (5), we use a multi-atlas registration technique followed by label fusion. Assume we have M atlases $A_j$ and their bone segmentations $S_j^{\mathrm{FB}}$ and $S_j^{\mathrm{TB}}$, where $j = 1, 2, \ldots, M$. The registration from an atlas $A_j$ to a query image I is an affine registration $T_j^{\mathrm{affine}}$ followed by a B-spline registration $T_j^{\mathrm{BSpline}}$. A spatial prior of the femur and tibia for the query image is obtained by averaging all M propagated atlas labels:
$$P(\mathrm{FB}) = \frac{1}{M}\sum_{j=1}^{M} \big(T_j^{\mathrm{BSpline}} \circ T_j^{\mathrm{affine}} \circ S_j^{\mathrm{FB}}\big) \tag{7}$$
$$P(\mathrm{TB}) = \frac{1}{M}\sum_{j=1}^{M} \big(T_j^{\mathrm{BSpline}} \circ T_j^{\mathrm{affine}} \circ S_j^{\mathrm{TB}}\big) \tag{8}$$
After computing the spatial priors and local likelihoods, we integrate them into Equation (5) and solve Equation (4) to obtain the three-label bone segmentation. The bone segmentation then aids in locating the cartilage in atlas-based cartilage segmentation [49]. Figure 5 shows an overview of the process of bone segmentation using a multi-atlas.
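The spatial-prior averaging above (the P(FB) and P(TB) terms) reduces, per voxel, to averaging the propagated atlas labels; thresholding that average gives a simple “voting rule” fusion. The sketch below assumes the M binary label maps have already been warped into the query image space, so the registration transforms $T_j$ are not modeled:

```python
import numpy as np

def fuse_atlas_labels(propagated_labels):
    """Average M propagated binary atlas segmentations into a spatial prior,
    then threshold by majority vote.

    propagated_labels : (M, ...) stack of 0/1 label maps already warped
                        into the query image space.
    """
    prior = np.mean(propagated_labels, axis=0)   # per-voxel prior, e.g. P(FB)
    majority = prior > 0.5                       # simple "voting rule" decision
    return prior, majority
```

In the full method described above, the continuous prior (not the thresholded vote) is what enters the energy of Equation (4), so disagreement between atlases is preserved as uncertainty rather than discarded.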
Several groups have used multiple templates for atlas-based knee segmentation (e.g., Lee et al., 2014; Dam et al., 2015). In 2014, Lee et al. [50] developed a fully automated approach for segmenting knee cartilage in magnetic resonance (MR) images, and assessed the method's performance on a public open dataset. The presented segmentation system comprised three procedures: multi-atlas construction, locally weighted voting (LWV), and region adjustment. All training instances were registered to a target image using a non-rigid registration scheme, and the best-matching atlases were picked for the atlas construction process. The results showed that the method avoided the low precision caused by magnetic field inhomogeneity. Dam et al. [51] introduced a fully automated segmentation framework for knee MRI; the framework combined rigid multi-atlas registration with subsequent k-NN-based classification of cartilage voxels, and was trained on manual segmentations of different bone, meniscus, and cartilage combinations. Validation comprised high- and low-field knee MRI cohorts, including the Osteoarthritis Initiative (OAI) and the SKI10 knee segmentation challenge. Empirical results were equivalent to, or better than, previously reported automatic approaches when compared with manual radiologist segmentation.

2.4. Miscellaneous Segmentation Approaches

These approaches comprise additional models for knee bone segmentation, involving ray casting, level sets, edge detection and thresholding, and region growing.
Region growing is a method of segmenting anatomical structures that involves two key concepts: a seed voxel located within the structure to be segmented, and a range of admissible voxel grey-scale intensities that the region may take. A segmentation of an image I into regions $R_t$, t = 1, 2, 3, …, m, must satisfy the following properties:
  • Each region $R_t$ is connected: two voxels $Y_i$ and $Y_j$ belong to the same region if there exists a sequence of voxels ($Y_j$, …, $Y_i$) within the region in which each consecutive pair, $Y_t$ and $Y_{t+1}$, is adjacent;
  • The regions together cover the whole image:
$$I = \bigcup_{t=1}^{m} R_t \tag{9}$$
  • The regions are pairwise disjoint:
$$R_j \cap R_i = \varnothing \quad \text{for } j \neq i \tag{10}$$
If these conditions are met, $H(R_t)$ is true for every t, whereas $H(R_j \cup R_i)$ is false for j ≠ i, where H denotes the homogeneity predicate and R denotes a region. If H(R) is false, the region is split into sub-regions; if $H(R_{t_1} \cup R_{t_2} \cup R_{t_3})$ is true, the regions are merged into one, and this continues until no further splitting or merging is required. Figure 6 shows that the results of the general region-growing algorithm are not good, because fixed parameters limit the algorithm's ability to handle growing regions [52].
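A minimal region-growing implementation, using a seed point and an intensity tolerance as the homogeneity test H, might look as follows. This is a 2D, 4-connected sketch with an assumed fixed tolerance, which illustrates exactly the fixed-parameter limitation noted above:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    lies within `tol` of the seed intensity (the homogeneity test H)."""
    h, w = image.shape
    seed_val = image[seed]
    region = np.zeros((h, w), dtype=bool)
    queue = deque([seed])                    # breadth-first frontier
    region[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx] \
                    and abs(float(image[ny, nx]) - float(seed_val)) <= tol:
                region[ny, nx] = True
                queue.append((ny, nx))
    return region
```

Because `tol` is fixed, the grown region either leaks through weak boundaries (tolerance too large) or stops short of the true structure (tolerance too small), which is why the methods surveyed below augment region growing with edge information or level-set refinement.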
Lee and Chung [54] suggested a multi-phase segmentation of the knee bone that improved the contrast of the bone edges and extracted bone boundary information using a sequence of edge detection, thresholding, and contrast enhancement; to accomplish the final segmentation, this information was fed into a region-growing algorithm. Forty knees were used to assess the model; however, the results were not clear-cut. Dodin et al. [55] developed a fully automated bone segmentation method for MR images based on the ray-casting approach. The proposed method relied on decomposing the MR image into many surface layers to locate the boundaries of the bones, and on the automatic fusion of numerous partial segmentation objects to achieve the final full bone segmentation. Validation analyses were performed on 161 MR images from patients with knee osteoarthritis, and the reported DSC was 0.94 for the femur and 0.92 for the tibia. Similarly, Gandhamal et al. (2017) and Dalvi et al. (2007) performed knee bone segmentation using level-set models. In particular, the region-growing method was utilized by Dalvi et al. [56] to segment the knee bone, after which the segmentation was refined using a Laplacian level-set algorithm; the proposed method was verified by measuring specificity (Spec) and sensitivity (Sens) in two healthy subjects. Gandhamal et al. [57] suggested a fully automated approach for the segmentation of the subchondral bone from knee MR images. In the proposed framework, the preprocessing steps consisted of image contrast optimization and automatic seed point selection, performed on all knee MRI images in the datasets, followed by bone region extraction, boundary-leakage detection, and boundary correction. The performance of the proposed technique was evaluated by measuring sensitivity, specificity, the dice similarity coefficient (DSC), the average surface distance (AvgD), and the root mean squared distance (RMSD).

2.5. Machine Learning Based

Machine learning is the study of how computer algorithms (i.e., machines) may “learn” complicated connections or patterns from experimental data, resulting in (mathematical) models that link a large number of variables to target variables of interest [58]. As mentioned earlier, the ability to analyze complex cases with a huge amount of data and many possible outcomes makes machine learning a valuable tool for KOA. It is noteworthy that machine learning (ML) has been applied in fields such as medicine, robotics, bioinformatics, biochemistry, meteorology, economics, and agriculture. In 2019, the importance of applying ML techniques to KOA was documented by Jamshidi et al. [59] and Klozyk and Matte [60]. Traditional ML systems are applied in two phases, training and testing, as shown in Figure 7. Preprocessing comprises: (1) data cleaning, to reduce noise, inconsistent examples, and missing data; (2) data integration, where various sources of information are available; and (3) data transformation, including normalization and discretization. The feature extraction/selection unit (also referred to as the feature engineering unit) then attempts to generate and/or identify the most informative feature subset, on which the learning model is subsequently applied during the training phase [61].
The feedback loop enables changes to the pre-processing and feature extraction/selection units, which helps the learning model perform even better. During the testing phase, the trained model is presented with previously unseen samples (represented by images or feature vectors) that must be classified. Based on the characteristics contained in each sample, the model makes an appropriate decision (classification or regression). Moreover, the general machine learning framework comprises data and a prediction algorithm. Data is a set of observations used during training and testing, while the prediction algorithm learns patterns from the data to perform certain classification tasks. Classical machine learning employs a collection of hand-crafted discriminative features to characterize the object, and a classifier then assigns the most likely label to each image pixel. In addition, the machine learning family is large and includes supervised learning, semi-supervised learning, unsupervised learning, and reinforcement learning. Supervised and unsupervised learning are the two major learning paradigms used in machine learning.
Supervised learning examines the link between the input space x and the output label y, and it is most commonly applied to regression and classification problems. Common supervised learning algorithms include:
  • Decision Tree: The algorithm is structured like a tree, with branches and nodes. Each branch indicates the outcome, whereas each leaf node represents a class label. The method will sort characteristics in a hierarchical order from the root of the tree to the leaf node [62];
  • Naïve Bayes: The technique is based on the Bayes theorem, which assumes characteristics are statistically independent. The classification is based on the conditional likelihood that a result is produced from the probabilities imposed by the input variables [63];
  • Support Vector Machine: The algorithm aims to find the hyperplane whose distance to the nearest training samples of each category is maximized. This distance, defined by the support vectors on either side of the hyperplane, is called the margin; a larger margin generally yields fewer classification errors [64];
  • Ensemble Learning: A method of grouping multiple weak classifiers to build a strong classifier. It is known that aggregation methods can be used to improve prediction performance. Boosting and bagging are important ensemble learning techniques [65].
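To make the Naïve Bayes entry above concrete, the sketch below implements a Gaussian variant from scratch: per-class priors, means, and variances are estimated under the feature-independence assumption, and each sample is assigned the class with the highest log-posterior. The two-cluster synthetic dataset is purely illustrative, not a medical pipeline:

```python
import numpy as np

def fit_gnb(X, y):
    """Estimate per-class prior, mean, and variance (features assumed independent)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X), Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
    return params

def predict_gnb(X, params):
    """Pick the class maximizing log P(c) + sum_i log N(x_i; mu_ci, var_ci)."""
    preds = []
    for x in X:
        best, best_ll = None, -np.inf
        for c, (prior, mu, var) in params.items():
            ll = np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
            if ll > best_ll:
                best, best_ll = c, ll
        preds.append(best)
    return np.array(preds)

# Two well-separated synthetic 2D clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
acc = (predict_gnb(X, fit_gnb(X, y)) == y).mean()
```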
In unsupervised learning there are no labeled data. As a result, the unsupervised model infers from the input data based on similarity and redundancy reduction during training. Moreover, unsupervised learning is divided into two types: clustering and association rule. The following are some of the most often used unsupervised learning algorithms:
  • K-Means: This algorithm groups data into k-clusters based on their homogeneity, where the center of each cluster is an individual mean value. Moreover, the data values are allocated based on their closeness to the nearest average with the least possible error function during implementation [66];
  • Principal Component Analysis: This method aims to reduce the dimensionality of the data by finding a set of uncorrelated low dimensional linear data representations that have greater variance. This linear dimensional technique is useful for exploring the latent interaction of a variable in an unsupervised environment [67].
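The K-Means procedure described above (assign each point to its nearest centroid, then recompute each centroid as the mean of its cluster) can be sketched in a few lines of NumPy. The deterministic initialization and the two-cluster toy data are illustrative choices, not part of the canonical algorithm:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Lloyd's algorithm: alternate nearest-centroid assignment and mean update."""
    # simple deterministic init: spread initial centroids along the first feature
    order = np.argsort(X[:, 0])
    centroids = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(iters):
        # distance from every point to every centroid, then nearest-centroid labels
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Two well-separated synthetic clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-4, 0.5, (40, 2)), rng.normal(4, 0.5, (40, 2))])
labels, centroids = kmeans(X, 2)
```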
In the last decade, a new generation of frameworks has been introduced to solve challenges connected to knee joint segmentation utilizing learning-based methodologies or machine learning algorithms. The goal of the learning-based method is to learn the features of each pixel I_j from the data and assign a class label l_j ∈ {1, 2, …, K} to I_j. From the probabilistic standpoint, these learning-based methods are trained on a set of labeled pixels to estimate the conditional probability P(l_j | I_j). Figure 8 demonstrates segmentation results for all three cartilage compartments using the joint support vector machine (SVM)-discriminative random field (DRF) model with FV1, FV2, FV3, FV4, and FV5. Feature vector 1 (FV1) is made up of four-dimensional normalized intensity data from multi-contrast MR images (from all four MR sequences); feature vector 2 (FV2) consists of one-dimensional normalized intensity values of single-contrast MR images from the FS SPGR series and six-dimensional local image structure-based features; feature vector 3 (FV3) consists of four-dimensional normalized intensity values of multi-contrast MR images and 24-dimensional features based on the local image structure; feature vector 4 (FV4) consists of four-dimensional normalized intensity values of multi-contrast MR images and 3D geometric features; feature vector 5 (FV5) consists of four-dimensional normalized intensity values of multi-contrast MR images, 24-dimensional local image structure-based features, and 3D geometric features of multi-contrast MR images.
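In this per-pixel formulation, once a model produces class scores for each pixel, P(l_j | I_j) is obtained with a softmax over the class axis and the hard label l_j is the argmax. A schematic NumPy sketch, with random scores standing in for a trained model (the class count and image size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
K, H, W = 3, 4, 4                      # e.g. background, femur, tibia
scores = rng.normal(size=(K, H, W))    # raw per-class scores for each pixel

# softmax over the class axis gives P(l_j | I_j) for every pixel j
p = np.exp(scores - scores.max(axis=0, keepdims=True))
p /= p.sum(axis=0, keepdims=True)

label_map = p.argmax(axis=0)           # hard label l_j in {0, ..., K-1} per pixel
```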
Furthermore, several studies have used machine learning techniques for the diagnosis and prediction of knee osteoarthritis. Brahim et al. [69] presented a computer-aided diagnostic method for early knee osteoarthritis identification utilizing knee X-ray imaging and machine-learning algorithms. The proposed approach was implemented as follows: first, preprocessing of the X-ray images in the Fourier domain was performed using a circular Fourier transform; then, multivariate linear regression (MLR) was applied to the data to decrease the variability between patients with OA and healthy participants; for the feature extraction/selection stage, an independent component analysis (ICA) was used to reduce the dimensionality; finally, random forest and Naïve Bayes classifiers were used for the classification task. Furthermore, 1024 knee X-ray images from the public Osteoarthritis Initiative (OAI) database were used to test this innovative image-based method. The results demonstrated that the suggested method had a high predictive classification rate for OA detection (accuracy of 82.98 percent, sensitivity of 87.15 percent, and specificity of up to 80.65 percent). Kubkaddi and Ravikumar [70] presented an automated diagnosis of knee osteoarthritis using a classifier based on support vector machines. Various textural and statistical characteristics were taken into account along with thickness when training the algorithm. The results showed that the accuracies of the SVM with RBF kernel, SVM with linear kernel, and SVM with polynomial kernel were 95.45 percent, 95.45 percent, and 87.8 percent, respectively. In addition, Du et al. [71] sought to uncover hidden biological information in knee MR images that could be used to predict osteoarthritis (OA). The presented study calculated Cartilage Damage Index (CDI) information from 36 informative sites on the tibial and femoral cartilage compartments using 3D MR imaging and processed the feature set using PCA analysis.
Four machine learning methods (support vector machine (SVM), artificial neural network (ANN), Naïve Bayes, and random forest) were employed to predict the progression of OA, which was measured by the change of Kellgren and Lawrence (KL) grade, Joint Space Narrowing on the Lateral compartment (JSL) grade, and Joint Space Narrowing on the Medial compartment (JSM) grade. The findings of the experiments indicated that the medial feature set yielded higher prediction performance than the lateral feature set and that the 36-dimensional total feature set generated the greatest prediction performance of all the feature sets. Kashyap et al. [72] developed a novel method based on hierarchical RF classifiers to learn the appearance of cartilage regions and their boundaries. A neighborhood approximation forest was used first to provide a contextual feature for a second-level RF classifier, which additionally analyzed local features and generated location-specific costs for the layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) framework. The data were prepared using the just-enough interaction (JEI) approach, which provides fast and accurate post-processing. Halilaj et al. [73] presented a model for the longitudinal progression of KOA and built a prognostic tool that used data collected in one year to predict disease progression over eight years. Specifically, the proposed model used a mixed-effects model and eight years of data from the Osteoarthritis Initiative. Moreover, the presented method built LASSO regression models based on clinical data gathered within the first year to predict the likelihood of belonging to each cluster. Based on joint space narrowing, subjects were grouped as progressing or not progressing. In addition, based on pain scores, they were grouped as stable, improving, or worsening.

2.6. Deep Learning-Based

Deep learning is a branch of machine learning that deals with algorithms inspired by the structure and function of the brain, creating new architectures by transferring feature engineering (the process of converting raw data into features) onto the underlying learning system [74]. Moreover, it is a sophisticated machine learning model with automated hierarchical feature representation learning capability. Its general architecture consists of an input layer, hidden (feature extraction) layers, and an output (classification) layer [75]. From this standpoint, separate feature extraction and selection stages are discarded in favor of a completely trainable system that starts with raw or pre-processed input (e.g., image pixels or time-series) and ends with the final output of recognized objects or predicted values. Deep learning has recently received a lot of interest because of its strong analytical power, capacity for learning features automatically, and best-in-class performance in handling challenging issues. Figure 9 shows a comparison between traditional machine learning and deep learning.
Furthermore, deep NNs make use of deep architectures, stacked hidden modules, and nonlinear activation functions to model complex data, while one of their most attractive aspects is that they automate feature engineering, thus mitigating the need for domain expertise and hand-crafted feature extraction. Deep learning uses many hidden layers to build nonlinear mappings. As simplified in Figure 10b, one hidden layer with three neurons is added, and the mapping between input and output is modeled by the following equations [77]. Moreover, Figure 11 describes the process of using deep learning for image segmentation [78].
h₁ = w₁₁x₁ + w₂₁x₂ + w₃₁x₃ + b₁
h₂ = w₁₂x₁ + w₂₂x₂ + w₃₂x₃ + b₂
h₃ = w₁₃x₁ + w₂₃x₂ + w₃₃x₃ + b₃
y₁ = w₁₁h₁ + w₂₁h₂ + w₃₁h₃ + b₁
y₂ = w₁₂h₁ + w₂₂h₂ + w₃₂h₃ + b₂
The linear regression model is presented in Figure 10a. There are three input variables and two output variables, which can be expressed by the following equations.
y₁ = w₁₁x₁ + w₂₁x₂ + w₃₁x₃ + b₁
y₂ = w₁₂x₁ + w₂₂x₂ + w₃₂x₃ + b₂
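As written, the hidden-layer equations are linear; in practice a nonlinear activation is applied to each hᵢ, which is what lets stacked layers model nonlinear mappings. A minimal NumPy forward pass for both models follows (ReLU is an assumed activation choice; the weights are random and untrained, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)                       # inputs x1, x2, x3

# Figure 10a-style linear model: 3 inputs -> 2 outputs
W, b = rng.normal(size=(3, 2)), rng.normal(size=2)
y_linear = x @ W + b                         # y_k = sum_j w_jk * x_j + b_k

# Figure 10b-style network: one hidden layer of 3 neurons with a ReLU
Wh, bh = rng.normal(size=(3, 3)), rng.normal(size=3)
Wo, bo = rng.normal(size=(3, 2)), rng.normal(size=2)
h = np.maximum(0.0, x @ Wh + bh)             # h_i = relu(sum_j w_ji * x_j + b_i)
y_hidden = h @ Wo + bo                       # output layer has its own weights
```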
Currently, DL models have greatly improved the state of the art in many different sectors and industries, including healthcare [79]. DL models can be supervised, semi-supervised, or even unsupervised, and the major deep learning architectures are as follows. Convolutional neural networks (CNNs) are among the most well-known deep learning (DL) networks, in which feature maps are extracted via image convolutions; a detailed overview of CNNs is given in [80]. Other typical DL architectures, belonging to the family of undirected probabilistic graphical models, are deep Boltzmann machines and deep belief networks. Recurrent neural networks (RNNs) are another significant family of DL models, which establish unique topological connections between their neurons to encode temporal information in sequential input [81]. Moreover, the auto-encoder is another unsupervised DNN whose basic concept is to encode high-dimensional data into a low-dimensional latent vector and then try to reconstruct the input data as accurately as possible using just this encoding [82]. During model training, the value of each node is evaluated by parameterizing weights using convolutional filters, and the objective function is then improved via backpropagation.
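The convolutional feature-map extraction mentioned for CNNs can be illustrated with a hand-rolled 2D convolution; here a Sobel-style kernel responds to the vertical edge in a toy binary image (illustrative only, no real MR data):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel fires along the boundary in this toy "image"
image = np.zeros((5, 6))
image[:, 3:] = 1.0                            # dark left half, bright right half
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
feature_map = conv2d(image, sobel_x)          # strong response at the edge
```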
The following are some examples of deep learning-based knee bone segmentation for the early detection of osteoarthritis [83,84,85,86,87,88,89,90]. In general, CNN architectures are used in knee bone segmentation models, with minor changes. The notion of slice-wise segmentation in [84] was adopted from [83]. Liu et al. [83] constructed a 10-layer SegNet framework and removed its fully connected layer after the decoder network to perform semantic labeling on two-dimensional knee images. The marching cubes method was used to create a 3D simplex mesh from the processed labels. The simplex mesh was then passed through a 3D simplex deformable process, where each segmentation object was fine-tuned individually depending on the original image. Figure 12 presents Liu’s method.
Ambellan et al. [84] presented a technique for automatically segmenting knee bones and cartilage in magnetic resonance imaging (MRI) that integrates anatomical shape knowledge with convolutional neural networks (CNNs). The proposed technique combined 3D statistical shape models (SSMs) with 2D and 3D CNNs to accomplish robust and accurate segmentation of even severely diseased knee components. Data from the Osteoarthritis Initiative (OAI) and the MICCAI grand challenge “Segmentation of Knee Images 2010” (SKI10) were used to train the shape models and neural networks. The experimental setup was made publicly available to advance research in the field of medical image segmentation. The results showed that integrating localized classification with CNNs and statistical anatomical information with SSMs yielded a cutting-edge segmentation technique for knee bones and cartilage in MRI data. However, this good performance came at the cost of substantial computational resources and training. For example, general-purpose graphics cards with smaller memory were not able to support 3D convolution, so it would not be easy to scale the model to larger datasets without appropriate graphics cards. Moreover, the 3D model was trained on small subvolumes of 64 × 64 × 16 voxels along the bone boundary to reduce the computational burden and compensate for the inability of the SSM to capture fine details such as osteophytes. However, this training choice limited the voxel intensity and surrounding texture characteristics available to the model.
In light of the aforementioned restrictions, Cheng et al. [85] introduced a simplified CNN model known as a holistically nested network (HNN) for femur and patella segmentation. The HNN eliminates the decoding path to create a feed-forward network, reducing GPU memory requirements. Furthermore, the network was trained on the complete knee image using a 1 × 1 convolution at the first layer (to capture fine features such as edges) up to a 32 × 32 convolution at the fifth layer (to capture coarse details such as bone structure); therefore, it learnt both local and global contextual information. Finally, a weighted fusion layer was created to average the probability maps at each layer and compute the final prediction in a sequential manner. Although the authors attempted a complete validation against current state-of-the-art methods, they were hampered by the kind of bone selection (immature versus mature bone, and distinct bone compartments) and the lack of public ground truth. Furthermore, despite its superior robustness, deep learning model training is computationally intensive. According to Ambellan et al. (2019), training a deep learning model on 50,000 large-scale images would take 43 weeks on a single computer node, emphasizing the significant cost of computation. Although some researchers have simplified CNN designs in order to minimize complexity, the problem still needs further investigation. On the other hand, many deep learning models have been presented for the early detection of osteoarthritis, such as that by Lim et al. [86], who offered automated osteoarthritis prediction using a deep learning algorithm with scaled PCA, based on medical usage and health behavior data (from 5749 patients) without any hand-crafted features, verified in a large population. A principal component analysis with quantile transformer scaling was used to identify osteoarthritis from the patients’ basic medical data.
In addition, the proposed model was able to achieve an AUC of 76.8 percent while minimizing the effort required to create features. Moreover, they concluded that patients and physicians may use this method to prescreen for osteoarthritis and save money and time in the hospital. Tiulpin and Saarakkala [87] established an automated technique for predicting KL and OARSI grades from knee radiographs. The proposed approach was based on deep learning and employed an ensemble of 50-layer residual networks with ImageNet transfer learning and fine-tuning. The empirical results showed that transfer learning was beneficial for automatic OARSI grading in cross-validation, whereas simultaneous prediction of KL and OARSI grades led to poorer performance. Figure 13 presents the introduced method.
Antony et al. [88] introduced a new technique to automatically evaluate the severity of KOA using X-ray images. To calculate the severity of KOA automatically, the presented approach was implemented as follows: firstly, the knee joints were located automatically; secondly, the localized knee joint images were categorized. The introduced method used a fully convolutional neural network (FCN) to detect the knee joints automatically, and convolutional neural networks (CNNs) were trained from scratch to quantify KOA severity by optimizing a weighted ratio of two loss functions: mean-squared loss and categorical cross-entropy. The benefit of this joint training was that it provided both multi-class classification and regression outputs.
Both the OAI and MOST datasets were utilized to test the proposed technique. The findings were highly encouraging and exceeded previous approaches. Tiulpin et al. [89] presented a new computer-aided diagnostic technique based on a deep Siamese convolutional neural network to automatically quantify KOA severity according to the Kellgren–Lawrence grading system. The proposed approach was trained only on data from the Multicenter Osteoarthritis Study and verified on 3000 individuals (5960 knees) from the Osteoarthritis Initiative dataset. The empirical results demonstrated that the network bases its decisions on relevant radiological characteristics. Such information makes the decision-making process more transparent for the practitioner, which increases trust in automated approaches. Furthermore, compared with the annotations provided by a committee of clinical experts, the presented approach produced a quadratic Kappa coefficient of 0.83 and an average multiclass accuracy of 66.71 percent. Tiulpin et al. [91] presented a multi-modal machine learning-based OA progression prediction model that takes into account raw radiography data, clinical exam results, and the patient’s past medical history. An independent test set of 3918 knee images from 2129 individuals was used to validate this method. The area under the ROC curve (AUC) for the proposed approach was 0.79 (0.78–0.81), and the average precision (AP) was 0.68 (0.66–0.70). Moreover, they mentioned that the proposed technique might assist in generating tailored treatment strategies by considerably improving the subject selection procedure for OA medication development studies. Christodoulou et al. [90] investigated the use of deep neural networks as a machine learning technique for classification problems, taking into consideration a vast number of medical variables that influence OA.
The suggested methodology’s potential was demonstrated by categorizing distinct subgroups of control participants based on self-reported clinical data and assigning a knee OA diagnostic category. Moreover, age, gender, and obesity were used to divide the subjects into subgroups. To validate the proposed deep learning approach, a comparative study between the proposed DNN and other benchmark machine learning techniques recommended for classification was performed, and the results revealed the efficiency of deep learning in the diagnosis of KOA. Furthermore, the majority of DL approaches used for musculoskeletal structure segmentation are 2D CNNs that apply 2D convolutions to sagittal slices in a slice-wise segmentation procedure [83]. The fundamental reason for this is that GPU memory is restricted, which means that 3D patch-based CNN techniques have limited spatial context.

3. Approaches

3.1. Research Approach to Literature

This survey was based on research publications found using the Google Scholar, PubMed, and Scopus search engines between 1991 and 2021. During our investigation, we identified articles that used segmentation techniques, machine learning, and deep learning to study KOA. In particular, the search terms “machine learning”, “deep learning”, and “knee osteoarthritis” were used. The presence of at least one of the three terms, either in the title or in the abstract of each article, was a requirement for inclusion in our study. Moreover, the bar chart in Figure 14 describes the distribution of the number of papers reviewed for each of the taxonomies of knee bone segmentation in KOA studies.

3.2. Estimated Results

The studies reviewed in this article can be divided into six categories, namely: (1) deformable models (5 studies), (2) graph-based models (5 studies), (3) classical machine learning techniques (5 studies), (4) miscellaneous (4 studies), (5) deep learning-based models (9 studies), and (6) atlas-based models (2 studies). After categorizing the articles, the following information was extracted from each: year of publication, author, region of interest, segmentation method, sequence type, data (X-ray, MRI, clinical data), feature engineering approaches, learning algorithm methods, validation, and empirical results (performance evaluation).

3.3. Data Sources

The majority of advanced analytical models forecasting knee osteoarthritis based on knee bone segmentation and knee articular cartilage morphology used imaging technology (either MRI or X-ray). Recently, the combination of multimodal data (medical images and clinical data) has formed the basis for more powerful and efficient models. OAI, SKI10, and MOST were the most frequently used databases for checking the performance of the aforementioned approaches. Validation was performed using leave-one-out validation (LOOV), k-fold cross-validation (CV), or random/expert manual assessment. An overview of all KOA segmentation studies identified for our survey is presented in Table 1 and Table 2. Moreover, a variety of complex methods have been described to improve the quality of accessible raw data or to overcome the curse of dimensionality, including: (i) topological data analysis, ICA, and PCA for dimensionality reduction; and (ii) CNNs to extract new, more informative deep features from images.
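The validation protocols listed above reduce to an index-splitting routine (LOOV is simply k-fold CV with k equal to the number of samples). A minimal sketch, independent of any dataset named here:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle n sample indices and return (train, test) index pairs for k folds."""
    idx = np.random.default_rng(seed).permutation(n)
    parts = np.array_split(idx, k)          # k disjoint, near-equal folds
    splits = []
    for i in range(k):
        test = parts[i]                     # fold i is held out for testing
        train = np.concatenate([parts[j] for j in range(k) if j != i])
        splits.append((train, test))
    return splits

splits = kfold_indices(10, 5)               # 5 folds of 2 test samples each
```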

4. Discussion and Recommendations

Our literature survey outlined several methods for segmenting knee bones in MR images, the current usage of machine learning methods in KOA diagnosis, and prediction challenges. In conclusion, there were five main points. First, segmentation of the knee bone can be performed by adopting various levels of automation, from manual to fully automated. Moreover, the development of a segmentation model based on MRI scans may support hypothetical surgical procedures for planning real surgery, and improved virtual surgery solutions could improve the representation of the patient’s anatomic structure. Second, unlike shape-, atlas-, graph-, and machine learning-based approaches, segmentation models in the conventional category were not dependent on any training dataset or user input. Instead, a number of preprocessing and image property learning processes were used to bridge the learning gap, ensuring the model remained automated. Third, to achieve the final bone segmentation based on updated image attributes, a variety of methodologies have been used. While these models were able to capture the basic anatomical aspects of bone, their applicability is largely dependent on the tissue and image properties. In addition, some models require predetermined threshold values. Thus, it may be difficult to generalize these models to larger datasets compared to modern machine learning techniques, particularly deep learning.
Fourth, the advancement of artificial intelligence technology has led to the emergence of new machine learning techniques that can: (i) improve our understanding of the disease’s onset and development; (ii) offer new data-driven techniques that could help diagnose and forecast KOA in the early stages; (iii) play a crucial role in the direction of these models by extracting important knowledge from many types of clinical data (biomechanical parameters, images, and kinematics) and developing innovative solutions that incorporate data from as many different sources as feasible. This is inspired by the promising accuracy of the results shown by deep learning-based segmentation models. Deep learning has been applied to a wide range of computer-aided diagnosis applications, such as detection [83,86] and classification [88,91,92] in MRI and radiographs. Therefore, the main goal for these models is to diagnose and prevent knee OA at an early stage, when cartilage deterioration is still reversible. Related applications for KOA include radiographic OA classification, such as in [69,73,87,88,89], predicting knee pain from MRI [92], and using radiography to forecast OA progression [91]. Classification is the process of determining the likelihood of a label for a given input image using an algorithm. Furthermore, according to the literature, end-to-end deep neural network quantification of OA severity is critical for providing more precise computer-aided diagnoses to assist physicians in evaluating the severity of OA in patients. Even before the advent of deep learning, Shamir et al. [93] used open-source classical machine learning software dedicated to biological image analysis in order to categorize normal and diseased knee images. Moreover, Ashinsky et al. [94] used the weighted neighbor distance using the compound hierarchy of algorithms representing morphology (WND-CHRM) algorithm.
Finally, the advantage of DL algorithms is that they automatically learn contextual information without requiring the computationally expensive spatial structure modeling used in model-based or atlas-based techniques. In this study, we reviewed different methods, focusing on the use of DL in knee articular bone segmentation. Nonetheless, the following recommendations and future research trends are suggested to offer appropriate methods for managing KOA:
  • The development of a useful tool based on CNNs for assessing morphological and structural changes in the musculoskeletal system could be an interesting research field for assisting clinical applications, particularly for longitudinal assessments;
  • More research is needed to improve current methods to address issues such as a lack of full assessment for intensity inhomogeneity and clinical practices;
  • Combining DL strategies with other machine learning approaches such as KNN, SVM, and so on, can achieve an acceptable result;
  • The design and development of a 3D CNN learning-based framework for a graph representation of knee joints that can accommodate both edge and shape information for the graph.

5. Conclusions

This survey covered six categories of segmentation methods for KOA diagnosis, ranging from conventional methods, such as deformable, graph-based, miscellaneous, and atlas-based models, to state-of-the-art ML and DL techniques, especially those presented in the last few years. As the summary tables demonstrate, the comparison of methodologies is not an easy undertaking. The key obstacles were the lack of uniform databases and standardized benchmarks. Furthermore, due to the varied testing datasets, comparing and evaluating the methodologies based on their published experimental outcomes was difficult. Consequently, these findings should be interpreted with caution, given that a segmentation method’s accuracy is highly dependent on the dataset. We therefore chose three publicly available datasets (MOST, SKI10, and OAI) with differing degrees of KOA severity to illustrate this issue. Knee joint segmentation approaches have been utilized alone or in combination with other procedures in a vast number of publications, and hybrid techniques have yielded positive outcomes [44,46,51,55,56,84]. As a result, learning-based methods can be combined with other methods to improve segmentation results. On the other hand, according to this survey, learning-based approaches dominate the field of knee bone segmentation. At present, artificial intelligence has turned the direction of knee OA research towards prediction and early detection. Deep learning has shown considerable potential in terms of generalizability, robustness, and versatility, and innovative diagnostic applications are gradually becoming the state-of-the-art technology of the future. More research is needed to confirm the clinical applicability of deep learning technologies in order to meet future challenges.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, P.; Gao, L.; Shi, X.; Allen, K.; Yang, L. Fully Automatic Knee Osteoarthritis Severity Grading Using Deep Neural Networks with a Novel Ordinal Loss. Comput. Med. Imaging Graph. 2019, 75, 84–92. [Google Scholar] [CrossRef] [PubMed]
  2. Guan, B.; Liu, F.; Mizaian, A.H.; Demehri, S.; Samsonov, A.; Guermazi, A.; Kijowski, R. Deep Learning Approach to Predict Pain Progression in Knee Osteoarthritis. Skelet. Radiol. 2022, 51, 363–373. [Google Scholar] [CrossRef] [PubMed]
  3. Neogi, T. The Epidemiology and Impact of Pain in Osteoarthritis. Osteoarthr. Cartil. 2013, 21, 1145–1153. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Jaul, E.; Barron, J. Age-Related Diseases and Clinical and Public Health Implications for the 85 Years Old and over Population. Front. Public Health 2017, 5, 335. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Briggs, A.M.; Shiffman, J.; Shawar, Y.R.; Åkesson, K.; Ali, N.; Woolf, A.D. Global Health Policy in the 21st Century: Challenges and Opportunities to Arrest the Global Disability Burden from Musculoskeletal Health Conditions. Best Pract. Res. Clin. Rheumatol. 2020, 34, 101549. [Google Scholar] [CrossRef] [PubMed]
  6. Cross, M.; Smith, E.; Hoy, D.; Carmona, L.; Wolfe, F.; Vos, T.; Williams, B.; Gabriel, S.; Lassere, M.; Johns, N. The Global Burden of Rheumatoid Arthritis: Estimates from the Global Burden of Disease 2010 Study. Ann. Rheum. Dis. 2014, 73, 1316–1322. [Google Scholar] [CrossRef]
  7. Migliore, A.; Gigliucci, G.; Alekseeva, L.; Avasthi, S.; Bannuru, R.R.; Chevalier, X.; Conrozier, T.; Crimaldi, S.; Damjanov, N.; de Campos, G.C. Treat-to-Target Strategy for Knee Osteoarthritis. International Technical Expert Panel Consensus and Good Clinical Practice Statements. Ther. Adv. Musculoskelet. Dis. 2019, 11, 1759720X19893800. [Google Scholar] [CrossRef] [Green Version]
  8. Wang, Y.; Wang, X.; Gao, T.; Du, L.; Liu, W. An Automatic Knee Osteoarthritis Diagnosis Method Based on Deep Learning: Data from the Osteoarthritis Initiative. J. Healthc. Eng. 2021, 2021, 5586529. [Google Scholar] [CrossRef]
  9. Hayashi, D.; Roemer, F.W.; Jarraya, M.; Guermazi, A. Imaging in Osteoarthritis. Radiol. Clin. N. Am. 2017, 55, 1085–1102. [Google Scholar] [CrossRef]
  10. Lundervold, A.S.; Lundervold, A. An Overview of Deep Learning in Medical Imaging Focusing on MRI. Z. Med. Phys. 2019, 29, 102–127. [Google Scholar] [CrossRef]
  11. Choy, G.; Khalilzadeh, O.; Michalski, M.; Do, S.; Samir, A.E.; Pianykh, O.S.; Geis, J.R.; Pandharipande, P.V.; Brink, J.A.; Dreyer, K.J. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 2018, 288, 318–328. [Google Scholar] [CrossRef]
  12. Aprovitola, A.; Gallo, L. Knee Bone Segmentation from MRI: A Classification and Literature Review. Biocybern. Biomed. Eng. 2016, 36, 437–449. [Google Scholar] [CrossRef]
  13. Goldring, S.R. Cross-Talk between Subchondral Bone and Articular Cartilage in Osteoarthritis. Arthritis Res. Ther. 2012, 14, A7. [Google Scholar] [CrossRef] [Green Version]
  14. Hunter, D.J.; Zhang, Y.; Niu, J.; Goggins, J.; Amin, S.; LaValley, M.P.; Guermazi, A.; Genant, H.; Gale, D.; Felson, D.T. Increase in Bone Marrow Lesions Associated with Cartilage Loss: A Longitudinal Magnetic Resonance Imaging Study of Knee Osteoarthritis. Arthritis Rheum. 2006, 54, 1529–1535. [Google Scholar] [CrossRef] [PubMed]
  15. Neogi, T.; Bowes, M.A.; Niu, J.; De Souza, K.M.; Vincent, G.R.; Goggins, J.; Zhang, Y.; Felson, D.T. Magnetic Resonance Imaging-Based Three-Dimensional Bone Shape of the Knee Predicts Onset of Knee Osteoarthritis: Data from the Osteoarthritis Initiative. Arthritis Rheum. 2013, 65, 2048–2058. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Davies-Tuck, M.L.; Wluka, A.E.; Forbes, A.; Wang, Y.; English, D.R.; Giles, G.G.; O’Sullivan, R.; Cicuttini, F.M. Development of Bone Marrow Lesions Is Associated with Adverse Effects on Knee Cartilage While Resolution Is Associated with Improvement—A Potential Target for Prevention of Knee Osteoarthritis: A Longitudinal Study. Arthritis Res. Ther. 2010, 12, R10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Bourgeat, P.; Fripp, J.; Stanwell, P.; Ramadan, S.; Ourselin, S. MR Image Segmentation of the Knee Bone Using Phase Information. Med. Image Anal. 2007, 11, 325–335. [Google Scholar] [CrossRef]
  18. Kashyap, S.; Zhang, H.; Rao, K.; Sonka, M. Learning-Based Cost Functions for 3-D and 4-D Multi-Surface Multi-Object Segmentation of Knee MRI: Data from the Osteoarthritis Initiative. IEEE Trans. Med. Imaging 2017, 37, 1103–1113. [Google Scholar] [CrossRef]
  19. Yin, Y.; Zhang, X.; Williams, R.; Wu, X.; Anderson, D.D.; Sonka, M. LOGISMOS-Layered Optimal Graph Image Segmentation of Multiple Objects and Surfaces: Cartilage Segmentation in the Knee Joint. IEEE Trans. Med. Imaging 2010, 29, 2023–2037. [Google Scholar] [CrossRef] [Green Version]
  20. Becker, M.; Magnenat-Thalmann, N. Deformable Models in Medical Image Segmentation. In 3D Multiscale Physiological Human; Springer: London, UK, 2014; pp. 81–106. [Google Scholar] [CrossRef]
  21. Mcinerney, T.; Terzopoulos, D. Deformable models in medical image analysis: A survey. Med. Image Anal. 1996, 1, 91–108. [Google Scholar] [CrossRef]
22. Cootes, T.F.; Taylor, C.J. Active Shape Models—‘Smart Snakes’. In Proceedings of the British Machine Vision Conference (BMVC92); Hogg, D., Boyle, R., Eds.; Springer: London, UK, 1992; pp. 266–275. [Google Scholar]
  23. Heimann, T.; Meinzer, H.P. Statistical Shape Models for 3D Medical Image Segmentation: A Review. Med. Image Anal. 2009, 13, 543–563. [Google Scholar] [CrossRef]
  24. Sarkalkan, N.; Weinans, H.; Zadpoor, A.A. Statistical Shape and Appearance Models of Bones. Bone 2014, 60, 129–140. [Google Scholar] [CrossRef] [PubMed]
  25. Terzopoulos, D. On Matching Deformable Models to Images. Top. Meet. Mach. Vis. Tech 1987, 12, 160–167. [Google Scholar]
  26. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active Contour Models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
  27. Cootes, T.; Baldock, E.R.; Graham, J. An Introduction to Active Shape Models. Image Processing Anal. 2000, 243657, 223–248. [Google Scholar]
  28. Cootes, T.; Taylor, C.; Cooper, D.; Graham, J. Active Shape Models-Their Training and Application. Comput. Vis. Image Underst. 1995, 61, 38–59. [Google Scholar] [CrossRef] [Green Version]
  29. Guo, Y.; Jiang, J.; Hao, S.; Zhan, S. Distribution-Based Active Contour Model for Medical Image Segmentation. In Proceedings of the 6th International Conference on Image and Graphics, ICIG 2011, Hefei, China, 12–15 August 2011; pp. 61–65. [Google Scholar] [CrossRef]
30. Lorigo, L.M.; Faugeras, O.; Grimson, W.E.L. Segmentation of Bone in Clinical Knee MRI Using Texture-Based Geodesic Active Contours. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Cambridge, MA, USA, 11–13 October 1998. [Google Scholar]
  31. Cheng, R.; Roth, H.R.; Lu, L.; Wang, S.; Turkbey, B.; Gandler, W.; McCreedy, E.S.; Agarwal, H.K.; Choyke, P.; Summers, R.M.; et al. Active Appearance Model and Deep Learning for More Accurate Prostate Segmentation on MRI. In Medical Imaging 2016: Image Processing; SPIE: Bellingham, WA, USA, 2016; Volume 9784, p. 97842I. [Google Scholar] [CrossRef]
  32. Fripp, J.; Crozier, S.; Warfield, S.K.; Ourselin, S. Automatic Segmentation of the Bone and Extraction of the Bone-Cartilage Interface from Magnetic Resonance Images of the Knee. Phys. Med. Biol. 2007, 52, 1617–1631. [Google Scholar] [CrossRef] [PubMed]
  33. Vincent, G.; Wolstenholme, C.; Scott, I.; Bowes, M. Fully Automatic Segmentation of the Knee Joint Using Active Appearance Models. Med. Image Anal. Clin. A Grand Chall. 2010, 1, 224–230. [Google Scholar]
  34. Seim, H.; Kainmueller, D.; Lamecker, H.; Bindernagel, M.; Malinowski, J.; Zachow, S. Model-Based Auto-Segmentation of Knee Bones and Cartilage in MRI Data. In Proceedings of the 13th International Conference on Medical Image Computing and Computer Assisted Intervention, Beijing, China, 24 September 2010; pp. 215–223. [Google Scholar]
  35. Bindernagel, M.; Kainmueller, D.; Seim, H.; Lamecker, H.; Zachow, S.; Hege, H.C. An Articulated Statistical Shape Model of the Human Knee. In Bildverarbeitung für die Medizin; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar] [CrossRef]
  36. Tamez-Pena, J.G.; Farber, J.; Gonzalez, P.C.; Schreyer, E.; Schneider, E.; Totterman, S. Unsupervised Segmentation and Quantification of Anatomical Knee Features: Data from the Osteoarthritis Initiative. IEEE Trans. Biomed. Eng. 2012, 59, 1177–1186. [Google Scholar] [CrossRef]
  37. Boykov, Y.Y.; Jolly, M.P. Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images. In Proceedings of the IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; Volume 1, pp. 105–112. [Google Scholar] [CrossRef]
  38. Bourmaud, G.; Mégret, R.; Giremus, A.; Berthoumieu, Y. Global Motion Estimation from Relative Measurements Using Iterated Extended Kalman Filter on Matrix LIE Groups. In Proceedings of the 2014 IEEE International Conference on Image Processing, ICIP 2014, Paris, France, 27–30 October 2014; Volume 22, pp. 3362–3366. [Google Scholar] [CrossRef]
  39. Toennies, K.D. Guide to Medical Image Analysis; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar] [CrossRef]
  40. Camilus, K.S.; Govindan, V.K. A Review on Graph Based Segmentation. Int. J. Image Graph. Signal Process. 2012, 4, 1–13. [Google Scholar] [CrossRef] [Green Version]
  41. Peng, B.; Zhang, L.; Zhang, D. A Survey of Graph Theoretical Approaches to Image Segmentation. Pattern Recognit. 2013, 46, 1020–1038. [Google Scholar] [CrossRef] [Green Version]
42. Wu, Z.; Leahy, R. An Optimal Graph Theoretic Approach to Data Clustering: Theory and Its Application to Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 1101–1113. [Google Scholar] [CrossRef] [Green Version]
  43. Shi, J.; Malik, J. Normalized Cuts and Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905. [Google Scholar] [CrossRef] [Green Version]
  44. Park, S.H.; Lee, S.; Shim, H.; Yun, I.D.; Lee, S.U.; Lee, K.H.; Kang, H.S.; Han, J.K. Fully Automatic 3-D Segmentation of Knee Bone Compartments by Iterative Local Branch-And-Mincut on Mr Images from Osteoarthritis Initiative (OAI). In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3381–3384. [Google Scholar]
  45. Ababneh, S.Y.; Prescott, J.W.; Gurcan, M.N. Automatic Graph-Cut Based Segmentation of Bones from Knee Magnetic Resonance Images for Osteoarthritis Research. Med. Image Anal. 2011, 15, 438–448. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Ababneh, S.Y.; Gurcan, M.N. An Efficient Graph-Cut Segmentation for Knee Bone Osteoarthritis Medical Images. In Proceedings of the 2010 IEEE International Conference on Electro/Information Technology, EIT2010, Normal, IL, USA, 20–22 May 2010. [Google Scholar] [CrossRef]
  47. Somasundar, M.K.A.; Somashekar, B.H.; Somasundar, R.A. Segmentation of Tibia Femoral Bone Using Graph Cut Method and 3D Rendering for FEA. Int. J. Inf. Technol. 2020, 12, 1435–1441. [Google Scholar] [CrossRef]
  48. Rohlfing, T.; Brandt, R.; Menzel, R.; Russakoff, D.B.; Maurer, C.R. Quo Vadis, Atlas-Based Segmentation? In Handbook of Biomedical Image Analysis; Springer: Boston, MA, USA, 2005; pp. 435–486. [Google Scholar]
  49. Shan, L.; Zach, C.; Charles, C.; Niethammer, M. Automatic Atlas-Based Three-Label Cartilage Segmentation from MR Knee Images. Med. Image Anal. 2014, 18, 1233–1246. [Google Scholar] [CrossRef] [Green Version]
  50. Lee, J.G.; Gumus, S.; Moon, C.H.; Kwoh, C.K.; Bae, K.T. Fully Automated Segmentation of Cartilage from the MR Images of Knee Using a Multi-Atlas and Local Structural Analysis Method. Med. Phys. 2014, 41, 092303. [Google Scholar] [CrossRef]
  51. Dam, E.B.; Lillholm, M.; Marques, J.; Nielsen, M. Automatic Segmentation of High-and Low-Field Knee MRIs Using Knee Image Quantification with Data from the Osteoarthritis Initiative. J. Med. Imaging 2015, 2, 24001. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Anshad, P.Y.M.; Kumar, S.S.; Shahudheen, S. Segmentation of Chondroblastoma from Medical Images Using Modified Region Growing Algorithm. Clust. Comput. 2019, 22, 13437–13444. [Google Scholar] [CrossRef]
  53. Pan, Z.; Lu, J. A Bayes-Based Region-Growing Algorithm for Medical Image Segmentation. Comput. Sci. Eng. 2007, 9, 32–38. [Google Scholar] [CrossRef]
  54. Lee, J.-S.; Chung, Y.-N. Integrating edge detection and thresholding approaches to segmenting femora and patellae from magnetic resonance images. Biomed. Eng. Appl. Basis Commun. 2005, 17, 1–11. [Google Scholar] [CrossRef] [Green Version]
  55. Dodin, P.; Martel-Pelletier, J.; Pelletier, J.-P.; Abram, F. A Fully Automated Human Knee 3D MRI Bone Segmentation Using the Ray Casting Technique. Med. Biol. Eng. Comput. 2011, 49, 1413–1424. [Google Scholar] [CrossRef] [PubMed]
  56. Dalvi, R.; Abugharbieh, R.; Wilson, D.C.; Wilson, D.R. Multi-Contrast MR for Enhanced Bone Imaging and Segmentation. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 5620–5623. [Google Scholar]
  57. Gandhamal, A.; Talbar, S.; Gajre, S.; Razak, R.; Hani, A.F.M.; Kumar, D. Fully Automated Subchondral Bone Segmentation from Knee MR Images: Data from the Osteoarthritis Initiative. Comput. Biol. Med. 2017, 88, 110–125. [Google Scholar] [CrossRef]
  58. Cabitza, F.; Locoro, A.; Banfi, G. Machine Learning in Orthopedics: A Literature Review. Front. Bioeng. Biotechnol. 2018, 6, 75. [Google Scholar] [CrossRef] [Green Version]
  59. Jamshidi, A.; Pelletier, J.-P.; Martel-Pelletier, J. Machine-Learning-Based Patient-Specific Prediction Models for Knee Osteoarthritis. Nat. Rev. Rheumatol. 2019, 15, 49–60. [Google Scholar] [CrossRef] [PubMed]
  60. Kluzek, S.; Mattei, T.A. Machine-Learning for Osteoarthritis Research. Osteoarthr. Cartil. 2019, 27, 977–978. [Google Scholar] [CrossRef] [PubMed]
  61. Zheng, A.; Casari, A. Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists; O’Reilly Media, Inc.: Newton, MA, USA, 2018. [Google Scholar]
  62. Quinlan, J.R. Induction of Decision Trees. Mach. Learn. 1986, 1, 81–106. [Google Scholar] [CrossRef] [Green Version]
  63. Rish, I. An Empirical Study of the Naive Bayes Classifier. In IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence; IBM: New York, NY, USA, 2001; Volume 3, pp. 41–46. [Google Scholar]
  64. Farhat, N.H. Photonic Neural Networks and Learning Machines. IEEE Expert 1992, 7, 63–72. [Google Scholar] [CrossRef]
  65. Rokach, L. Ensemble-Based Classifiers. Artif. Intell. Rev. 2010, 33, 1–39. [Google Scholar] [CrossRef]
  66. MacQueen, J. Some Methods for Classification and Analysis of Multivariate Observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1967; Volume 1, pp. 281–297. [Google Scholar]
  67. Jolliffe, I.T. Generalizations and Adaptations of Principal Component Analysis. In Principal Component Analysis; Springer: New York, NY, USA, 2002; pp. 373–405. [Google Scholar]
  68. Zhang, K.; Lu, W.; Marziliano, P. Automatic Knee Cartilage Segmentation from Multi-Contrast MR Images Using Support Vector Machine Classification with Spatial Dependencies. Magn. Reson. Imaging 2013, 31, 1731–1743. [Google Scholar] [CrossRef]
  69. Brahim, A.; Jennane, R.; Riad, R.; Janvier, T.; Khedher, L.; Toumi, H.; Lespessailles, E. A Decision Support Tool for Early Detection of Knee OsteoArthritis Using X-Ray Imaging and Machine Learning: Data from the OsteoArthritis Initiative. Comput. Med. Imaging Graph. 2019, 73, 11–18. [Google Scholar] [CrossRef] [PubMed]
  70. Kubkaddi, S.; Ravikumar, K.M. Early Detection of Knee Osteoarthritis Using SVM Classifier. IJSEAT 2017, 5, 259–262. [Google Scholar]
  71. Du, Y.; Almajalid, R.; Shan, J.; Zhang, M. A Novel Method to Predict Knee Osteoarthritis Progression on MRI Using Machine Learning Methods. IEEE Trans. Nanobiosci. 2018, 17, 228–236. [Google Scholar] [CrossRef] [PubMed]
  72. Kashyap, S.; Oguz, I.; Zhang, H.; Sonka, M. Automated segmentation of knee MRI using hierarchical classifiers and just enough interaction based learning: Data from osteoarthritis initiative. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2016; pp. 344–351. [Google Scholar] [CrossRef] [Green Version]
  73. Halilaj, E.; Le, Y.; Hicks, J.L.; Hastie, T.J.; Delp, S.L. Modeling and Predicting Osteoarthritis Progression: Data from the Osteoarthritis Initiative. Osteoarthr. Cartil. 2018, 26, 1643–1650. [Google Scholar] [CrossRef] [Green Version]
  74. Lecun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  75. Geetharamani, G.; Pandian, A. Identification of Plant Leaf Diseases Using a Nine-Layer Deep Convolutional Neural Network. Comput. Electr. Eng. 2019, 76, 323–338. [Google Scholar] [CrossRef]
  76. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep Learning vs. Traditional Computer Vision. In Science and Information Conference; Springer: Cham, Switzerland, 2019; pp. 128–144. [Google Scholar]
  77. Wang, Z. Deep Learning for Image Segmentation: Veritable or Overhyped? arXiv 2019, arXiv:1904.08483. [Google Scholar]
  78. Wang, G.; Ye, J.C.; Mueller, K.; Fessler, J.A. Image Reconstruction Is a New Frontier of Machine Learning. IEEE Trans. Med. Imaging 2018, 37, 1289–1296. [Google Scholar] [CrossRef]
  79. Zhang, L.; Lin, J.; Liu, B.; Zhang, Z.; Yan, X.; Wei, M. A Review on Deep Learning Applications in Prognostics and Health Management. IEEE Access 2019, 7, 162415–162438. [Google Scholar] [CrossRef]
  80. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT press: Cambridge, MA, USA, 2016. [Google Scholar]
  81. Jozefowicz, R.; Zaremba, W.; Sutskever, I. An Empirical Exploration of Recurrent Network Architectures. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 7–9 July 2015; Volume 3, pp. 2332–2340. [Google Scholar]
  82. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408. [Google Scholar]
  83. Liu, F.; Zhou, Z.; Jang, H.; Samsonov, A.; Zhao, G.; Kijowski, R. Deep Convolutional Neural Network and 3D Deformable Approach for Tissue Segmentation in Musculoskeletal Magnetic Resonance Imaging. Magn. Reson. Med. 2018, 79, 2379–2391. [Google Scholar] [CrossRef] [PubMed]
  84. Ambellan, F.; Tack, A.; Ehlke, M.; Zachow, S. Automated Segmentation of Knee Bone and Cartilage Combining Statistical Shape Knowledge and Convolutional Neural Networks: Data from the Osteoarthritis Initiative. Med. Image Anal. 2019, 52, 109–118. [Google Scholar] [CrossRef]
  85. Cheng, R.; Alexandridi, N.A.; Smith, R.M.; Shen, A.; Gandler, W.; McCreedy, E.; McAuliffe, M.J.; Sheehan, F.T. Fully Automated Patellofemoral MRI Segmentation Using Holistically Nested Networks: Implications for Evaluating Patellofemoral Osteoarthritis, Pain, Injury, Pathology, and Adolescent Development. Magn. Reson. Med. 2020, 83, 139–153. [Google Scholar] [CrossRef] [PubMed]
  86. Lim, J.; Kim, J.; Cheon, S. A Deep Neural Network-Based Method for Early Detection of Osteoarthritis Using Statistical Data. Int. J. Environ. Res. Public Health 2019, 16, 1281. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  87. Tiulpin, A.; Saarakkala, S. Automatic Grading of Individual Knee Osteoarthritis Features in Plain Radiographs Using Deep Convolutional Neural Networks. Diagnostics 2020, 10, 932. [Google Scholar] [CrossRef] [PubMed]
  88. Antony, J.; McGuinness, K.; Moran, K.; O’Connor, N.E. Automatic Detection of Knee Joints and Quantification of Knee Osteoarthritis Severity Using Convolutional Neural Networks. In Proceedings of the International Conference on Machine Learning and Data Mining in Pattern Recognition, New York, NY, USA, 15–20 July 2017; Springer: Cham, Switzerland, 2017; pp. 376–390. [Google Scholar]
  89. Tiulpin, A.; Thevenot, J.; Rahtu, E.; Lehenkari, P.; Saarakkala, S. Automatic Knee Osteoarthritis Diagnosis from Plain Radiographs: A Deep Learning-Based Approach. Sci. Rep. 2018, 8, 1727. [Google Scholar] [CrossRef]
  90. Christodoulou, E.; Moustakidis, S.; Papandrianos, N.; Tsaopoulos, D.; Papageorgiou, E. Exploring Deep Learning Capabilities in Knee Osteoarthritis Case Study for Classification. In Proceedings of the 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA), Patras, Greece, 15–17 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
  91. Tiulpin, A.; Klein, S.; Bierma-Zeinstra, S.M.A.; Thevenot, J.; Rahtu, E.; van Meurs, J.; Oei, E.H.G.; Saarakkala, S. Multimodal Machine Learning-Based Knee Osteoarthritis Progression Prediction from Plain Radiographs and Clinical Data. Sci. Rep. 2019, 9, 20038. [Google Scholar] [CrossRef]
  92. Chang, G.H.; Felson, D.T.; Qiu, S.; Capellini, T.D.; Kolachalama, V.B. Assessment of knee pain from MR imaging using a convolutional Siamese network. Eur. Radiol. 2020, 30, 3538–3548. [Google Scholar] [CrossRef]
  93. Shamir, L.; Orlov, N.; Eckley, D.M.; Macura, T.; Johnston, J.; Goldberg, I.G. Wndchrm–an Open Source Utility for Biological Image Analysis. Source Code Biol. Med. 2008, 3, 13. [Google Scholar] [CrossRef] [Green Version]
  94. Ashinsky, B.G.; Coletta, C.E.; Bouhrara, M.; Lukas, V.A.; Boyle, J.M.; Reiter, D.A.; Neu, C.P.; Goldberg, I.G.; Spencer, R.G. Machine Learning Classification of OARSI-Scored Human Articular Cartilage Using Magnetic Resonance Imaging. Osteoarthr. Cartil. 2015, 23, 1704–1712. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Classification of knee bone segmentation methods [12].
Figure 2. Knee bone segmentation benefits from the bones’ location and anatomical size relative to other tissues. (a) MR image of a knee joint, in which the patella, femur, and tibia are readily apparent along with the adjoining cartilage surfaces. TB = tibial bone, FB = femoral bone, PC = patellar cartilage, FC = femoral cartilage, TC = tibial cartilage. (b) Segmented tibia (TB) and femur (FB), which usually have better demarcation [19].
Figure 3. ASM search on a face image. (a) The search when the starting points are near the target; (b) how the ASM can break down when the starting position is too far from the target [27].
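The ASM search illustrated in Figure 3 rests on a point-distribution model: aligned landmark configurations are reduced by PCA so that any plausible shape is the mean shape plus a few weighted modes of variation [22,28]. A minimal sketch of the model-building step (the helper names `build_shape_model` and `reconstruct` are illustrative, not taken from the cited papers):

```python
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """Point-distribution model as used by ASM [22,28].

    shapes: (n_samples, 2 * n_landmarks) array of aligned landmark
    coordinates. Returns the mean shape and the top principal modes;
    any plausible shape is then mean + b @ modes for a small
    coefficient vector b, which is what the ASM search adjusts.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # PCA via SVD of the centered data matrix; rows of vt are modes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def reconstruct(mean, modes, b):
    """Rebuild a shape from its mode coefficients."""
    return mean + b @ modes
```

During search, each landmark is nudged toward the strongest nearby image edge and the result is projected back onto the model by constraining b, which is why a start too far from the target (panel (b)) can lock onto the wrong structure.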
Figure 4. An example illustrating both ROI detection failure/recovery and leak detection and correction. (a) An MRI image; (b) ROI detection finds only the femur, not the tibia; (c) after lowering the ROI detection threshold, both bones are detected; (d) the graph-cut (GC) output mask; (e) after morphological processing; (f) the two resulting candidate bone masks, with a leak visible in the tibia; (g) the tibia mask, in which a leak connects fat and other tissues to the tibia; (h) the first leak-detection step, a morphological opening; (i) the residual obtained by subtracting (h) from (g); (j) after examining the residual in (i), the leak-detection method identifies the leak and determines which of its pixels actually belong to the tibia; (k) adding the relevant pixels from (j) back to (h) yields a leak-free tibia mask; (l) the femoral mask; (m) the femoral mask after the morphological opening applied to check for leakage; (n) the residual pixels after subtracting (m) from (l), from which it is concluded that there is no leak and the pixels are reinserted; (o) the resulting femur and tibia masks; (p) GC segmentation in white and manual segmentation in yellow, with DSC = 0.95 and 0.96 for the femur and tibia, respectively [45].
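The leak-detection step described in the Figure 4 caption — a morphological opening, a subtraction, and inspection of the residual — can be sketched in 2D as follows (a simplified illustration with hypothetical function names, not code from [45]):

```python
import numpy as np

def _shifted_stack(mask):
    """All nine 3x3-neighborhood shifts of a zero-padded binary mask."""
    p = np.pad(mask, 1)
    return np.stack([p[1 + dy : p.shape[0] - 1 + dy,
                       1 + dx : p.shape[1] - 1 + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)])

def binary_opening(mask):
    """Morphological opening: erosion then dilation, 3x3 structure."""
    eroded = _shifted_stack(mask).all(axis=0)
    return _shifted_stack(eroded).any(axis=0)

def detect_leak(mask):
    """Split a bone mask into an opened mask and a residual.

    The opening removes thin connections (leaks); the residual holds
    the removed pixels, which the method in [45] then inspects to
    decide whether they belong to the bone or to leaked fat/tissue.
    """
    opened = binary_opening(mask)
    residual = mask & ~opened
    return opened, residual
```

On a toy mask containing a 10 × 10 “bone” blob with a one-pixel-wide appendage, the opening keeps the blob and drops the appendage into the residual, mirroring steps (g)–(k) of the caption.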
Figure 5. Bone segmentation of one sample slice in the coronal view. (a) Original image; (b) multi-atlas-based spatial prior; (c) segmentation result; (d) expert segmentation [49].
Figure 6. Results of a general region-growing algorithm: (a) original image (red arrows point to the structures (teeth and auris) to be segmented); (b) segmentation result from region growing with parameter r = 30; (c) results obtained using the robust split-and-merge algorithm; (d) the edges of the segmented regions are more exact and smoother [53].
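The baseline method in Figure 6 admits a very small sketch: a pixel joins the region when its intensity lies within a tolerance r of the seed intensity, grown by flood fill. This fixed-threshold homogeneity test is the simplistic criterion that the Bayes-based variant in [53] replaces with a statistical one (the function below is an illustrative sketch, not the authors’ code):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, r=30):
    """Minimal intensity-based region growing (4-connectivity).

    A pixel joins the region if its intensity differs from the seed
    intensity by at most r -- the fixed-threshold criterion whose
    leaky, jagged boundaries motivate the improvements in [53].
    """
    h, w = image.shape
    seed_val = float(image[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if abs(float(image[ny, nx]) - seed_val) <= r:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region
```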
Figure 7. A typical machine learning workflow [61].
Figure 8. Segmentation results for MR images using the hybrid SVM-DRF model with five types of feature vectors [68].
Figure 9. Knee bone segmentation using (a) classical machine learning and (b) deep learning. Classical machine learning relies on hand-crafted feature representations and an explicit mapping, whereas deep learning extracts hierarchical feature representations through multiple hidden layers [76].
Figure 10. The difference between a linear regression model and a simple deep learning model: (a) linear regression model; (b) simplified deep learning model [77].
Figure 11. Process of segmenting medical images using deep learning [78].
Figure 12. The SegNet CNN architecture. SegNet consists of two networks, an encoder and a decoder; the final output is a high-resolution, pixel-by-pixel tissue classification [83].
Figure 13. Schematic representation of the workflow of the approach in [87].
Figure 14. Number of papers reviewed for each method in KOA studies.
Table 1. Summary of automatic knee bone segmentation based on deformable, graph, atlas, miscellaneous models.
| Ref. | Year | Segmentation Technique | No. of Samples | Sequence Type | Region of Interest | Metric |
| --- | --- | --- | --- | --- | --- | --- |
| [32] | 2007 | ASM-SSM | 20 healthy samples | FS SPGR | Femur/Tibia/Patella | DSC: 0.96 (FB); 0.96 (TB); 0.89 (PB) |
| [33] | 2010 | AAM | 80 subjects | DESS | Femur/Tibia/Cartilage | AvgD: 0.88 (±0.24) (FB); 0.74 (±0.21) (TB); RMSD: 1.49 (±0.44) (FB); 1.21 (±0.34) (TB); AvgD: 36.3 (±5.3) (FC); 34.6 (±7.9) (TC); RMSD: −25.2 (±10.1) (FC); −9.5 (±18.8) (TC) |
| [34] | 2010 | ASM-AAM | 40 clinical MRI samples | T1-weighted SPGR | Femur/Tibia/Cartilage | AvgD: 1.02 (±0.22) (FB); 0.84 (±0.19) (TB); RMSD: 1.54 (±0.30) (FB); 1.24 (±0.28) (TB); AvgD: 34.0 (±12.7) (FC); 29.2 (±8.6) (TC); RMSD: 7.7 (±19.2) (FC); −2.7 (±18.2) (TC) |
| [35] | 2011 | SSM | 40 clinical samples | CTF | Femur/Tibia | Single-object SSM DSC: 0.94 (±0.02) (FB); 0.86 (±0.10) (TB) |
| [15] | 2013 | AAM | 178 samples | Sagittal 3D double-echo | Femur/Tibia/Patella | Odds ratio 12.5 [95% CI 4.0–39.3] for incident OA (K/L grade 0 at baseline) after 12 months in patients in the lowest tertile compared with the top tertile ([95% CI] 1.8–5.0, p < 0.0001) |
| [19] | 2010 | LOGISMOS | 69 images | 3D DESS WE | Femur/Tibia/Patella | DSC ± SD: 0.84 ± 0.04 (FC); 0.80 ± 0.04 (TC); 0.80 ± 0.04 (PC) |
| [45] | 2011 | Graph cuts | 376 images | T2-weighted | Femur/Tibia | DSC: 0.936 (FB); 0.946 (TB); 0.941 (FB + TB) |
| [44] | 2009 | Graph cuts | 8 images | DESS | Femur/Tibia/Patella | DSC: 0.961 (FB); 0.857 (PB); 0.970 (TB); 0.958 |
| [46] | 2010 | Graph cuts | 30 images | T2 sagittal map | Femur/Tibia | Zijdenbos Similarity Index (ZSI): Avg 95%; Std 0.028; Median 0.96; Min 0.87; Max 0.98 |
| [47] | 2020 | Graph cuts | 65 slices | T1 sequence | Femur/Tibia | Mean Square Error (MSE): 0.19 |
| [50] | 2014 | Multi-atlas | 100 training; 50 test | T1-weighted GRE FS | Femur/Tibia | ASD ± SD: 0.63 ± 0.17 mm (FB); 0.53 ± 0.25 mm (TB) |
| [51] | 2015 | Multi-atlas, KNN | Samples from CCBR, OAI, and SKI10 | T1-weighted Turbo 3D | Tibia | DSC ± SD (training): 0.975 ± 0.010 (TB) |
| [55] | 2011 | Ray casting | 161 samples | GRE FS | Femur/Tibia | DSC ± SD: 0.94 ± 0.05 (FB); 0.92 ± 0.07 (TB) |
| [56] | 2007 | Region growing; level set | 2 samples | T1-weighted | Femur/Tibia/Patella | Sens: 97.05% (FB); 96.95% (TB); 92.69% (PB); Spec: 98.79% (FB); 98.33% (TB) |
| [57] | 2017 | Level set; predefined threshold | 8 samples | DESS | Femur/Tibia | DSC ± SD: 90.28 ± 2.33% (FB); 91.35 ± 2.22% (TB) |
| [54] | 2005 | FLoG edge detector; threshold; wavelet transforms (WT) | 40 samples | GE Signa Horizon LX 1.5 Tesla | Femur/Patella | Robust segmentation of the femur and patella even under poor imaging conditions |
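Most entries in Table 1 (and several in Table 2 below) report the Dice similarity coefficient (DSC). For two binary masks A and B it is 2|A ∩ B| / (|A| + |B|); a minimal reference implementation for checking reported values:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks are treated as identical
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two overlapping square masks of 36 pixels each:
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[4:10, 4:10] = True
# Overlap is the 4x4 block [4:8, 4:8] = 16 pixels
print(round(dice(a, b), 3))  # → 0.444, i.e., 2*16 / (36 + 36)
```

The same formula extends unchanged to 3D volumes, which is how the volumetric DSC values in the tables are computed.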
Table 2. Summary of deep learning and machine learning methods for studying knee bone segmentation and classification.
| Ref. | Year | Data | Dataset | Feature Engineering | Learning Algorithm | Validation | Results |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [69] | 2019 | X-ray | OAI | ICA | Random forest; Naïve Bayes | Leave-one-out (LOO) | 87.15% sensitivity; 82.98% accuracy; up to 80.65% specificity |
| [70] | 2017 | MRI | Hospital data | GLCM | SVM with linear, RBF, and polynomial kernels | 147 images training; 66 images testing | 95.45%, 95.45%, and 87.8% accuracy, respectively |
| [71] | 2018 | MRI | OAI | PCA | SVM; random forest; Naïve Bayes; ANN | 10-fold cross-validation | For JSL grade prediction, the best performance was achieved by random forest (AUC = 0.785, F-measure = 0.743) and the ANN (AUC = 0.695, F-measure = 0.796) |
| [72] | 2016 | MRI | OAI | k-means clustering; neighborhood approximation forests | LOGISMOS; hierarchical random forest classifier | 108 baseline MRIs and 54 patients’ 12-month follow-up scans | 4D cartilage surface positioning errors (in millimeters) |
| [73] | 2018 | Pain scores and X-rays | OAI and MOST | PCA | LASSO regression | 10-fold cross-validation | AUC of 0.86 for radiographic progression |
| [83] | 2018 | MRI | SKI10 | Not used | CNN | 3D-FSE images and T2 maps | ASD ± SD: 0.56 ± 0.12 (FB); 0.50 ± 0.14 (TB) |
| [84] | 2019 | MRI | SKI10, OAI Imorphics, OAI ZIB | Not used | 2D/3D CNN combined with SSMs | 2-fold cross-validation | (i) 74.0 ± 7.7 total score; (ii) DSC: 89.4% (FC); (iii) DSC: 98.6% (FB), 98.5% (TB), 85.6% (TC), 89.9% (FC) |
| [85] | 2020 | MRI | National Institutes of Health (NIH), SKI10 | Not used | HNN deep learning | 9-fold cross-validation | DSC ± SD: 0.972 ± 0.054 (FB); 0.947 ± 0.0113 (PB) |
| [86] | 2019 | X-ray | Korea Centers for Disease Control and Prevention (KCDCP) | PCA | Deep neural network (DNN) | 66%/34% train–test split; 5-fold CV; 50%/50% train–test split | 76.8% AUC |
| [87] | 2020 | X-ray | OAI, MOST | Not used | Ensemble and CNN | 19,704 train; 11,743 test | 0.98 average precision and 0.98 ROC AUC |
| [88] | 2017 | X-ray | OAI, MOST | FCN | CNN | 70% training; 30% testing | 60.3% accuracy (multi-class, grades 0–4) |
| [89] | 2018 | X-ray | OAI, MOST | FCN | CNN (ResNet-34) | 67% train; 11% validation; 22% testing | 66.71% accuracy (multi-class, grades 0–4) |
| [91] | 2019 | Clinical data, X-ray | OAI, MOST | CNN | Gradient boosting machine (GBM) and logistic regression (LR) | OAI dataset for training, MOST dataset for testing; 5-fold CV | Accuracy 0.79 |
| [90] | 2019 | X-ray | OAI | Cascade | Deep neural network (DNN) | 10-fold cross-validation | 82.98% accuracy; 87.15% sensitivity; 80.65% specificity |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
