Review

Review of CNN-Based Approaches for Preprocessing, Segmentation and Classification of Knee Osteoarthritis

1 Computer Science and Engineering Department, Punjab Engineering College, Chandigarh 160012, India
2 Department of Computer Science, Birkat Al Mouz, University of Nizwa, Nizwa 616, Oman
3 Maharishi Markandeshwar Engineering College, Maharishi Markandeshwar (Deemed to be University), Mullana 133207, India
* Author to whom correspondence should be addressed.
Diagnostics 2026, 16(3), 461; https://doi.org/10.3390/diagnostics16030461
Submission received: 12 September 2025 / Revised: 31 December 2025 / Accepted: 14 January 2026 / Published: 2 February 2026
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Osteoarthritis (OA) is a prevalent joint disorder characterized by symptoms such as pain and stiffness, often leading to loss of function and disability. Knee osteoarthritis (KOA) represents the most prevalent type of osteoarthritis. KOA is usually detected using X-ray radiographs of the knee; however, the classification of disease severity remains subjective and varies among clinicians, motivating the need for automated assessment methods. In recent years, deep learning–based approaches have shown promising performance for KOA classification tasks, particularly when applied to structured imaging datasets. This review analyzes convolutional neural network (CNN)-based approaches reported in the literature and compares their performance across multiple criteria. Studies were identified through systematic searches of IEEE Xplore, SpringerLink, Elsevier (ScienceDirect), Wiley Online Library, ACM Digital Library, and other sources such as PubMed and arXiv, with the last search conducted in March 2025. The review examines datasets used (primarily X-ray and MRI), preprocessing strategies, segmentation techniques, and deep learning architectures. Reported classification accuracies range from 61% to 98%, depending on the dataset, imaging modality, and task formulation. Finally, this paper highlights key methodological limitations in existing studies and outlines future research directions to improve the robustness and clinical applicability of deep learning–based KOA classification systems.

1. Introduction

Osteoarthritis (OA) is a chronic joint disorder marked by the progressive deterioration of cartilage in the joints. OA breaks down the cartilage and wears it away, leading to pain, stiffness, swelling, and reduced joint movement [1]. Knee OA (KOA) affects the knee joint, leading to pain and limitations in knee movement for those affected. Many factors, such as age, obesity, injury, joint overuse, and genetics, contribute to the development and progression of KOA. This disease has a high prevalence among older people and causes severe discomfort and restriction in movement. The number of people with KOA has increased in recent decades [2] and is also projected to increase in the future [3]. The diagnosis and treatment of this disease also create an economic burden on individuals and society as a whole. Multiple studies have shown that early detection and treatment reduce the effect of KOA to a large extent. Moreover, early detection also reduces the cost of treatment substantially [4]. The diagnostic accuracy of clinical evaluations and imaging investigations is highly dependent on the knowledge and experience of doctors and therefore poses the risk of misclassification by inexperienced physicians [5]. Orthopedists, specialists in joint disorders, have the knowledge, testing abilities, and experience to diagnose KOA; however, they face an ever-increasing workload with rising KOA cases. Additionally, the classification criteria are highly subjective, and different doctors, or the same doctor at different times, may reach different diagnoses on the same X-ray [6]. Hence, there is a need for an automated, standardized technique for the detection and classification of KOA at early stages. Such techniques and solutions can help improve the lives of millions of people every year and reduce the burden on the healthcare system.

Significance of This Review

A detailed review of the existing literature underscores the relevance of this article, particularly given the rapid progress of deep learning (DL) architectures. Multiple researchers and experts have proposed various approaches to address the need for automated techniques using machine learning (ML) and DL algorithms to detect and classify KOA. This paper presents an extensive survey of existing techniques for automatic detection and classification of KOA. The paper also highlights the limitations of the existing methods and outlines possible future research directions in this area. This paper offers a comparative review of current methods for estimating minimal joint space width and assessing KOA severity using the Kellgren and Lawrence (KL) grading system. The paper also compares recently proposed DL techniques based on their experimental results.
As shown in Table 1, recent review papers lack a detailed analysis of prominent preprocessing techniques and segmentation methods, along with dataset details (especially MRI datasets) for KOA classification. Therefore, the major research contributions of this paper are as follows:
(a)
A comprehensive survey of relevant recent research studies is carried out, exploring various data sources, data preprocessing techniques, and DL architectures utilized.
(b)
A comparison of performance measures of the research studies is presented. The effect of variations in methodology on performance measures such as accuracy, precision, recall, and F1-score is also discussed.
(c)
Shortcomings of the considered research studies are analyzed, and promising future research directions are outlined.
(d)
A review of different preprocessing methods is provided, as summarized in Table 1.
Table 1. Comparison of survey papers with the proposed review in terms of preprocessing, segmentation, deep learning techniques, and imaging dataset coverage.

| Paper | Year | Preprocessing Techniques | Segmentation Techniques | DL Techniques | X-ray Dataset | MRI Dataset |
| Kokkotis et al. [7] | 2020 |  |  |  |  |  |
| Saini et al. [4] | 2021 |  |  |  |  |  |
| Yeoh et al. [8] | 2021 |  |  |  |  |  |
| Yick et al. [9] | 2022 |  |  |  |  |  |
| Lee et al. [10] | 2022 |  |  |  |  |  |
| Ramazanian et al. [11] | 2023 |  |  |  |  |  |
| Cigdem et al. [12] | 2023 |  |  |  |  |  |
| Zhao et al. [13] | 2024 |  |  |  |  |  |
| Touahema et al. [14] | 2024 |  |  |  |  |  |
| Teoh et al. [15] | 2024 |  |  |  |  |  |
| Tariq et al. [16] | 2025 |  |  |  |  |  |
| This Review | 2026 |  |  |  |  |  |
This paper is organized in various sections as follows: Section 2 explains the causes, symptoms, and prevalence of KOA along with its classification criteria. Section 3 lists the sources, selection criteria, and overall process of the literature review. Section 4 highlights the increased use of DL in healthcare and discusses the prominent DL architectures in this application domain. Section 5 elaborates on different data sources used in the papers considered for the review. Various DL approaches used in these research studies, along with data augmentation, data preprocessing, and segmentation techniques, are analyzed in Section 6. Section 7 presents the possible future research directions based on the analyzed research gaps. Lastly, the findings of the review paper are concluded in Section 8.

2. Osteoarthritis Overview

OA is a degenerative joint disorder caused by the deterioration of joint cartilage and the bone beneath it. It is one of the leading causes of disability in the world [17]. The most common symptoms are joint pain and stiffness, and the symptoms usually progress slowly over the years. Other symptoms include joint swelling, reduced range of motion, gait abnormalities, and weakness or numbness in the arms and legs. OA is the leading source of physical disability and impaired quality of life in industrialized nations and is expected to rise with continuous industrial expansion and the aging population [18]. The radiographic features conventionally used to define OA include joint space narrowing, osteophytes, subchondral sclerosis, cyst formation, and abnormalities of bone contour [1]. The principal morphological characteristic of OA is a slowly developing degenerative breakdown of cartilage with only episodic synovitis. Other characteristics include changes occurring in the bone, the synovium, and the muscles [19]. The hallmark of OA pathology is the loss of articular cartilage, which is commonly detected on standard radiographs as a narrowing of the joint space [20]. Osteoarthritis is the most common degenerative joint disorder that affects one or several diarthrodial joints, including small joints (such as those in the hand) and large joints (such as the knee and hip joints) [21]. Effects of OA can be observed on any joint in the body; however, the most commonly affected joints are displayed in Figure 1.
KOA is the most prevalent type of OA. The following subsections explain the biological symptoms exhibited by osteoarthritis in the knee, existing medical identification techniques for KOA, its severity grading criteria, and how DL can assist in the automated detection and classification of KOA.

2.1. Knee Osteoarthritis

KOA is the category of osteoarthritis that involves the knee joint and leads to its deformity. According to [22], knee osteoarthritis (KOA) is not just a cartilage-specific condition but is viewed as a chronic disorder affecting the entire joint, including the articular cartilage, meniscus, ligaments, and surrounding muscles, which may arise from various pathophysiological processes. According to [23], knee osteoarthritis can lead to symptoms such as crepitus, bone enlargement, decreased knee flexion, flexion contracture, and tenderness. Bone spurs, cartilage loss, and joint space narrowing are depicted in Figure 2. KOA is identified by symptoms such as joint pain and functional impairments in the knee, which can disrupt both work and daily activities for patients. KOA adversely affects the functional independence of the patient and negatively impacts the patient’s lifestyle, apart from causing pain and disability. Consequently, KOA can lead to persistent pain, crepitus, swelling, morning stiffness, muscle atrophy, reduced quadriceps strength, and impaired postural control.
KOA is broadly of two types, namely patellofemoral osteoarthritis and tibiofemoral osteoarthritis. Patellofemoral OA occurs due to the loss of cartilage of the patella and the trochlear groove, and accounts for approximately 40% of overall KOA cases [25]. Tibiofemoral OA, in contrast, occurs due to bone spur formation in the medial tibiofemoral compartment and causes the remaining 60% of KOA cases [26]. Around 30% of people over the age of 60 suffer from KOA across the globe [22], and it is one of the major causes of impairment among the elderly population. Moreover, the cases of KOA have been continuously rising over the past few decades [2]. The authors of [27] identify aging as one of the major reasons for the development of KOA. Though KOA can also occur during young adulthood, higher risk is observed among people over 45 years of age. An estimate predicts over 250 million patients are currently suffering from this disease globally [28]. Such a large number of KOA patients also imposes a substantial economic cost on society. As proposed by [28], KOA can be managed and treated much more effectively if detected early. As stated in [29], all non-pharmacological treatment measures of KOA, such as exercise routines and therapies, as well as pharmacological measures such as corticoid injections, have a higher success rate if implemented at early stages. Therefore, there is a great need for a simple and easy technique to identify this disease at an early stage.
Presently, knee osteoarthritis is diagnosed through physical examinations and techniques such as X-ray, Magnetic Resonance Imaging (MRI) scan, and arthroscopy reports, among others. As mentioned in [30], X-ray and other radiographic images are commonly used techniques to determine the class of knee osteoarthritis due to factors such as cost and availability. Therefore, plain knee radiographs are predominantly used to evaluate KOA, as they effectively reveal osteophyte formation, reduction in knee joint space width, subchondral geodes, and subchondral bone sclerosis. Key characteristics of KOA often include joint space narrowing and bone spurs. However, most of these techniques, including radiographic images, have a high degree of subjectivity due to the difference in experience of the clinician making the diagnosis. Other techniques also have further limitations, such as high cost and dependence on the chondral anatomical location, among others.

2.2. X-Ray Grading: Kellgren–Lawrence Scale

The KL grading system serves as the standard for assessing OA. Recognized by the World Health Organization (WHO) as the standard in 1961, the KL grading system classifies OA severity into five stages: 0 (normal), 1 (doubtful), 2 (mild), 3 (moderate), and 4 (severe). KOA is particularly hard to detect in its early stages, when the distinction between grades 0 and 1 is minimal. Classification on the KL scale depends primarily on the width of the knee joint space; narrowing of the joint space usually indicates a higher degree of KOA severity. As noted in [5], joint space narrowing results from ongoing damage to the articular cartilage, accompanied by the development of osteophytes, subchondral cysts, and subchondral sclerosis in the synovial joints, potentially leading to functional impairment. According to [31], Grade 0 represents the absence of Joint Space Narrowing (JSN) or reactive changes, Grade 1 suggests doubtful JSN and possible osteophytic lipping, while Grade 2 indicates clear osteophytes and potential JSN. Following this, Grade 3 signifies the presence of moderate osteophytes, definite JSN, some degree of sclerosis, and possible bone-end deformities, while Grade 4 denotes extensive osteophytes, pronounced JSN, severe sclerosis, and definitive bone-end deformities. Figure 3 displays sample X-ray images with the respective characteristics of different KL grades.
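To make the KL label space concrete for a five-class classifier, the scale above can be encoded as a simple mapping. This is a hypothetical Python sketch: the grade descriptions follow [31], and the helper name is illustrative, not part of any reviewed system.

```python
# Hypothetical encoding of the five-point KL scale described above.
KL_GRADES = {
    0: "normal (no JSN or reactive changes)",
    1: "doubtful (doubtful JSN, possible osteophytic lipping)",
    2: "mild (clear osteophytes, potential JSN)",
    3: "moderate (moderate osteophytes, definite JSN, some sclerosis)",
    4: "severe (extensive osteophytes, pronounced JSN, severe sclerosis)",
}

def describe_kl(grade: int) -> str:
    """Return a human-readable description for a KL grade 0-4."""
    if grade not in KL_GRADES:
        raise ValueError(f"KL grade must be 0-4, got {grade}")
    return KL_GRADES[grade]

print(describe_kl(2))
```

A DL classifier for KL grading then amounts to mapping a knee radiograph onto one of these five integer labels.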

2.3. MRI-Based Grading Systems

While the X-ray images are widely used for KOA classification, they have limitations in assessing soft tissue structures and early osteoarthritic changes. MRI provides a more comprehensive evaluation of KOA by capturing cartilage integrity, bone marrow lesions, synovitis, and meniscal abnormalities. Several MRI-based grading systems have been developed to provide a quantitative assessment of KOA severity. The prominent MRI-based grading methods are Whole-Organ Magnetic Resonance Imaging Score (WORMS) [32], Boston-Leeds Osteoarthritis Knee Score (BLOKS) [33], and MRI Osteoarthritis Knee Score (MOAKS) [34].
  • WORMS: This system is one of the most widely used MRI-based grading systems for KOA categorization. It evaluates multiple joint structures, including cartilage morphology, bone marrow lesions, menisci, synovitis, and joint effusion. Each structure is graded separately, providing a comprehensive assessment of disease progression. WORMS is particularly useful in longitudinal studies to monitor KOA development over time.
  • BLOKS: BLOKS is another MRI-based grading system designed to assess KOA features related to disease progression. It focuses on specific biomarkers of joint degeneration, such as cartilage loss, bone marrow lesions, and synovitis/effusion. Compared to WORMS, BLOKS places greater emphasis on inflammation-related changes, making it useful for understanding the role of synovitis and effusion in KOA progression.
  • MOAKS: MOAKS is an advanced grading system that builds upon WORMS and BLOKS, integrating their strengths while addressing some of their limitations. It provides detailed scoring for cartilage damage, bone marrow lesions, osteophytes, meniscal integrity, and synovitis. MOAKS offers improved inter-reader reliability and is widely used in clinical research to quantify structural changes in KOA.
The KL grading system that uses X-rays mainly looks at bone spurs and joint space narrowing, but it does not detect early KOA or soft tissue changes. MRI-based systems such as WORMS, BLOKS, and MOAKS give a more detailed view of the knee by showing cartilage damage, bone marrow changes, meniscus problems, and inflammation. These systems are useful for tracking disease progression and severity in research and clinical studies. However, MRI is more expensive, takes longer, and requires special equipment compared to X-rays. Combining MRI-based grading with KL grading gives a clearer understanding of KOA and helps with better diagnosis, monitoring, and treatment planning.

3. Literature Review Methodology

The first step of every research project is to explore related studies and set eligibility criteria to specify which studies to include in the review. The literature review in this paper is carried out as explained in the following subsections.

3.1. Sources of Literature

Credible and trustworthy sources have been consulted to ensure the study’s integrity, with Google Scholar being extensively utilized for initial investigations. Research articles from reputable publishers and repositories such as IEEE, Elsevier, Wiley, ACM, Springer, and arXiv have been examined and incorporated into the research. Table 2 shows the publisher-wise distribution of research studies taken into consideration after various rounds of selection.

3.2. Inclusion and Exclusion Criteria

Precise delineation of inclusion criteria is essential for selecting relevant research articles, as it defines the scope and limitations of the review, aiding reader comprehension [35]. The selection criteria for shortlisting research papers for this review are listed below:
  • Study that preferably proposes a model developed using publicly available datasets such as the Osteoarthritis Initiative (OAI) and the Multicenter Osteoarthritis Study (MOST).
  • The research paper should be published in well-reputed journals.
  • The paper included should be a recent study to keep this research up-to-date.
  • Study proposing new methodologies to automate classification or reviewing existing literature or surveys on OA and KOA to keep this research as relevant as possible.
  • Study using preprocessed enhanced images.
  • Study that uses DL-based classification algorithms mostly using CNN-based architecture.
  • The study thoroughly details its methodology and reports evaluation metrics such as accuracy, precision, and recall for the proposed model.
Exclusion: We have excluded the articles that satisfy the following criteria:
  • Articles that discuss only traditional image processing and ML techniques for KOA classification.
  • Articles that use or propose DL architectures other than CNN-based ones, such as autoencoders, transformers, etc.
  • KOA studies focusing on KOA progression based on the patient’s history.
  • Studies using other grading methods except KL grading for X-ray images.
  • Studies that use data modalities other than X-ray and MRI.
The number of research studies and articles filtered at different stages while following the mentioned steps is shown in Figure 4. Figure 5 shows the year-wise distribution of KOA classification, DL, medical imaging, and other related research articles included in this paper.

3.3. Study Selection Process

The overall process for selecting, filtering, and including research studies in this review and comparison with other studies involved multiple steps:
  • Searching for papers using keywords such as “KOA”, “OA”, “KL Grade”, and “DL in Healthcare”, etc.
  • Optimizing the search to include only the studies published by reputed journals.
  • Going through the title and abstract of the study to decide its usefulness for the review.
  • Analyzing all the findings and listing the ones that can be used in the review study.
  • Noting the data sources and preprocessing techniques for the KOA classification studies.
  • Listing all the architectures proposed, fine-tuning methods used, and results obtained by the studies.
  • Mentioning the findings of the research in the appropriate section of this review and citing it.
  • Comparing the performance measures of solutions proposed by different studies using some common evaluation metrics.
  • Representing this survey visually in the form of appropriate figures, tables, graphs, and charts.

4. Deep Learning in Healthcare

Traditional diagnosis methods suffer from limitations such as subjectivity in the diagnosis, limited reach of services to the broader population, and the affordability of diagnostic solutions. These limitations present the need for automated, affordable, and efficient approaches with consistent results. The growing application of artificial intelligence (AI) through advanced ML and DL algorithms is helping experts by either fully or partially automating the diagnostic process. DL methods focus on constructing layered models that allow computers to autonomously perform tasks such as classification and object detection. As mentioned in [9], DL refers to ML programs developed based on “neural networks”, which are inspired by the neural network structure of the human brain and can adapt themselves through repetitive training to recognize patterns. DL is used for supervised learning, including image classification, image generation, object detection, and image segmentation. It is also used in the fields of unsupervised as well as reinforcement learning. In the vast field of image classification, many popular pre-trained models exist, such as VGGNet [36], ResNet [37], YOLO [38], DenseNet [39], MobileNet [40], and EfficientNet [41].
According to [8], these complex DL models have shown performance comparable to that of human experts in KOA detection and classification. These models often use Transfer Learning (TL), in which knowledge from previously trained models is leveraged to address new tasks with minimal additional training or fine-tuning. Transfer learning is a two-phase process for training DL models, involving an initial pre-training phase followed by a fine-tuning phase in which the model is adapted to the target task [42]. Thus, the classification of knee X-ray radiographs according to the KL grading scale can also be implemented effectively using DL models. After training, the model can be fed new data to generate predictions representing the severity of KOA. As surveyed in [43], DL techniques have been applied to numerous healthcare problems, including medical imaging, computer-aided detection/diagnosis, disease prediction, image segmentation, and image generation. Image classification is helpful in the identification of skin diseases in dermatology, eye disease recognition in ophthalmology, and classification of pathological images for various cancers such as breast cancer and brain cancer [44].
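The two-phase transfer-learning idea can be illustrated with a deliberately tiny sketch: a "pre-trained" feature extractor is kept frozen while only a small linear head is fitted to the target task. This is a toy stand-in, not any reviewed study's pipeline; in practice the frozen backbone would be a CNN such as ResNet and the head a classification layer.

```python
# Toy two-phase transfer learning (illustrative only): the "pre-trained"
# backbone is frozen, and only the linear head is fine-tuned.

def frozen_extractor(x):
    """Stand-in for a pre-trained backbone: fixed, never updated."""
    return [x, x * x]  # two hand-crafted "features"

def train_head(data, lr=0.01, epochs=200):
    """Phase 2: fine-tune only the linear head w on (x, y) pairs."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            feats = frozen_extractor(x)
            pred = sum(wi * fi for wi, fi in zip(w, feats))
            err = pred - y
            # gradient step on the head only; the backbone stays frozen
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

# Target task y = 2*x + x^2 is learnable from the frozen features alone.
data = [(x, 2 * x + x * x) for x in [-2, -1, 0, 1, 2]]
w = train_head(data)
print(w)  # converges towards [2.0, 1.0]
```

Because the backbone is reused rather than retrained, only a small number of parameters (here, two weights) need to be learned on the target data, which is why transfer learning works well with limited medical datasets.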

Architectures and Applications

DL algorithms use complex multi-layer networks of neurons to learn the hidden patterns in the training data. As discussed in detail in [45], CNN-based pipelines first implement segmentation to group similar portions of an image into a single class and assign them labels. The next step in model building is the detection of features that indicate the presence and severity of KOA. Lastly, classification is performed by grouping images with similar features and indicators into the same output classes. CNNs have shown superior performance on image, speech, and audio signal inputs.
CNNs consist of convolutional, pooling, and fully connected layers. The convolutional layer learns filter weights and biases via backpropagation and gradient descent to detect image features, generating multiple feature maps representing different characteristics of the input tensors [46]. Weight sharing reduces the number of parameters [47]. Common activation functions for classification include ReLU, Sigmoid, Tanh, Leaky ReLU, and SoftMax (Table 3). Consequently, CNNs are widely applied in image classification tasks, including medical imaging.
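Minimal reference implementations of the listed activation functions can make their behavior concrete (an illustrative, framework-independent sketch; Tanh is available directly as `math.tanh`):

```python
import math

# Reference implementations of common activation functions.

def relu(x):
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    return x if x > 0 else alpha * x

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # non-negative, sums to 1, ordered like the logits
```

In a KL-grade classifier, SoftMax is typically applied to the final five-unit layer so the outputs can be read as class probabilities, while ReLU-family functions are used in the hidden layers.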
Figure 6 illustrates the chronological development of CNN architectures over the years. Table 4 summarizes the key features and representative use cases of these models. Widely adopted pretrained networks include VGGNet, ResNet, Inception, YOLO, DenseNet, and EfficientNet [48].
The Visual Geometry Group (VGG) network architectures were developed by researchers in the Department of Engineering Science at the University of Oxford. The most widely used models released by this group are VGG16 and VGG19. VGG16 is a convolutional neural network with 16 weight layers, comprising 13 convolutional layers and 3 fully connected layers. The initial two convolutional layers use 64 feature kernels each [68], and their output is fed into a max pooling layer with a stride of 2. The third and fourth convolutional layers use 128 feature kernels each, followed by another max pooling layer with a stride of 2. The fifth, sixth, and seventh layers employ 256 feature maps, and the eighth to thirteenth layers use 512 kernel filters, with each group again followed by stride-2 max pooling. The fourteenth and fifteenth layers are fully connected hidden layers with 4096 units each, concluding with a Softmax output layer with 1000 units. Figure 7 shows the layered architecture of VGG16.
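The feature-map dimensions in the description above can be verified with a short calculation, assuming the standard VGG16 configuration of 3×3 same-padded convolutions (which preserve spatial size) and 2×2 max pooling with stride 2 (which halves it):

```python
# Tracing VGG16's feature-map shapes (illustrative calculation): 3x3
# same-padded convolutions preserve spatial size, and each 2x2 stride-2
# max pool halves it.

def vgg16_feature_shapes(size=224):
    # (conv layers per block, output channels); each block ends in a pool
    blocks = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]
    shapes = []
    for _n_convs, channels in blocks:
        size //= 2  # the block's 2x2 max pool, stride 2
        shapes.append((size, size, channels))
    return shapes

print(vgg16_feature_shapes())
# [(112, 112, 64), (56, 56, 128), (28, 28, 256), (14, 14, 512), (7, 7, 512)]
```

The final 7×7×512 volume is what gets flattened into the two 4096-unit fully connected layers.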
ResNets are a family of deep neural network architectures built from residual blocks, which contain skip connections (shortcuts) that enable the model to learn residual functions. Several ResNet architectures have been proposed, such as ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152. The most popular is ResNet-50, which uses a bottleneck block that reduces computational complexity while increasing the depth of the network. It has a total of 50 layers, and its skip connections enable faster convergence during training by mitigating the vanishing gradient problem. Residual learning is applied to every few stacked layers within these architectures; shortcut connections are inserted to turn a plain mapping into its residual version [37].
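The residual idea can be sketched in a few lines: the block outputs F(x) + x, so when the learned mapping F is close to zero the block behaves as an identity, which is what eases the training of very deep stacks. This is a pure-Python illustration; real ResNet blocks implement F with convolutional layers.

```python
# Sketch of a residual block: output = F(x) + x.

def residual_block(x, f):
    """y = F(x) + x, with f the learned residual mapping."""
    fx = f(x)
    return [a + b for a, b in zip(fx, x)]

x = [1.0, -2.0, 3.0]

# Identity behaviour: a zero residual mapping passes the input through.
y_identity = residual_block(x, lambda v: [0.0] * len(v))
print(y_identity)  # [1.0, -2.0, 3.0]

# A non-trivial mapping only has to learn the *difference* from the
# identity, e.g. a small ReLU correction here.
y = residual_block(x, lambda v: [max(0.0, a) * 0.1 for a in v])
print(y)  # approximately [1.1, -2.0, 3.3]
```

Because the skip path carries the input (and its gradient) through unchanged, stacking many such blocks does not suffer from vanishing gradients the way plain deep stacks do.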
The researchers in [69] proposed the Inception architecture with Pure Inception blocks and Residual Inception blocks. The inception modules provide the capability to learn diverse features at various levels of abstraction using parallel convolutional filters. These multiple outputs are then concatenated along the depth dimension. As a result, the Inception architecture can incorporate both local and global information and can better discriminate different patterns in the data. More recent extensions of the Inception family also draw on ideas from transformers, which are widely used in the domain of Natural Language Processing (NLP) [70].
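The depth-wise concatenation of parallel branches reduces to a simple channel computation; the branch widths below loosely follow the first inception module of GoogLeNet (1×1, 3×3, 5×5, and pool-projection branches) and are illustrative:

```python
# Channel bookkeeping for an Inception module: parallel branches see the
# same input, and their outputs are concatenated along the channel axis.

def inception_output_channels(branch_channels):
    """Output channels = sum over the parallel branches."""
    return sum(branch_channels)

# Branch widths loosely follow GoogLeNet's first inception module:
# 1x1, 3x3, 5x5, and pool-projection branches.
print(inception_output_channels([64, 128, 32, 32]))  # 256
```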
DenseNets were created to overcome issues found in traditional deep neural networks, such as the vanishing gradient problem and the challenges of training very deep networks, and were introduced by Huang et al. in 2017 [39]. DenseNets are built from Dense Blocks, each consisting of a series of convolutional layers with a fixed number of filters. Transition Blocks control the growth of the number of feature maps and reduce spatial dimensions, while global average pooling computes the mean of each feature map, producing a single value per channel. DenseNets use multiscale image pyramids, data centering through simplified RGB mean subtraction, can handle multiple aspect ratios, and provide a considerable speedup in computation time [71].
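Dense connectivity can be summarized by a channel-count calculation: layer l inside a Dense Block receives the concatenation of all earlier feature maps, so with growth rate k the channel count grows linearly. This is an illustrative sketch; the numbers follow DenseNet-121's first dense block.

```python
# DenseNet channel bookkeeping: with k0 input channels and growth rate k,
# the input to layer l has k0 + l * k channels (all earlier maps concatenated).

def dense_block_channels(k0, growth_rate, num_layers):
    """Channel counts entering each layer of one Dense Block (plus output)."""
    return [k0 + l * growth_rate for l in range(num_layers + 1)]

# DenseNet-121's first block: 64 input channels, growth rate 32, 6 layers,
# so the block's output has 64 + 6*32 = 256 channels before the
# Transition Block halves it.
print(dense_block_channels(64, 32, 6))  # [64, 96, 128, 160, 192, 224, 256]
```

This linear growth is exactly what the Transition Blocks are there to rein in between Dense Blocks.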
YOLO’s core concept involves partitioning the input image into a grid and conducting object detection over the whole image in a single forward pass through the neural network. The YOLO architecture [38] processes the entire image as input, divides it into a grid, and assigns each grid cell the task of predicting bounding boxes and class probabilities. This design gives YOLO high speed and the ability to detect multiple objects in real time. The authors in [72] summarize the different versions of the YOLO architecture developed over recent years and observe that the use of batch normalization and high-resolution classifiers enhances YOLO’s performance measures.
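YOLO's output layout can be sketched as a shape computation: each of the S×S grid cells predicts B boxes (x, y, w, h, confidence) plus C class probabilities. The defaults below follow the original paper's S=7, B=2, C=20 configuration; the cell-assignment helper is an illustrative addition.

```python
# Sketch of YOLO's prediction tensor and grid-cell responsibility.

def yolo_output_shape(S=7, B=2, C=20):
    """S x S grid, B boxes of 5 values each, plus C class probabilities."""
    return (S, S, B * 5 + C)

def cell_for_point(x, y, img_w, img_h, S=7):
    """Grid cell (row, col) responsible for an object centred at (x, y)."""
    col = min(x * S // img_w, S - 1)
    row = min(y * S // img_h, S - 1)
    return row, col

print(yolo_output_shape())                  # (7, 7, 30)
print(cell_for_point(320, 240, 448, 448))   # (3, 5)
```

The single-pass design follows directly from this layout: one forward pass fills the whole S×S×(B·5+C) tensor at once, rather than scanning the image region by region.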

5. Datasets for KOA

The task of building and testing DL models for KOA detection and classification requires a substantial amount of input data for the model training and validation process. These data have been obtained in various forms of images, like X-rays and MRIs, from a diverse range of sources [12]. The data collection process is detailed in the subsequent sections, outlining the primary types/modalities and data sources employed by different studies.

5.1. Imaging Modalities

The data used for detection and classification of KOA and its severity can be in various image forms. These include X-rays, MRI images, ultrasound images, CT scan images, and thermal images among others.
X-ray imaging is a crucial imaging technique in the medical domain that utilizes X-rays to create images of internal body structures such as bones and joints [73]. X-ray imaging is fast, efficient, cost-effective, and has high accuracy in detecting fractures, tumors, foreign objects, and abnormalities in various body systems [74]. In KOA assessment, X-ray images are mainly used to identify bony changes such as joint space narrowing and osteophyte formation; however, they have limited ability to detect early-stage disease and poor sensitivity for soft tissues such as cartilage, menisci, and ligaments. Conversely, MRI uses a strong magnetic field, radio waves, and computer technology to produce detailed images of organs, tissues, and other bodily structures. Unlike X-rays or CT scans, MRI does not involve ionizing radiation, making it a safer option for patients [75]. According to [76], MRI provides several benefits as a medical imaging modality, such as superior soft tissue contrast, multi-planar imaging, absence of ionizing radiation, high spatial resolution, functional imaging abilities, safety, versatility, and real-time imaging. Due to its superior soft tissue contrast, MRI is more suitable for early diagnosis of KOA and for accurately detecting cartilage degeneration, meniscal damage, and bone marrow lesions that are not visible in X-ray images. Additional methods include knee ultrasound [77], which employs high-frequency sound waves to generate real-time images of the knee joint and allows for immediate assessment of soft tissues such as cartilage and menisci. Computed tomography (CT) is another imaging technique, producing detailed cross-sectional slices of the knee joint [78]. Thermal imaging, or infrared thermography, is a more recent, painless, and non-invasive healthcare imaging technique used for the early detection of diseases.
The resultant images display a color-coded distribution of the temperature of the examined area [79]. The knee image samples of each of these techniques are shown in Figure 8. Overall, X-rays and, to a lesser extent, MRIs are most commonly used for deep learning-based KOA severity classification, as they provide complementary information on bony and soft tissue changes.

5.2. X-Ray and MRI Dataset Sources

Various projects and initiatives have been carried out across the world to collect and publish credible datasets of knee images. Several of these image datasets are publicly available and widely used by researchers to develop automated KOA detection and classification models. The major datasets are described in the following subsections.

5.2.1. OAI and MOST Datasets

Datasets for knee osteoarthritis cases have been collected from a wide range of sources across multiple studies. The most prominent sources are the OAI [82] and MOST [83] datasets. The OAI dataset consists of a large collection of knee X-ray radiographs and is publicly available. Different distributions of these datasets have been published by multiple researchers across the world. One of the most widely used distributions is the Chen distribution published in 2018 [84], consisting of 8260 unique images. Many other distributions have also been published, as highlighted in Table 5. A large number of research studies have used these datasets to build and evaluate DL models for KOA classification based on KL grading [5,85,86,87,88,89,90,91,92,93,94,95,96,97]. The MOST dataset comprises images collected from a longitudinal, prospective observational study of KOA in older Americans, either with existing OA or at higher risk of developing it. This dataset has been used in a number of KOA classification studies [94,96,97].
Along with X-ray images, the OAI and MOST datasets also contain a large number of knee MRI scans. In addition to training models to perform KOA classification using MRI images, some researchers have used both X-ray and MRI images to predict KOA severity [98]. The authors of [81] used MRI images from the OAI dataset for KOA classification; because MRI scans are 3D volumes composed of sequences of 2D slices, other researchers [99] have found them better suited to analyzing and predicting KOA progression rather than classification. Similarly, the authors of [100] utilized these MRI scans to develop a DL-based explainer, which provides interpretability for KOA classification based on KL severity grading.

5.2.2. Other Datasets

Other significant datasets used in KOA studies include MRNet [101], FastMRI [102], FastMRI+ [103], and data collected from local hospitals. The MRNet dataset, released by the Stanford ML Group, contains 1370 MRI scans, of which 1106 are abnormal scans prominently capturing Anterior Cruciate Ligament (ACL) tears and meniscal tears. The FastMRI dataset contains both MRI k-space data and Digital Imaging and Communications in Medicine (DICOM) images obtained through MRI knee examinations [102]: 1594 k-space acquisitions and 10,012 DICOM images. The FastMRI+ dataset [103] extends FastMRI by providing pathological annotations from clinical experts, which are critical for reconstruction frameworks; it contains 16,154 bounding box annotations for the different pathology categories originally released with the FastMRI knee dataset. Many researchers have also sourced KOA X-ray samples from publicly available medical expert data [87,104] and from local hospitals [87,105,106,107,108]. Some researchers [109] have carried out community-based studies to obtain knee X-ray images. Fewer public datasets exist for MRI images [82,83] than for X-ray images; hence, most studies using MRI as input data have had to rely on images sourced from local hospitals [110] and community samples [111].
Table 5 and Table 6 present the details of the different X-ray and MRI datasets considered by some prominent studies for this research problem, respectively.
Table 5. Summary of X-ray datasets used in KOA studies, including dataset details, number of images, and image dimensions.
| Reference | Year | Dataset Detail | No. of X-Ray Images | Image Dimension (Pixels) |
|---|---|---|---|---|
| OAI Dataset along with its variants and MOST Dataset | | | | |
| Sohail et al. [112] | 2025 | OAI-modified by Chen [84] | 8260 | 299 × 299 |
| Ahmed et al. [85] | 2024 | OAI-obtained from Mendeley | 8260 | — |
| Malik et al. [86] | 2024 | OAI-obtained from Mendeley | 5778 | 299 × 299 |
| Touahema et al. [87] | 2024 | OAI (labeled by Boston University)—obtained from Mendeley | 4446 | 224 × 224 |
| Patil et al. [88] | 2024 | OAI | 2250 | 384 × 384 |
| Mohammed et al. [89] | 2023 | OAI obtained from Kaggle | 9786 | 224 × 224 |
| El-Ghany et al. [90] | 2023 | OAI assessed by Boston University X-ray reading center (BU) | 4446 | 224 × 224 |
| Guida et al. [98] | 2023 | OAI [Subset-1: both MRI and X-ray, Subset-2: only X-ray] | Subset 1: 1100, Subset 2: 8821 | MRI (160 × 160), X-ray (600 × 220) |
| Pi et al. [91] | 2023 | OAI-modified by Chen [84] | 8260 | 224 × 224 (model tested with different image sizes) |
| Pongsakonpruttikul et al. [5] | 2022 | OAI-modified by Chen [84] | 1650 | 224 × 224 |
| Wang et al. [92] | 2021 | OAI | 4506 | 224 × 224 |
| Yunus et al. [93] | 2022 | MOST | 3795 | 224 × 224 |
| Swiecicki et al. [94] | 2021 | MOST | 18,503 | 700 × 700 |
| Norman et al. [95] | 2019 | OAI | 39,593 | 500 × 500 |
| Tiulpin et al. [96] | 2018 | MOST: for training, OAI: for validation and testing | 18,376 | 224 × 224 |
| Antony et al. [97] | 2017 | OAI & MOST | OAI: 4446, MOST: 2920 | 256 × 256 |
| Other datasets and datasets from local hospitals | | | | |
| Touahema et al. [87] | 2024 | Medical Expert Public Dataset—collected from various hospitals and diagnostic centers in India | 1650 | 362 × 162 |
| Touahema et al. [87] | 2024 | El Kelaa des Sraghna Provincial Hospital | 30 | — |
| Alshamrani et al. [104] | 2023 | Dataset obtained from Kaggle | 3836 | 224 × 224 |
| Hengaju et al. [105] | 2022 | Bhaktapur Hospital | 350 | 256 × 256 |
| Abdullah et al. [106] | 2022 | Radiological center (KGS scan center, Madurai) | 3172 | 3000 × 1500 |
| Sikkandar et al. [107] | 2022 | Durma and Tumair General Hospital, Riyadh | 350 | 256 × 256 |
| Olsson et al. [108] | 2021 | Danderyd University Hospital | 6403 | 256 × 256 |
| Shamir et al. [109] | 2009 | Baltimore Longitudinal Study of Aging (BLSA) | 350 | 1000 × 945 |
Table 6. Summary of MRI datasets used in KOA studies, including dataset details, number of images, and image dimensions.
| Reference | Year | Dataset Detail | No. of Knee MRI | Image Dimension (Pixels) |
|---|---|---|---|---|
| Guo et al. [113] | 2024 | OAI + FastMRI + SKI10 + private | 700 | — |
| Guida et al. [98] | 2023 | OAI [Subset-1: both MRI and X-ray, Subset-2: only X-ray] | 1100 (number of knees) | After crop: 160 × 160 |
| Harman et al. [114] | 2023 | FastMRI+ | 663 | — |
| Hung et al. [115] | 2023 | private (584) + MRNet (120) | 704 | 512 × 512 |
| Schiratti et al. [99] | 2021 | OAI [2D MRI images of type "COR IW TSE"] | 9280 | — |
| Karim et al. [100] | 2021 | MOST [2406 patients with MRI data] | 4678 MRI slices | Re-scaled to 360 × 360 |
| Guida et al. [81] | 2021 | OAI [3D DESS MRI—a sequence of 160 2D images] | 1100 | 384 × 384 |
| Du et al. [116] | 2018 | OAI | 4800 | 448 × 448 |
| Kumar et al. [110] | 2016 | SRM Medical College Hospital and Research Center | 15 | 256 × 256 |
| Marques et al. [111] | 2013 | Community-based, non-treatment study | 268 | 170 × 170 |

5.3. Dataset Provenance, Label Reliability, and Data Hygiene

KOA studies use large public datasets such as OAI and MOST, which provide bilateral and longitudinal knee radiographs graded using the KL scale [94,112,117]. These datasets follow standardized imaging protocols and employ trained readers. However, KL grading is based on visual assessment and is known to show variability between readers, especially for neighboring grades such as KL 1-2 and KL 2-3. Several studies attempt to reduce this variability by using consensus grading, third-reader adjudication, or selecting the modal grade from multiple readings [92,95,109]. Despite these efforts, some degree of label noise remains unavoidable in X-ray-based KOA datasets.
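Selecting the modal grade from multiple readings, as some of these studies do, reduces to a simple frequency count. A minimal sketch (the tie-breaking rule here is an illustrative assumption, not taken from any cited study, which may instead use third-reader adjudication):

```python
from collections import Counter

def modal_grade(readings):
    """Return the most frequent KL grade among multiple readers.

    Ties are broken by taking the lower grade (an illustrative,
    conservative rule; real studies may adjudicate instead)."""
    counts = Counter(readings)
    # Iterating grades in ascending order makes max() return the
    # lowest grade among those tied for the highest count.
    return max(sorted(counts), key=lambda g: counts[g])

assert modal_grade([2, 2, 3]) == 2   # modal grade wins
assert modal_grade([1, 2]) == 1      # tie -> lower grade
```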
Another important concern is subject-level dependency. Both OAI and MOST are longitudinal studies in which multiple images may exist for the same participant, including left and right knees and repeated follow-up visits [94,95,117]. If data splitting is performed at the image level instead of the patient level, images from the same individual may appear in both training and testing sets. This can lead to information leakage and overly optimistic performance results. The risk is higher when the left and right knees are treated as independent samples without enforcing subject-wise separation [95,112].
Longitudinal duplication is another source of bias, as radiographs from different time points of the same knee remain highly correlated [94,117]. Some studies explicitly control this issue by restricting analysis to baseline images, using a single knee per subject, or applying subject-wise data partitioning [116,117]. Similar challenges exist in MRI-based studies derived from OAI, where repeated scans and expert annotations may also introduce correlated samples despite richer structural information [98,116]. Table 5 and Table 6 summarize the X-ray and MRI datasets used in KOA studies.
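A subject-wise split that prevents this kind of leakage can be sketched as follows (the record format is hypothetical; real OAI/MOST metadata also carries knee side and visit identifiers):

```python
import random

def patient_wise_split(records, test_frac=0.2, seed=42):
    """Split image records by patient ID so no patient appears in both
    sets, avoiding subject-level leakage from bilateral knees or
    longitudinal follow-up visits.

    `records` is a list of (patient_id, image_id) tuples (hypothetical
    format for illustration)."""
    patients = sorted({pid for pid, _ in records})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_ids = set(patients[:n_test])
    train = [r for r in records if r[0] not in test_ids]
    test = [r for r in records if r[0] in test_ids]
    return train, test

# Both knees and a follow-up visit of patient "P1" stay on one side
records = [("P1", "left_v0"), ("P1", "right_v0"), ("P1", "left_v12"),
           ("P2", "left_v0"), ("P3", "right_v0"), ("P4", "left_v0"),
           ("P5", "left_v0")]
train_set, test_set = patient_wise_split(records)
assert not ({p for p, _ in train_set} & {p for p, _ in test_set})
```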

6. Data Preparation and Model Development

Recent research articles often introduce different DL architectures for detecting and classifying KOA using the KL grading scale, and then evaluate these architectures' performance using metrics such as accuracy, precision, recall, and F1-score. The input data are generally first augmented so that the DL model is trained on a balanced proportion of all classes. The data are subsequently processed to improve image quality and extract more valuable information, leading to better prediction outcomes. The refined data are then input into DL models for detecting and classifying KOA according to the KL grading system. The general steps followed in most research studies are shown in Figure 9. The following subsections describe the building blocks of an automatic KOA detection and classification system.

6.1. Data Augmentation

The majority of the datasets, irrespective of their sources, are highly imbalanced across KL grade classes. As a result, models trained on such datasets can produce predictions biased towards a particular class, affecting the credibility of the results. Oversampling the minority classes is required to make the datasets more balanced. This involves augmenting the dataset with new images generated by randomly varying the properties of existing images. Reported variations include rotating images by small angles (−3 to 3 degrees) or in 45-degree steps, and adjusting brightness and contrast by multiplying all red, green, and blue pixel values by factors ranging from 0.6 to 1.0 [5]. Some studies have also applied color jittering to randomly alter the brightness, contrast, and saturation of the images in the dataset.
A popular approach to generating augmented data is TensorFlow's ImageDataGenerator class, which provides built-in functions to augment a given dataset by adjusting parameters such as brightness, contrast, rotation, width and height shift, flip, and zoom of the original images. Another widely used method is to upscale the number of minority-class images by varying these parameters in a custom Python program. The research studies reviewed indicate that a balanced dataset leads to DL models with better accuracy scores and prediction rates.
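A minimal NumPy sketch of oversampling-by-augmentation, assuming the brightness-scaling range described above (function names are hypothetical; full pipelines also add rotations, shifts, and zooms):

```python
import random
import numpy as np

def augment(img, rng):
    """One augmented copy of a grayscale knee image: a random
    horizontal flip plus brightness scaling by a factor in [0.6, 1.0],
    mirroring the adjustments described above (rotation, zoom, etc.
    omitted for brevity)."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1]                       # horizontal flip
    out = np.clip(out * rng.uniform(0.6, 1.0), 0, 255)
    return out.astype(np.uint8)

def oversample(images, labels, target_per_class, rng):
    """Grow each class to `target_per_class` samples by appending
    augmented copies of its original images."""
    images, labels = list(images), list(labels)
    originals = {c: [i for i, y in enumerate(labels) if y == c]
                 for c in set(labels)}
    for cls, idx in originals.items():
        while labels.count(cls) < target_per_class:
            images.append(augment(images[rng.choice(idx)], rng))
            labels.append(cls)
    return images, labels
```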

6.2. Preprocessing Methods

Images obtained from knee X-ray radiographs may contain considerable noise and distortion, and the datasets may suffer from class imbalance, making them unsuitable for applying DL techniques directly. Hence, preprocessing techniques that yield high-quality data are necessary for better performance of DL models. The most basic preprocessing step is to resize the images to dimensions accepted by pre-trained models. The authors of [89] discarded excess information by cropping 60 pixels from both the top and bottom of each image.
Most studies use contrast enhancement as a primary preprocessing step to improve image quality by modifying intensity distributions. Histogram Equalization (HE) enhances image contrast by redistributing pixel intensities to achieve a roughly uniform histogram [118]. Brightness-preserving HE maintains both bright and dark areas while improving overall contrast [119]. Common image enhancement techniques, including HE, contrast stretching, sharpening filters, and Gaussian smoothing, help detect subtle features such as joint space narrowing or osteophytes. Adaptive Histogram Equalization (AHE) applies equalization locally on tiles to adjust local contrast, improving overall image quality; its key parameters are the grid size (number of image tiles) and the clip limit (a threshold to prevent over-amplification), with an 8 × 8 grid and a clip limit of 2.0 being common choices. Contrast Limited Adaptive Histogram Equalization (CLAHE) [120,121] further refines this by limiting histogram heights per tile, preserving fine details while enhancing contrast. Focusing on the main region of interest often improves results by removing irrelevant image information. The effect of CLAHE on sample knee X-ray images of each KL grade is illustrated in Figure 10.
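A minimal NumPy sketch of global histogram equalization may clarify the mechanism; CLAHE applies the same intensity remapping per tile with a clip limit (in OpenCV, for example, via `cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))`):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Pixel intensities are remapped through the normalized cumulative
    histogram so the output histogram is roughly uniform; a low-contrast
    radiograph is stretched to cover the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first non-zero CDF value
    if cdf[-1] == cdf_min:               # constant image: nothing to do
        return img.copy()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255)
    return lut.astype(np.uint8)[img]     # apply lookup table per pixel
```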
In [105], the authors removed noise while preserving bone edges by using an adaptive median filter. Image enhancement improves the perception of information in an image, for example in terms of contrast, sharpness, or masking. The study [104] applies denoising to remove noise from the X-ray images using a 2D median filter with a 5 × 5 kernel. The authors of [90] used artifact removal, resizing, contrast handling, and normalization to obtain more useful images. In [122], the authors used contrast stretching, histogram equalization, and Gaussian filters to improve the quality of MRI images. In [117], data with missing labels were removed from both the OAI and MOST datasets. In [92], the authors converted X-ray images into standard 8-bit grayscale images using the Pydicom toolkit to make the data more consistent. The studies [93,116] used Principal Component Analysis (PCA) to select the most informative features. In [111], partial least squares regression was used for feature selection and extraction from MRI images.
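The 5 × 5 median filtering used for denoising can be sketched directly in NumPy (a brute-force illustration; production code would use an optimized routine such as `scipy.ndimage.median_filter`):

```python
import numpy as np

def median_filter(img, k=5):
    """Apply a k x k median filter (k odd) to a 2D grayscale image.

    Each output pixel is the median of its k x k neighborhood, which
    suppresses impulse ("salt-and-pepper") noise while preserving
    edges better than linear smoothing. Borders use reflective padding.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```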
In [110], the input MRI images are initially resized to 512 × 512 pixels and then converted to grayscale, followed by contrast enhancement and histogram equalization. A thresholding operation is subsequently applied to remove pixels with low intensity values, and background noise is removed. The authors of [123] use global non-rigid registration followed by four local registrations for image preprocessing before segmentation of the desired region of interest. Overall, nearly all studies apply preprocessing techniques to their respective datasets prior to implementing DL models. These techniques are summarized in Table 7.

6.3. Segmentation Approaches

Proper placement of the ROI is essential for analyzing bone texture in the assessment of OA, and accurate localization enhances prediction accuracy. Several studies emphasize the impact of ROI size, shape, and position on OA prediction in knee texture-based methods [128]. ROI detection is the process of localizing and identifying specific regions of importance within an image or dataset for further analysis. It is carried out by placing bounding boxes around the regions of interest to locate the joint space width in radiographs and the articular cartilage in MRIs, and it is the first step in KOA classification, as stated in [129]. The VGG Image Annotator (VIA) tool was used in [127] to enclose each knee joint with bounding boxes. Bounding boxes limit the image to its most informative part, making it easier for DL models to extract useful features for classification. The authors in [107] performed segmentation using a local center of mass algorithm to extract the ROI, dividing the image into more meaningful and homogeneous regions for better separation and analysis of components. In [117], the BoneFinder tool, which uses a Random Forest regression voting method, was employed to identify knee joint landmarks and precisely capture the ROI. In [105], an active contour segmentation method was used to find regions of interest. From the literature, three types of segmentation approaches are used to identify, annotate, and validate the ROI in knee radiographs: manual, semi-automatic, and automatic segmentation. The process followed by each approach is broadly shown in Figure 11.
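Once a bounding box has been obtained, whether manually, with a tool such as VIA, or from a detector, extracting the ROI is a simple array slice. A hedged sketch with a hypothetical `(x, y, w, h)` box format (annotation tools and detectors differ in their conventions):

```python
import numpy as np

def crop_roi(img, box, margin=0):
    """Crop a region of interest from a 2D image.

    `box` is (x, y, w, h) in pixel coordinates (hypothetical format).
    An optional margin keeps some surrounding context, clipped to the
    image bounds so the slice never runs off the array."""
    x, y, w, h = box
    y0, y1 = max(0, y - margin), min(img.shape[0], y + h + margin)
    x0, x1 = max(0, x - margin), min(img.shape[1], x + w + margin)
    return img[y0:y1, x0:x1]

# Hypothetical detector output: a knee-joint box on a 100 x 200 radiograph
img = np.zeros((100, 200), dtype=np.uint8)
roi = crop_roi(img, (50, 20, 40, 30), margin=10)
assert roi.shape == (50, 60)
```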
Beyond accurate localization, the choice of ROI strategy directly influences downstream KOA classification performance. Manual ROI selection can provide precise joint localization but introduces reader-dependent variability and limited scalability, which affects reproducibility [130,131]. Semi-automatic methods reduce annotation effort while preserving anatomical consistency; however, their performance depends on initialization quality and parameter tuning [132,133]. Fully automatic approaches enable large-scale and consistent ROI placement but are sensitive to mislocalization and background inclusion. Landmark-based automatic methods, such as BoneFinder, provide anatomically aligned ROIs and more stable feature extraction, which has been shown to improve classification robustness in knee radiograph analysis [117]. Detector-based methods, including YOLO and Faster R-CNN, allow end-to-end ROI detection and classification, and have reported improved performance when trained on well-localized ROIs [5,106]. However, errors in bounding box placement can propagate to the classification stage and degrade performance, especially in advanced OA cases [128].
In MRI scans, accurate segmentation of cartilage is crucial for obtaining precise quantitative measures, including volume, thickness, and surface area, which are vital for detecting structural changes. Despite the challenges posed by cartilage morphology and MRI acquisition, robust segmentation is essential for reliable diagnostic and therapeutic decision-making in knee joint pathology [134]. Manual segmentation is the standard practice in knee cartilage segmentation; however, the correctness of the segmented area is verified by expert radiologists, which may introduce human bias into the accuracy of the segmented regions. Table 8 summarizes the manual segmentation techniques reported in the literature, along with the knee area segmented in each case.
Computational methods for segmenting articular cartilage (AC) from MRI images employ advanced image-processing and pattern recognition techniques to extract relevant features, minimizing human intervention [136]. Segmentation approaches are generally categorized into semi-automatic and fully automatic models [133]. Accurate segmentation improves model interpretation and enables focused analysis. The semi-automatic segmentation methods used in KOA studies are summarized in Table 9.
Table 10 presents the fully automatic segmentation methods. Recent studies increasingly leverage DL models for segmentation, with UNet being one of the most widely adopted architectures.
Overall, segmentation should be viewed not only as a preprocessing step but also as a key factor influencing feature quality, classification robustness, and reported performance in KOA studies.

6.4. DL Models for KOA Classification

After preprocessing, model selection is a critical step in KOA severity classification. Most studies rely on pre-trained DL architectures, often adapted for knee joint analysis. DenseNet variants are popular due to efficient feature propagation. For instance, Ref. [90] employed DenseNet-169, achieving high accuracy (96%) on the OAI dataset, while DenseNet-121 was applied in [127] after ROI segmentation using Mask R-CNN and VGG Image Annotator, yielding moderate performance (73% precision, recall, F1-score). DenseNet-201 combined with EfficientNet-B0 for feature extraction, feature fusion, and dimensionality reduction via SVM and neural networks achieved 90% accuracy [119]. DenseNets capture fine-grained features effectively but require careful preprocessing to avoid noise amplification.
ResNet architectures, particularly ResNet101 and ResNet50, are widely used for classification. Reference [89] found ResNet101 most effective among six CNN models, achieving 69% accuracy. Two-step pipelines integrating ROI detection and classification, such as YOLO for ROI followed by ResNet50 for classification [92], achieve ROI detection confidence of 85% and classification accuracy of 69%. Modified YOLOv3 Tiny models also demonstrated strong performance for both KOA detection and severity classification (85–87%) [5]. Faster R-CNN generally provides more precise ROI localization than YOLO, with minimal joint space width detection achieving 99.6% accuracy [106].
VGG and AlexNet remain widely applied due to simplicity and ease of transfer learning. Fine-tuned VGG19 achieved 70% accuracy [126], whereas VGG16 reached 92% after extensive preprocessing, including denoising, contrast enhancement, and feature extraction [104]. Modified AlexNet with transfer learning also delivered high performance (99% accuracy) [106]. However, these architectures may underperform on subtle structural variations compared to deeper networks.
Advanced techniques include Siamese networks for medial-lateral knee similarity learning (62% accuracy) [124], hyper-parameter optimized DCNNs for joint space narrowing and osteophyte staging (77% accuracy) [143], and multimodal fusion models combining X-ray and MRI images (76% accuracy) [98]. Three-dimensional CNNs applied to MRI, such as 3D DenseNet, provide superior spatial feature extraction, achieving up to 96% accuracy for ACL lesion detection [144], while semantic segmentation CNNs with 3D deformable modeling also improve performance [145].
Classical ML remains relevant through feature-engineered approaches. SVM with PCA on MRI [116], Chan-Vese edge detection with SVM [122] (95%), GLCM features with SVM [110] (87%), HOG with multiclass SVM [27] (95%), and Fisher LDA [111] (0.92 AUC) demonstrate that well-engineered features can provide competitive results, though they generally underperform deep networks. Ensemble methods combining SVM, Random Forest, and Gradient Boosting also improve multiclass classification (87%) [30].
Overall, DL models dominate KOA severity classification, with DenseNet, ResNet, and VGG variants applied most frequently. Table 11 summarizes the different ML and DL architectures used in the reviewed articles. The most commonly used metrics for evaluating classification models are accuracy and F1-score; others include recall, precision, and specificity. For segmentation and ROI localization, YOLO or Faster R-CNN significantly improves performance, while multimodal and ensemble strategies represent the latest evolution in methodology. The performance of different segmentation models is typically compared using the Dice score, which measures the overlap between the original and predicted segmentation masks. Table 12 and Table 13 compare the performance of different ML- and DL-based classifiers for multiclass and binary KOA severity classification, respectively, illustrating the progression from classical ML and simple CNNs to sophisticated deep networks and multimodal frameworks.
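The Dice score used to compare segmentation models can be computed directly from binary masks; a minimal NumPy sketch:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient 2*|A & B| / (|A| + |B|) between two binary
    segmentation masks; eps keeps the score defined (1.0) when both
    masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```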
KOA studies show that the selection of the DL model for KOA analysis depends on various factors such as the complexity of the knee structures, the type of imaging data used, and the required computational efficiency. DenseNet and EfficientNet perform well for detailed KL grading because they preserve and reuse features effectively, although they can be memory-intensive and sensitive to noisy preprocessing. ResNet provides a good balance between model depth and computational cost, making it a reliable general-purpose choice; however, it may miss very subtle structural variations in low-contrast images. Detection models such as YOLO and Faster R-CNN improve accuracy by precisely identifying the knee joint region before classification, but require additional training time and high-quality annotations. Simpler CNNs such as VGG and AlexNet remain useful for smaller datasets or as strong baseline models, but their limited depth reduces their ability to capture fine-grained KOA features. For MRI-based studies, 3D CNNs are preferred because they can capture volumetric information needed to assess cartilage and meniscal conditions, though they require large datasets and high computational resources. In general, each model has its own strengths and limitations, making the model choice dependent on the specific task and dataset characteristics.
Table 11. Summary of prominent machine learning and deep learning models employed for KOA severity classification.
| Category | Architecture | References |
|---|---|---|
| Deep Learning | Residual Networks (ResNets) | [89,91,92,104,105,106,108,117,124,146,147,148,149,150,151,152,153] |
| | DenseNets | [80,89,90,91,95,100,119,123,127,152,154] |
| | Visual Geometry Group (VGG) | [89,94,100,104,105,152,155,156,157] |
| | You Only Look Once (YOLO) | [5,92,93,158] |
| | EfficientNet | [91,159,160] |
| | Region-based CNN (R-CNN) | [94,106,127] |
| | MobileNet | [89,153,161,162] |
| | AlexNet | [106,163] |
| | Darknet | [164] |
| | Inception | [89,112,153] |
| | ShuffleNet | [91] |
| | NASNet | [165] |
| | HRNet | [166] |
| | LeNet | [167] |
| | Deep Siamese Network | [124] |
| | UNet | [95] |
| | CaffeNet | [157] |
| Machine Learning | Support Vector Machines | [27,110,116,119,122,150,164,168,169,170] |
| | k-Nearest Neighbours | [93,109,169,171,172] |
| | Random Forest Classifier | [169,173,174] |
| | Naive Bayes Classifier | [174] |
| Hybrid Models | CNN with SVM, RF, and Gradient Boosting | [30] |
Table 12. Performance Comparison of existing multiclass classifiers for KOA classification.
| Reference | Year | Dataset | Test Set Size | ROI Method | Imbalance Handling | Validation | Key Performance |
|---|---|---|---|---|---|---|---|
| Sohail et al. [112] | 2025 | OAI | 826 | NR | Data Augmentation | internal | Acc: 92.25, F1: 92.30, K: 90.69 |
| Ahmed et al. [85] | 2024 | OAI | 1656 | NR | NR | internal | Acc: 56.28, F1: 63 |
| Touahema et al. [87] | 2024 | OAI | 1000 | NR | Data Augmentation | internal | Acc: 97.20, F1: 97 |
| Malik et al. [86] | 2024 | OAI | 488 | NR | Data Augmentation | internal | Acc: 89.89, F1: 78.25 |
| Patil et al. [88] | 2024 | OAI | 125 | DFCN | NR | internal | Acc: 94 |
| Mohammed et al. [89] | 2023 | OAI | 1656 | NR | None | internal | Acc: 67, F1: 67 |
| El-Ghany et al. [90] | 2023 | OAI | 1778 | GradCAM | NR | internal | Acc: 95.93, F1: 87.08 |
| Guida et al. [98] | 2023 | OAI | 1755 | NR | undersampling | internal | Acc: 76 |
| Alshamrani et al. [104] | 2023 | Kaggle | 845 | NR | stratified sampling | internal | Acc: 92.17, F1: 92 |
| Tariq et al. [152] | 2023 | OAI | 1656 | NR | None | internal | Acc: 98, F1: 97, K: 99 |
| Haseeb et al. [119] | 2023 | Kaggle | 2348 | NR | NR | internal | Acc: 90.1, F1: 88 |
| Aladhadh et al. [154] | 2023 | Mendeley VI, OAI | 2500 | CenterNet | NR | external | Acc: 99.14, F1: 99.44, Dice Score: 99.24 ± 0.03 |
| Kiruthika et al. [125] | 2022 | OAI, MOST | 3500 | FCN | NR | internal | Acc: 98.75, F1: 99.3 |
| Pongsakonpruttikul et al. [5] | 2022 | OAI | 150 | Manual | undersampling | internal | Acc: 86.7, F1: 61.1 |
| Abdullah et al. [106] | 2022 | private | 634 | RPN (Region Proposal Network) | NR | internal | Acc: 98.90, Dice Score: 98.90 |
| Yunus et al. [93] | 2022 | Mendeley | 1656 | YOLOv2-ONNX | NR | internal | Acc: 90.6, F1: 88.0 |
| Cueva et al. [124] | 2022 | OAI, private | 225 | NR | oversampling | external | Acc: 61.71 |
| Sikkandar et al. [107] | 2022 | Private | 70 | Local Center of Mass (LCM) | NR | internal | Acc: 72.01, K: 86 |
| Hengaju et al. [105] | 2022 | Private | 140 | Active Contour | NR | internal | Acc: 59 |
| Kondal et al. [127] | 2022 | OAI, private | 1175 | Mask RCNN | NR | external | F1: 73 |
| Swiecicki et al. [94] | 2021 | MOST | 3359 | RPN | NR | internal | Acc: 71.90, K: 75.9 |
| Wang et al. [92] | 2021 | OAI | 1660 | YOLO | NR | internal | Acc: 69.18 |
| Tiulpin et al. [117] | 2020 | OAI, MOST | 11,743 | Random Forest Regression Voting | NR | external | Acc: 67, K: 82 |
| Norman et al. [95] | 2019 | OAI | 5941 | U-Net | NR | internal | Acc: 78.36 |
| Pedoia et al. [123] | 2019 | OAI | 657 | Voxel Based Relaxometry | NR | internal | R: 76.99, Ssy: 77.94 |
| Du et al. [116] | 2018 | OAI | 100 | NR | NR | 10-fold CV | Acc: 70 |
| Kumar et al. [110] | 2016 | Private | 15 | Pixel-based segmentation | NR | internal | Acc: 86.67 |
Table 13. Performance Comparison of existing binary classifiers for KOA classification based on KL grading.
| Reference | Year | Dataset | Test Set Size | ROI Method | Imbalance Handling | Validation | Key Performance |
|---|---|---|---|---|---|---|---|
| Mohammed et al. [89] | 2023 | OAI | 1656 | NR | None | internal | Acc: 83, F1: 83 |
| El-Ghany et al. [90] | 2023 | OAI | 1778 | GradCAM | NR | internal | Acc: 93.78, F1: 89.27 |
| Pongsakonpruttikul et al. [5] | 2022 | OAI | 100 | Manual | undersampling | internal | Acc: 85, F1: 85 |

7. Discussion and Future Research Directions

This paper exhaustively analyzes the DL methods proposed by multiple research studies published across the world. The datasets used, preprocessing techniques, and model architectures have been thoroughly studied to identify the best possible combinations for classifying KOA according to the KL grading scale with the highest performance measures. The major shortcomings in present-day studies are listed below.
  • The lack of balanced datasets for training makes models perform poorly on new, unseen data from minority classes.
  • In an unbalanced dataset, traditional evaluation metrics such as accuracy become misleading, as high accuracy can be achieved by always predicting the majority class while the model still performs poorly on minority classes.
  • The quality of the input images is often insufficient, requiring multiple levels of preprocessing before the images are suitable for model training.
  • In most available datasets, many images are discarded due to poor resolution or the absence of an ROI, which further aggravates the problem of class imbalance.
  • Training on such large numbers of images requires a huge amount of computing resources.
  • Data labeling is performed by radiologists, which introduces subjectivity into the overall process: the same knee X-ray image can be assigned different KL grades by different radiologists, making the training data ambiguous and generating further inconsistency in predicting the actual severity of KOA.
  • Potential data leakage can occur when images from the same patient, such as left and right knees or longitudinal scans, appear in both training and testing sets, leading to inflated performance estimates and reduced model generalizability.
After reviewing the utilization of various DL techniques by different authors for KOA classification based on severity grading, several research gaps have been identified. These gaps highlight opportunities for future research in this domain and also provide probable solutions for the limitations of existing approaches mentioned above.
  • Handling Class Imbalance and Performance Evaluation: Class imbalance can reduce the performance of DL models if not properly addressed. Techniques such as over-sampling, under-sampling, and synthetic data generation can help balance the classes, and creating new datasets with more representative samples or combining data from multiple repositories can further improve model accuracy [175]. In addition, accuracy alone may be misleading for imbalanced datasets, so metrics like sensitivity, specificity, and F1-score should be used to evaluate and compare the performance of models, specifically for healthcare applications [176].
  • Enhancements in DL models: Some studies [177] suggested that model computations can be optimized by changing the shape of the convolutional kernel and using texture memory, and other approaches to reducing model computation can also be explored. Collecting large amounts of pathological knee data for model training, using effective preprocessing techniques for optimal feature extraction, and gathering additional analytical information about the knee can further improve model performance.
  • Model Complexity: The selection of more complex and accurate models that can deduce a better correlation between the pixel values in the preprocessed X-ray images and KOA severity according to the KL grading scale can improve the overall performance of DL-based models. With rapid improvements in the field of AI and ML and their applications, better and more accurate architectures are being proposed every year [178]. Therefore, newer architectures can be used to identify features in knee X-rays and classify them according to KOA severity.
  • Other efficient DL architectures: The use of Recurrent Neural Networks, Transformers, Reinforcement Learning, and Generative Adversarial Networks can also be explored for KOA detection and classification.
  • Multimodal Large Models: Multimodal large models that combine knee images with clinical, demographic, or textual data can capture complex relationships between different data types. These models have shown strong performance in medical image analysis [179,180] and can help improve KOA classification accuracy and provide better interpretability.
  • Data Hygiene and Label Reliability: Deep learning models for KOA classification strongly depend on the quality of training data and label consistency. Commonly used public datasets such as OAI and MOST rely on expert-assigned KL grades, which are subjective and show variability across readers, especially for borderline grades. This introduces unavoidable label noise. In addition, these datasets are bilateral and longitudinal, meaning that images from the same patient (left and right knees or follow-up visits) may appear multiple times. If data splitting is done at the image level instead of the patient level, data leakage can occur and lead to overestimated model performance. Therefore, future studies should apply patient-wise data splitting and clearly report dataset handling procedures. At present, KOA models are better suited for clinical support tasks such as triage and quality assurance rather than independent diagnosis.
  • Regulatory and Clinical Validation: In addition to technical accuracy, KOA models require thorough clinical validation before deployment. This includes evaluation using standardized protocols, external testing on independent datasets, and clear reporting of dataset sources and validation strategies. Adherence to regulatory guidelines is necessary to ensure model safety, reliability, and clinical usefulness.
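To illustrate the evaluation point above, the following sketch uses synthetic labels (not data from any reviewed study) to show how accuracy can look favorable on an imbalanced, KOA-style label distribution while sensitivity and F1-score expose the weakness:

```python
# Sketch: evaluating a classifier on an imbalanced label distribution with
# per-class sensitivity (recall), specificity, and F1-score instead of
# accuracy alone. Labels and predictions below are purely illustrative.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])  # minority class = 1
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])  # misses one positive

accuracy = (y_true == y_pred).mean()      # 0.9: looks strong
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)              # 0.5: half the positives are missed
specificity = tn / (tn + fp)              # 1.0
f1 = f1_score(y_true, y_pred)             # ~0.67: reflects the imbalance

print(accuracy, sensitivity, specificity, f1)
```

The same pattern extends to multiclass KL grading by reporting per-grade recall and macro-averaged F1 rather than overall accuracy.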
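As a minimal illustration of the intermediate-fusion idea described above, the sketch below concatenates a placeholder image-feature vector with clinical features before a shared classifier. All arrays are synthetic and the feature dimensions are arbitrary assumptions, so this is a schematic of the fusion pattern rather than any published KOA model:

```python
# Sketch of intermediate (feature-level) multimodal fusion: image-derived
# features are concatenated with clinical/demographic features before a
# shared classifier head. In a real pipeline, img_feats would come from a
# CNN embedding of a knee X-ray; here everything is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
img_feats = rng.normal(size=(n, 32))   # placeholder CNN embedding
clin_feats = rng.normal(size=(n, 4))   # e.g., age, BMI, pain score, sex

# Synthetic binary target depending on both modalities.
y = ((img_feats[:, 0] + clin_feats[:, 0]) > 0).astype(int)

fused = np.concatenate([img_feats, clin_feats], axis=1)  # fusion step
clf = LogisticRegression(max_iter=1000).fit(fused[:150], y[:150])
acc = clf.score(fused[150:], y[150:])
print(f"held-out accuracy: {acc:.2f}")
```

In a deep learning setting, the same concatenation would occur between the CNN's penultimate layer and one or more fully connected layers trained end to end.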
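The patient-wise splitting recommendation can be implemented directly with scikit-learn's group-aware splitters. The records below are invented placeholders that mimic the bilateral and longitudinal repeats found in datasets such as OAI and MOST:

```python
# Sketch of patient-level data splitting to avoid leakage from bilateral /
# longitudinal datasets. Several images share one patient_id (left/right
# knee, follow-up visits); a naive image-level split could put the same
# patient in both train and test.
from sklearn.model_selection import GroupShuffleSplit

records = [
    {"image": "p1_left_v0.png",  "patient_id": "p1", "kl_grade": 2},
    {"image": "p1_right_v0.png", "patient_id": "p1", "kl_grade": 1},
    {"image": "p2_left_v0.png",  "patient_id": "p2", "kl_grade": 0},
    {"image": "p2_left_v1.png",  "patient_id": "p2", "kl_grade": 1},
    {"image": "p3_left_v0.png",  "patient_id": "p3", "kl_grade": 4},
    {"image": "p4_right_v0.png", "patient_id": "p4", "kl_grade": 3},
]
groups = [r["patient_id"] for r in records]

# GroupShuffleSplit keeps every image of a patient on one side of the split.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(records, groups=groups))

train_patients = {groups[i] for i in train_idx}
test_patients = {groups[i] for i in test_idx}
assert train_patients.isdisjoint(test_patients)  # no patient-level leakage
```

`GroupKFold` applies the same principle to cross-validation, and reporting which splitter was used makes the dataset-handling procedure reproducible.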
Overall, there remains substantial scope for improving the accuracy and generalizability of DL-based models across KOA X-ray and MRI data.

8. Conclusions

This review presents a comprehensive analysis of recent progress and offers insights into future directions for automated detection of KOA. It highlights how DL techniques can assist medical professionals, such as radiologists, in accurately detecting and classifying KOA. Numerous relevant studies were examined to explore the available datasets, preprocessing strategies, segmentation methods, and the variety of models employed for classifying KOA according to the Kellgren–Lawrence grading scale using X-ray and MRI images. Many of the reviewed studies report accuracy and F1-score values above 90%, though most validate their trained models only through an internal split. Relying on limited data from a single source undermines the reliability of such results and is insufficient to claim generalizability. Meanwhile, recent research has shifted from binary to multiclass classifiers, improving expert decision-making and KOA management for the patient. The review also identifies common shortcomings in existing studies, including limited dataset sizes, inconsistent preprocessing, and suboptimal model training. Addressing these challenges may involve generating larger, high-quality datasets, applying more effective preprocessing, and improving training strategies. Future research should focus on integrating 3D imaging and multimodal MRI–X-ray data to enhance feature extraction and improve KOA severity prediction. This review focused mainly on CNN-based models; other deep learning approaches, such as autoencoders and transformer-based models, are also in use and could be explored in future work.

Author Contributions

Conceptualization, S.R. and M.G.; Methodology, A.R., P.S., and S.R.; Writing—original draft preparation, A.R. and P.S.; Writing—review and editing, N.K., K.K., S.R., and M.G.; Supervision, S.R., M.G., and N.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Agriculture, Food, and Water Resources, Sultanate of Oman, under the Deanship of the University of Nizwa grant number MOHERI/SRPP/MOAFWR/1/2024.

Data Availability Statement

No new data were created or analyzed in this study. The review used only publicly available data or published reports.

Acknowledgments

The authors are thankful to the Deanship of the University of Nizwa for supporting this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Arden, N.; Nevitt, M.C. Osteoarthritis: Epidemiology. Best Pract. Res. Clin. Rheumatol. 2006, 20, 3–25. [Google Scholar] [CrossRef]
  2. Nguyen, U.S.D.; Zhang, Y.; Zhu, Y.; Niu, J.; Zhang, B.; Felson, D.T. Increasing prevalence of knee pain and symptomatic knee osteoarthritis: Survey and cohort data. Ann. Intern. Med. 2011, 155, 725–732. [Google Scholar] [CrossRef] [PubMed]
  3. Lee, S.; Kim, S.J. Prevalence of knee osteoarthritis, risk factors, and quality of life: The Fifth Korean National Health and Nutrition Examination Survey. Int. J. Rheum. Dis. 2017, 20, 809–817. [Google Scholar] [CrossRef] [PubMed]
  4. Saini, D.; Chand, T.; Chouhan, D.K.; Prakash, M. A comparative analysis of automatic classification and grading methods for knee osteoarthritis focussing on X-Ray images. Biocybern. Biomed. Eng. 2021, 41, 419–444. [Google Scholar] [CrossRef]
  5. Pongsakonpruttikul, N.; Angthong, C.; Kittichai, V.; Chuwongin, S.; Puengpipattrakul, P.; Thongpat, P.; Boonsang, S.; Tongloy, T. Artificial intelligence assistance in radiographic detection and classification of knee osteoarthritis and its severity: A cross-sectional diagnostic study. Eur. Rev. Med. Pharmacol. Sci. 2022, 26, 1549–1558. [Google Scholar] [PubMed]
  6. Yang, J.; Ji, Q.; Ni, M.; Zhang, G.; Wang, Y. Automatic assessment of knee osteoarthritis severity in portable devices based on deep learning. J. Orthop. Surg. Res. 2022, 17, 540. [Google Scholar] [CrossRef]
  7. Kokkotis, C.; Moustakidis, S.; Papageorgiou, E.; Giakas, G.; Tsaopoulos, D. Machine learning in knee osteoarthritis: A review. Osteoarthr. Cartil. Open 2020, 2, 100069. [Google Scholar] [CrossRef]
  8. Yeoh, P.S.Q.; Lai, K.W.; Goh, S.L.; Hasikin, K.; Hum, Y.C.; Tee, Y.K.; Dhanalakshmi, S. Emergence of Deep Learning in Knee Osteoarthritis Diagnosis. Comput. Intell. Neurosci. 2021, 2021, 4931437. [Google Scholar] [CrossRef]
  9. Yick, H.T.V.; Chan, P.K.; Wen, C.; Fung, W.C.; Yan, C.H.; Chiu, K.Y. Artificial intelligence reshapes current understanding and management of osteoarthritis: A narrative review. J. Orthop. Trauma Rehabil. 2022, 29, 22104917221082315. [Google Scholar] [CrossRef]
  10. Lee, L.S.; Chan, P.K.; Wen, C.; Fung, W.C.; Cheung, A.; Chan, V.W.K.; Cheung, M.H.; Fu, H.; Yan, C.H.; Chiu, K.Y. Artificial intelligence in diagnosis of knee osteoarthritis and prediction of arthroplasty outcomes: A review. Arthroplasty 2022, 4, 16. [Google Scholar] [CrossRef]
  11. Ramazanian, T.; Fu, S.; Sohn, S.; Taunton, M.J.; Kremers, H.M. Prediction models for knee osteoarthritis: Review of current models and future directions. Arch. Bone Jt. Surg. 2023, 11, 1. [Google Scholar]
  12. Cigdem, O.; Deniz, C.M. Artificial Intelligence in Knee Osteoarthritis: A Comprehensive Review. Osteoarthr. Imaging 2023, 3, 100161. [Google Scholar] [CrossRef]
  13. Zhao, H.; Ou, L.; Zhang, Z.; Zhang, L.; Liu, K.; Kuang, J. The value of deep learning-based X-ray techniques in detecting and classifying K-L grades of knee osteoarthritis: A systematic review and meta-analysis. Eur. Radiol. 2025, 35, 327–340. [Google Scholar] [CrossRef] [PubMed]
  14. Touahema, S.; Zaimi, I.; Zrira, N.; Ngote, M.N. How Can Artificial Intelligence Identify Knee Osteoarthritis from Radiographic Images with Satisfactory Accuracy?: A Literature Review for 2018–2024. Appl. Sci. 2024, 14, 6333. [Google Scholar] [CrossRef]
  15. Teoh, Y.X.; Othmani, A.; Goh, S.L.; Usman, J.; Lai, K.W. Deciphering knee osteoarthritis diagnostic features with explainable artificial intelligence: A systematic review. IEEE Access 2024, 12, 109080–109108. [Google Scholar] [CrossRef]
  16. Tariq, T.; Suhail, Z.; Nawaz, Z. A Review for automated classification of knee osteoarthritis using KL grading scheme for X-rays. Biomed. Eng. Lett. 2025, 15, 1–35. [Google Scholar] [CrossRef]
  17. Hu, Y.; Wang, Y.; Liu, S.; Wang, H. The Potential Roles of Ferroptosis in Pathophysiology and Treatment of Musculoskeletal Diseases-Opportunities, Challenges, and Perspectives. J. Clin. Med. 2023, 12, 2125. [Google Scholar] [CrossRef]
  18. Madry, H.; Luyten, F.P.; Facchini, A. Biological aspects of early osteoarthritis. Knee Surg. Sport. Traumatol. Arthrosc. 2012, 20, 407–422. [Google Scholar] [CrossRef]
  19. Wieland, H.A.; Michaelis, M.; Kirschbaum, B.J.; Rudolphi, K.A. Osteoarthritis—An untreatable disease? Nat. Rev. Drug Discov. 2005, 4, 331–344. [Google Scholar] [CrossRef]
  20. O’Neill, T.W.; Felson, D.T. Mechanisms of osteoarthritis (OA) pain. Curr. Osteoporos. Rep. 2018, 16, 611–616. [Google Scholar] [CrossRef]
  21. Martel-Pelletier, J.; Barr, A.J.; Cicuttini, F.M.; Conaghan, P.G.; Cooper, C.; Goldring, M.B.; Goldring, S.R.; Jones, G.; Teichtahl, A.J.; Pelletier, J.P. Osteoarthritis. Nat. Rev. Dis. Prim. 2016, 2, 16072. [Google Scholar] [CrossRef] [PubMed]
  22. Heidari, B. Knee osteoarthritis prevalence, risk factors, pathogenesis and features: Part I. Casp. J. Intern. Med. 2011, 2, 205–212. [Google Scholar]
  23. Sharma, L. Osteoarthritis of the knee. N. Engl. J. Med. 2021, 384, 51–59. [Google Scholar] [CrossRef] [PubMed]
  24. Sheth, N.P.; Foran, J.R.H. Arthritis of the Knee. OrthoInfo, American Academy of Orthopaedic Surgeons, 2024. Available online: https://orthoinfo.aaos.org/en/diseases--conditions/arthritis-of-the-knee/ (accessed on 15 July 2024).
  25. Kim, Y.M.; Joo, Y.B. Patellofemoral osteoarthritis. Knee Surg. Relat. Res. 2012, 24, 193. [Google Scholar] [CrossRef]
  26. Roemer, F.W.; Zhang, Y.; Niu, J.; Lynch, J.A.; Crema, M.D.; Marra, M.D.; Nevitt, M.C.; Felson, D.T.; Hughes, L.B.; El-Khoury, G.Y.; et al. Tibiofemoral joint osteoarthritis: Risk factors for MR-depicted fast cartilage loss over a 30-month period in the multicenter osteoarthritis study. Radiology 2009, 252, 772–780. [Google Scholar] [CrossRef]
  27. Gornale, S.S.; Patravali, P.U.; Marathe, K.S.; Hiremath, P.S. Determination of osteoarthritis using histogram of oriented gradients and multiclass SVM. Int. J. Image Graph. Signal Process. 2017, 9, 41. [Google Scholar] [CrossRef]
  28. Cui, A.; Li, H.; Wang, D.; Zhong, J.; Chen, Y.; Lu, H. Global, regional prevalence, incidence and risk factors of knee osteoarthritis in population-based studies. EClinicalMedicine 2020, 29, 100587. [Google Scholar] [CrossRef]
  29. Mora, J.C.; Przkora, R.; Cruz-Almeida, Y. Knee osteoarthritis: Pathophysiology and current treatment modalities. J. Pain Res. 2018, 11, 2189–2196. [Google Scholar] [CrossRef]
  30. Bany Muhammad, M.; Yeasin, M. Interpretable and parameter optimized ensemble model for knee osteoarthritis assessment using radiographs. Sci. Rep. 2021, 11, 14348. [Google Scholar] [CrossRef]
  31. Kohn, M.D.; Sassoon, A.A.; Fernando, N.D. Classifications in brief: Kellgren-Lawrence classification of osteoarthritis. Clin. Orthop. Relat. Res. 2016, 474, 1886–1893. [Google Scholar] [CrossRef]
  32. Peterfy, C.; Guermazi, A.; Zaim, S.; Tirman, P.; Miaux, Y.; White, D.; Kothari, M.; Lu, Y.; Fye, K.; Zhao, S.; et al. Whole-organ magnetic resonance imaging score (WORMS) of the knee in osteoarthritis. Osteoarthr. Cartil. 2004, 12, 177–190. [Google Scholar] [CrossRef]
  33. Hunter, D.J.; Lo, G.H.; Gale, D.; Grainger, A.J.; Guermazi, A.; Conaghan, P.G. The reliability of a new scoring system for knee osteoarthritis MRI and the validity of bone marrow lesion assessment: BLOKS (Boston–Leeds Osteoarthritis Knee Score). Ann. Rheum. Dis. 2008, 67, 206–211. [Google Scholar] [CrossRef] [PubMed]
  34. Hunter, D.J.; Guermazi, A.; Lo, G.H.; Grainger, A.J.; Conaghan, P.G.; Boudreau, R.M.; Roemer, F.W. Evolution of semi-quantitative whole joint assessment of knee OA: MOAKS (MRI Osteoarthritis Knee Score). Osteoarthr. Cartil. 2011, 19, 990–1002. [Google Scholar] [CrossRef] [PubMed]
  35. Stern, C.; Jordan, Z.; McArthur, A. Developing the review question and inclusion criteria. Am. J. Nurs. 2014, 114, 53–56. [Google Scholar] [CrossRef] [PubMed]
  36. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  37. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 770–778. [Google Scholar]
  38. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 779–788. [Google Scholar]
  39. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 4700–4708. [Google Scholar]
  40. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
  41. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; PMLR: Cambridge, MA, USA, 2019; pp. 6105–6114. [Google Scholar]
  42. Sarker, I.H. Deep learning: A comprehensive overview on techniques, taxonomy, applications and research directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef]
  43. Kim, M.; Yun, J.; Cho, Y.; Shin, K.; Jang, R.; Bae, H.j.; Kim, N. Deep learning in medical imaging. Neurospine 2019, 16, 657. [Google Scholar] [CrossRef]
  44. Liu, X.; Gao, K.; Liu, B.; Pan, C.; Liang, K.; Yan, L.; Ma, J.; He, F.; Zhang, S.; Pan, S.; et al. Advances in deep learning-based medical image analysis. Health Data Sci. 2021, 2021, 8786793. [Google Scholar] [CrossRef]
  45. Puttagunta, M.; Ravi, S. Medical image analysis based on deep learning approach. Multimed. Tools Appl. 2021, 80, 24365–24398. [Google Scholar] [CrossRef]
  46. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef]
  47. Indolia, S.; Goswami, A.K.; Mishra, S.P.; Asopa, P. Conceptual understanding of convolutional neural network—A deep learning approach. Procedia Comput. Sci. 2018, 132, 679–688. [Google Scholar] [CrossRef]
  48. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef] [PubMed]
  49. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  50. Zhou, L.; Yu, W. Improved Convolutional Neural Image Recognition Algorithm based on LeNet-5. J. Comput. Netw. Commun. 2022, 2022, 1636203. [Google Scholar] [CrossRef]
  51. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM Mag. 2012, 60, 84–90. [Google Scholar] [CrossRef]
  52. Thalagala, S.; Walgampaya, C. Application of AlexNet convolutional neural network architecture-based transfer learning for automated recognition of casting surface defects. In Proceedings of the 2021 International Research Conference on Smart Computing and Systems Engineering (SCSE), Colombo, Sri Lanka, 16 September 2021; IEEE: Piscataway, NJ, USA, 2021; Volume 4, pp. 129–136. [Google Scholar]
  53. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Proceedings of the 13th European Conference on Computer Vision, ECCV 2014, Zurich, Switzerland, 6–12 September 2014; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2014; Volume 8689, pp. 818–833. [Google Scholar]
  54. Fu, L.; Feng, Y.; Majeed, Y.; Zhang, X.; Zhang, J.; Karkee, M.; Zhang, Q. Kiwifruit detection in field images using Faster R-CNN with ZFNet. IFAC-PapersOnLine 2018, 51, 45–50. [Google Scholar] [CrossRef]
  55. Xiao, J.; Wang, J.; Cao, S.; Li, B. Application of a Novel and Improved VGG-19 Network in the Detection of Workers Wearing Masks. J. Phys. Conf. Ser. 2020, 1518, 012041. [Google Scholar] [CrossRef]
  56. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–9. [Google Scholar]
  57. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2818–2826. [Google Scholar]
  58. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 580–587. [Google Scholar]
  59. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  60. Xu, W.; Fu, Y.L.; Zhu, D. ResNet and Its Application to Medical Image Processing: Research Progress and Challenges. Comput. Methods Programs Biomed. 2023, 240, 107660. [Google Scholar] [CrossRef]
  61. Diwan, T.; Anirudh, G.; Tembhurne, J.V. Object detection using YOLO: Challenges, architectural successors, datasets and applications. Multimed. Tools Appl. 2023, 82, 9243–9275. [Google Scholar] [CrossRef]
  62. Wang, W.; Li, Y.; Zou, T.; Wang, X.; You, J.; Luo, Y. A Novel Image Classification Approach via Dense-MobileNet Models. Mob. Inf. Syst. 2020, 2020, 7602384. [Google Scholar] [CrossRef]
  63. Koonce, B. EfficientNet. In Convolutional Neural Networks with Swift for Tensorflow: Image Recognition and Dataset Categorization; Springer: Berlin/Heidelberg, Germany, 2021; pp. 109–123. [Google Scholar]
  64. Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning Transferable Architectures for Scalable Image Recognition. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 8697–8710. [Google Scholar]
  65. Cano, E.; Mendoza-Avilés, J.; Areiza, M.; Guerra, N.; Mendoza-Valdés, J.L.; Rovetto, C.A. Multi skin lesions classification using fine-tuning and data-augmentation applying NASNet. PeerJ Comput. Sci. 2021, 7, e371. [Google Scholar] [CrossRef]
  66. Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A convnet for the 2020s. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 11976–11986. [Google Scholar]
  67. Woo, S.; Debnath, S.; Hu, R.; Chen, X.; Liu, Z.; Kweon, I.S.; Xie, S. ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 16133–16142. [Google Scholar]
  68. Tammina, S. Transfer learning using VGG-16 with deep convolutional neural network for classifying images. Int. J. Sci. Res. Publ. 2019, 9, 143–150. [Google Scholar] [CrossRef]
  69. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; ACM: New York, NY, USA, 2017; Volume 31. [Google Scholar]
  70. Si, C.; Yu, W.; Zhou, P.; Zhou, Y.; Wang, X.; Yan, S. Inception transformer. In Proceedings of the NIPS’22: 36th International Conference on Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022; ACM: New York, NY, USA, 2022; pp. 23495–23509. [Google Scholar]
  71. Iandola, F.; Moskewicz, M.; Karayev, S.; Girshick, R.; Darrell, T.; Keutzer, K. Densenet: Implementing efficient convnet descriptor pyramids. arXiv 2014, arXiv:1404.1869. [Google Scholar]
  72. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo algorithm developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
  73. Ou, X.; Chen, X.; Xu, X.; Xie, L.; Chen, X.; Hong, Z.; Bai, H.; Liu, X.; Chen, Q.; Li, L.; et al. Recent development in X-ray imaging technology: Future and challenges. Research 2021, 2021, 9892152. [Google Scholar] [CrossRef]
  74. Kajla, V.; Gupta, A.; Khatak, A. Analysis of x-ray images with image processing techniques: A review. In Proceedings of the 4th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 14–15 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4. [Google Scholar]
  75. Nouh, M.R.; Eid, A.F. Magnetic resonance imaging of the spinal marrow: Basic understanding of the normal marrow pattern and its variant. World J. Radiol. 2015, 7, 448–458. [Google Scholar] [CrossRef]
  76. Rao, R.; Venkatesan, R.; Geethanath, S. Role of MRI in medical diagnostics. Resonance 2015, 20, 1003–1011. [Google Scholar] [CrossRef]
  77. Friedman, L.; Finlay, K.; Jurriaans, E. Ultrasound of the knee. Skelet. Radiol. 2001, 30, 361–377. [Google Scholar] [CrossRef]
  78. Victor, J.; Van Doninck, D.; Labey, L.; Innocenti, B.; Parizel, P.; Bellemans, J. How precise can bony landmarks be determined on a CT scan of the knee? Knee 2009, 16, 358–365. [Google Scholar] [CrossRef]
  79. Chen, J.; Ma, B.; Hu, M.; Zhai, G.; Sun, W.Q.; Yang, S.X. Objective Bi-Modal Assessment of Knee Osteoarthritis Severity Grades: Model and Mechanism. IEEE Trans. Instrum. Meas. 2024, 73, 4508611. [Google Scholar] [CrossRef]
  80. Nasser, Y.; El Hassouni, M.; Hans, D.; Jennane, R. A discriminative shape-texture convolutional neural network for early diagnosis of knee osteoarthritis from X-ray images. Phys. Eng. Sci. Med. 2023, 46, 827–837. [Google Scholar] [CrossRef] [PubMed]
  81. Guida, C.; Zhang, M.; Shan, J. Knee osteoarthritis classification using 3D CNN and MRI. Appl. Sci. 2021, 11, 5196. [Google Scholar] [CrossRef]
  82. Peterfy, C.G.; Schneider, E.; Nevitt, M. The osteoarthritis initiative: Report on the design rationale for the magnetic resonance imaging protocol for the knee. Osteoarthr. Cartil. 2008, 16, 1433–1441. [Google Scholar] [CrossRef]
  83. Segal, N.A.; Nevitt, M.C.; Gross, K.D.; Hietpas, J.; Glass, N.A.; Lewis, C.E.; Torner, J.C. The Multicenter Osteoarthritis Study (MOST): Opportunities for rehabilitation research. PM R J. Inj. Funct. Rehabil. 2013, 5, 647–654. [Google Scholar]
  84. Chen, P. Knee Osteoarthritis Severity Grading Dataset. Mendeley Data, V1. 2018. [CrossRef]
  85. Ahmed, R.; Imran, A.S. Knee Osteoarthritis Analysis Using Deep Learning and XAI on X-rays. IEEE Access 2024, 12, 68870–68879. [Google Scholar] [CrossRef]
  86. Malik, I.; Yasmin, M.; Iqbal, A.; Raza, M.; Chun, C.J.; Al-antari, M.A. A novel framework integrating ensemble transfer learning and Ant Colony Optimization for Knee Osteoarthritis severity classification. Multimed. Tools Appl. 2024, 83, 86923–86954. [Google Scholar] [CrossRef]
  87. Touahema, S.; Zaimi, I.; Zrira, N.; Ngote, M.N.; Doulhousne, H.; Aouial, M. MedKnee: A New Deep Learning-Based Software for Automated Prediction of Radiographic Knee Osteoarthritis. Diagnostics 2024, 14, 993. [Google Scholar] [CrossRef]
  88. Patil, A.R.; Salunkhe, S.S. Classification and risk estimation of osteoarthritis using deep learning methods. Meas. Sens. 2024, 35, 101279. [Google Scholar] [CrossRef]
  89. Mohammed, A.S.; Hasanaath, A.A.; Latif, G.; Bashar, A. Knee Osteoarthritis Detection and Severity Classification Using Residual Neural Networks on Preprocessed X-ray Images. Diagnostics 2023, 13, 1380. [Google Scholar] [CrossRef]
  90. Abd El-Ghany, S.; Elmogy, M.; Abd El-Aziz, A. A fully automatic fine tuned deep learning model for knee osteoarthritis detection and progression analysis. Egypt. Inform. J. 2023, 24, 229–240. [Google Scholar] [CrossRef]
  91. Pi, S.W.; Lee, B.D.; Lee, M.S.; Lee, H.J. Ensemble deep-learning networks for automated osteoarthritis grading in knee X-ray images. Sci. Rep. 2023, 13, 22887. [Google Scholar] [CrossRef] [PubMed]
  92. Wang, Y.; Wang, X.; Gao, T.; Du, L.; Liu, W. An Automatic Knee Osteoarthritis Diagnosis Method Based on Deep Learning: Data from the Osteoarthritis Initiative. J. Healthc. Eng. 2021, 2021, 5586529. [Google Scholar] [CrossRef] [PubMed]
  93. Yunus, U.; Amin, J.; Sharif, M.; Yasmin, M.; Kadry, S.; Krishnamoorthy, S. Recognition of Knee Osteoarthritis (KOA) Using YOLOv2 and Classification Based on Convolutional Neural Network. Life 2022, 12, 1126. [Google Scholar] [CrossRef]
  94. Swiecicki, A.; Li, N.; O’Donnell, J.; Said, N.; Yang, J.; Mather, R.C.; Jiranek, W.A.; Mazurowski, M.A. Deep learning-based algorithm for assessment of knee osteoarthritis severity in radiographs matches performance of radiologists. Comput. Biol. Med. 2021, 133, 104334. [Google Scholar] [CrossRef]
  95. Norman, B.; Pedoia, V.; Noworolski, A.; Link, T.M.; Majumdar, S. Applying densely connected convolutional neural networks for staging osteoarthritis severity from plain radiographs. J. Digit. Imaging 2019, 32, 471–477. [Google Scholar] [CrossRef]
  96. Tiulpin, A.; Thevenot, J.; Rahtu, E.; Lehenkari, P.; Saarakkala, S. Automatic knee osteoarthritis diagnosis from plain radiographs: A deep learning-based approach. Sci. Rep. 2018, 8, 1727. [Google Scholar] [CrossRef]
  97. Antony, J.; McGuinness, K.; Moran, K.; O’Connor, N.E. Automatic detection of knee joints and quantification of knee osteoarthritis severity using convolutional neural networks. In Proceedings of the 13th International Conference on Machine Learning and Data Mining in Pattern Recognition, MLDM 2017, New York, NY, USA, 15–20 July 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 376–390. [Google Scholar]
  98. Guida, C.; Zhang, M.; Shan, J. Improving knee osteoarthritis classification using multimodal intermediate fusion of X-ray, MRI, and clinical information. Neural Comput. Appl. 2023, 35, 9763–9772. [Google Scholar] [CrossRef]
  99. Schiratti, J.B.; Dubois, R.; Herent, P.; Cahané, D.; Dachary, J.; Clozel, T.; Wainrib, G.; Keime-Guibert, F.; Lalande, A.; Pueyo, M.; et al. A deep learning method for predicting knee osteoarthritis radiographic progression from MRI. Arthritis Res. Ther. 2021, 23, 262. [Google Scholar] [CrossRef]
  100. Karim, M.R.; Jiao, J.; Doehmen, T.; Cochez, M.; Beyan, O.; Rebholz-Schuhmann, D.; Decker, S. DeepKneeExplainer: Explainable knee osteoarthritis diagnosis from radiographs and magnetic resonance imaging. IEEE Access 2021, 9, 39757–39780. [Google Scholar] [CrossRef]
  101. Bien, N.; Rajpurkar, P.; Ball, R.L.; Irvin, J.; Park, A.; Jones, E.; Bereket, M.; Patel, B.N.; Yeom, K.W.; Shpanskaya, K.; et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet. PLoS Med. 2018, 15, e1002699. [Google Scholar] [CrossRef]
  102. Knoll, F.; Zbontar, J.; Sriram, A.; Muckley, M.J.; Bruno, M.; Defazio, A.; Parente, M.; Geras, K.J.; Katsnelson, J.; Chandarana, H.; et al. fastMRI: A publicly available raw k-space and DICOM dataset of knee images for accelerated MR image reconstruction using machine learning. Radiol. Artif. Intell. 2020, 2, e190007. [Google Scholar] [CrossRef] [PubMed]
  103. Zhao, R.; Yaman, B.; Zhang, Y.; Stewart, R.; Dixon, A.; Knoll, F.; Huang, Z.; Lui, Y.W.; Hansen, M.S.; Lungren, M.P. fastMRI+, clinical pathology annotations for knee and brain fully sampled magnetic resonance imaging data. Sci. Data 2022, 9, 152. [Google Scholar] [CrossRef] [PubMed]
  104. Alshamrani, H.A.; Rashid, M.; Alshamrani, S.S.; Alshehri, A.H. Osteo-NeT: An Automated System for Predicting Knee Osteoarthritis from X-ray Images Using Transfer-Learning-Based Neural Networks Approach. Healthcare 2023, 11, 1206. [Google Scholar] [CrossRef] [PubMed]
  105. Hengaju, U. Classification of Knee Osteoarthritis using CNN. Adv. Image Process. Pattern Recognit. 2022, 5, 25–38. [Google Scholar]
  106. Abdullah, S.S.; Rajasekaran, M.P. Automatic detection and classification of knee osteoarthritis using deep learning approach. La Radiol. Medica 2022, 127, 398–406. [Google Scholar] [CrossRef]
  107. Sikkandar, M.Y.; Begum, S.S.; Alkathiry, A.A.; Alotaibi, M.S.N.; Manzar, M.D. Automatic detection and classification of human knee osteoarthritis using convolutional neural networks. Comput. Mater. Contin. 2022, 70, 4279–4291. [Google Scholar] [CrossRef]
  108. Olsson, S.; Akbarian, E.; Lind, A.; Razavian, A.S.; Gordon, M. Automating classification of osteoarthritis according to Kellgren-Lawrence in the knee using deep learning in an unfiltered adult population. BMC Musculoskelet. Disord. 2021, 22, 844. [Google Scholar] [CrossRef]
  109. Shamir, L.; Ling, S.M.; Scott, W.W.; Bos, A.; Orlov, N.; Macura, T.J.; Eckley, D.M.; Ferrucci, L.; Goldberg, I.G. Knee X-Ray Image Analysis Method for Automated Detection of Osteoarthritis. IEEE Trans. Biomed. Eng. 2009, 56, 407–415. [Google Scholar] [CrossRef]
  110. Kumar, V.A.; Jayanthy, A. Classification of MRI images in 2D coronal view and measurement of articular cartilage thickness for early detection of knee osteoarthritis. In Proceedings of the 2016 IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 20–21 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1907–1911. [Google Scholar]
  111. Marques, J.; Genant, H.K.; Lillholm, M.; Dam, E.B. Diagnosis of osteoarthritis and prognosis of tibial cartilage loss by quantification of tibia trabecular bone from MRI. Magn. Reson. Med. 2013, 70, 568–575. [Google Scholar] [CrossRef]
  112. Sohail, M.; Azad, M.M.; Kim, H.S. Knee osteoarthritis severity detection using deep inception transfer learning. Comput. Biol. Med. 2025, 186, 109641. [Google Scholar] [CrossRef] [PubMed]
  113. Guo, J.; Yan, P.; Qin, Y.; Liu, M.; Ma, Y.; Li, J.; Wang, R.; Luo, H.; Lv, S. Automated measurement and grading of knee cartilage thickness: A deep learning-based approach. Front. Med. 2024, 11, 1337993. [Google Scholar] [CrossRef] [PubMed]
  114. Harman, F.; Selver, M.A.; Baris, M.M.; Canturk, A.; Oksuz, I. Deep Learning-Based Meniscus Tear Detection From Accelerated MRI. IEEE Access 2023, 11, 144349–144363. [Google Scholar] [CrossRef]
  115. Hung, T.N.K.; Vy, V.P.T.; Tri, N.M.; Hoang, L.N.; Tuan, L.V.; Ho, Q.T.; Le, N.Q.K.; Kang, J.H. Automatic detection of meniscus tears using backbone convolutional neural networks on knee MRI. J. Magn. Reson. Imaging 2023, 57, 740–749. [Google Scholar] [CrossRef]
  116. Du, Y.; Almajalid, R.; Shan, J.; Zhang, M. A novel method to predict knee osteoarthritis progression on MRI using machine learning methods. IEEE Trans. Nanobiosci. 2018, 17, 228–236. [Google Scholar] [CrossRef]
  117. Tiulpin, A.; Saarakkala, S. Automatic Grading of Individual Knee Osteoarthritis Features in Plain Radiographs Using Deep Convolutional Neural Networks. Diagnostics 2020, 10, 932. [Google Scholar] [CrossRef]
  118. Vidyasaraswathi, H.N.; Hanumantharaju, M.C. Review of Various Histogram Based Medical Image Enhancement Techniques. In Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology (ICARCSET 2015), Unnao, India, 6–7 March 2015; ACM: New York, NY, USA, 2015. [Google Scholar]
  119. Haseeb, A.; Khan, M.A.; Shehzad, F.; Alhaisoni, M.; Khan, J.A.; Kim, T.; Cha, J.H. Knee Osteoarthritis Classification Using X-Ray Images Based on Optimal Deep Neural Network. Comput. Syst. Sci. Eng. 2023, 47, 2397–2415. [Google Scholar] [CrossRef]
  120. Pizer, S.; Johnston, R.; Ericksen, J.; Yankaskas, B.; Muller, K. Contrast-limited adaptive histogram equalization: Speed and effectiveness. In Proceedings of the First Conference on Visualization in Biomedical Computing, Atlanta, GA, USA, 22–25 May 1990; IEEE: Piscataway, NJ, USA, 1990; pp. 337–345. [Google Scholar]
121. Reza, A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004, 38, 35–44. [Google Scholar] [CrossRef]
  122. Kubkaddi, S.; Ravikumar, K. Early detection of knee osteoarthritis using SVM classifier. Int. J. Sci. Eng. Adv. Technol. 2017, 5, 259–262. [Google Scholar]
  123. Pedoia, V.; Lee, J.; Norman, B.; Link, T.M.; Majumdar, S. Diagnosing osteoarthritis from T2 maps using deep learning: An analysis of the entire Osteoarthritis Initiative baseline cohort. Osteoarthr. Cartil. 2019, 27, 1002–1010. [Google Scholar] [CrossRef]
  124. Cueva, J.H.; Castillo, D.; Espinós-Morató, H.; Durán, D.; Díaz, P.; Lakshminarayanan, V. Detection and classification of knee osteoarthritis. Diagnostics 2022, 12, 2362. [Google Scholar] [CrossRef]
  125. Kiruthika, D.; Judith, J. Automatic Detection of Knee Joints and Quantification of Knee Osteoarthritis Severity using Modified Fully connected Convolutional Neural Networks. Int. J. Innov. Sci. Res. Technol. 2022, 7, 569–577. [Google Scholar] [CrossRef]
  126. Chen, P.; Gao, L.; Shi, X.; Allen, K.; Yang, L. Fully automatic knee osteoarthritis severity grading using deep neural networks with a novel ordinal loss. Comput. Med. Imaging Graph. 2019, 75, 84–92. [Google Scholar] [CrossRef] [PubMed]
127. Kondal, S.; Kulkarni, V.; Gaikwad, A.; Kharat, A.; Pant, A. Automatic grading of knee osteoarthritis on the Kellgren-Lawrence scale from radiographs using convolutional neural networks. In Advances in Deep Learning, Artificial Intelligence and Robotics, Proceedings of the 2nd International Conference on Deep Learning, Artificial Intelligence and Robotics (ICDLAIR), Baronissi, Italy, 7–8 December 2020; Springer: Berlin/Heidelberg, Germany, 2022; pp. 163–173. [Google Scholar]
  128. Bayramoglu, N.; Tiulpin, A.; Hirvasniemi, J.; Nieminen, M.T.; Saarakkala, S. Adaptive segmentation of knee radiographs for selecting the optimal ROI in texture analysis. Osteoarthr. Cartil. 2020, 28, 941–952. [Google Scholar] [CrossRef] [PubMed]
  129. Anifah, L.; Purnama, I.K.E.; Hariadi, M.; Purnomo, M.H. Automatic segmentation of impaired joint space area for osteoarthritis knee on x-ray image using gabor filter based morphology process. IPTEK J. Technol. Sci. 2011, 22, 159–165. [Google Scholar] [CrossRef]
  130. Hirvasniemi, J.; Thevenot, J.; Immonen, V.; Liikavainio, T.; Pulkkinen, P.; Jämsä, T.; Arokoski, J.; Saarakkala, S. Quantification of differences in bone texture from plain radiographs in knees with and without osteoarthritis. Osteoarthr. Cartil. 2014, 22, 1724–1731. [Google Scholar] [CrossRef]
  131. Kraus, V.B.; Feng, S.; Wang, S.; White, S.; Ainslie, M.; Brett, A.; Holmes, A.; Charles, H.C. Trabecular morphometry by fractal signature analysis is a novel marker of osteoarthritis progression. Arthritis Rheum. Off. J. Am. Coll. Rheumatol. 2009, 60, 3711–3722. [Google Scholar] [CrossRef]
  132. Soh, S.S.; Swee, T.T.; Ying, S.S.; En, C.Z.; bin Mazenan, M.N.; Meng, L.K. Magnetic resonance image segmentation for knee osteoarthritis using active shape models. In Proceedings of the 7th Biomedical Engineering International Conference, Nanjing, China, 17–20 October 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–5. [Google Scholar]
  133. Gan, H.S.; Sayuti, K.A.; Karim, A.H.A.; Rosidi, R.A.M.; Khaizi, A.S.A. Analysis on semi-automated knee cartilage segmentation model using inter-observer reproducibility: Data from the osteoarthritis initiative. In Proceedings of the 7th International Conference on Bioscience, Biochemistry and Bioinformatics, Kobe, Japan, 21–23 January 2017; ACM: New York, NY, USA, 2017; pp. 12–16. [Google Scholar]
  134. Fripp, J.; Crozier, S.; Warfield, S.K.; Ourselin, S. Automatic segmentation and quantitative analysis of the articular cartilages from magnetic resonance images of the knee. IEEE Trans. Med. Imaging 2009, 29, 55–64. [Google Scholar] [CrossRef]
  135. Balamoody, S.; Williams, T.G.; Waterton, J.C.; Bowes, M.; Hodgson, R.; Taylor, C.J.; Hutchinson, C.E. Comparison of 3T MR scanners in regional cartilage-thickness analysis in osteoarthritis: A cross-sectional multicenter, multivendor study. Arthritis Res. Ther. 2010, 12, R202. [Google Scholar] [CrossRef]
  136. Kumar, D.; Gandhamal, A.; Talbar, S.; Hani, A.F.M. Knee articular cartilage segmentation from MR images: A review. ACM Comput. Surv. (CSUR) 2018, 51, 1–29. [Google Scholar] [CrossRef]
  137. Swanson, M.; Prescott, J.; Best, T.; Powell, K.; Jackson, R.; Haq, F.; Gurcan, M. Semi-automated segmentation to assess the lateral meniscus in normal and osteoarthritic knees. Osteoarthr. Cartil. 2010, 18, 344–353. [Google Scholar] [CrossRef]
138. Zhu, W.; Jiang, T.; Li, X. Local Region Based Medical Image Segmentation Using J-Divergence Measures. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 17–18 January 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 7174–7177. [Google Scholar]
  139. Cashman, P.M.; Kitney, R.I.; Gariba, M.A.; Carter, M.E. Automated techniques for visualization and mapping of articular cartilage in MR images of the osteoarthritic knee: A base technique for the assessment of microdamage and submicro damage. IEEE Trans. Nanobiosci. 2002, 1, 42–51. [Google Scholar] [CrossRef] [PubMed]
  140. Li, S.; Zhao, S.; Zhang, Y.; Hong, J.; Chen, W. Source-free unsupervised adaptive segmentation for knee joint MRI. Biomed. Signal Process. Control 2024, 92, 106028. [Google Scholar] [CrossRef]
  141. Woo, B.; Engstrom, C.; Baresic, W.; Fripp, J.; Crozier, S.; Chandra, S.S. Automated anomaly-aware 3D segmentation of bones and cartilages in knee MR images from the Osteoarthritis Initiative. Med. Image Anal. 2024, 93, 103089. [Google Scholar] [CrossRef] [PubMed]
  142. Phan Trung, H.; Nguyen Thiet, S.; Nguyen Trung, T.; Le Tan, L.; Tran Minh, T.; Quan Thanh, T. OsteoGA: An Explainable AI Framework for Knee Osteoarthritis Severity Assessment. In Proceedings of the 12th International Symposium on Information and Communication Technology, Ho Chi Minh, Vietnam, 7–8 December 2023; ACM: New York, NY, USA, 2023; pp. 639–646. [Google Scholar]
143. Wahyuningrum, R.T.; Yasid, A.; Verkerke, G.J. Deep Neural Networks for Automatic Classification of Knee Osteoarthritis Severity Based on X-ray Images. In Proceedings of the 8th International Conference on Information Technology: IoT and Smart City, Xi’an, China, 25–27 December 2020; ACM: New York, NY, USA, 2020; pp. 110–114. [Google Scholar]
  144. Zhang, L.; Li, M.; Zhou, Y.; Lu, G.; Zhou, Q. Deep Learning Approach for Anterior Cruciate Ligament Lesion Detection: Evaluation of Diagnostic Performance Using Arthroscopy as the Reference Standard. J. Magn. Reson. Imaging 2020, 52, 1745–1752. [Google Scholar] [CrossRef]
  145. Zhou, Z.; Zhao, G.; Kijowski, R.; Liu, F. Deep convolutional neural network for segmentation of knee joint anatomy. Magn. Reson. Med. 2018, 80, 2759–2770. [Google Scholar] [CrossRef]
  146. Wang, K.; Wen, D.; Xiong, Y.; Gao, Z.; Zhang, P.; Dou, Y. Knee Osteoarthritis Assessment in X-rays Using Global and Local Attention Enhancement and Joint Loss. In Proceedings of the International Conference on Frontiers of Electronics, Information and Computation Technologies, Changsha, China, 21–23 May 2021; ACM: New York, NY, USA, 2021; pp. 1–7. [Google Scholar]
  147. Wang, X.; Liu, S.; Zhou, C. Classification of Knee Osteoarthritis Based on Transfer Learning Model and Magnetic Resonance Images. In Proceedings of the 2022 International Conference on Machine Learning, Control, and Robotics (MLCR), Suzhou, China, 29–31 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 67–71. [Google Scholar]
  148. Ahmed, N.; Saeed, M.; Aftab, M.; Mehmood, A.; Ilyas, Q.M. Knee Osteoarthritis Detection And Classification Using Transfer Learning. In Proceedings of the 3rd International Conference on Computing and Information Technology (ICCIT), Tabuk, Saudi Arabia, 13–14 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 365–369. [Google Scholar]
  149. Mehta, S.; Gaur, A.; Sarathi, M.P. A Simplified Method of Detection and Predicting the Severity of Knee Osteoarthritis. In Proceedings of the 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), Delhi, India, 6–8 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–7. [Google Scholar]
  150. Kwon, S.B.; Han, H.S.; Lee, M.C.; Kim, H.C.; Ku, Y.; Ro, D.H. Machine learning-based automatic classification of knee osteoarthritis severity using gait data and radiographic images. IEEE Access 2020, 8, 120597–120603. [Google Scholar] [CrossRef]
  151. Yeoh, P.S.Q.; Goh, S.L.; Hasikin, K.; Wu, X.; Lai, K.W. 3D Efficient Multi-Task Neural Network for Knee Osteoarthritis Diagnosis Using MRI Scans: Data From the Osteoarthritis Initiative. IEEE Access 2023, 11, 135323–135333. [Google Scholar] [CrossRef]
  152. Tariq, T.; Suhail, Z.; Nawaz, Z. Knee Osteoarthritis Detection and Classification Using X-Rays. IEEE Access 2023, 11, 48292–48303. [Google Scholar] [CrossRef]
  153. Sarvamangala, D.; Kulkarni, R.V. Grading of Knee Osteoarthritis Using Convolutional Neural Networks. Neural Process. Lett. 2021, 53, 2985–3009. [Google Scholar] [CrossRef]
  154. Aladhadh, S.; Mahum, R. Knee osteoarthritis detection using an improved CenterNet with pixel-wise voting scheme. IEEE Access 2023, 11, 22283–22296. [Google Scholar] [CrossRef]
  155. Harish, H.; Patrot, A.; Bhavan, S.; Gousiya, S.; Livitha, A. Knee Osteoarthritis Prediction Using Deep Learning. In Proceedings of the 2023 International Conference on Recent Advances in Information Technology for Sustainable Development (ICRAIS), Manipal, India, 6–7 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 14–19. [Google Scholar]
  156. Huu, P.N.; Thanh, D.N.; le Thi Hai, T.; Duc, H.C.; Viet, H.P.; Trong, C.N. Detection and Classification Knee Osteoarthritis Algorithm using YOLOv3 and VGG-16 Models. In Proceedings of the 7th National Scientific Conference on Applying New Technology in Green Buildings (ATiGB), Da Nang, Vietnam, 11–12 November 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 31–36. [Google Scholar]
  157. Antony, J.; McGuinness, K.; O’Connor, N.E.; Moran, K. Quantifying radiographic knee osteoarthritis severity using deep convolutional neural networks. In Proceedings of the 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1195–1200. [Google Scholar]
  158. Antonio, P.J.A.; Delmo, J.A.B.; Sevilla, R.V.; Ligayo, M.A.D.; Montesines, D.L. Deep Transfer Network of Knee Osteoarthritis Progression Rate Classification in MR Imaging for Medical Imaging Support System. In Proceedings of the 2022 International Conference on Decision Aid Sciences and Applications (DASA), Chiangrai, Thailand, 23–25 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 285–289. [Google Scholar]
  159. Pandey, A.; Kumar, V. Enhancing Knee Osteoarthritis Severity Classification using Improved Efficientnet. In Proceedings of the 10th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), Gautam Buddha Nagar, India, 1–3 December 2023; IEEE: Piscataway, NJ, USA, 2023; Volume 10, pp. 1351–1356. [Google Scholar]
  160. Dharmani, B.C.; Khatri, K. Deep Learning for Knee Osteoarthritis Severity Stage Detection using X-Ray Images. In Proceedings of the 15th International Conference on COMmunication Systems & NETworkS (COMSNETS), Bangalore, India, 3–8 January 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 78–83. [Google Scholar]
  161. Asnidar, A.; Ilham, M.R.; Hidayat, M.T.; Kaswar, A.B.; Arenreng, J.M.P.; Andayani, D.D.; Adiba, F. Application of MobileNetV2 Architecture to Classification of Knee Osteoarthritis Based on X-ray Images. In Proceedings of the 2023 International Conference on Advanced Mechatronics, Intelligent Manufacture and Industrial Automation (ICAMIMIA), Surabaya, Indonesia, 14–15 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 375–380. [Google Scholar]
  162. Singh, P.P.; Prasad, S.; Chaudhary, A.K.; Patel, C.K.; Debnath, M. Classification of effusion and cartilage erosion affects in osteoarthritis knee MRI images using deep learning model. In Computer Vision and Image Processing, Proceedings of the 4th International Conference, CVIP 2019, Jaipur, India, 27–29 September 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 373–383. [Google Scholar]
  163. Sivakumari, T.; Vani, R. Implementation of AlexNet for Classification of Knee Osteoarthritis. In Proceedings of the 7th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 22–24 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1405–1409. [Google Scholar]
  164. Yildirim, M.; Mutlu, H.B. Automatic detection of knee osteoarthritis grading using artificial intelligence-based methods. Int. J. Imaging Syst. Technol. 2024, 34, e23057. [Google Scholar] [CrossRef]
  165. Yoon, J.S.; Yon, C.J.; Lee, D.; Lee, J.J.; Kang, C.H.; Kang, S.B.; Lee, N.K.; Chang, C.B. Assessment of a novel deep learning-based software developed for automatic feature extraction and grading of radiographic knee osteoarthritis. BMC Musculoskelet. Disord. 2023, 24, 869. [Google Scholar] [CrossRef] [PubMed]
  166. Jain, R.K.; Sharma, P.K.; Gaj, S.; Sur, A.; Ghosh, P. Knee osteoarthritis severity prediction using an attentive multi-scale deep convolutional neural network. Multimed. Tools Appl. 2024, 83, 6925–6942. [Google Scholar] [CrossRef]
  167. Mary, C.D.; Rajendran, P.; Sharanyaa, S. Knee Osteoarthritis Severity Prediction Through Medical Image Analysis Using Deep Learning Architectures. In Proceedings of the 2023 International Conference on Data Intelligence and Cognitive Informatics, Tirunelveli, India, 27–28 June 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 427–441. [Google Scholar]
168. Halim, H.N.A.; Azaman, A. Clustering-Based Support Vector Machine (SVM) for Symptomatic Knee Osteoarthritis Severity Classification. In Proceedings of the 9th International Conference on Biomedical and Bioinformatics Engineering, Kyoto, Japan, 10–13 November 2022; ACM: New York, NY, USA, 2022; pp. 140–146. [Google Scholar]
  169. Rehman, A.; Raza, A.; Alamri, F.S.; Alghofaily, B.; Saba, T. Transfer learning-based smart features engineering for osteoarthritis diagnosis from knee X-ray images. IEEE Access 2023, 11, 71326–71338. [Google Scholar] [CrossRef]
  170. Sharma, S.; Virk, S.S.; Jain, V. Detection of osteoarthritis using SVM classifications. In Proceedings of the 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 16–18 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2997–3002. [Google Scholar]
  171. Zebari, D.A.; Sadiq, S.S.; Sulaiman, D.M. Knee Osteoarthritis Detection Using Deep Feature Based on Convolutional Neural Network. In Proceedings of the 2022 International Conference on Computer Science and Software Engineering (CSASE), Duhok, Iraq, 15–17 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 259–264. [Google Scholar]
  172. Roomi, S.M.M.; Suvetha, S.; Maheswari, P.U.; Suganya, R.; Priya, K. Radon Feature Based Osteoarthritis Severity Assessment. In Proceedings of the 2023 International Conference on Signal Processing, Computation, Electronics, Power and Telecommunication (IConSCEPT), Karaikal, India, 25–26 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
  173. Messaoudene, K.; Harrar, K. Computerized diagnosis of knee osteoarthritis from x-ray images using combined texture features: Data from the osteoarthritis initiative. Int. J. Imaging Syst. Technol. 2024, 34, e23063. [Google Scholar] [CrossRef]
  174. Brahim, A.; Jennane, R.; Riad, R.; Janvier, T.; Khedher, L.; Toumi, H.; Lespessailles, E. A decision support tool for early detection of knee OsteoArthritis using X-ray imaging and machine learning: Data from the OsteoArthritis Initiative. Comput. Med. Imaging Graph. 2019, 73, 11–18. [Google Scholar] [CrossRef]
  175. Johnson, J.M.; Khoshgoftaar, T.M. Survey on deep learning with class imbalance. J. Big Data 2019, 6, 27. [Google Scholar] [CrossRef]
  176. Hicks, S.A.; Strümke, I.; Thambawita, V.; Hammou, M.; Riegler, M.A.; Halvorsen, P.; Parasa, S. On evaluation metrics for medical applications of artificial intelligence. Sci. Rep. 2022, 12, 5979. [Google Scholar] [CrossRef]
  177. Zhou, H.; Qin, R.; Liu, Z.; Qian, Y.; Ju, X. Optimizing Performance of Image Processing Algorithms on GPUs. In Proceedings of the 2021 International Conference on Wireless Communications, Networking and Applications, Berlin, Germany, 17–19 December 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 936–943. [Google Scholar]
  178. Choudhary, K.; DeCost, B.; Chen, C.; Jain, A.; Tavazza, F.; Cohn, R.; Park, C.W.; Choudhary, A.; Agrawal, A.; Billinge, S.J.; et al. Recent advances and applications of deep learning methods in materials science. npj Comput. Mater. 2022, 8, 59. [Google Scholar] [CrossRef]
  179. Li, X.; Sun, Y.; Lin, J.; Li, L.; Feng, T.; Yin, S. The synergy of seeing and saying: Revolutionary advances in multi-modality medical vision-language large models. Artif. Intell. Sci. Eng. 2025, 1, 79–97. [Google Scholar] [CrossRef]
  180. Li, X.; Li, L.; Jiang, Y.; Wang, H.; Qiao, X.; Feng, T.; Luo, H.; Zhao, Y. Vision-Language Models in medical image analysis: From simple fusion to general large models. Inf. Fusion 2025, 118, 102995. [Google Scholar] [CrossRef]
Figure 1. Commonly affected joints in osteoarthritis.
Figure 2. Pathological changes in an osteoarthritis-affected knee, including bone spurs, cartilage degradation, and narrowing of the joint space [24].
Figure 3. Kellgren–Lawrence grading scale for knee osteoarthritis with representative X-ray images.
Figure 4. Research articles included after systematic screening.
Figure 5. Year-wise distribution of studies included in the review (search window: 2016–2025).
Figure 6. Chronological evolution of CNN architectures over the years.
Figure 7. VGG16 convolutional neural network architecture [68].
Figure 8. Representative knee images illustrating commonly used imaging modalities: X-ray [80], MRI [81], ultrasound [77], CT [78], and thermal imaging [79].
Figure 9. General workflow of deep learning-based KOA detection and classification using the KL grading system, including data preprocessing, Region of Interest (ROI) detection, model development, and evaluation steps.
Figure 10. Illustration of the effects of applying CLAHE on sample knee X-ray images across different KL grades.
Figure 11. Overview of manual, semi-automatic, and automatic segmentation techniques for knee radiographs.
Table 2. Publisher-wise distribution of research studies and their selection at different screening stages.
Publisher | Total Studies Found | Initial Selection | Final Selection
IEEE | 2821 | 43 | 20
Springer | 1381 | 49 | 19
Elsevier | 1942 | 8 | 6
Wiley | 1044 | 27 | 6
ACM | 1821 | 2 | 2
Others | 5255 | 71 | 2
Table 3. Common activation functions in CNNs for classification, where x denotes the input from the previous layer before applying the activation function.
Activation Function | Formula
SoftMax | f(x_i) = e^{x_i} / Σ_j e^{x_j}
Sigmoid | f(x) = 1 / (1 + e^{−x})
Tanh | f(x) = tanh(x)
ReLU | f(x) = max(0, x)
Leaky ReLU | f(x) = max(0.01x, x)
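The formulas in Table 3 translate directly into reference implementations; the sketch below is a minimal NumPy version (function names are ours):

```python
import numpy as np

# Reference implementations of the activation functions listed in Table 3.
def softmax(x):
    e = np.exp(x - np.max(x))      # subtract max for numerical stability
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.maximum(alpha * x, x)
```

In KOA classification, SoftMax is the usual output activation, turning the network's logits into a probability distribution over the five KL grades, while ReLU and its variants are used in hidden layers.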
Table 4. Key features and representative use cases of prominent CNN architectures.
Year | Architecture | Key Features | Example Use Case
1998 | LeNet [49] | First successful CNN; five alternating convolution and pooling layers with tanh or sigmoid activations | Handwritten and machine-printed character recognition, face recognition [50]
2012 | AlexNet [51] | ReLU activations, dropout layers, GPU training | Large-scale image recognition, surface defect recognition [52]
2013 | ZFNet [53] | Fewer filters with reduced stride, retains more pixel information, feature visualization | ImageNet classification, object classification [54]
2014 | VGGNet [36] | Deeper networks with smaller filters; uniform convolutional-layer depth; multiple configurations | Image classification, object detection, medical imaging, surveillance [55]
2014 | GoogLeNet [56] | Inception module; more efficient computation in deeper networks; multiple Inception versions | Image segmentation, transfer learning, video analysis, medical imaging [57]
2014 | R-CNN [58] | Segmentation into regions of interest; fixed-length feature vectors; bounding boxes and coordinates | Object detection, visual search, document analysis, OCR, autonomous vehicles [59]
2015 | ResNet [37] | Skip connections to train deeper networks; overcomes the vanishing gradient problem; global average pooling after residual blocks | Semantic segmentation, medical image analysis, transfer learning, facial recognition, edge computing [60]
2015 | YOLO [38] | Detection in a single forward pass; divides the image into grid cells; bounding-box prediction | Security and surveillance, object tracking, drone applications [61]
2016 | DenseNet [39] | Dense blocks in which each layer connects to every subsequent layer in feed-forward fashion; feature reuse | Fine-grained recognition, object recognition in unstructured environments [50]
2017 | MobileNet [40] | Depth-wise separable convolutions; reduced model size and complexity | Mobile and embedded vision, real-time object detection, inspection and defect identification [62]
2018 | EfficientNet [41] | Compound scaling; MBConv blocks with squeeze-and-excitation (SE) optimization; Swish activation | Image classification, object detection and localization, semantic segmentation [63]
2018 | NASNet [64] | Neural architecture search with reinforcement learning; transferable and scalable | Medical imaging, autonomous vehicles, industrial quality control [65]
2022 | ConvNeXt [66] | CNN modernized with vision-transformer design choices, layer normalization, and the Gaussian Error Linear Unit (GELU) activation | Pedestrian and traffic sign detection, visual content search, digital asset management [67]
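MobileNet's key feature in Table 4, the depth-wise separable convolution, replaces a single k × k × C_in × C_out convolution with a per-channel k × k filter followed by a 1 × 1 point-wise convolution. A small parameter-count comparison makes the size reduction concrete:

```python
# Parameter counts (weights only, ignoring biases) for a standard
# convolution versus a MobileNet-style depth-wise separable convolution.
def standard_conv_params(k, c_in, c_out):
    # One k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution mixing channels
    return depthwise + pointwise

# A 3x3 layer mapping 256 -> 256 channels:
# standard: 589,824 parameters; separable: 67,840 (about 8.7x fewer).
```

The roughly 8–9× reduction at this layer size is what makes MobileNet practical for the mobile and embedded applications listed in the table.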
Table 7. Summary of data augmentation, preprocessing, and ROI handling techniques employed in KOA studies.
Category | Technique | Key Details | References
Data Augmentation | Geometric transformations | Rotation (±3°), flipping, translation, scaling | [5,94,95,104,112,124]
Data Augmentation | Intensity-based augmentation | Brightness, contrast, gamma correction, color jitter | [5,117,124]
Data Augmentation | Class balancing | Oversampling/stratified sampling of minority KL grades | [94,104,106,124]
Data Augmentation | Noise injection | Gaussian noise addition | [117]
Preprocessing | Histogram equalization | Global HE or BPHE | [89,110,119,122]
Preprocessing | CLAHE | Clip limit ≈ 2.0, tile grid 8 × 8 | [104,105,112,117]
Preprocessing | Noise filtering | Median, adaptive median, Gaussian, anisotropic filters | [104,105,107,110,122]
Preprocessing | Normalization | Intensity scaling; pixel spacing normalization | [90,92,94,98,106,116]
Preprocessing | Resizing/cropping | Fixed input size; border removal | [89,104,105,110]
Preprocessing | Grayscale conversion | 16-bit to 8-bit grayscale (DICOM) | [92,94,110]
ROI Handling | Knee joint localization | Landmark detection using BoneFinder/FCN | [98,117,125]
ROI Handling | Bounding box detection | Template matching or DL-based (YOLO, Faster R-CNN) | [92,94,95,126,127]
ROI Handling | ROI cropping | Fixed-size patches around joint center | [89,98,105,117]
ROI Handling | Region proposal networks | RPN-based ROI extraction | [94,106,127]
BPHE: Brightness-Preserving Histogram Equalization; RPN: Region Proposal Network; FCN: Fully Convolutional Network.
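As a rough illustration of the histogram-equalization entries in Table 7, the sketch below implements global HE in NumPy, with an optional clipped histogram that mimics the contrast limiting used by CLAHE. CLAHE proper also computes the mapping per tile (e.g., an 8 × 8 grid) and interpolates between tiles, which this global sketch omits; in practice studies typically call a library implementation such as OpenCV's createCLAHE with clipLimit=2.0 and tileGridSize=(8, 8).

```python
import numpy as np

def hist_equalize(img, clip_limit=None, n_bins=256):
    """Global histogram equalization for an 8-bit grayscale image.

    If clip_limit is given (a multiple of the mean bin count), histogram
    counts are clipped and the excess redistributed uniformly -- the core
    contrast-limiting idea of CLAHE, here applied globally rather than
    per tile.
    """
    hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, n_bins))
    hist = hist.astype(np.float64)
    if clip_limit is not None:
        limit = clip_limit * hist.mean()
        excess = np.maximum(hist - limit, 0.0).sum()
        hist = np.minimum(hist, limit) + excess / n_bins
    cdf = hist.cumsum()
    cdf = (n_bins - 1) * cdf / cdf[-1]   # normalize CDF to [0, 255]
    return cdf[img].astype(np.uint8)     # map each pixel through the CDF
```

On a low-contrast knee radiograph this stretches the intensity range, which is why HE/CLAHE is such a common preprocessing step before CNN training.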
Table 8. Summary of manual segmentation techniques and annotated knee regions in KOA studies.
Reference and Year | Input Data Modality | Approach Used | Remarks
[130], 2014 | X-ray | Medial, lateral, and minimum joint space width (JSW) measured manually | Middle part of the condyles from the narrowest point of the joint used
[135], 2010 | MRI | Cartilage segmented manually from sagittal 3D sequences; endpoint segmentation software with livewire algorithm | Quality control performed by musculoskeletal radiologists
[131], 2009 | X-ray | Manual joint segmentation; software to determine joint space width boundary; medial subchondral bone identified automatically for Fractal Signature Analysis (FSA) | Six initialization points: tibial spine, lateral tibia, medial tibia, lateral tibial spine, medial femur, lateral femur
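Once the femoral and tibial margins are delineated, the minimum-JSW measurement of [130] reduces to a per-column gap computation. The sketch below is only illustrative: it assumes the two margins are given as per-column y-coordinates in pixels and that the pixel spacing is known (both the representation and the names are ours, not from the cited study).

```python
import numpy as np

def min_jsw(femur_edge, tibia_edge, pixel_spacing_mm=0.2):
    """Minimum joint space width in millimetres.

    femur_edge / tibia_edge: for each image column, the y-coordinate (in
    pixels) of the distal femur margin and the proximal tibia margin.
    pixel_spacing_mm converts the smallest vertical gap to millimetres.
    """
    femur_edge = np.asarray(femur_edge, dtype=float)
    tibia_edge = np.asarray(tibia_edge, dtype=float)
    gaps = tibia_edge - femur_edge          # vertical gap per column
    if np.any(gaps < 0):
        raise ValueError("edges overlap: check the segmentation")
    return gaps.min() * pixel_spacing_mm
```

A shrinking minimum JSW is one of the radiographic hallmarks the KL grading scale encodes, which is why even manual pipelines compute it explicitly.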
Table 9. Summary of semi-automatic segmentation techniques employed in KOA studies.
Reference and Year | Input Data Modality | Approach Used | Remarks
[133], 2017 | X-ray | Random walks model for simultaneous label segmentation | Four labels: femoral, tibial, patella, background
[132], 2014 | MRI | Active Shape Models (ASM) for semi-automatic segmentation | Articular cartilage segmented at distal femur
[137], 2010 | MRI | Seed point within meniscus; Gaussian fit threshold; conditional dilation; post-processing refinement | Works for normal and degenerative menisci
[138], 2006 | X-ray | Region homogeneity based on intensity; energy function minimizing dissimilarity; iterative mean/variance update | Manual initialization with automatic computation
[139], 2002 | MRI | Semi-automated segmentation and cartilage thickness mapping | Uses 3D gradient-echo MR images
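The seed-point approach of [137] (seed, threshold, conditional dilation) belongs to the region-growing family. The sketch below is a minimal 4-connected region grower on a 2D slice with a fixed intensity tolerance; the actual pipeline derives its threshold from a Gaussian fit and adds post-processing that this omits.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed` = (row, col): accept 4-connected
    neighbors whose intensity differs from the seed value by at most
    `tol`. Returns a boolean mask of the grown region."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask
```

The "semi-automatic" character of such methods comes entirely from the manual step: an operator places the seed, and everything after that is deterministic.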
Table 10. Summary of automatic segmentation techniques employed in KOA studies.
Reference and Year | Input Data Modality | Approach Used | Remarks
[140], 2024 | MRI | Batch normalization and augmented entropy minimization; refined using a voting strategy | Uncertainty-aware pseudo-supervision to boost performance
[141], 2024 | MRI | Semantic segmentation of bones and cartilage; anomaly-aware segmentation | Improves bone anomaly detection
[142], 2023 | X-ray | Tibia and femur segmentation using YOLOv8 | OsteoGA generates images for segmentation
[5], 2022 | X-ray | YOLOv3-tiny to segment ROI | Same model used for classification
[106], 2022 | X-ray | Faster R-CNN to detect ROI; ResNet-50 for feature extraction | RPN generates region proposals
[128], 2020 | X-ray | Locates subchondral bone; superpixel segmentation using SLIC | LBP evaluates sub-regions
[129], 2011 | X-ray | Two-stage segmentation using CLAHE, template matching, and center of mass (COM) | Reported accuracy of 100%
[134], 2009 | MRI | Bone statistical shape model with cartilage thickness | Femur, tibia, patella segmented
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Rani, S.; Rout, A.; Soni, P.; Gupta, M.; Kumar, N.; Kumar, K. Review of CNN-Based Approaches for Preprocessing, Segmentation and Classification of Knee Osteoarthritis. Diagnostics 2026, 16, 461. https://doi.org/10.3390/diagnostics16030461
