Search Results (15)

Search Parameters:
Keywords = automated landmark localization

10 pages, 1385 KB  
Article
Prediction of Distal Dural Ring Location in Internal Carotid Paraclinoid Aneurysms Using the Tuberculum Sellae–Anterior Clinoid Process Line
by Masaki Matsumoto, Tohru Mizutani, Tatsuya Sugiyama, Kenji Sumi, Shintaro Arai and Yoichi Morofuji
J. Clin. Med. 2025, 14(17), 5951; https://doi.org/10.3390/jcm14175951 - 22 Aug 2025
Viewed by 678
Abstract
Background/Objectives: Current bone-based landmark approaches have shown variable accuracy and poor reproducibility. We validated a two-point “tuberculum sellae–anterior clinoid process” (TS–ACP) line traced on routine 3D-computed tomography angiography (CTA) for predicting distal dural ring (DDR) position and quantified the interobserver agreement. Methods: We retrospectively reviewed data from 85 patients (87 aneurysms) who were treated via clipping between June 2012 and December 2024. Two blinded neurosurgeons classified each aneurysm as extradural, intradural, or straddling the TS–ACP line. The intraoperative DDR inspection served as the reference standard. Diagnostic accuracy, χ² statistics, and Cohen’s κ were calculated. Results: The TS–ACP line landmarks were identifiable in all cases. The TS–ACP line classification correlated strongly with operative findings (χ² = 138.3, p = 6.4 × 10⁻²⁹). The overall accuracy was 89.7% (78/87), and sensitivity and specificity for identifying intradural aneurysms were 94% and 82%, respectively. The interobserver agreement was substantial (κ = 0.78). Nine aneurysms were misclassified, including four cavernous-sinus lesions that partially crossed the DDR. Retrospective fusion using constructive interference in steady-state magnetic resonance imaging corrected these errors. Conclusions: The TS–ACP line represents a rapid, reproducible tool that reliably localizes the DDR on standard 3D-CTA, showing higher accuracy than previously reported single-landmark techniques. Its high accuracy and substantial interobserver concordance support incorporation into routine preoperative assessments. Because the method depends on only two easily detectable bony points, it is well-suited for automated implementation, offering a practical pathway toward artificial intelligence-assisted stratification of paraclinoid aneurysms. Full article
(This article belongs to the Special Issue Revolutionizing Neurosurgery: Cutting-Edge Techniques and Innovations)
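The accuracy, sensitivity, specificity, and Cohen's κ figures quoted in this abstract are standard confusion-matrix statistics. A minimal sketch of how they are computed (the function names and example counts below are illustrative, not taken from the paper):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity from a 2x2 confusion table
    (tp = intradural called intradural, tn = extradural called extradural)."""
    total = tp + fp + fn + tn
    return ((tp + tn) / total,   # accuracy
            tp / (tp + fn),      # sensitivity
            tn / (tn + fp))      # specificity

def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa: observed agreement corrected for the chance
    agreement expected from the raters' marginal frequencies."""
    total = tp + fp + fn + tn
    po = (tp + tn) / total
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
    return (po - pe) / (1 - pe)
```

With balanced counts such as tp = 45, fp = 5, fn = 5, tn = 45, this yields accuracy 0.9 and κ = 0.8, i.e. the "substantial agreement" band the abstract's κ = 0.78 falls in.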

14 pages, 4182 KB  
Article
Automated Landmark Detection and Lip Thickness Classification Using a Convolutional Neural Network in Lateral Cephalometric Radiographs
by Miaomiao Han, Zhengqun Huo, Jiangyan Ren, Haiting Zhu, Huang Li, Jialing Li and Li Mei
Diagnostics 2025, 15(12), 1468; https://doi.org/10.3390/diagnostics15121468 - 9 Jun 2025
Cited by 1 | Viewed by 1027
Abstract
Objective: The objective of this study is to develop a convolutional neural network (CNN) for the automatic detection of soft and hard tissue landmarks and the classification of lip thickness on lateral cephalometric radiographs. Methods: A dataset of 1019 pre-orthodontic lateral cephalograms from patients with diverse malocclusions was utilized. A CNN-based model was trained to automatically detect 22 cephalometric landmarks. Upper and lower lip thicknesses were measured using some of these landmarks, and a pre-trained decision tree model was employed to classify lip thickness into the thin, normal, and thick categories. Results: The mean radial error (MRE) for detecting 22 landmarks was 0.97 ± 0.52 mm. Successful detection rates (SDRs) at threshold distances of 1.00, 1.50, 2.00, 2.50, 3.00, and 4.00 mm were 72.26%, 89.59%, 95.41%, 97.66%, 98.98%, and 99.47%, respectively. For nine soft tissue landmarks, the MRE was 1.08 ± 0.87 mm. Lip thickness classification accuracy was 0.91 ± 0.04 (upper lip) and 0.90 ± 0.04 (lower lip) in females and 0.92 ± 0.03 (upper lip) and 0.88 ± 0.05 (lower lip) in males. The area under the curve (AUC) values for lip thickness were ≥0.97 for all gender–lip combinations. Conclusions: The CNN-based landmark detection model demonstrated high precision, enabling reliable automatic classification of lip thickness using cephalometric radiographs. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
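The mean radial error (MRE) and successful detection rate (SDR) used in this abstract are the usual landmark-evaluation metrics. A minimal sketch, assuming landmarks are 2D points already scaled to millimetres (names are illustrative, not from the paper):

```python
import math

def mean_radial_error(pred, truth):
    """MRE: mean Euclidean distance (mm) between paired predicted
    and ground-truth landmark coordinates."""
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(dists) / len(dists)

def success_detection_rate(pred, truth, threshold_mm):
    """SDR: fraction of landmarks whose radial error is within the threshold."""
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(d <= threshold_mm for d in dists) / len(dists)
```

Sweeping `threshold_mm` over 1.0, 1.5, 2.0, … reproduces the kind of SDR curve reported above.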

21 pages, 2806 KB  
Article
A Computer-Aided Approach to Canine Hip Dysplasia Assessment: Measuring Femoral Head–Acetabulum Distance with Deep Learning
by Pedro Franco-Gonçalo, Pedro Leite, Sofia Alves-Pimenta, Bruno Colaço, Lio Gonçalves, Vítor Filipe, Fintan McEvoy, Manuel Ferreira and Mário Ginja
Appl. Sci. 2025, 15(9), 5087; https://doi.org/10.3390/app15095087 - 3 May 2025
Viewed by 1402
Abstract
Canine hip dysplasia (CHD) screening relies on radiographic assessment, but traditional scoring methods often lack consistency due to inter-rater variability. This study presents an AI-driven system for automated measurement of the femoral head center to dorsal acetabular edge (FHC/DAE) distance, a key metric in CHD evaluation. Unlike most AI models that directly classify CHD severity using convolutional neural networks, this system provides an interpretable, measurement-based output to support a more transparent evaluation. The system combines a keypoint regression model for femoral head center localization with a U-Net-based segmentation model for acetabular edge delineation. It was trained on 7967 images for hip joint detection, 571 for keypoints, and 624 for acetabulum segmentation, all from ventrodorsal hip-extended radiographs. On a test set of 70 images, the keypoint model achieved high precision (Euclidean Distance = 0.055 mm; Mean Absolute Error = 0.0034 mm; Mean Squared Error = 2.52 × 10⁻⁵ mm²), while the segmentation model showed strong performance (Dice Score = 0.96; Intersection over Union = 0.92). Comparison with expert annotations demonstrated strong agreement (Intraclass Correlation Coefficients = 0.97 and 0.93; Weighted Kappa = 0.86 and 0.79; Standard Error of Measurement = 0.92 to 1.34 mm). By automating anatomical landmark detection, the system enhances standardization, reproducibility, and interpretability in CHD radiographic assessment. Its strong alignment with expert evaluations supports its integration into CHD screening workflows for more objective and efficient diagnosis and CHD scoring. Full article
(This article belongs to the Special Issue Research on Machine Learning in Computer Vision)
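The Dice score and Intersection over Union reported for the segmentation model are set-overlap measures. A minimal sketch with masks represented as sets of pixel indices (a simplification for illustration — real masks are image arrays):

```python
def dice_score(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for two binary masks
    given as sets of (row, col) pixel indices."""
    inter = len(a & b)
    return 2 * inter / (len(a) + len(b))

def iou(a, b):
    """Intersection over Union: |A∩B| / |A∪B| for the same representation."""
    return len(a & b) / len(a | b)
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is consistent with the 0.96 vs. 0.92 values quoted above.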

18 pages, 3645 KB  
Review
Cutting Edge: A Comprehensive Guide to Colorectal Cancer Surgery in Inflammatory Bowel Diseases
by Ionut Eduard Iordache, Lucian-Flavius Herlo, Razvan Popescu, Daniel Ovidiu Costea, Luana Alexandrescu, Adrian Paul Suceveanu, Sorin Deacu, Gabriela Isabela Baltatescu, Alina Doina Nicoara, Nicoleta Leopa, Andreea Nelson Twakor, Andrei Octavian Iordache and Liliana Steriu
J. Mind Med. Sci. 2025, 12(1), 6; https://doi.org/10.3390/jmms12010006 - 11 Mar 2025
Viewed by 1338
Abstract
Over the past two decades, surgical techniques in colorectal cancer (CRC) have improved patient outcomes through precision and reduced invasiveness. Open colectomy, laparoscopic surgery, robotic-assisted procedures, and advanced rectal cancer treatments such as total mesorectal excision (TME) and transanal TME are discussed in this article. Traditional open colectomy offers reliable resection but takes longer to recover. Laparoscopic surgery transformed CRC care by improving oncological outcomes, postoperative pain, and recovery. Automated surgery improves laparoscopy’s dexterity, precision, and 3D visualisation, making it ideal for rectal cancer pelvic dissections. TME is the gold standard treatment for rectal cancer, minimising local recurrence, while TaTME improves access for low-lying tumours, preserving the sphincter. In metastatic CRC, palliative procedures help manage blockage, perforation, and bleeding. Clinical examples and landmark trials show each technique’s efficacy in personalised care. Advanced surgical techniques and multidisciplinary approaches have improved CRC survival and quality of life. Advances in CRC treatment require creativity and customised surgery. Full article

15 pages, 2930 KB  
Article
Anatomically Guided Deep Learning System for Right Internal Jugular Line (RIJL) Segmentation and Tip Localization in Chest X-Ray
by Siyuan Wei, Liza Shrestha, Gabriel Melendez-Corres and Matthew S. Brown
Life 2025, 15(2), 201; https://doi.org/10.3390/life15020201 - 29 Jan 2025
Viewed by 1360
Abstract
The right internal jugular line (RIJL) is a type of central venous catheter (CVC) inserted into the right internal jugular vein to deliver medications and monitor vital functions in ICU patients. The placement of RIJL is routinely checked by a clinician in a chest X-ray (CXR) image to ensure its proper function and patient safety. To reduce the workload of clinicians, deep learning-based automated detection algorithms have been developed to detect CVCs in CXRs. Although RIJL is the most widely used type of CVCs, there is a paucity of investigations focused on its accurate segmentation and tip localization. In this study, we propose a deep learning system that integrates an anatomical landmark segmentation, an RIJL segmentation network, and a postprocessing function to segment the RIJL course and detect the tip with accuracy and precision. We utilized the nnU-Net framework to configure the segmentation network. The entire system was implemented on the SimpleMind Cognitive AI platform, enabling the integration of anatomical knowledge and spatial reasoning to model relationships between objects within the image. Specifically, the trachea was used as an anatomical landmark to extract a subregion in a CXR image that is most relevant to the RIJL. The subregions were used to generate cropped images, which were used to train the segmentation network. The segmentation results were recovered to original dimensions, and the most inferior point’s coordinates in each image were defined as the tip. With guidance from the anatomical landmark and customized postprocessing, the proposed method achieved improved segmentation and tip localization compared to the baseline segmentation network: the mean average symmetric surface distance (ASSD) was decreased from 2.72 to 1.41 mm, and the mean tip distance was reduced from 11.27 to 8.29 mm. Full article
(This article belongs to the Special Issue Current Progress in Medical Image Segmentation)
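The postprocessing step that defines the catheter tip as the most inferior point of the segmented course can be sketched in a few lines, assuming image coordinates where row indices grow downward (the mask representation below is illustrative, not the paper's implementation):

```python
def catheter_tip(mask_pixels):
    """Tip estimate: the most inferior segmented pixel. In image
    coordinates the largest row index is the lowest point on the film."""
    return max(mask_pixels, key=lambda rc: rc[0])

def tip_distance(pred_tip, true_tip):
    """Euclidean tip-localization error in pixels (or mm after scaling)."""
    dr = pred_tip[0] - true_tip[0]
    dc = pred_tip[1] - true_tip[1]
    return (dr * dr + dc * dc) ** 0.5
```

Averaging `tip_distance` over a test set gives the mean tip distance metric the abstract reports (11.27 mm reduced to 8.29 mm).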

13 pages, 860 KB  
Article
Multi-Scale 3D Cephalometric Landmark Detection Based on Direct Regression with 3D CNN Architectures
by Chanho Song, Yoosoo Jeong, Hyungkyu Huh, Jee-Woong Park, Jun-Young Paeng, Jaemyung Ahn, Jaebum Son and Euisung Jung
Diagnostics 2024, 14(22), 2605; https://doi.org/10.3390/diagnostics14222605 - 20 Nov 2024
Viewed by 1817
Abstract
Background: Cephalometric analysis is important in diagnosing and planning treatments for patients, traditionally relying on 2D cephalometric radiographs. With advancements in 3D imaging, automated landmark detection using deep learning has gained prominence. However, 3D imaging introduces challenges due to increased network complexity and computational demands. This study proposes a multi-scale 3D CNN-based approach utilizing direct regression to improve the accuracy of maxillofacial landmark detection. Methods: The method employs a coarse-to-fine framework, first identifying landmarks in a global context and then refining their positions using localized 3D patches. A clinical dataset of 150 CT scans from maxillofacial surgery patients, annotated with 30 anatomical landmarks, was used for training and evaluation. Results: The proposed method achieved an average RMSE of 2.238 mm, outperforming conventional 3D CNN architectures. The approach demonstrated consistent detection without failure cases. Conclusions: Our multi-scale-based 3D CNN framework provides a reliable method for automated landmark detection in maxillofacial CT images, showing potential for other clinical applications. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
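The RMSE quoted for 3D landmark detection is the root-mean-square Euclidean error over landmark pairs. A minimal sketch, assuming coordinates already in millimetres (names are illustrative):

```python
import math

def rmse_3d(pred, truth):
    """Root-mean-square Euclidean error (mm) over paired 3D landmarks."""
    sq = [math.dist(p, t) ** 2 for p, t in zip(pred, truth)]
    return math.sqrt(sum(sq) / len(sq))
```

Because squaring weights large errors heavily, RMSE is a stricter summary than the mean radial error used by some of the 2D studies in this list.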

26 pages, 2887 KB  
Article
Implicit Is Not Enough: Explicitly Enforcing Anatomical Priors inside Landmark Localization Models
by Simon Johannes Joham, Arnela Hadzic and Martin Urschler
Bioengineering 2024, 11(9), 932; https://doi.org/10.3390/bioengineering11090932 - 17 Sep 2024
Cited by 1 | Viewed by 2147
Abstract
The task of localizing distinct anatomical structures in medical image data is an essential prerequisite for several medical applications, such as treatment planning in orthodontics, bone-age estimation, or initialization of segmentation methods in automated image analysis tools. Currently, Anatomical Landmark Localization (ALL) is mainly solved by deep-learning methods, which cannot guarantee robust ALL predictions; there may always be outlier predictions that are far from their ground truth locations due to out-of-distribution inputs. However, these localization outliers are detrimental to the performance of subsequent medical applications that rely on ALL results. The current ALL literature relies heavily on implicit anatomical constraints built into the loss function and network architecture to reduce the risk of anatomically infeasible predictions. However, we argue that in medical imaging, where images are generally acquired in a controlled environment, we should use stronger explicit anatomical constraints to reduce the number of outliers as much as possible. Therefore, we propose the end-to-end trainable Global Anatomical Feasibility Filter and Analysis (GAFFA) method, which uses prior anatomical knowledge estimated from data to explicitly enforce anatomical constraints. GAFFA refines the initial localization results of a U-Net by approximately solving a Markov Random Field (MRF) with a single iteration of the sum-product algorithm in a differentiable manner. Our experiments demonstrate that GAFFA outperforms all other landmark refinement methods investigated in our framework. Moreover, we show that GAFFA is more robust to large outliers than state-of-the-art methods on the studied X-ray hand dataset. We further motivate this claim by visualizing the anatomical constraints used in GAFFA as spatial energy heatmaps, which allowed us to find an annotation error in the hand dataset not previously discussed in the literature. Full article
(This article belongs to the Special Issue Machine Learning-Aided Medical Image Analysis)

11 pages, 3728 KB  
Article
SpineHRformer: A Transformer-Based Deep Learning Model for Automatic Spine Deformity Assessment with Prospective Validation
by Moxin Zhao, Nan Meng, Jason Pui Yin Cheung, Chenxi Yu, Pengyu Lu and Teng Zhang
Bioengineering 2023, 10(11), 1333; https://doi.org/10.3390/bioengineering10111333 - 20 Nov 2023
Cited by 11 | Viewed by 2862
Abstract
The Cobb angle (CA) serves as the principal method for assessing spinal deformity, but manual measurements of the CA are time-consuming and susceptible to inter- and intra-observer variability. While learning-based methods, such as SpineHRNet+, have demonstrated potential in automating CA measurement, their accuracy can be influenced by the severity of spinal deformity, image quality, relative position of rib and vertebrae, etc. Our aim is to create a reliable learning-based approach that provides consistent and highly accurate measurements of the CA from posteroanterior (PA) X-rays, surpassing the state-of-the-art method. To accomplish this, we introduce SpineHRformer, which identifies anatomical landmarks, including the vertices of endplates from the 7th cervical vertebra (C7) to the 5th lumbar vertebra (L5) and the end vertebrae with different output heads, enabling the calculation of CAs. Within our SpineHRformer, a backbone HRNet first extracts multi-scale features from the input X-ray, while transformer blocks extract local and global features from the HRNet outputs. Subsequently, an output head to generate heatmaps of the endplate landmarks or end vertebra landmarks facilitates the computation of CAs. We used a dataset of 1934 PA X-rays with diverse degrees of spinal deformity and image quality, following an 8:2 ratio to train and test the model. The experimental results indicate that SpineHRformer outperforms SpineHRNet+ in landmark detection (Mean Euclidean Distance: 2.47 pixels vs. 2.74 pixels), CA prediction (Pearson correlation coefficient: 0.86 vs. 0.83), and severity grading (sensitivity: normal-mild 0.93 vs. 0.74; moderate 0.74 vs. 0.77; severe 0.74 vs. 0.70). Our approach demonstrates greater robustness and accuracy compared to SpineHRNet+, offering substantial potential for improving the efficiency and reliability of CA measurements in clinical settings. Full article
(This article belongs to the Special Issue Artificial Intelligence in Auto-Diagnosis and Clinical Applications)
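Once the endplate landmarks are detected, the Cobb angle itself is simple geometry: the difference between the inclinations of the two end-vertebra endplate lines. A minimal sketch (the landmark pairing and coordinates are illustrative, not the paper's pipeline):

```python
import math

def endplate_inclination(left, right):
    """Inclination in degrees of the line through the two corner
    landmarks of a vertebral endplate, given as (x, y) points."""
    return math.degrees(math.atan2(right[1] - left[1], right[0] - left[0]))

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle: absolute difference between the inclinations of the
    superior end-vertebra's upper endplate and the inferior end-vertebra's
    lower endplate, each given as a (left, right) landmark pair."""
    return abs(endplate_inclination(*upper_endplate)
               - endplate_inclination(*lower_endplate))
```

A horizontal upper endplate against a lower endplate tilted at 45° yields a 45° Cobb angle, which is in the severe range clinically.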

29 pages, 6436 KB  
Article
Fish Monitoring from Low-Contrast Underwater Images
by Nikos Petrellis, Georgios Keramidas, Christos P. Antonopoulos and Nikolaos Voros
Electronics 2023, 12(15), 3338; https://doi.org/10.3390/electronics12153338 - 4 Aug 2023
Cited by 7 | Viewed by 3719
Abstract
A toolset supporting fish detection, orientation, tracking, and especially morphological feature estimation with high speed and accuracy is presented in this paper. It can be exploited in fish farms to automate everyday procedures including size measurement and optimal harvest time estimation, fish health assessment, quantification of feeding needs, etc. It can also be used in an open sea environment to monitor fish size, behavior and the population of various species. An efficient deep learning technique for fish detection is employed and adapted, while methods for fish tracking are also proposed. The fish orientation is classified in order to apply a shape alignment technique that is based on the Ensemble of Regression Trees machine learning method. Shape alignment allows the estimation of fish dimensions (length, height) and the localization of fish body parts of particular interest such as the eyes and gills. The proposed method can estimate the position of 18 landmarks with an accuracy of about 95% from low-contrast underwater images where the fish can hardly be distinguished from its background. Hardware and software acceleration techniques have been applied to the shape alignment process, reducing the frame processing latency to less than 0.5 µs on a general purpose computer and less than 16 ms on an embedded platform. As a case study, the developed system has been trained and tested with several Mediterranean fish species in the category of seabream. A large public dataset with low-resolution underwater videos and images has also been developed to test the proposed system under worst case conditions. Full article

17 pages, 6416 KB  
Article
A Pseudoinverse Siamese Convolutional Neural Network of Transformation Invariance Feature Detection and Description for a SLAM System
by Chaofeng Yuan, Yuelei Xu, Jingjing Yang, Zhaoxiang Zhang and Qing Zhou
Machines 2022, 10(11), 1070; https://doi.org/10.3390/machines10111070 - 12 Nov 2022
Cited by 3 | Viewed by 2003
Abstract
Simultaneous localization and mapping (SLAM) systems play an important role in the field of automated robotics and artificial intelligence. Feature detection and matching are crucial aspects affecting the overall accuracy of the SLAM system. However, the accuracy of the position and matching cannot be guaranteed when confronted with a cross-view angle, illumination, texture, etc. Moreover, deep learning methods are very sensitive to perspective change and do not have the invariance of geometric transformation. Therefore, a novel pseudo-Siamese convolutional network of a transformation invariance feature detection and a description for the SLAM system is proposed in this paper. The proposed method, by learning transformation invariance features and descriptors, simultaneously improves the front-end landmark detection and tracking module of the SLAM system. We converted the input image to the transform field; the backbone network was designed to extract feature maps. Then, the feature detection subnetwork and feature description subnetwork were decomposed and designed; finally, we constructed a convolutional network of transformation invariance feature detections and a description for the visual SLAM system. We implemented many experiments in datasets, and the results of the experiments demonstrated that our method has a state-of-the-art performance in global tracking when compared to that of the traditional visual SLAM systems. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

14 pages, 2586 KB  
Article
Learning Cephalometric Landmarks for Diagnostic Features Using Regression Trees
by Sameera Suhail, Kayla Harris, Gaurav Sinha, Maayan Schmidt, Sujala Durgekar, Shivam Mehta and Madhur Upadhyay
Bioengineering 2022, 9(11), 617; https://doi.org/10.3390/bioengineering9110617 - 27 Oct 2022
Cited by 4 | Viewed by 5640
Abstract
Lateral cephalograms provide important information regarding dental, skeletal, and soft-tissue parameters that are critical for orthodontic diagnosis and treatment planning. Several machine learning methods have previously been used for the automated localization of diagnostically relevant landmarks on lateral cephalograms. In this study, we applied an ensemble of regression trees to solve this problem. We found that despite the limited size of manually labeled images, we can improve the performance of landmark detection by augmenting the training set using a battery of simple image transforms. We further demonstrated the calculation of second-order features encoding the relative locations of landmarks, which are diagnostically more important than individual landmarks. Full article
(This article belongs to the Special Issue Advances in Appliance Design and Techniques in Orthodontics)
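A second-order feature of the kind this abstract describes — one that encodes relative landmark positions rather than absolute ones — is, for example, the angle formed at one landmark by two others. A minimal sketch (the landmark roles are illustrative, not specific measurements from the paper):

```python
import math

def angle_at(vertex, p1, p2):
    """Angle in degrees formed at `vertex` by the rays toward p1 and p2.
    Depends only on relative landmark positions, so it is invariant to
    translation of the whole cephalogram."""
    a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    ang = abs(math.degrees(a1 - a2))
    return min(ang, 360.0 - ang)
```

Feeding three detected landmarks into `angle_at` turns raw coordinates into a diagnostically meaningful quantity, which is the sense in which such features matter more than individual landmark positions.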

19 pages, 1949 KB  
Article
Investigation on Robustness of Vehicle Localization Using Cameras and LiDAR
by Christian Rudolf Albrecht, Jenny Behre, Eva Herrmann, Stefan Jürgens and Uwe Stilla
Vehicles 2022, 4(2), 445-463; https://doi.org/10.3390/vehicles4020027 - 12 May 2022
Cited by 6 | Viewed by 3604
Abstract
Vehicle self-localization is one of the most important capabilities for automated driving. Current localization methods already provide accuracy in the centimeter range, so robustness becomes a key factor, especially in urban environments. There is no commonly used standard metric for the robustness of localization systems, but a set of different approaches. Here, we show a novel robustness score that combines different aspects of robustness and evaluate a graph-based localization method with the help of fault injections. In addition, we investigate the influence of semantic class information on robustness with a layered landmark model. By using the perturbation injections and our novel robustness score for test drives, system vulnerabilities or possible improvements are identified. Furthermore, we demonstrate that semantic class information allows early discarding of misclassified dynamic objects such as pedestrians, thus improving false-positive rates. This work provides a method for the robustness evaluation of landmark-based localization systems that are also capable of measuring the impact of semantic class information for vehicle self-localization. Full article

21 pages, 2512 KB  
Article
Cephalometric Landmark Detection in Lateral Skull X-ray Images by Using Improved SpatialConfiguration-Net
by Martin Šavc, Gašper Sedej and Božidar Potočnik
Appl. Sci. 2022, 12(9), 4644; https://doi.org/10.3390/app12094644 - 5 May 2022
Cited by 6 | Viewed by 9643
Abstract
Accurate automated localization of cephalometric landmarks in skull X-ray images is the basis for planning orthodontic treatments, predicting skull growth, or diagnosing face discrepancies. Such diagnoses require as many landmarks as possible to be detected on cephalograms. Today’s best methods are adapted to detect just 19 landmarks accurately in images varying not too much. This paper describes the development of the SCN-EXT convolutional neural network (CNN), which is designed to localize 72 landmarks in strongly varying images. The proposed method is based on the SpatialConfiguration-Net network, which is upgraded by adding replications of the simpler local appearance and spatial configuration components. The CNN capacity can be increased without increasing the number of free parameters simultaneously by such modification of an architecture. The successfulness of our approach was confirmed experimentally on two datasets. The SCN-EXT method was, with respect to its effectiveness, around 4% behind the state-of-the-art on the small ISBI database with 250 testing images and 19 cephalometric landmarks. On the other hand, our method surpassed the state-of-the-art on the demanding AUDAX database with 4695 highly variable testing images and 72 landmarks statistically significantly by around 3%. Increasing the CNN capacity as proposed is especially important for a small learning set and limited computer resources. Our algorithm is already utilized in orthodontic clinical practice. Full article
(This article belongs to the Special Issue Advances in Biomedical Image Processing and Analysis)

21 pages, 9230 KB  
Article
Tree Trunk Recognition in Orchard Autonomous Operations under Different Light Conditions Using a Thermal Camera and Faster R-CNN
by Ailian Jiang, Ryozo Noguchi and Tofael Ahamed
Sensors 2022, 22(5), 2065; https://doi.org/10.3390/s22052065 - 7 Mar 2022
Cited by 30 | Viewed by 5368
Abstract
In an orchard automation process, a current challenge is to recognize natural landmarks and tree trunks to localize intelligent robots. To overcome low-light conditions and global navigation satellite system (GNSS) signal interruptions under a dense canopy, a thermal camera may be used to recognize tree trunks using a deep learning system. Therefore, the objective of this study was to use a thermal camera to detect tree trunks at different times of the day under low-light conditions using deep learning to allow robots to navigate. Thermal images were collected from the dense canopies of two types of orchards (conventional and joint training systems) under high-light (12–2 PM), low-light (5–6 PM), and no-light (7–8 PM) conditions in August and September 2021 (summertime) in Japan. The detection accuracy for a tree trunk was confirmed by the thermal camera, which observed an average error of 0.16 m for 5 m, 0.24 m for 15 m, and 0.3 m for 20 m distances under high-, low-, and no-light conditions, respectively, in different orientations of the thermal camera. Thermal imagery datasets were augmented to train, validate, and test using the Faster R-CNN deep learning model to detect tree trunks. A total of 12,876 images were used to train the model, 2318 images were used to validate the training process, and 1288 images were used to test the model. The mAP of the model was 0.8529 for validation and 0.8378 for the testing process. The average object detection time was 83 ms for images and 90 ms for videos with the thermal camera set at 11 FPS. The model was compared with YOLO v3 using the same datasets and training conditions; in this comparison, Faster R-CNN achieved higher accuracy than YOLO v3 in tree trunk detection using the thermal camera. Therefore, the results showed that Faster R-CNN can be used to recognize objects using thermal images to enable robot navigation in orchards under different lighting conditions. Full article
(This article belongs to the Section Smart Agriculture)

18 pages, 654 KB  
Review
Current Applications, Opportunities, and Limitations of AI for 3D Imaging in Dental Research and Practice
by Kuofeng Hung, Andy Wai Kan Yeung, Ray Tanaka and Michael M. Bornstein
Int. J. Environ. Res. Public Health 2020, 17(12), 4424; https://doi.org/10.3390/ijerph17124424 - 19 Jun 2020
Cited by 102 | Viewed by 12176
Abstract
The increasing use of three-dimensional (3D) imaging techniques in dental medicine has boosted the development and use of artificial intelligence (AI) systems for various clinical problems. Cone beam computed tomography (CBCT) and intraoral/facial scans are potential sources of image data to develop 3D image-based AI systems for automated diagnosis, treatment planning, and prediction of treatment outcome. This review focuses on current developments and performance of AI for 3D imaging in dentomaxillofacial radiology (DMFR) as well as intraoral and facial scanning. In DMFR, machine learning-based algorithms proposed in the literature focus on three main applications, including automated diagnosis of dental and maxillofacial diseases, localization of anatomical landmarks for orthodontic and orthognathic treatment planning, and general improvement of image quality. Automatic recognition of teeth and diagnosis of facial deformations using AI systems based on intraoral and facial scanning will very likely be a field of increased interest in the future. The review is aimed at providing dental practitioners and interested colleagues in healthcare with a comprehensive understanding of the current trend of AI developments in the field of 3D imaging in dental medicine. Full article
(This article belongs to the Special Issue Big Data in Dental Research and Oral Healthcare)
