Search Results (149)

Search Parameters:
Keywords = artificial landmarks

19 pages, 3413 KB  
Article
AI-Based Angle Map Analysis of Facial Asymmetry in Peripheral Facial Palsy
by Andreas Heinrich, Gerd Fabian Volk, Christian Dobel and Orlando Guntinas-Lichius
Bioengineering 2026, 13(4), 426; https://doi.org/10.3390/bioengineering13040426 - 6 Apr 2026
Abstract
Peripheral facial palsy (PFP) causes pronounced facial asymmetry and functional impairment, highlighting the need for reliable, objective assessment. This study presents a novel, fully automated, reference-free method for quantifying facial symmetry using artificial intelligence (AI)-based facial landmark detection. A total of 405 datasets from 198 PFP patients were analyzed, each including nine standardized facial expressions covering both resting and dynamic movements. AI detected 478 landmarks per image, from which 225 paired landmarks were used to compute local asymmetry angles. Systematic evaluation identified 91 highly informative landmark pairs, primarily around the eyes, nose and mouth, which simplified the analysis and enhanced discriminatory power, while also enabling region-specific assessment of asymmetry. Statistical evaluation included Kruskal–Wallis H-tests across clinical scores and Spearman correlations, showing moderate to strong associations (0.32–0.73, p < 0.001). The fully automated pipeline produced reproducible results and demonstrated robustness to head rotation. Intuitive full-face angle maps allowed direct assessment of asymmetry without a reference image. This AI-driven approach provides a robust, objective, and visually interpretable framework for clinical monitoring, severity classification, and treatment evaluation in PFP, combining quantitative precision with practical applicability.
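The angle-map idea can be sketched in a few lines, purely for illustration: for each left–right landmark pair, local asymmetry can be expressed as the angle between the line connecting the pair and the perpendicular to the facial midline. The coordinates, midline convention, and `asymmetry_angle` helper below are hypothetical, not the authors' actual pipeline.

```python
import numpy as np

def asymmetry_angle(left, right, midline_dir):
    """Angle (degrees) between the left-right landmark vector and the
    perpendicular to the facial midline; 0 means the pair is level."""
    v = np.asarray(right, float) - np.asarray(left, float)
    m = np.asarray(midline_dir, float)
    perp = np.array([-m[1], m[0]])               # perpendicular to midline
    cos = np.dot(v, perp) / (np.linalg.norm(v) * np.linalg.norm(perp))
    return float(np.degrees(np.arccos(np.clip(abs(cos), 0.0, 1.0))))

# Symmetric pair across a vertical midline (direction (0, 1)):
print(asymmetry_angle((100, 200), (180, 200), (0, 1)))   # 0.0
# A drooping landmark on one side tilts the connecting line by ~14 deg:
print(round(asymmetry_angle((100, 200), (180, 220), (0, 1)), 1))
```

Aggregating such angles over many landmark pairs yields a full-face angle map of the kind the abstract describes.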

21 pages, 13964 KB  
Article
Towards Generalizable Deepfake Detection via Facial Landmark-Guided Convolution and Local Structure Awareness
by Hao Chen, Zhengxu Zhang, Qin Li and Chunhui Feng
Algorithms 2026, 19(4), 270; https://doi.org/10.3390/a19040270 - 1 Apr 2026
Abstract
As deepfakes become increasingly realistic, there is a growing need for robust and highly accurate facial forgery detection algorithms. Existing studies show that global feature modeling approaches (Transformer, VMamba) are effective in capturing long-range dependencies, yet they often lack sufficient sensitivity to localized facial tampering artifacts. Meanwhile, traditional convolutional methods excel at extracting local image features but struggle to incorporate prior knowledge about facial anatomy, resulting in limited representational capability. To address these limitations, this paper proposes LGMamba, a novel detection framework that integrates global modeling with facial guidance focused on key facial components and the fine-grained detail regions commonly manipulated in deepfakes. First, we introduce an innovative Landmark-Guided Convolution (LGConv), which adaptively adjusts convolutional sampling positions using facial landmark information. This allows the model to attend to forgery-prone facial regions, such as the eyes and mouth. Second, we design a parallel Facial Structure Awareness Block (FSAB) to operate alongside the VMamba-based visual State-Space Model. Equipped with a multi-stage residual design and a CBAM attention mechanism, FSAB enhances the model’s sensitivity to subtle facial artifacts, enabling joint exploitation of global semantic consistency and fine-grained forgery cues within a unified architecture. The proposed LGMamba achieves superior performance compared to existing mainstream approaches. In cross-dataset evaluations, it attains AUC scores of 92.34% on CD1 and 96.01% on CD2, outperforming all compared methods.

18 pages, 3009 KB  
Review
Research Trends, Hotspots and Future Perspectives of Geometric Morphometrics in Entomology: A Scientometric Review
by Yusha Tan, Zihui Zhao, Xiaojuan Yuan, Yuanqi Zhao, Di Su and Yuehua Song
Insects 2026, 17(3), 325; https://doi.org/10.3390/insects17030325 - 17 Mar 2026
Abstract
Geometric morphometrics is an important component of quantitative research on insect morphology, widely applied in taxonomy, intraspecific variation, and phylogenetic studies. However, systematic research in this field remains limited, with few comprehensive summaries of research trends, hotspots, and core theories. This study, based on scientometric methods, analyzed 1321 publications indexed in the Web of Science database up to 31 December 2025, and presents a meta-scientific review from a macro perspective, revealing the research trends, hotspots, and future directions in the field. The results show that: (1) annual publications exhibit overall growth, while research methods evolved from single landmark analysis to multimodal and interdisciplinary approaches; (2) scientists from Brazil, the USA, and France are major contributors, with studies spanning morphology, taxonomy, and ecology; (3) taxonomic studies centered on wing shape analysis constitute a major research hotspot, closely related to phylogeny, allometry, and sexual dimorphism; (4) highly co-cited studies provide the main theoretical and methodological foundations for the field. Future research, building on existing hotspots, will further integrate geometric morphometrics with genomics, ecological functional data, three-dimensional geometric morphometrics, and artificial intelligence-assisted approaches to advance integrative taxonomy within interdisciplinary and data-driven frameworks.
(This article belongs to the Section Other Arthropods and General Topics)
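The core operation behind landmark-based geometric morphometrics is Procrustes superimposition, which removes location, scale, and rotation so that only shape variation remains. A minimal sketch with invented wing landmarks (using SciPy's `procrustes`, an assumption about available tooling, not the reviewed studies' software):

```python
import numpy as np
from scipy.spatial import procrustes

# Five invented wing landmarks (x, y); the second configuration is a
# rotated, scaled, translated copy with a small true shape difference.
wing_a = np.array([[0, 0], [2, 0], [2, 1], [1, 2], [0, 1]], float)
theta = np.radians(30)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
wing_b = 1.5 * wing_a @ rot.T + np.array([5.0, 3.0])
wing_b[3] += [0.2, -0.1]   # shape change at one landmark only

# After superimposition, the residual "disparity" reflects shape alone.
_, _, disparity = procrustes(wing_a, wing_b)
print(f"shape disparity: {disparity:.4f}")
```

Location, scale, and rotation differences contribute nothing to the disparity, which is why the measure is used to compare shapes across specimens.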

15 pages, 1927 KB  
Article
Reliability of Automated Cephalometric Analysis: A Comparative Assessment of Stratification Strategies Based on Chronological Age Versus Dentition Stage
by Anh Thi Ngoc Do, Hung Trong Hoang, Hieu Ngoc Le and Thuy-Trang Thi Ho
Dent. J. 2026, 14(3), 167; https://doi.org/10.3390/dj14030167 - 12 Mar 2026
Abstract
Objectives: This study evaluated the accuracy of an artificial intelligence (AI)-based cephalometric software (WebCeph version 2.0.0) compared with manual tracing and determined whether stratifying patients by chronological age or dentition stage provides a more clinically relevant assessment of AI accuracy. Methods: Three hundred lateral cephalometric radiographs of Vietnamese patients were traced manually by an orthodontist (reference standard) and analyzed automatically by WebCeph. Intra-observer reliability was validated using ICC and Dahlberg’s error. We analyzed the data using three stratification strategies: (1) Overall; (2) Chronological age (<18, 18–25, >25 years); and (3) Dentition stage (<9 primary-early mixed, 9–12 late mixed, >12 permanent). The primary outcome was the absolute measurement difference (|Δ|), analyzed using the Kruskal–Wallis test and effect size (η²). Results: Overall, WebCeph showed high concordance with manual tracing (ICC > 0.80 for most parameters). Chronological age stratification showed weak associations with measurement error; differences between groups were largely non-significant (p > 0.05) with a small effect size (η² ≤ 0.015). In contrast, the dentition stage revealed significant performance disparities (p < 0.05). Notably, accuracy for the Mandibular Arc (ICC = 0.349) and Mandibular Plane Angle (p = 0.048) degraded significantly in the primary-early mixed group, a vulnerability obscured by chronological age-based stratification. Conclusions: Dentition stage is a more sensitive and biologically relevant predictor of AI accuracy than chronological age. While WebCeph is reliable for permanent dentition, accuracy degrades significantly in the primary-early mixed phase. Clinicians should prioritize manual verification of mandibular and incisor landmarks in mixed-dentition children.
(This article belongs to the Special Issue New Trends in Digital Dentistry)
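The stratified comparison can be outlined with simulated data. The gamma error distributions below are invented, and the η² formula is the common H-based estimator, η² = (H − k + 1)/(n − k), which may differ in detail from the study's computation.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Simulated |AI - manual| measurement errors per dentition group,
# with larger errors in the primary-early mixed group.
early_mixed = rng.gamma(shape=2.0, scale=1.5, size=40)
late_mixed  = rng.gamma(shape=2.0, scale=0.8, size=60)
permanent   = rng.gamma(shape=2.0, scale=0.6, size=120)

h_stat, p_value = kruskal(early_mixed, late_mixed, permanent)
n = len(early_mixed) + len(late_mixed) + len(permanent)
k = 3
eta_sq = (h_stat - k + 1) / (n - k)   # eta-squared effect size for K-W
print(f"H = {h_stat:.2f}, p = {p_value:.4g}, eta^2 = {eta_sq:.3f}")
```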

17 pages, 277 KB  
Review
Artificial Intelligence Methods in Cephalometric Image Analysis—A Systematic Narrative Review
by Katarzyna Zaborowicz, Maciej Zaborowicz, Katarzyna Cieślińska and Barbara Biedziak
J. Clin. Med. 2026, 15(5), 1920; https://doi.org/10.3390/jcm15051920 - 3 Mar 2026
Abstract
Background: The dynamic development of information technologies, particularly in the fields of computer image analysis and artificial intelligence (AI) algorithms, plays an increasingly important role in orthodontic diagnostics. Cephalometric images constitute a fundamental element in orthodontic treatment planning. They contain encoded information related to the assessment of craniofacial growth and development, which is the focus of algorithms employing machine learning and process automation. Objectives: The aim of this paper is to present the current state of knowledge regarding the application of artificial intelligence methods in cephalometric image analysis, with particular emphasis on studies published between 2020 and 2025 in the Scopus and Web of Science databases. Results: Twenty key studies were included. The most commonly used models were convolutional neural networks (CNN), You Only Look Once (YOLO), Bayesian convolutional neural networks (BCNN), artificial neural networks (ANN), stacked hourglass networks, and Deep Neural Patchworks (DNP). In landmark detection tasks, the average location errors ranged from 1 to 2 mm compared to expert annotations, remaining within clinically acceptable limits. YOLO- and CNN-based systems achieved accuracy comparable to that of experienced orthodontists, while BCNN models additionally provided uncertainty estimates that improved clinical interpretability. In classification tasks, artificial neural network (ANN) models assessing cervical vertebral maturity (CVM) achieved an accuracy of up to 95%. In screening studies prior to orthognathic surgery, a multilayer perceptron combined with a regional convolutional neural network achieved 96.3% agreement with expert decisions. Conclusions: AI-based tools provide clinically acceptable accuracy in cephalometric analysis, with landmark detection errors typically ranging from 1 to 2 mm compared to expert assessment. These systems improve repeatability and significantly reduce analysis time, especially when used in semi-automated workflows. AI-based assessment of cervical vertebral maturity and surgical eligibility shows high agreement with expert decisions, confirming their role as reliable tools to support clinical decision-making. Nevertheless, broader validation in different patient populations is necessary before routine clinical implementation.
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
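The 1–2 mm landmark errors quoted above are typically mean radial errors: the average Euclidean distance between predicted and expert landmarks, scaled from pixels to millimetres. A sketch (the coordinates and the pixel-to-mm factor are invented):

```python
import numpy as np

def mean_radial_error(pred, truth, px_to_mm=0.1):
    """Average Euclidean distance between predicted and expert landmark
    positions, converted to millimetres via a device-specific factor."""
    pred = np.asarray(pred, float)
    truth = np.asarray(truth, float)
    return float(np.linalg.norm(pred - truth, axis=1).mean() * px_to_mm)

pred  = [[102, 200], [350, 412], [280, 500]]
truth = [[100, 200], [355, 410], [278, 505]]
print(f"MRE = {mean_radial_error(pred, truth):.2f} mm")
```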
30 pages, 708 KB  
Article
AI-Assisted Sentencing Modeling Under Explainability Constraints: Framework Design and Judicial Applicability Analysis
by Jie Sun and Tao Shen
Information 2026, 17(3), 234; https://doi.org/10.3390/info17030234 - 1 Mar 2026
Abstract
The integration of artificial intelligence into criminal sentencing decisions represents one of the most consequential applications of algorithmic systems in contemporary governance. While AI-assisted risk assessment tools promise enhanced consistency and predictive accuracy, their deployment in judicial contexts raises profound concerns regarding transparency, due process, and fundamental rights. This paper proposes a comprehensive framework for AI-assisted sentencing modeling that embeds explainability as a foundational constraint rather than an afterthought. Drawing upon the landmark State v. Loomis decision, empirical analyses of the COMPAS algorithm, and emerging regulatory frameworks including the European Union Artificial Intelligence Act, we examine the tension between predictive performance and interpretive transparency. Our framework integrates a three-layer explanation architecture: inherent interpretability through generalized additive models (GA2Ms) providing transparent global structure, exact local feature attribution derived directly from the additive model decomposition without approximation, and counterfactual reasoning that identifies minimal input changes altering risk classifications. We demonstrate through rigorous experimental validation on the ProPublica COMPAS dataset (n = 6172) that explainability-constrained models achieve comparable predictive validity to opaque alternatives (AUC 0.71 versus 0.70–0.72 for black-box methods) while satisfying constitutional due process requirements and emerging regulatory mandates under the EU Artificial Intelligence Act. The impossibility theorems governing algorithmic fairness are examined in light of their implications for sentencing equity, and we propose that transparent model architectures enable targeted interventions unavailable when decision logic remains concealed. The paper concludes with policy guidance for jurisdictions seeking to implement AI-assisted sentencing systems that balance public safety objectives with procedural fairness and individual rights.
(This article belongs to the Special Issue Artificial Intelligence Technologies for Sustainable Development)
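For a purely additive score, counterfactuals of the kind the framework describes have a closed form. A sketch with a toy linear score (the weights, feature names, and case values are invented, not COMPAS or the paper's fitted GA2M):

```python
import numpy as np

# Toy additive risk score: score = w . x + b, "high risk" when score > 0.
features = ["prior_arrests", "age", "months_employed"]
w = np.array([0.30, -0.05, -0.02])
b = -1.0
x = np.array([8.0, 24.0, 6.0])     # one hypothetical case

score = float(w @ x + b)
print(f"score = {score:.2f} -> {'high' if score > 0 else 'low'} risk")

# Minimal single-feature change that moves the score to the boundary:
# w[i] * delta = -score  =>  delta = -score / w[i]
for name, wi in zip(features, w):
    print(f"change {name} by {-score / wi:+.2f} to reach the boundary")
```

With GA2Ms each contribution is a shape function rather than a single weight, but the same boundary search can be run term by term.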

24 pages, 4604 KB  
Article
Quantification of Craniofacial Growth Pattern Based on Deep Learning
by Ziyi Hu, Yuyanran Zhang, Ningtao Liu, Xin Gao, Ziyu Huang, Guanglin Wu, Zhiyong Zhang and Shuang Wang
Bioengineering 2026, 13(3), 277; https://doi.org/10.3390/bioengineering13030277 - 27 Feb 2026
Abstract
Background: Childhood and adolescence constitute a critical period for craniofacial growth. Understanding its developmental patterns is essential for clinical decision-making in orthodontics and maxillofacial surgery. Traditional cephalometric analysis relies on manual landmarking, which oversimplifies complex morphology and introduces subjectivity. Although deep learning, a key artificial intelligence (AI) technology, has demonstrated remarkable performance in image analysis and classification, most methods still depend on manual annotations during training, perpetuating subjectivity and limiting model generalizability and robustness on large datasets. This hinders the development of objective, comprehensive methods to quantify craniofacial growth that account for its multi-tissue complexity. Methods: To address these limitations, this study developed an end-to-end deep learning framework based on lateral cephalometric radiographs from 41,625 individuals aged 4–18 years. Without relying on manual annotations, the model is designed to autonomously extract dynamic imaging features associated with continuous age intervals in craniofacial development and further discern features related to sexual dimorphism. Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to visualize the learned features, generating population-averaged saliency maps that highlight age-related and sex-related patterns. Furthermore, we introduced two novel quantitative metrics, the Age-related Saliency Index (ASI) and the Sex-related Saliency Index (SSI), to evaluate the significance of developmental and dimorphic characteristics in key craniofacial regions. Results: Age-related saliency maps extended the focus from external contours to internal anatomical details of the bones, intuitively visualizing the relative importance of multiple bone regions during dynamic development, with the ASI providing a quantitative prioritization of these regions. The Sex-related Saliency Index (SSI) quantified the dynamic evolution of sexual dimorphism, demonstrating that early-stage differences were widely distributed across cranial bones and gradually became concentrated in the mandibular region by adulthood. Conclusions: This study established an end-to-end deep learning framework for analyzing large-scale lateral cephalometric radiographs. By generating age- and sex-related average saliency maps and their corresponding quantitative indices, we visualized and quantified the spatiotemporal growth dynamics and sexual dimorphism across distinct craniofacial skeletal regions throughout development. These findings not only validate established developmental theories but also provide novel insights into the coordinated growth patterns of craniofacial bones and sex-specific radiological characteristics, offering clinicians objective quantitative references for assessing developmental stages and guiding the timing of interventions targeting specific craniofacial regions.
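A region-wise saliency index can be imagined as region-normalized attention. The ASI/SSI definitions are not spelled out in the abstract, so the ratio below is an illustrative assumption: mean saliency inside an anatomical region divided by the whole-map mean.

```python
import numpy as np

def region_saliency_index(saliency, mask):
    """Mean saliency inside a region divided by the whole-map mean;
    values above 1 mark regions the network attends to disproportionately."""
    saliency = np.asarray(saliency, float)
    mask = np.asarray(mask, bool)
    return float(saliency[mask].mean() / saliency.mean())

rng = np.random.default_rng(1)
sal = rng.random((64, 64)) * 0.2          # diffuse background saliency
sal[40:60, 20:44] += 0.8                  # hot spot, e.g. mandibular region
mandible = np.zeros((64, 64), bool)
mandible[40:60, 20:44] = True
print(f"mandible index = {region_saliency_index(sal, mandible):.2f}")
```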

13 pages, 1494 KB  
Article
Development and Clinical Validation of an Artificial Intelligence-Based Automated Visual Acuity Testing System
by Kelvin Zhenghao Li, Hnin Hnin Oo, Kenneth Chee Wei Liang, Najah Ismail, Jasmine Ling Ling Chua, Jackson Jie Sheng Chng, Yang Wu, Daryl Wei Ren Wong, Sumaya Rani Khan, Boon Peng Yap, Rong Tong, Choon Meng Kiew, Yufei Huang, Chun Hau Chua, Alva Khai Shin Lim and Xiuyi Fan
Life 2026, 16(2), 357; https://doi.org/10.3390/life16020357 - 20 Feb 2026
Abstract
Background: To develop and validate an automated visual acuity (VA) testing system integrating artificial intelligence (AI)–driven speech and image recognition technologies, enabling self-administered, clinic-based VA assessment; Methods: The system incorporated a fine-tuned Whisper speech-recognition model with Silero voice activity detection and pose estimation through facial landmark and ArUco marker detection. A state-driven interface guided users through sequential testing with and without a pinhole. Speech recognition was enhanced using a local Singaporean English dataset. Laboratory validation assessed speech and pose recognition performance, while clinical validation compared automated and manual VA testing at a tertiary eye clinic; Results: The fine-tuned model reduced word error rates from 17.83% to 9.81% for letters and 2.76% to 1.97% for numbers. Pose detection accurately identified valid occluder states. Among 72 participants (144 eyes), automated unaided VA showed good agreement with manual VA (ICC = 0.77, 95% CI 0.62–0.85), while pinhole VA demonstrated moderate agreement (ICC = 0.63, 95% CI 0.25–0.83). Automated testing took longer (132.1 ± 47.5 s vs. 97.1 ± 47.8 s; p < 0.001), but user experience remained positive (mean Likert scale score 4.3 ± 0.8); Conclusions: The AI-based automated VA system delivered accurate, reliable, and user-friendly performance, supporting its feasibility for clinical implementation.
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)
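The word error rates reported for the fine-tuned Whisper model are ordinary word-level edit distances. A self-contained reference implementation (the example strings are invented Snellen-style lines):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with the classic dynamic-programming edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)

# One mis-heard letter among five = 20% WER:
print(word_error_rate("C D H K O", "C D H K Q"))  # 0.2
```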

13 pages, 256 KB  
Review
AI in High-Frequency Micro-Ultrasound: Advancing Prostate Imaging from Segmentation to Cancer Detection
by Ludovica Cella, Marco Paciotti, Pier Paolo Avolio, Vittorio Fasulo, Andrea Piccolini, Rebecca Canneto, Giacomo Cavadini, Luca Di Stefano, Alberto Saita, Paolo Casale, Massimo Lazzeri, Nicolò Maria Buffi and Giovanni Lughezzani
Cancers 2026, 18(4), 665; https://doi.org/10.3390/cancers18040665 - 18 Feb 2026
Abstract
Background/Objective: High-frequency micro-ultrasound (micro-US) offers real-time, high-resolution imaging for prostate cancer. Although artificial intelligence (AI) has shown potential in enhancing micro-US interpretation, a comprehensive review of this emerging field is currently missing. This review synthesizes current evidence on AI applied to ExactVu 29 MHz micro-US for prostate cancer. Methods: PubMed/MEDLINE, Embase, Scopus, Web of Science and the Cochrane Library were searched up to December 2025. Studies were included if they applied machine learning or deep learning directly to 29 MHz micro-US data and reported quantitative performance metrics. Results: Ten studies met the inclusion criteria: six on prostate cancer detection, three on prostate segmentation and one on micro-US–histopathology registration. Detection models ranged from classical quantitative ultrasound machine learning to deep architectures using self-supervision, transformers, multiple-instance learning, ensemble calibration and 3D segmentation-based pipelines. Among core-level models for clinically significant cancer, area under the receiver operating characteristic curve (AUROC) values clustered around 0.76–0.81; one lesion-level framework reported an AUROC of 0.92, though at a non-comparable analytical unit. Segmentation studies achieved accurate prostate delineation (Dice similarity coefficient ≈ 0.94), and a single study demonstrated high-precision 3D registration to whole-mount histopathology (Dice similarity coefficient 0.97 and landmark error < 3 mm). All studies evaluated AI on previously acquired data, without real-time clinical implementation. Conclusions: AI for micro-US shows promising and reproducible early results across detection, segmentation and registration, but evidence is still limited. In view of the potential of AI to optimize micro-US utilization and its related advantages, additional efforts are warranted to achieve clinical adoption.
(This article belongs to the Special Issue Image Assisted High Precision Radiation Oncology)
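The Dice similarity coefficient cited by the segmentation and registration studies is simply 2|A∩B|/(|A|+|B|) over binary masks. A minimal check with invented masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pred = np.zeros((10, 10), bool)
pred[2:8, 2:8] = True            # 36-pixel predicted mask
truth = np.zeros((10, 10), bool)
truth[3:9, 2:8] = True           # ground truth shifted one row down
print(round(dice(pred, truth), 3))   # 0.833
```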
16 pages, 9112 KB  
Review
Lateral Cephalometric Radiography: Principles, Common Positioning Errors, and AI-Driven Quality Control
by Rossana Izzetti, Maria Pisano, Chiara Cinquini, Lorenzo Cinci, Antonio Barone and Cosimo Nardi
Diagnostics 2026, 16(4), 543; https://doi.org/10.3390/diagnostics16040543 - 12 Feb 2026
Abstract
This narrative review provides a contemporary synthesis of lateral cephalometric radiography (LCR), addressing both its foundational principles and the impact of technological integration, with a focus on enhancing diagnostic reliability. A structured literature search (PubMed, up to September 2025) was conducted around five domains: LCR’s diagnostic role, acquisition methods, positioning errors, comparisons with cone-beam computed tomography (CBCT), and Artificial Intelligence (AI)-driven quality control. Precise patient positioning—maintaining symmetry and a horizontal Frankfort plane—is paramount, as common errors (tilting, rotation, nodding) introduce quantifiable inaccuracies in key measurements. While digital innovation, particularly deep learning models for automated landmark detection and error flagging, improves workflow consistency, current AI tools require validation and human oversight to manage limitations in generalizability. When contextualized against three-dimensional imaging, LCR maintains a favorable balance of diagnostic utility and lower radiation dose, supporting its selective, indication-based use in contemporary practice. Ultimately, this review suggests that adherence to a meticulous acquisition technique remains the cornerstone of reliable LCR analysis, even as AI and digital tools evolve to augment the clinician’s role.

24 pages, 596 KB  
Review
Materials and Techniques for Splinting Scan Bodies: A Scoping Review
by Aspasia Pachiou, Ioulianos Rachiotis, Alexis Ioannidis, Pune N. Paqué, Ronald E. Jung and Christos Rahiotis
Materials 2026, 19(4), 664; https://doi.org/10.3390/ma19040664 - 9 Feb 2026
Abstract
Background: Digital implant impressions using intraoral scanners are increasingly adopted; however, their accuracy remains challenging in complete-arch and extended edentulous scenarios due to limited anatomical reference points and cumulative stitching errors. Various splinting techniques, scan-body modifications, and auxiliary geometric devices have been proposed to enhance digital accuracy, yet the available evidence is highly heterogeneous and lacks comprehensive synthesis. Methods: This scoping review was conducted according to PRISMA-ScR guidelines. A systematic search of PubMed/MEDLINE, Embase, Scopus, and Web of Science databases identified studies evaluating materials, designs, or techniques intended to splint, stabilize, or geometrically augment intraoral scan bodies in digital implant workflows. In vitro, clinical, and mixed-design studies were included. Data were extracted descriptively and synthesized narratively. Results: Seventy-three studies met the inclusion criteria, the majority of which were in vitro investigations focused on fully edentulous arches. Splinting strategies included direct resin-based connections, rigid or semi-rigid auxiliary geometric devices, modified scan bodies with extensional geometries, and artificial landmarks. Most studies reported improved trueness, precision, or scanning efficiency when rigid or geometrically enriched devices were used, particularly in long-span or angulated implant configurations. However, flexible or optically interfering splints occasionally reduced accuracy, and outcomes were strongly scanner-dependent. Conclusions: Splinting and auxiliary scanning strategies generally enhance the accuracy of complete-arch digital implant impressions, especially when rigid, well-engineered, or geometrically complex designs are employed. Modified scan bodies and calibrated auxiliary devices appear particularly promising, while flexible splints may be counterproductive. Standardized protocols and further in vivo validation are required to optimize digital implant workflows.
(This article belongs to the Special Issue Advanced Dental Materials: From Design to Application, Third Edition)
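Trueness and precision, the two accuracy components these studies report, are conventionally (per ISO 5725) the deviation from a reference and the scatter among repeated measurements, respectively. A simulated sketch (implant coordinates and noise level invented):

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented implant-platform coordinates (mm): a reference scan and five
# repeated digital scans with small random deviations.
reference = np.array([[0, 0, 0], [20, 1, 0.5], [40, 0, 1], [60, 1.5, 0.5]])
scans = reference + rng.normal(scale=0.05, size=(5,) + reference.shape)

# Trueness: mean deviation of the scans from the reference geometry.
trueness = float(np.linalg.norm(scans - reference, axis=2).mean())
# Precision: mean deviation of the scans from their own mean scan.
precision = float(np.linalg.norm(scans - scans.mean(axis=0), axis=2).mean())
print(f"trueness = {trueness * 1000:.0f} um, precision = {precision * 1000:.0f} um")
```

A scan can be precise (repeatable) yet untrue (systematically offset), which is why the two are reported separately.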

21 pages, 399 KB  
Review
Melanoma Beyond the Microscope in the Era of AI and Integrated Diagnostics
by Serra Aksoy, Pinar Demircioglu and Ismail Bogrekci
Dermato 2026, 6(1), 6; https://doi.org/10.3390/dermato6010006 - 3 Feb 2026
Abstract
Background/Objectives: Melanoma remains one of the most malignant types of skin cancer with rising incidence numbers, despite the progress made in the prevention and management of the disease. Recent technological advancements, such as developments in the field of molecular biology, imaging, and artificial intelligence (AI), have led to a paradigm shift in the diagnosis, assessment, and management of melanoma. The current review aims to integrate current research on melanoma, moving beyond the boundaries of conventional histological analysis. Methods: This is a critical appraisal narrative review that focuses on recent studies in the areas of translation research and digital health with regard to melanoma. This research particularly targeted recent studies within the last five years, with landmark studies implicated when appropriate. Evidence was synthesized within the major categories that include epidemiology, early diagnosis, histopathology, predictive biomarkers, genetic/epigenetic changes, AI-assisted diagnostic platforms, and novel therapeutic platforms and targets. Results: Early detection techniques, innovative imaging, and biomarker-guided risk adjustment can improve diagnostic accuracy and prognostic stratification. The potential of AI in dermoscopy, digital pathology, and decision analytical systems is evident, although validation, bias, and integration issues need to be addressed. Advances in immunotherapy, targeted therapies, and novel molecular/immunological targets are expanding and facilitating integrated and personalized management. Conclusions: There is a trend in melanoma research to shift towards an integrated diagnostic platform that involves the use of AI, molecular characterization, and clinical inputs to enable more accurate and personalized diagnoses. Realizing this potential will require validation, collaboration, and attention to ethics and implementation.
(This article belongs to the Collection Artificial Intelligence in Dermatology)

15 pages, 4459 KB  
Article
Automated Custom Sunglasses Frame Design Using Artificial Intelligence and Computational Design
by Prodromos Minaoglou, Anastasios Tzotzis, Klodian Dhoska and Panagiotis Kyratsis
Machines 2026, 14(1), 109; https://doi.org/10.3390/machines14010109 - 17 Jan 2026
Viewed by 962
Abstract
Mass production in product design typically relies on standardized geometries and dimensions to accommodate a broad user population. However, when products are required to interface directly with the human body, such generalized design approaches often result in inadequate fit and reduced user comfort. This limitation highlights the necessity of fully personalized design methodologies based on individual anthropometric characteristics. This paper presents a novel application that automates the design of custom-fit sunglasses through the integration of Artificial Intelligence (AI) and Computational Design. The system is implemented using both textual (Python™ version 3.10.11) and visual (Grasshopper 3D™ version 1.0.0007) programming environments. The proposed workflow consists of the following four main stages: (a) acquisition of user facial images, (b) AI-based detection of facial landmarks, (c) three-dimensional reconstruction of facial features via an optimization process, and (d) generation of a personalized sunglass frame, exported as a three-dimensional model. The application demonstrates a robust performance across a diverse set of test images, consistently generating geometries that conformed closely to each user’s facial morphology. The accurate recognition of facial features enables the successful generation of customized sunglass frame designs. The system is further validated through the fabrication of a physical prototype using additive manufacturing, which confirms both the manufacturability and the fit of the final design. Overall, the results indicate that the combined use of AI-driven feature extraction and parametric Computational Design constitutes a powerful framework for the automated development of personalized wearable products. Full article
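The landmark-to-geometry step of a workflow like the one above can be illustrated with a minimal sketch: derive basic frame dimensions from detected 2D facial landmarks and a known image scale. The landmark names, coordinate values, and the `px_per_mm` scale are illustrative assumptions, not the paper's actual pipeline (which reconstructs 3D features via optimization in Python and Grasshopper 3D).

```python
import math

# Hypothetical 2D facial landmarks (pixel coordinates) such as an
# AI detector might return; names and values are illustrative only.
landmarks = {
    "left_eye_outer":  (120.0, 200.0),
    "right_eye_outer": (280.0, 200.0),
    "nose_bridge":     (200.0, 205.0),
    "left_temple":     (100.0, 210.0),
    "right_temple":    (300.0, 210.0),
}

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def frame_parameters(lm, px_per_mm):
    """Derive basic frame dimensions (in mm) from detected landmarks.

    px_per_mm is the image scale, which in practice would come from a
    reference object or a 3D reconstruction step.
    """
    temple_width_px = dist(lm["left_temple"], lm["right_temple"])
    eye_span_px = dist(lm["left_eye_outer"], lm["right_eye_outer"])
    return {
        "frame_width_mm": temple_width_px / px_per_mm,
        "lens_span_mm": eye_span_px / px_per_mm,
    }

params = frame_parameters(landmarks, px_per_mm=2.0)
print(params)
```

In a full system these per-user dimensions would drive a parametric CAD model; here they simply show how anthropometric measurements fall out of landmark coordinates.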

27 pages, 80350 KB  
Article
Pose-Based Static Sign Language Recognition with Deep Learning for Turkish, Arabic, and American Sign Languages
by Rıdvan Yayla, Hakan Üçgün and Mahmud Abbas
Sensors 2026, 26(2), 524; https://doi.org/10.3390/s26020524 - 13 Jan 2026
Viewed by 919
Abstract
Advancements in artificial intelligence have significantly enhanced communication for individuals with hearing impairments. This study presents a robust cross-lingual Sign Language Recognition (SLR) framework for Turkish, American English, and Arabic sign languages. The system utilizes the lightweight MediaPipe library for efficient hand landmark extraction, ensuring stable and consistent feature representation across diverse linguistic contexts. Datasets were meticulously constructed from nine public-domain sources (four Arabic, three American, and two Turkish). The final training data comprises curated image datasets, with frames for each language carefully selected from varying angles and distances to ensure high diversity. A comprehensive comparative evaluation was conducted across three state-of-the-art deep learning architectures—ConvNeXt (CNN-based), Swin Transformer (ViT-based), and Vision Mamba (SSM-based)—all applied to identical feature sets. The evaluation demonstrates the superior performance of contemporary vision Transformers and state space models in capturing subtle spatial cues across diverse sign languages. Our approach provides a comparative analysis of model generalization capabilities across three distinct sign languages, offering valuable insights for model selection in pose-based SLR systems. Full article
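A common way to obtain the "stable and consistent feature representation" described above is to normalize extracted hand landmarks for translation and scale before classification. The sketch below shows one such scheme on a toy point set; this particular normalization (wrist-relative, scaled by maximum wrist distance) is an illustrative assumption, not necessarily the paper's exact preprocessing.

```python
import math

def normalize_landmarks(points):
    """Translate landmarks so the wrist (index 0) is the origin and
    scale by the maximum distance from the wrist, yielding a
    translation- and scale-invariant feature representation.

    `points` is a list of (x, y) tuples, e.g. the 21 hand landmarks
    a detector such as MediaPipe returns per hand.
    """
    wx, wy = points[0]
    shifted = [(x - wx, y - wy) for x, y in points]
    scale = max(math.hypot(x, y) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

# A toy 4-point "hand": wrist plus three joints.
pts = [(10.0, 10.0), (12.0, 10.0), (10.0, 14.0), (13.0, 14.0)]
norm = normalize_landmarks(pts)
print(norm)
```

Because the normalized coordinates no longer depend on where the hand appears in the frame or how close it is to the camera, the same feature vector can be fed to any of the compared backbones (ConvNeXt, Swin Transformer, Vision Mamba).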
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))

25 pages, 2813 KB  
Review
PSMA-Based Radiopharmaceuticals in Prostate Cancer Theranostics: Imaging, Clinical Advances, and Future Directions
by Ali Cahid Civelek
Cancers 2026, 18(2), 234; https://doi.org/10.3390/cancers18020234 - 12 Jan 2026
Cited by 1 | Viewed by 1738
Abstract
Prostate cancer remains one of the most common malignancies in men worldwide, with incidence and mortality steadily increasing across diverse populations. While early detection and radical prostatectomy can achieve durable control in a subset of patients, approximately 40% of men will ultimately experience biochemical recurrence, often in the absence of clinically detectable disease. Conventional imaging approaches—CT, MRI, and bone scintigraphy—have limited sensitivity for early relapses, frequently leading to delayed diagnosis and suboptimal treatment planning. The discovery of prostate-specific membrane antigen (PSMA) in 1987 and its subsequent clinical translation into positron emission tomography (PET) imaging with [68Ga]Ga-PSMA-11 in 2012, followed by U.S. FDA approval in 2020, has transformed the landscape of prostate cancer imaging. PSMA PET has demonstrated superior accuracy over conventional imaging, as highlighted in the landmark proPSMA trial, and now serves as the foundation for theranostic approaches that integrate diagnostic imaging with targeted radioligand therapy. The clinical approval of [177Lu]Lu-PSMA-617 (Pluvicto®; lutetium Lu 177 vipivotide tetraxetan; Advanced Accelerator Applications USA, Inc., a Novartis company) has established targeted radioligand therapy as a viable option for men with metastatic castration-resistant prostate cancer, extending survival in patients with limited alternatives. Emerging strategies, including next-generation ligands with improved tumor uptake and altered clearance pathways, as well as the integration of artificial intelligence for imaging quantification, are poised to further refine patient selection, dosimetry, and treatment outcomes. This review highlights the evolution of PSMA-based imaging and therapy, discusses current clinical applications and limitations, and outlines future directions for optimizing theranostic strategies in prostate cancer care. Full article
(This article belongs to the Section Cancer Therapy)
