Search Results (403)

Search Parameters:
Keywords = computer-assisted image processing

26 pages, 2167 KB  
Article
AI-Powered Service Robots for Smart Airport Operations: Real-World Implementation and Performance Analysis in Passenger Flow Management
by Eleni Giannopoulou, Panagiotis Demestichas, Panagiotis Katrakazas, Sophia Saliverou and Nikos Papagiannopoulos
Sensors 2026, 26(3), 806; https://doi.org/10.3390/s26030806 - 25 Jan 2026
Abstract
The proliferation of air travel demand necessitates innovative solutions to enhance passenger experience while optimizing airport operational efficiency. This paper presents the pilot-scale implementation and evaluation of an AI-powered service robot ecosystem integrated with thermal cameras and 5G wireless connectivity at Athens International Airport. The system addresses critical challenges in passenger flow management through real-time crowd analytics, congestion detection, and personalized robotic assistance. Eight strategically deployed thermal cameras monitor passenger movements across check-in areas, security zones, and departure entrances while employing privacy-by-design principles through thermal imaging technology that reduces personally identifiable information capture. A humanoid service robot, equipped with Robot Operating System navigation capabilities and natural language processing interfaces, provides real-time passenger assistance including flight information, wayfinding guidance, and congestion avoidance recommendations. The wi.move platform serves as the central intelligence hub, processing video streams through advanced computer vision algorithms to generate actionable insights including passenger count statistics, flow rate analysis, queue length monitoring, and anomaly detection. Formal trial evaluation conducted on 10 April 2025, with extended operational monitoring from April to June 2025, demonstrated strong technical performance, with an application round-trip latency of 42.9 milliseconds, service reliability and availability of one hundred percent, and passenger satisfaction scores exceeding 4.3/5 across all evaluated dimensions. Results indicate promising potential for scalable deployment across major international airports, with identified requirements for sixth-generation network capabilities to support enhanced multi-robot coordination and advanced predictive analytics functionalities in future implementations.
(This article belongs to the Section Sensors and Robotics)

21 pages, 1482 KB  
Article
Advancing a Sustainable Human–AI Collaboration Ecosystem in Interface Design: A User-Centered Analysis of Interaction Processes and Design Opportunities Based on Participants from China
by Chang Xiong, Guangliang Sang and Ken Nah
Sustainability 2026, 18(2), 1139; https://doi.org/10.3390/su18021139 - 22 Jan 2026
Abstract
The application of Generative Artificial Intelligence (GenAI)—defined as a class of AI systems capable of autonomously generating new content such as images, texts, and design solutions based on learned data patterns—has become increasingly widespread in creative design. By supporting ideation, rapid trial-and-error, and data-driven decision-making, GenAI enables designers to explore design alternatives more efficiently and enhances human–computer interaction experiences. In design practice, GenAI functions not only as a productivity-enhancing tool but also as a collaborative partner that assists users in visual exploration, concept refinement, and iterative development. However, users still face a certain learning curve before effectively adopting these technologies. Within the framework of human-centered artificial intelligence, contemporary design practices place greater emphasis on inclusivity across diverse user groups and on enabling intuitive “what-you-think-is-what-you-get” interaction experiences. From a sustainable design perspective, GenAI’s capabilities in digital simulation, rapid iteration, and automated feedback contribute to more efficient design workflows, reduced collaboration costs, and broader access to creative participation for users with varying levels of expertise. These characteristics play a crucial role in enhancing the accessibility of design resources and supporting the long-term sustainability of creative processes. Focusing on the context of China’s digital design industry, this study investigates the application of GenAI in design workflows through an empirical case study of Zhitu AI, a generative design tool developed by Beijing Didi Infinity Technology Development Co., Ltd. The study conducts a literature review to outline the role of GenAI in visual design processes and employs observation-based experiments and semi-structured interviews with users of varying levels of design expertise. The findings reveal key pain points across stages such as prompt formulation, secondary editing, and asset generation. Drawing on the Kano model, the study further identifies potential design opportunities and discusses their value in improving efficiency, supporting non-expert users, and promoting more sustainable and inclusive design practices.
(This article belongs to the Section Sustainable Products and Services)
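
For readers unfamiliar with the Kano model cited in this abstract, the sketch below encodes the common textbook Kano evaluation table, which turns paired "feature present"/"feature absent" survey answers into categories; the mapping is the generic one, not this study's instrument.

```python
# Hedged sketch: the generic textbook Kano evaluation table, mapping paired
# survey answers ("feature present" vs. "feature absent") to categories.
# Valid answers: "like", "expect", "neutral", "tolerate", "dislike".
def kano_category(functional, dysfunctional):
    if functional == "like":
        if dysfunctional == "like":
            return "questionable"
        return "one-dimensional" if dysfunctional == "dislike" else "attractive"
    if functional == "dislike":
        return "questionable" if dysfunctional == "dislike" else "reverse"
    if dysfunctional == "like":
        return "reverse"
    if dysfunctional == "dislike":
        return "must-be"
    return "indifferent"

print(kano_category("like", "dislike"))     # one-dimensional
print(kano_category("neutral", "dislike"))  # must-be
print(kano_category("like", "neutral"))     # attractive
```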

27 pages, 3106 KB  
Article
An Adaptive Hybrid Metaheuristic Algorithm for Lung Cancer in Pathological Image Segmentation
by Muhammed Faruk Şahin and Ferzat Anka
Diagnostics 2026, 16(1), 84; https://doi.org/10.3390/diagnostics16010084 - 26 Dec 2025
Abstract
Background/Objectives: Histopathological images are fundamental for the morphological diagnosis and subtyping of lung cancer. However, their high resolution, color diversity, and structural complexity make automated segmentation highly challenging. This study aims to address these challenges by developing a novel hybrid metaheuristic approach for multilevel image thresholding to enhance segmentation accuracy and computational efficiency. Methods: An adaptive hybrid metaheuristic algorithm, termed SCSOWOA, is proposed by integrating the Sand Cat Swarm Optimization (SCSO) algorithm with the Whale Optimization Algorithm (WOA). The algorithm combines the exploration capacity of SCSO with the exploitation strength of WOA in a sequential and adaptive manner. The model was evaluated on histopathological images of lung cancer from the LC25000 dataset with threshold levels ranging from 2 to 12, using PSNR, SSIM, and FSIM as performance metrics. Results: The proposed algorithm achieved stable and high-quality segmentation results, with average values of 27.9453 dB in PSNR, 0.8048 in SSIM, and 0.8361 in FSIM. At the threshold level of T = 12, SCSOWOA obtained the highest performance, with SSIM and FSIM scores of 0.9340 and 0.9542, respectively. Furthermore, it demonstrated the lowest average execution time of 1.3221 s, offering up to a 40% improvement in computational efficiency compared with other metaheuristic methods. Conclusions: The SCSOWOA algorithm effectively balances exploration and exploitation processes, providing high-accuracy, low-variance, and computationally efficient segmentation. These findings highlight its potential as a robust and practical solution for AI-assisted histopathological image analysis and lung cancer diagnosis systems.
(This article belongs to the Special Issue Advances in Lung Cancer Diagnosis)
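
To make the evaluation protocol concrete, here is a minimal sketch of scoring a multilevel-thresholded image with PSNR and SSIM, as the abstract describes; the test image and threshold values are placeholders, not outputs of SCSOWOA.

```python
# Hedged sketch: evaluating a multilevel-thresholded image with PSNR/SSIM,
# the fitness-style scoring used for candidate threshold sets.
import numpy as np
from skimage import data
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def apply_thresholds(img, thresholds):
    """Quantize a grayscale image: each intensity band between consecutive
    thresholds is replaced by that band's mean intensity."""
    edges = [0, *sorted(thresholds), 256]
    out = np.zeros_like(img)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (img >= lo) & (img < hi)
        if band.any():
            out[band] = int(img[band].mean())
    return out

img = data.camera()                      # stand-in for a histopathology slice
seg = apply_thresholds(img, [60, 120, 180])
print("PSNR:", peak_signal_noise_ratio(img, seg, data_range=255))
print("SSIM:", structural_similarity(img, seg, data_range=255))
```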

30 pages, 1920 KB  
Article
Handwriting-Based Mathematical Assistant Software System Using Computer Vision Methods
by Ahmet Alkan and Gozde Yolcu Oztel
Mathematics 2025, 13(24), 4001; https://doi.org/10.3390/math13244001 - 15 Dec 2025
Abstract
Mathematics is a discipline that forms the foundation of many fields and should be learned gradually, starting from early childhood. However, some subjects can be difficult to learn due to their abstract nature, the need for attention and planning, and math anxiety. Therefore, in this study, a system that contributes to mathematics teaching using computer vision approaches has been developed. In the proposed system, users can write operations directly in their own handwriting on the system interface, learn their results, or test the accuracy of their answers. They can also test themselves with random questions generated by the system. In addition, visual graph generation has been added to the system, ensuring that education is supported with visuals and made enjoyable. Besides the character recognition test, which is applied on public datasets, the system was also tested with images obtained from 22 different users, and successful results were observed. The study utilizes convolutional neural networks (CNNs) for handwritten character detection and custom image processing algorithms to organize the recognized characters into equations. The system can work with equations that include single and multiple unknowns, trigonometric functions, derivatives, integrals, etc. Operations can be performed, and successful results can be achieved even for users who write in slanted (italic) handwriting. Furthermore, equations written within each closed figure on the same page are evaluated locally. This allows multiple problems to be solved on the same page, providing a user-friendly approach. The system can be an assistant for improving performance in mathematics education.
(This article belongs to the Section E1: Mathematics and Computer Science)
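
As a rough illustration of the handwritten-character recognition component, the sketch below defines a small CNN classifier; the layer sizes, 28x28 inputs, and class count are assumptions, not the authors' architecture.

```python
# Hedged sketch: a minimal CNN of the kind used for handwritten character
# recognition; the 47-class head (EMNIST-style) is an assumption.
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, n_classes=47):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 7 * 7, n_classes)

    def forward(self, x):                 # x: (N, 1, 28, 28)
        return self.classifier(self.features(x).flatten(1))

logits = CharCNN()(torch.randn(8, 1, 28, 28))
print(logits.shape)                       # torch.Size([8, 47])
```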

13 pages, 2512 KB  
Article
AI-Based Detection of Dental Features on CBCT: Dual-Layer Reliability Analysis
by Natalia Kazimierczak, Nora Sultani, Natalia Chwarścianek, Szymon Krzykowski, Zbigniew Serafin, Aleksandra Ciszewska and Wojciech Kazimierczak
Diagnostics 2025, 15(24), 3207; https://doi.org/10.3390/diagnostics15243207 - 15 Dec 2025
Abstract
Background/Objectives: Artificial intelligence (AI) systems may enhance diagnostic accuracy in cone-beam computed tomography (CBCT) analysis. However, most validations focus on isolated tooth-level tasks rather than clinically meaningful full-mouth assessment outcomes. This study aimed to evaluate the diagnostic accuracy of a commercial AI platform for detecting dental treatment features on CBCT images at both tooth and full-scan levels. Methods: In this retrospective single-center study, 147 CBCT scans (4704 tooth positions) were analyzed. Two experienced readers annotated treatment features (missing teeth, fillings, endodontic treatments, crowns, pontics, orthodontic appliances, implants), and consensus served as the reference. Anonymized datasets were processed by a cloud-based AI system (Diagnocat Inc., San Francisco, CA, USA). Diagnostic metrics—sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1-score—were calculated with 95% patient-clustered bootstrap confidence intervals. A “Perfect Agreement” criterion defined full-scan level success as an entirely error-free full-mouth report. Results: Tooth-level AI performance was excellent, with accuracy exceeding 99% for most categories. Sensitivity was highest for missing teeth (99.3%) and endodontic treatments (99.0%). Specificity and NPV exceeded 98.5% and 99.7%, respectively. Full-scan level Perfect Agreement was achieved in 82.3% (95% CI: 76.2–88.4%), with errors concentrated in teeth presenting multiple co-existing findings. Conclusions: The evaluated AI platform demonstrates near-perfect accuracy in detecting isolated dental features but moderate reliability in generating complete full-mouth reports. It functions best as an assistive diagnostic tool, not as an autonomous system.
(This article belongs to the Special Issue Medical Imaging Diagnosis of Oral and Maxillofacial Diseases)
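
The tooth-level metrics quoted above follow the standard confusion-matrix definitions; a minimal sketch, with made-up counts:

```python
# Hedged sketch: standard diagnostic metrics from confusion counts.
# The counts below are illustrative, not the study's data.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)          # sensitivity (recall)
    spec = tn / (tn + fp)          # specificity
    ppv  = tp / (tp + fp)          # positive predictive value (precision)
    npv  = tn / (tn + fn)          # negative predictive value
    f1   = 2 * ppv * sens / (ppv + sens)
    return {"sensitivity": sens, "specificity": spec,
            "PPV": ppv, "NPV": npv, "F1": f1}

print(diagnostic_metrics(tp=140, fp=3, fn=1, tn=4560))
```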

22 pages, 13391 KB  
Article
LSCNet: A Lightweight Shallow Feature Cascade Network for Small Object Detection in UAV Imagery
by Zening Wang and Amiya Nayak
Future Internet 2025, 17(12), 568; https://doi.org/10.3390/fi17120568 - 11 Dec 2025
Abstract
Unmanned Aerial Vehicles have become essential mobile sensing nodes in Internet of Things ecosystems, with applications ranging from disaster monitoring to traffic surveillance. However, wireless bandwidth is severely strained when sending enormous amounts of high-resolution aerial video to ground stations. To address these communication limitations, the current research paradigm is shifting toward UAV-assisted edge computing, where visual data is processed locally to extract semantic information for transmitting results to the ground or making autonomous decisions. Although deep detectors dominate general object detection, their heavy computational burden makes it difficult to meet the stringent efficiency requirements of airborne edge platforms. Consequently, although recently proposed single-stage models like YOLOv10 can quickly detect objects in natural images, their over-dependence on deep features wastes computational resources, as shallow information is crucial for small object detection in aerial scenes. In this paper, we propose LSCNet (Lightweight Shallow Feature Cascade Network), a novel lightweight architecture designed for UAV edge computing to handle aerial object detection tasks. Our lightweight Cascade Network focuses on feature extraction and shallow feature enhancement. LSCNet achieves 44.6% mAP50 on VisDrone2019 and 36.1% mAP50 on UAVDT, while decreasing parameters by 33% to 1.48 M. These results not only show how effective LSCNet is for real-time object detection but also provide a foundation for future developments in semantic communication within aerial networks.
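
For context on the mAP50 figures, a detection is typically counted as correct when its IoU with a ground-truth box reaches 0.5; a minimal sketch of that test, with example boxes:

```python
# Hedged sketch: intersection-over-union and the IoU >= 0.5 match criterion
# behind mAP50. Boxes are (x1, y1, x2, y2); the values are examples.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

pred, gt = (10, 10, 50, 50), (12, 8, 48, 52)
print(f"IoU = {iou(pred, gt):.3f}, match@0.5: {iou(pred, gt) >= 0.5}")
```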

23 pages, 935 KB  
Review
Integration and Innovation in Digital Implantology–Part II: Emerging Technologies and Converging Workflows: A Narrative Review
by Tommaso Lombardi and Alexandre Perez
Appl. Sci. 2025, 15(23), 12789; https://doi.org/10.3390/app152312789 - 3 Dec 2025
Abstract
Emerging artificial intelligence (AI) and robotic surgical technologies have the potential to influence digital implant dentistry substantially. As a narrative review, and building on the foundations outlined in Part I, which described current digital tools and workflows alongside their persistent interface-related limitations, this second part examines how AI and robotics may overcome these barriers. This synthesis is based on peer-reviewed literature published between 2020 and 2025, identified through searches in PubMed, Scopus, and Web of Science. Current evidence suggests that AI-based approaches, including rule-based systems, traditional machine learning, and deep learning, may achieve expert-level performance in diagnostic imaging, multimodal data registration, virtual patient model generation, implant planning, prosthetic design, and digital smile design. These methods offer substantial improvements in efficiency, reproducibility, and accuracy while reducing reliance on manual data handling across software, datasets, and workflow interfaces. In parallel, robotic-assisted implant surgery has advanced from surgeon-guided systems to semi-autonomous and fully autonomous platforms, with the potential to provide enhanced surgical precision and reduce operator dependency compared with conventional static or dynamic navigation. Several of these technologies have already reached early stages of clinical deployment, although important challenges remain regarding interoperability, standardization, validation, and the continuing need for human oversight. Together, these innovations may enable the gradual convergence of digital technologies into unified, real-time-assisted, end-to-end implant prosthodontic workflows and progressively greater automation, while acknowledging that full automation remains a longer-term prospect. By synthesizing current evidence and proof-of-concept applications, this review aims to provide clinicians with a comprehensive overview of the AI and robotics toolkit relevant to implant dentistry and to outline both the opportunities and remaining limitations of these disruptive technologies as the field progresses towards seamless, fully integrated treatment pathways.

14 pages, 1391 KB  
Article
In Vivo Accuracy Assessment of Two Intraoral Scanners Using Open-Source Software: A Comparative Full-Arch Pilot Study
by Francesco Puleio, Fabio Salmeri, Ettore Lupi, Ines Urbano, Roberta Gasparro, Simone De Vita and Roberto Lo Giudice
Oral 2025, 5(4), 97; https://doi.org/10.3390/oral5040097 - 2 Dec 2025
Abstract
Background: The precision of intraoral scanners (IOSs) is a key factor in ensuring the reliability of digital impressions, particularly in full-arch workflows. Although proprietary metrology tools are generally employed for scanner validation, open-source platforms could provide a cost-effective alternative for clinical research. Methods: This in vivo study compared the precision of two IOSs—3Shape TRIOS 3 and Planmeca Emerald S—using an open-source analytical workflow based on Autodesk Meshmixer and CloudCompare. A single healthy subject underwent five consecutive full-arch scans per device. Digital models were trimmed, aligned by manual landmarking and iterative closest-point refinement, and analyzed at six deviation thresholds (<0.01 mm to <0.4 mm). The percentage of surface points within clinically acceptable limits (<0.3 mm) was compared using paired t-tests. Results: TRIOS 3 exhibited significantly higher repeatability than Planmeca Emerald S (p < 0.001). At the <0.3 mm threshold, 99.3% ± 0.4% of points were within tolerance for TRIOS 3 versus 92.9% ± 6.8% for Planmeca. At the <0.1 mm threshold, values were 89.6% ± 5.7% and 47.3% ± 13.7%, respectively. Colorimetric deviation maps confirmed greater spatial consistency of TRIOS 3, particularly in posterior regions. Conclusions: Both scanners achieved clinically acceptable precision for full-arch impressions; however, TRIOS 3 demonstrated superior repeatability and lower variability. The proposed open-source workflow proved feasible and reliable, offering an accessible and reproducible method for IOS performance assessment in clinical settings.
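
A minimal sketch of the threshold analysis described above: the share of surface points whose deviation falls below a clinical limit, compared between devices with a paired t-test; the deviation arrays are simulated stand-ins, not study data.

```python
# Hedged sketch: percentage-within-tolerance per scan, then a paired t-test
# across the five repeated scans per device.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Per-scan arrays of absolute point deviations (mm), 5 scans per device.
trios   = [np.abs(rng.normal(0, 0.05, 10_000)) for _ in range(5)]
emerald = [np.abs(rng.normal(0, 0.12, 10_000)) for _ in range(5)]

def pct_within(devs, thr):
    return [100 * np.mean(d < thr) for d in devs]

a, b = pct_within(trios, 0.3), pct_within(emerald, 0.3)
t, p = stats.ttest_rel(a, b)              # paired over the 5 scans
print(f"TRIOS {np.mean(a):.1f}% vs Emerald {np.mean(b):.1f}%, p={p:.4f}")
```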

16 pages, 3257 KB  
Article
A Two-Stage Unet Framework for Sub-Resolution Assist Feature Prediction
by Mu Lin, Le Ma, Lisong Dong and Xu Ma
Micromachines 2025, 16(11), 1301; https://doi.org/10.3390/mi16111301 - 20 Nov 2025
Abstract
Sub-resolution assist feature (SRAF) is a widely used resolution enhancement technology for improving image contrast and the common process window in advanced lithography processes. However, both model-based SRAF and rule-based SRAF methods suffer from challenges of adaptability or high computational cost. Prevailing learning-based SRAF methods adopt an end-to-end mode, treating the entire mask pattern as a pixel map, which makes it difficult to obtain precise geometric parameters for the commonly used Manhattan SRAFs. This paper proposes a two-stage Unet framework to effectively predict the centroid coordinates and dimensions of SRAF polygons. Furthermore, an adaptive hybrid attention mechanism is introduced to dynamically integrate global and local features, thus enhancing the prediction accuracy. Additionally, a warm-up cosine annealing learning rate strategy is adopted to improve the training stability and convergence speed. Simulation results demonstrate that the proposed method accurately and rapidly estimates the SRAF parameters. Compared to traditional neural networks, the proposed method better predicts SRAF patterns, with the mean pattern error (PE) and edge placement error (EPE) showing the most significant reductions: PE decreases from 25,776.44 to 15,203.33 and EPE from 5.8367 to 3.5283. This significantly improves the image fidelity of the lithography system.
(This article belongs to the Special Issue Recent Advances in Lithography)
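
The warm-up cosine annealing schedule named in the abstract has a standard closed form; a minimal sketch, with illustrative warm-up length and learning rates:

```python
# Hedged sketch: linear warm-up followed by cosine annealing. The warm-up
# steps, base_lr, and min_lr are placeholders, not the paper's settings.
import math

def warmup_cosine_lr(step, total_steps, warmup_steps=500,
                     base_lr=1e-3, min_lr=1e-6):
    if step < warmup_steps:                       # linear warm-up
        return base_lr * (step + 1) / warmup_steps
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

for s in (0, 499, 500, 5000, 9999):
    print(s, f"{warmup_cosine_lr(s, 10_000):.2e}")
```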

33 pages, 751 KB  
Review
Rewiring the Lymphatic Landscape: Disorders, Remodeling, and Cancer Progression
by Sudeep Kumar, Ujjwal Adhikari and Brijendra Singh
Lymphatics 2025, 3(4), 37; https://doi.org/10.3390/lymphatics3040037 - 18 Nov 2025
Abstract
The lymphatic system is essential for maintaining the body’s fluid balance, lipid absorption, and immune regulation. The dysfunction of the lymphatic system is associated with a wide spectrum of disorders. These disorders include primary and secondary lymphedema, congenital malformations, and lymphatic neoplasms. In cancer, lymphatic remodeling facilitates tumor progression and metastasis, while tertiary lymphoid structures (TLSs) develop during chronic inflammation and may be involved in anti-tumor immunity. This review highlights the immunological basis of lymphatic disorders, with a particular focus on cellular and molecular biomarkers that define disease states. Recent advances in molecular imaging techniques, such as ultrasonography (US), computed tomography (CT), and magnetic resonance lymphography (MRL), have improved the diagnosis of lymphedema and helped identify therapeutic targets. Moreover, nanobiotechnology and nano-delivery tools have further enhanced the visibility of cancer cells by imaging. Artificial Intelligence (AI) has offered a new spectrum for disease prediction in the lymphatic system, using approaches such as natural language processing (NLP), machine learning (ML), robotics-assisted methods, and fuzzy model (FM)-based algorithms. Collectively, these advanced tools have improved diagnostic approaches and revealed exciting opportunities for future research and new therapeutic developments in patient care.

15 pages, 3387 KB  
Article
Automatic Apparent Nasal Index from Single Facial Photographs Using a Lightweight Deep Learning Pipeline: A Pilot Study
by Babak Saravi, Lara Schorn, Julian Lommen, Max Wilkat, Andreas Vollmer, Hamza Eren Güzel, Michael Vollmer, Felix Schrader, Christoph K. Sproll, Norbert R. Kübler and Daman D. Singh
Medicina 2025, 61(11), 1922; https://doi.org/10.3390/medicina61111922 - 27 Oct 2025
Abstract
Background and Objectives: Quantifying nasal proportions is central to facial plastic and reconstructive surgery, yet manual measurements are time-consuming and variable. We sought to develop a simple, reproducible deep learning pipeline that localizes the nose in a single frontal photograph and automatically computes the two-dimensional, photograph-derived apparent nasal index (aNI)—width/height × 100—enabling classification into five standard anthropometric categories. Materials and Methods: From CelebA we curated 29,998 high-quality near-frontal images (training 20,998; validation 5999; test 3001). Nose masks were manually annotated with the VGG Image Annotator and rasterized to binary masks. Ground-truth aNI was computed from the mask’s axis-aligned bounding box. A lightweight one-class YOLOv8n detector was trained to localize the nose; predicted aNI was computed from the detected bounding box. Performance was assessed on the held-out test set using detection coverage and mAP, agreement metrics between detector- and mask-based aNI (MAE, RMSE, R2; Bland–Altman), and five-class classification metrics (accuracy, macro-F1). Results: The detector returned at least one accepted nose box in 3000/3001 test images (99.97% coverage). Agreement with ground truth was strong: MAE 3.04 nasal index units (95% CI 2.95–3.14), RMSE 4.05, and R2 0.819. Bland–Altman analysis showed a small negative bias (−0.40, 95% CI −0.54 to −0.26) with limits of agreement −8.30 to 7.50 (95% CIs −8.54 to −8.05 and 7.25 to 7.74). After excluding out-of-range cases (<40.0), five-class classification on n = 2976 images achieved macro-F1 0.705 (95% CI 0.608–0.772) and 80.7% accuracy; errors were predominantly adjacent-class swaps, consistent with the small aNI error. Additional analyses confirmed strong ordinal agreement (weighted κ = 0.71 linear, 0.78 quadratic; Spearman ρ = 0.76) and near-perfect adjacent-class accuracy (0.999); performance remained stable when thresholds were shifted ±2 NI units and across sex and age subgroups. Conclusions: A compact detector can deliver near-universal nose localization and accurate automatic estimation of the nasal index from a single photograph, enabling reliable five-class categorization without manual measurements. The approach is fast, reproducible, and promising as a calibrated decision-support adjunct for surgical planning, outcomes tracking, and large-scale morphometric research.
(This article belongs to the Special Issue Recent Advances in Plastic and Reconstructive Surgery)
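
The aNI itself is a simple ratio; below is a hedged sketch computing it from a detected bounding box and binning it into five anthropometric categories (the band boundaries follow the conventional nasal-index classification and are assumptions here, not the paper's thresholds):

```python
# Hedged sketch: apparent nasal index (width/height x 100) from an
# axis-aligned nose bounding box, with conventional category bands.
def apparent_nasal_index(x1, y1, x2, y2):
    width, height = abs(x2 - x1), abs(y2 - y1)
    return 100.0 * width / height

def classify_ani(ani):
    if ani < 55:   return "hyperleptorrhine"
    if ani < 70:   return "leptorrhine"
    if ani < 85:   return "mesorrhine"
    if ani < 100:  return "platyrrhine"
    return "hyperplatyrrhine"

ani = apparent_nasal_index(120, 200, 180, 290)   # width 60 px, height 90 px
print(f"aNI = {ani:.1f} -> {classify_ani(ani)}") # aNI = 66.7 -> leptorrhine
```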

23 pages, 2069 KB  
Article
Early Lung Cancer Detection via AI-Enhanced CT Image Processing Software
by Joel Silos-Sánchez, Jorge A. Ruiz-Vanoye, Francisco R. Trejo-Macotela, Marco A. Márquez-Vera, Ocotlán Diaz-Parra, Josué R. Martínez-Mireles, Miguel A. Ruiz-Jaimes and Marco A. Vera-Jiménez
Diagnostics 2025, 15(21), 2691; https://doi.org/10.3390/diagnostics15212691 - 24 Oct 2025
Abstract
Background/Objectives: Lung cancer remains the leading cause of cancer-related mortality worldwide among both men and women. Early and accurate detection is essential to improve patient outcomes. This study explores the use of artificial intelligence (AI)-based software for the diagnosis of lung cancer through the analysis of medical images in DICOM format, aiming to enhance image visualization, preprocessing, and diagnostic precision in chest computed tomography (CT) scans. Methods: The proposed system processes DICOM medical images converted to standard formats (JPG or PNG) for preprocessing and analysis. An ensemble of classical machine learning algorithms—including Random Forest, Gradient Boosting, Support Vector Machine, and K-Nearest Neighbors—was implemented to classify pulmonary images and predict the likelihood of malignancy. Image normalization, denoising, segmentation, and feature extraction were performed to improve model reliability and reproducibility. Results: The AI-enhanced system demonstrated substantial improvements in diagnostic accuracy and robustness compared with individual classifiers. The ensemble model achieved a classification accuracy exceeding 90%, highlighting its effectiveness in identifying malignant and non-malignant lung nodules. Conclusions: The findings indicate that AI-assisted CT image processing can significantly contribute to the early detection of lung cancer. The proposed methodology enhances diagnostic confidence, supports clinical decision-making, and represents a viable step toward integrating AI into radiological workflows for early cancer screening.
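
A minimal sketch of a soft-voting ensemble over the four named classifiers, using scikit-learn; the dataset and hyperparameters are placeholders, not the authors' configuration.

```python
# Hedged sketch: RF + GB + SVM + KNN combined by soft voting.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in for CT nodule features
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ],
    voting="soft",    # average predicted probabilities across models
)
print("accuracy:", ensemble.fit(Xtr, ytr).score(Xte, yte))
```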

14 pages, 3288 KB  
Article
CT Morphometric Analysis of Ossification Centres in the Fetal Th12 Vertebra
by Magdalena Grzonkowska, Michał Kułakowski, Zofia Dzięcioł-Anikiej, Agnieszka Rogalska, Beata Zwierko, Sara Kierońska-Siwak, Karol Elster, Stanisław Orkisz and Mariusz Baumgart
Brain Sci. 2025, 15(11), 1138; https://doi.org/10.3390/brainsci15111138 - 24 Oct 2025
Abstract
Objectives: The present study aimed to determine the growth dynamics of the ossification centers of the twelfth thoracic vertebra in the human fetus, focusing on detailed linear, surface, and volumetric parameters of both the vertebral body and neural processes. Methods: The investigation was based on 55 human fetuses (27 males, 28 females) aged 17–30 weeks of gestation. High-resolution low-dose computed tomography, three-dimensional reconstruction, digital image analysis, and appropriate statistical modeling were used to obtain detailed morphometric measurements. Results: All measured morphometric parameters of the Th12 vertebral body ossification center—transverse and sagittal diameters, cross-sectional area, and volume—increased linearly with gestational age (R2 = 0.94–0.97). A similar linear growth pattern was demonstrated for the length, width, cross-sectional area, and volume of the right and left neural process ossification centers (R2 = 0.97–0.98). No statistically significant sex-related or side-related differences were found, allowing the establishment of single normative growth curves for each parameter. Conclusions: This study provides the first comprehensive CT-based normative data for the ossification centers of the fetal Th12 vertebra in the second and early third trimesters. The presented linear growth models and reference values may assist anatomists, radiologists, obstetricians, and pediatric spine surgeons in estimating fetal age, and in the prenatal and postnatal assessment of congenital spinal anomalies, especially at the thoracolumbar junction. Further research on larger and broader gestational cohorts is warranted to validate and extend these findings.
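
A minimal sketch of fitting one such linear normative growth curve (parameter versus gestational age) and reporting R2; the data are simulated, not the study's measurements.

```python
# Hedged sketch: linear regression of an ossification-center parameter
# against gestational age, as in the R2 values quoted above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
age = rng.uniform(17, 30, 55)                    # gestational weeks, n = 55
volume = 2.5 * age - 30 + rng.normal(0, 2, 55)   # synthetic volume values

res = stats.linregress(age, volume)
print(f"volume = {res.slope:.2f}*age + {res.intercept:.2f}, "
      f"R2 = {res.rvalue**2:.3f}")
```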

28 pages, 8411 KB  
Article
SEPoolConvNeXt: A Deep Learning Framework for Automated Classification of Neonatal Brain Development Using T1- and T2-Weighted MRI
by Gulay Maçin, Melahat Poyraz, Zeynep Akca Andi, Nisa Yıldırım, Burak Taşcı, Gulay Taşcı, Sengul Dogan and Turker Tuncer
J. Clin. Med. 2025, 14(20), 7299; https://doi.org/10.3390/jcm14207299 - 16 Oct 2025
Abstract
Background/Objectives: The neonatal and infant periods represent a critical window for brain development, characterized by rapid and heterogeneous processes such as myelination and cortical maturation. Accurate assessment of these changes is essential for understanding normative trajectories and detecting early abnormalities. While conventional MRI provides valuable insights, automated classification remains challenging due to overlapping developmental stages and sex-specific variability. Methods: We propose SEPoolConvNeXt, a novel deep learning framework designed for fine-grained classification of neonatal brain development using T1- and T2-weighted MRI sequences. The dataset comprised 29,516 images organized into four subgroups (T1 Male, T1 Female, T2 Male, T2 Female), each stratified into 14 age-based classes (0–10 days to 12 months). The architecture integrates residual connections, grouped convolutions, and channel attention mechanisms, balancing computational efficiency with discriminative power. Model performance was compared with 19 widely used pre-trained CNNs under identical experimental settings. Results: SEPoolConvNeXt consistently achieved test accuracies above 95%, substantially outperforming pre-trained CNN baselines (average ~70.7%). On the T1 Female dataset, early stages achieved near-perfect recognition, with slight declines at 11–12 months due to intra-class variability. The T1 Male dataset reached >98% overall accuracy, with challenges in intermediate months (2–3 and 8–9). The T2 Female dataset yielded accuracies between 99.47% and 100%, including categories with perfect F1-scores, whereas the T2 Male dataset maintained strong but slightly lower performance (>93%), especially in later infancy. Combined evaluations across T1 + T2 Female and T1 Male + Female datasets confirmed robust generalization, with most subgroups exceeding 98–99% accuracy. The results demonstrate that domain-specific architectural design enables superior sensitivity to subtle developmental transitions compared with generic transfer learning approaches. The lightweight nature of SEPoolConvNeXt (~9.4 M parameters) further supports reproducibility and clinical applicability. Conclusions: SEPoolConvNeXt provides a robust, efficient, and biologically aligned framework for neonatal brain maturation assessment. By integrating sex- and age-specific developmental trajectories, the model establishes a strong foundation for AI-assisted neurodevelopmental evaluation and holds promise for clinical translation, particularly in monitoring high-risk groups such as preterm infants.
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Medical Imaging)
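
The channel-attention component named in the abstract is commonly realized as a squeeze-and-excitation block; a minimal sketch follows (the reduction ratio is an assumption, and this is not the SEPoolConvNeXt implementation):

```python
# Hedged sketch: a squeeze-and-excitation channel-attention block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze: global average pool
        return x * w[:, :, None, None]          # excite: rescale each channel

out = SEBlock(64)(torch.randn(2, 64, 28, 28))
print(out.shape)                                # torch.Size([2, 64, 28, 28])
```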

20 pages, 2616 KB  
Article
Biomimetic Transfer Learning-Based Complex Gastrointestinal Polyp Classification
by Daniela-Maria Cristea, Daniela Onita and Laszlo Barna Iantovics
Biomimetics 2025, 10(10), 699; https://doi.org/10.3390/biomimetics10100699 - 15 Oct 2025
Abstract
(1) Background: This research investigates the application of Artificial Intelligence (AI), particularly biomimetic convolutional neural networks (CNNs), for the automatic classification of gastrointestinal (GI) polyps in endoscopic images. The study combines AI and Transfer learning techniques to support early detection of colorectal cancer by enhancing diagnostic accuracy with pre-trained models; (2) Methods: The Kvasir dataset, comprising 4000 annotated endoscopic images across eight polyp categories, was used. Images were pre-processed via normalisation, resizing, and data augmentation. Several CNN architectures, including state-of-the-art optimized ResNet50, DenseNet121, and MobileNetV2, were trained and evaluated. Models were assessed through training, validation, and testing phases, using performance metrics such as overall accuracy, confusion matrix, precision, recall, and F1 score; (3) Results: ResNet50 achieved the highest validation accuracy at 90.5%, followed closely by DenseNet121 with 87.5% and MobileNetV2 with 86.5%. The models demonstrated good generalisation, with small differences between training and validation accuracy. The average inference time was under 0.5 s on a computer with limited resources, confirming real-time applicability. Confusion matrix analysis indicates that common errors frequently occur between visually similar classes, particularly when reviewed by less-experienced medical physicians. These errors underscore the difficulty of distinguishing subtle features in gastrointestinal imagery and highlight the value of model-assisted diagnostics; (4) Conclusions: The obtained results confirm that Deep learning-based CNN architectures, combined with Transfer learning and optimisation techniques, can accurately classify endoscopic images and support medical diagnostics.
(This article belongs to the Special Issue Bio-Inspired Artificial Intelligence in Healthcare)
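
A minimal sketch of the transfer-learning setup the abstract describes: a pre-trained ResNet50 re-headed for the eight Kvasir classes. The freezing policy and the torchvision weights enum (requiring torchvision >= 0.13) are assumptions, not the authors' exact configuration.

```python
# Hedged sketch: transfer learning with a frozen ImageNet backbone and a
# new trainable 8-class classification head.
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():                    # freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 8)   # head for 8 Kvasir classes
```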
