Search Results (178)

Search Parameters:
Keywords = task-based image quality assessment

17 pages, 1732 KB  
Article
Enhancing Endangered Feline Conservation in Asia via a Pose-Guided Deep Learning Framework for Individual Identification
by Weiwei Xiao, Wei Zhang and Haiyan Liu
Diversity 2025, 17(12), 853; https://doi.org/10.3390/d17120853 - 12 Dec 2025
Abstract
The re-identification of endangered felines is critical for species conservation and biodiversity assessment. This paper proposes the Pose-Guided Network with the Adaptive L2 Regularization (PGNet-AL2) framework to overcome key challenges in wild feline re-identification, such as extensive pose variations, small sample sizes, and inconsistent image quality. This framework employs a dual-branch architecture for multi-level feature extraction and incorporates an adaptive L2 regularization mechanism to optimize parameter learning, effectively mitigating overfitting in small-sample scenarios. Applying the proposed method to the Amur Tiger Re-identification in the Wild (ATRW) dataset, we achieve a mean Average Precision (mAP) of 91.3% in single-camera settings, outperforming the baseline PPbM-b (Pose Part-based Model) by 18.5 percentage points. To further evaluate its generalization, we apply it to a more challenging task, snow leopard re-identification, using a dataset of 388 infrared videos obtained from the Wildlife Conservation Society (WCS). Despite the poor quality of infrared videos, our method achieves an mAP of 94.5%. The consistently high performance on both the ATRW and snow leopard datasets demonstrates the method’s strong generalization capability and practical utility.
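As context for the mAP figures reported in this abstract, the following minimal sketch shows how per-query average precision over a ranked re-identification gallery is conventionally computed. It is illustrative only, not the authors' PGNet-AL2 code; `ranked_matches` is a hypothetical boolean ranking.

```python
import numpy as np

def average_precision(ranked_matches):
    """AP for one query: ranked_matches is a 0/1 array over the gallery
    in ranked order, 1 where the gallery item shares the query identity."""
    matches = np.asarray(ranked_matches, dtype=float)
    if matches.sum() == 0:
        return 0.0
    # precision@k, evaluated only at the positions of true matches
    cum_hits = np.cumsum(matches)
    precision_at_k = cum_hits / (np.arange(len(matches)) + 1)
    return float((precision_at_k * matches).sum() / matches.sum())

def mean_average_precision(per_query_matches):
    """mAP: mean of the per-query APs."""
    return float(np.mean([average_precision(m) for m in per_query_matches]))
```

For example, a query whose true matches sit at ranks 1 and 3 gets AP = (1/1 + 2/3) / 2.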

30 pages, 28717 KB  
Article
A Multi-Parameter Inspection Platform for Transparent Packaging Containers: System Design for Stress, Dimensional, and Defect Detection
by Huaxing Yu, Zhongqing Jia, Chen Guan, Zhaohui Yu, Xiaolong Ma, Xiangshuai Wang, Bing Zhao and Xiaofei Wang
Sensors 2025, 25(24), 7531; https://doi.org/10.3390/s25247531 - 11 Dec 2025
Abstract
With increasing quality demands in pharmaceutical and cosmetic packaging, this work presents a unified inspection platform for transparent ampoules that synergistically integrates stress measurement, dimensional measurement, and surface defect detection. Key innovations include an integrated system architecture, a shared-resource task scheduling mechanism, and an optimized deployment strategy tailored for production-like conditions. Non-contact residual stress measurement is achieved using the photoelastic method, while telecentric imaging combined with subpixel contour extraction enables accurate dimensional assessment. A YOLOv8-based deep learning model efficiently identifies multiple surface defect types, enhancing detection performance without increasing hardware complexity. Experimental validation under laboratory conditions simulating production lines demonstrates a stress measurement error of ±3 nm, dimensional accuracy of ±0.2 mm, and defect detection mAP@0.5 of 90.3%. The platform meets industrial inspection requirements and shows strong scalability and engineering potential. Future work will focus on real-time operation and exploring stress–defect coupling for intelligent quality prediction.
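The mAP@0.5 score cited here counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU sketch for axis-aligned boxes (illustrative, not the platform's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle; width/height clamp to zero when boxes are disjoint
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```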
(This article belongs to the Section Fault Diagnosis & Sensors)

45 pages, 59804 KB  
Article
Multi-Threshold Art Symmetry Image Segmentation and Numerical Optimization Based on the Modified Golden Jackal Optimization
by Xiaoyan Zhang, Zuowen Bao, Xinying Li and Jianfeng Wang
Symmetry 2025, 17(12), 2130; https://doi.org/10.3390/sym17122130 - 11 Dec 2025
Abstract
To address the issues of uneven population initialization, insufficient individual information interaction, and passive boundary handling in the standard Golden Jackal Optimization (GJO) algorithm, while improving the accuracy and efficiency of multilevel thresholding in artistic image segmentation, this paper proposes an improved Golden Jackal Optimization algorithm (MGJO) and applies it to this task. MGJO introduces a high-quality point set for population initialization, ensuring a more uniform distribution of initial individuals in the search space and better adaptation to the complex grayscale characteristics of artistic images. A dual crossover strategy, integrating horizontal and vertical information exchange, is designed to enhance individual information sharing and fine-grained dimensional search, catering to the segmentation needs of artistic image textures and color layers. Furthermore, a global-optimum-based boundary handling mechanism is constructed to prevent information loss when boundaries are exceeded, thereby preserving the boundary details of artistic images. The performance of MGJO was evaluated on the CEC2017 (dim = 30, 100) and CEC2022 (dim = 10, 20) benchmark suites against seven algorithms, including GWO and IWOA. Population diversity analysis, exploration–exploitation balance assessment, Wilcoxon rank-sum tests, and Friedman mean-rank tests all demonstrate that MGJO significantly outperforms the comparison algorithms in optimization accuracy, stability, and statistical reliability. In multilevel thresholding for artistic image segmentation, using Otsu’s between-class variance as the objective function, MGJO achieves higher fitness values (approaching Otsu’s optimal values) across various artistic images with complex textures and colors, as well as benchmark images such as Baboon, Camera, and Lena, in 4-, 6-, 8-, and 10-level thresholding tasks. 
The resulting segmented images exhibit superior peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM) compared to other algorithms, more precisely preserving brushstroke details and color layers. Friedman average rankings consistently place MGJO in the lead. These experimental results indicate that MGJO effectively overcomes the performance limitations of the standard GJO, demonstrating excellent performance in both numerical optimization and multilevel thresholding artistic image segmentation. It provides an efficient solution for high-dimensional complex optimization problems and practical demands in artistic image processing.
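The objective MGJO maximizes is Otsu's between-class variance. A sketch of the classic single-level case follows; the paper searches multiple thresholds with a metaheuristic instead of this exhaustive scan, so this is background, not the authors' implementation.

```python
import numpy as np

def otsu_threshold(gray):
    """Single-level Otsu: exhaustively pick the threshold t that maximizes
    the between-class variance w0*w1*(mu0 - mu1)^2 of the two classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: variance undefined
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```

Multilevel thresholding generalizes the same criterion to k thresholds, which makes the exhaustive scan combinatorial and motivates metaheuristic search.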
(This article belongs to the Section Engineering and Materials)

10 pages, 496 KB  
Article
Adaptive 3D Augmentation in StyleGAN2-ADA for High-Fidelity Lung Nodule Synthesis from Limited CT Volumes
by Oleksandr Fedoruk, Konrad Klimaszewski and Michał Kruk
Sensors 2025, 25(24), 7404; https://doi.org/10.3390/s25247404 - 5 Dec 2025
Abstract
Generative adversarial networks (GANs) typically require large datasets for effective training, which poses challenges for volumetric medical imaging tasks where data are scarce. This study addresses this limitation by extending adaptive discriminator augmentation (ADA) for three-dimensional (3D) StyleGAN2 to improve generative performance on limited volumetric data. The proposed 3D StyleGAN2-ADA redefines all 2D operations for volumetric processing and incorporates the full set of original augmentation techniques. Experiments are conducted on the NoduleMNIST3D dataset of lung CT scans containing 590 voxel-based samples across two classes. Two augmentation pipelines are evaluated—one using color-based transformations and another employing a comprehensive set of 3D augmentations including geometric, filtering, and corruption augmentations. Performance is compared against the same network and dataset without any augmentation by assessing generation quality with Kernel Inception Distance (KID) and 3D Structural Similarity Index Measure (SSIM). Results show that volumetric ADA substantially improves training stability and reduces the risk of mode collapse, even under severe data constraints. A strong augmentation strategy improves the realism of generated 3D samples and better preserves anatomical structures relative to training without data augmentation. These findings demonstrate that adaptive 3D augmentations effectively enable high-quality synthetic medical image generation from extremely limited volumetric datasets. The source code and the weights of the networks are available in the GitHub repository.
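The SSIM measure used for evaluation can be sketched in its single-window (global) form; the standard metric averages this quantity over local windows, and the study applies it in 3D. Illustrative only, not the study's evaluation code.

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """Single-window SSIM over whole arrays; inputs assumed in [0, L].
    Uses the standard stabilizers C1 = (0.01 L)^2, C2 = (0.03 L)^2."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical inputs score exactly 1; structural disagreement pulls the score down through the covariance term.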
(This article belongs to the Section Biomedical Sensors)

27 pages, 14828 KB  
Review
Computational Insights into Root Canal Treatment: A Survey of Selected Methods in Imaging, Segmentation, Morphological Analysis, and Clinical Management
by Jianning Li, Kerstin Bitter, Anh Duc Nguyen, Hagay Shemesh, Paul Zaslansky and Stefan Zachow
Dent. J. 2025, 13(12), 579; https://doi.org/10.3390/dj13120579 - 3 Dec 2025
Abstract
Background/Objectives: Root canal treatment (RCT) is a common dental procedure performed to preserve teeth by removing infected or at-risk pulp tissue caused by caries, trauma, or other pulpal conditions. A successful outcome depends, among other factors, on accurate identification of the root canal anatomy, planning a suitable therapeutic strategy, and ensuring a bacteria-tight root canal filling. Despite advances in dental techniques, there remains limited integration of computational methods to support key stages of treatment. This review aims to provide a comprehensive overview of computational methods applied throughout the full workflow of RCT, examining their potential to support clinical decision-making, improve treatment planning and outcome assessment, and help bridge the interdisciplinary gap between dentistry and computational research. Methods: A comprehensive literature review was conducted to identify and analyze computational methods applied to different stages of RCT, including root canal segmentation, morphological analysis, treatment planning, quality evaluation, follow-up, and prognosis prediction. In addition, a taxonomy was developed to categorize these methods by their function within the treatment process. Insights from the authors’ own research experience were also incorporated to highlight implementation challenges and practical considerations. Results: The review identified a wide range of computational methods aimed at enhancing the consistency and efficiency of RCT. Key findings include the use of advanced image processing for segmentation, image analysis for diagnosis and treatment planning, machine learning for morphological classification, and predictive modeling for outcome estimation. While some methods demonstrate high sensitivity and specificity in diagnostic and planning tasks, many remain in experimental stages and lack clinical integration. There is also a noticeable absence of advanced computational techniques for micro-computed tomography and morphological analysis. Conclusions: Computational methods offer significant potential to improve decision-making and outcomes in RCT. However, greater focus on clinical translation and development of cross-modality methodology is needed. The proposed taxonomy provides a structured framework for organizing existing methods and identifying future research directions tailored to specific phases of treatment. This review serves as a resource for dental professionals, computer scientists, and researchers seeking to bridge the gap between clinical practice and computational innovation.

18 pages, 10663 KB  
Article
Assessment of Image Quality Performance of a Photon-Counting Computed Tomography Scanner Approved for Whole-Body Clinical Applications
by Francesca Saveria Maddaloni, Antonio Sarno, Alessandro Loria, Anna Piai, Cristina Lenardi, Antonio Esposito and Antonella del Vecchio
Sensors 2025, 25(23), 7338; https://doi.org/10.3390/s25237338 - 2 Dec 2025
Abstract
Background: Photon-counting computed tomography (PCCT) represents a major technological advance in clinical CT imaging, offering superior spatial resolution, enhanced material discrimination, and potential radiation dose reduction compared to conventional energy-integrating detector systems. As the first clinically approved PCCT scanner becomes available, establishing a comprehensive characterization of its image quality is essential to understand its performance and clinical impact. Methods: Image quality was evaluated using a commercial quality assurance phantom with acquisition protocols typically used for three anatomical regions—head, abdomen/thorax, and inner ear—representing diverse clinical scenarios. Each region was scanned using both ultra-high-resolution (UHR, 120 × 0.2 mm slices) and conventional (144 × 0.4 mm slices) protocols. Conventional metrics, including signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), slice thickness accuracy, and uniformity, were assessed following international standards. Task-based analysis was also performed through target transfer function (TTF), noise power spectrum (NPS), and detectability index (d′) to evaluate diagnostic relevance. Results: UHR protocols provided markedly improved spatial resolution, particularly in inner ear imaging, as confirmed by TTF analysis, though with increased noise and reduced low-contrast detectability in certain conditions. CT numbers showed linear correspondence with known attenuation coefficients across all protocols. Conclusions: This study establishes a detailed technical characterization of the first clinical PCCT scanner, demonstrating significant improvements in terms of spatial resolution and accuracy of the quantitative image analysis, while highlighting the need for noise–contrast optimization in high-resolution imaging.
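The NPS underlying the task-based analysis is conventionally estimated as an ensemble-averaged periodogram of mean-subtracted noise ROIs. A minimal sketch under simplifying assumptions (unit pixel size, mean subtraction instead of the polynomial detrending used in practice); with unit pixels the estimate integrates to the noise variance by Parseval's theorem:

```python
import numpy as np

def noise_power_spectrum(rois, pixel=1.0):
    """2-D NPS estimate from an ensemble of noise ROIs:
    NPS = (dx*dy / (Nx*Ny)) * <|DFT(ROI - mean)|^2>."""
    rois = np.asarray(rois, dtype=float)
    n_y, n_x = rois.shape[1:]
    spectra = [np.abs(np.fft.fft2(r - r.mean())) ** 2 for r in rois]
    return (pixel * pixel / (n_x * n_y)) * np.mean(spectra, axis=0)
```

Summing the NPS over all frequency bins times the bin area (1/(Nx·Ny) here) recovers the ROI variance, a standard sanity check on the normalization.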
(This article belongs to the Special Issue Recent Progress in X-Ray Medical Imaging and Detectors)

25 pages, 23748 KB  
Article
HyperHazeOff: Hyperspectral Remote Sensing Image Dehazing Benchmark
by Artem Nikonorov, Dmitry Sidorchuk, Nikita Odinets, Vladislav Volkov, Anastasia Sarycheva, Ekaterina Dudenko, Mikhail Zhidkov and Dmitry Nikolaev
J. Imaging 2025, 11(12), 422; https://doi.org/10.3390/jimaging11120422 - 26 Nov 2025
Abstract
Hyperspectral remote sensing images (HSIs) provide invaluable information for environmental and agricultural monitoring, yet they are often degraded by atmospheric haze, which distorts spatial and spectral content and hinders downstream analysis. Progress in hyperspectral dehazing has been limited by the absence of paired real-haze benchmarks; most prior studies rely on synthetic haze or unpaired data, restricting fair evaluation and generalization. We present HyperHazeOff, the first comprehensive benchmark for hyperspectral dehazing that unifies data, tasks, and evaluation protocols. It comprises (i) RRealHyperPDID, 110 scenes with paired real-haze and haze-free HSIs (plus RGB images), and (ii) RSyntHyperPDID, 2616 paired samples generated using a physically grounded haze formation model. The benchmark also provides agricultural field delineation and land classification annotations for downstream task quality assessment, standardized train/validation/test splits, preprocessing pipelines, baseline implementations, pretrained weights, and evaluation tools. Across six state-of-the-art methods (three RGB-based and three HSI-specific), we find that hyperspectral models trained on the widely used HyperDehazing dataset fail to generalize to real haze, while training on RSyntHyperPDID enables significant real-haze restoration by AACNet. HyperHazeOff establishes reproducible baselines and is openly available to advance research in hyperspectral dehazing.
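Physically grounded haze synthesis of this kind builds on the standard atmospheric scattering model. A minimal sketch is below; the benchmark's own generator is more detailed, and `transmission` and `airlight` are assumed known here rather than estimated from the image:

```python
import numpy as np

def add_haze(clear, transmission, airlight):
    """Atmospheric scattering model: I = J * t + A * (1 - t),
    where J is the clear scene, t the transmission, A the airlight."""
    return clear * transmission + airlight * (1.0 - transmission)

def dehaze(hazy, transmission, airlight, t_min=0.05):
    """Invert the model; clamp t to avoid amplifying noise where haze
    is dense (the standard trick in model-based dehazing)."""
    t = np.maximum(transmission, t_min)
    return (hazy - airlight * (1.0 - t)) / t
```

With the true t and A the inversion is exact; the hard part, which learned methods tackle, is estimating them per pixel and per band.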
(This article belongs to the Special Issue Multispectral and Hyperspectral Imaging: Progress and Challenges)

14 pages, 1607 KB  
Article
Blind Image Quality Assessment Using Convolutional Neural Networks
by Mariusz Frackiewicz, Henryk Palus and Wojciech Trojanowski
Sensors 2025, 25(22), 7078; https://doi.org/10.3390/s25227078 - 20 Nov 2025
Abstract
In the domain of image and multimedia processing, image quality is a critical factor, as it directly influences the performance of subsequent tasks such as compression, transmission, and content analysis. Reliable assessment of image quality is therefore essential not only for benchmarking algorithms but also for ensuring user satisfaction in real-world multimedia applications. The most advanced blind image quality assessment (BIQA) methods are typically built upon deep learning models and rely on complex architectures that, while effective, require substantial computational resources and large-scale training datasets. This complexity can limit their scalability and practical deployment, particularly in resource-constrained environments. In this paper, we revisit a model inspired by one of the early applications of convolutional neural networks (CNNs) in BIQA and demonstrate that by leveraging recent advancements in machine learning—such as Bayesian hyperparameter optimization and widely used stochastic optimization methods (e.g., Adam)—it is possible to achieve competitive performance using a simpler, more scalable, and lightweight architecture. To evaluate the proposed approach, we conducted extensive experiments on widely used benchmark datasets, including TID2013 and KADID-10k. The results show that the proposed model achieves competitive performance while maintaining a substantially more efficient design. These findings suggest that lightweight CNN-based models, when combined with modern optimization strategies, can serve as a viable alternative to more elaborate frameworks, offering an improved balance between accuracy, efficiency, and scalability.
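BIQA models on datasets such as TID2013 and KADID-10k are conventionally scored by rank correlation between predicted scores and human mean opinion scores. A minimal Spearman (SROCC) sketch, assuming no tied values (ties would require average ranks, as library implementations provide):

```python
import numpy as np

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks.
    Note: no tie handling; tied inputs need average ranks."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v), dtype=float)
        return r
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    return float(np.corrcoef(rx, ry)[0, 1])
```

A perfectly monotone predictor scores 1.0 regardless of the scale of its outputs, which is why SROCC complements the linear (Pearson) correlation in IQA benchmarks.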
(This article belongs to the Section Sensing and Imaging)

36 pages, 39540 KB  
Article
Enhancing Pest Detection in Deep Learning Through a Systematic Image Quality Assessment and Preprocessing Framework
by Shuyi Jia, Maryam Horri Rezaei and Barmak Honarvar Shakibaei Asli
J. Exp. Theor. Anal. 2025, 3(4), 39; https://doi.org/10.3390/jeta3040039 - 20 Nov 2025
Abstract
This study addresses the critical challenge of variable image quality in deep learning-based automated pest identification. We propose a holistic pipeline that integrates systematic Image Quality Assessment (IQA) with tailored preprocessing to enhance the performance of a YOLOv5 object detection model. The methodology begins with a No-Reference IQA using BRISQUE, PIQE, and NIQE metrics to quantitatively diagnose image clarity, noise, and distortion. Based on this assessment, a tailored preprocessing stage employing six different filters (Wiener, Lucy–Richardson, etc.) is applied to rectify degradations. Enhanced images are then used to train a YOLOv5 model for detecting four common pest species. Experimental results demonstrate that our IQA-anchored pipeline significantly improves image quality, with average BRISQUE and PIQE scores falling from 40.78 to 25.42 and from 34.94 to 30.38, respectively. Consequently, the detection confidence for challenging pests increased, for instance, from 0.27 to 0.44 for Peach Borer after dataset enhancement. This work concludes that a methodical approach to image quality management is not an optional step but a critical prerequisite that directly dictates the performance ceiling of automated deep learning systems in agriculture, offering a reusable blueprint for robust visual recognition tasks.
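One of the filters named above is the Wiener filter. The sketch below is a pixelwise adaptive Wiener-style denoiser in the spirit of the classic `wiener2` formulation: it smooths strongly where the local variance is near the noise floor and weakly near edges. Illustrative only, not the paper's pipeline; `noise_var` is assumed known.

```python
import numpy as np

def adaptive_wiener(img, noise_var, k=3):
    """out = m + max(v - s, 0) / max(v, s) * (img - m), where m and v are
    the local mean and variance over a k x k window and s is noise_var."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    h, w = img.shape
    win = np.zeros((h, w))
    win2 = np.zeros((h, w))
    # accumulate sum and sum-of-squares over the k x k neighborhood
    for dy in range(k):
        for dx in range(k):
            s = p[dy:dy + h, dx:dx + w]
            win += s
            win2 += s * s
    mean = win / (k * k)
    var = np.maximum(win2 / (k * k) - mean ** 2, 0.0)
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, noise_var)
    return mean + gain * (img - mean)
```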

21 pages, 2304 KB  
Article
Hierarchical Prompt Engineering and Task-Differentiated Low-Rank Adaptation for Artificial Intelligence-Generated Content Image Quality Assessment
by Minjuan Gao, Qiaorong Zhang, Chenye Song, Xuande Zhang and Yankang Li
Information 2025, 16(11), 1006; https://doi.org/10.3390/info16111006 - 19 Nov 2025
Abstract
Assessing the quality of Artificial Intelligence-Generated Content (AIGC) images remains a critical challenge, as conventional Image Quality Assessment (IQA) methods often fail to capture the semantic consistency between generated images and their textual prompts. This study aims to establish an interpretable and efficient multimodal framework for evaluating AIGC image quality. The research addresses three key scientific questions: how to leverage structured prompt semantics for more interpretable assessments, how to enable parameter-efficient yet accurate adaptation, and how to achieve unified handling of perceptual and semantic subtasks. To this end, we propose the Prompt-Enhanced Low-Rank Adaptation (PELA) framework, which integrates Hierarchical Prompt Engineering and Low-Rank Adaptation within a CLIP-based backbone. Hierarchical prompts encode multi-level semantics for fine-grained evaluation, while low-rank adaptation enables lightweight, task-specific optimization. Experiments conducted on AGIQA-1K, AGIQA-3K, and AIGCIQA-2023 datasets demonstrate that PELA achieves superior correlation with human perceptual judgments and sets new state-of-the-art results across multiple metrics. The findings confirm that combining structured prompt semantics with efficient adaptation offers a compact, interpretable, and scalable paradigm for multimodal image quality assessment.
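The low-rank adaptation idea can be sketched as a frozen weight plus a learned rank-r update. This is the generic LoRA formulation, not PELA's specific placement inside the CLIP backbone; all names here are illustrative.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha):
    """LoRA forward pass: the frozen weight W (out x in) is augmented by a
    rank-r update B @ A, scaled by alpha / r, so the effective weight is
    W + (alpha / r) * B @ A. Only A (r x in) and B (out x r) are trained."""
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

# Convention: A is initialized with small random values and B with zeros,
# so training starts exactly at the pretrained weights.
```

The parameter saving is the point: for a d×d layer, LoRA trains 2·r·d parameters instead of d², with r typically far smaller than d.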

19 pages, 14156 KB  
Article
Image Prompt Adapter-Based Stable Diffusion for Enhanced Multi-Class Weed Generation and Detection
by Boyang Deng and Yuzhen Lu
AgriEngineering 2025, 7(11), 389; https://doi.org/10.3390/agriengineering7110389 - 15 Nov 2025
Abstract
The curation of large-scale, diverse datasets for robust weed detection is extremely time-consuming and resource-intensive in practice. Generative artificial intelligence (AI) opens up opportunities for image generation to supplement real-world image acquisition and annotation efforts. However, it is not a trivial task to generate high-quality, multi-class weed images that capture the nuances and variations in visual representations for enhanced weed detection. This study presents a novel investigation of advanced stable diffusion (SD) integrated with a module with image prompt capability, IP-Adapter, for weed image generation. Using the IP-Adapter-based model, two image feature encoders, CLIP (contrastive language image pre-training) and BioCLIP (a vision foundation model for biological images), were utilized to generate weed instances, which were then inserted into existing weed images. Image generation and weed detection experiments are conducted on a 10-class weed dataset captured in vegetable fields. The perceptual quality of generated images is assessed in terms of Fréchet Inception Distance (FID) and Inception Score (IS). YOLOv11 (You Only Look Once version 11) models were trained for weed detection, achieving an average mAP@50:95 improvement of 1.26% when combining inserted weed instances with real ones in training, compared to using original images alone. Both the weed dataset and software programs in this study will be made publicly available. This study offers valuable perspectives on the use of IP-adapter-based SD for generating weed images and weed detection.
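The FID metric cited above is the Fréchet distance between two Gaussians fitted to Inception feature activations of the real and generated sets. A minimal sketch given precomputed means and covariances (illustrative, not the study's evaluation code):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID between Gaussians (mu, Sigma):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)).
    Tr((S1 S2)^(1/2)) is obtained from the eigenvalues of S1 @ S2, which
    are real and non-negative when both covariances are PSD."""
    mu1, mu2 = np.asarray(mu1, dtype=float), np.asarray(mu2, dtype=float)
    sigma1, sigma2 = np.asarray(sigma1, dtype=float), np.asarray(sigma2, dtype=float)
    diff = mu1 - mu2
    eig = np.linalg.eigvals(sigma1 @ sigma2)
    covmean_trace = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * covmean_trace)
```

Identical feature distributions give a distance of zero; lower is better.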

26 pages, 5171 KB  
Article
A Method to Measure Neighborhood Quality with Hedonic Price Models in Three Latin American Cities
by Marco Aurélio Stumpf González and Diego Alfonso Erba
Real Estate 2025, 2(4), 18; https://doi.org/10.3390/realestate2040018 - 3 Nov 2025
Abstract
Location effects play a crucial role in the real estate market, encompassing aspects of accessibility and neighborhood quality. While traditional measures exist for accessibility, evaluating neighborhood quality can be a complex task. Understanding these elements is essential for accurately estimating property values, whether for commercial or tax purposes. Recently developed methods based on web scraping and automatic detection using artificial intelligence have proven effective but require substantial human and financial resources, often unavailable in small cities. As a solution, this study proposes and evaluates a simpler mechanism for assessing neighborhood quality using Google Street View images and a scoring system in a human-centered approach. Based on image interpretation, a set of weights is assigned to each point, resulting in a micro-neighborhood quality assessment. This study was conducted in three Latin American cities, and the resulting variable was integrated into hedonic price models. The findings demonstrate the feasibility and effectiveness of the proposed approach. The novelty of this study lies in applying a method based on quasi-objective criteria and adapted to cities with limited technological resources. Full article

21 pages, 609 KB  
Review
Artificial Intelligence Tools for Supporting Histopathologic and Molecular Characterization of Gynecological Cancers: A Review
by Aleksandra Asaturova, João Pinto, António Polonia, Evgeny Karpulevich, Xavier Mattias-Guiu and Catarina Eloy
J. Clin. Med. 2025, 14(21), 7465; https://doi.org/10.3390/jcm14217465 - 22 Oct 2025
Abstract
Background/Objectives: Accurate diagnosis, prognosis, and prediction of treatment response are essential in managing gynecologic cancers and maintaining patient quality of life. Computational pathology, powered by artificial intelligence (AI), offers a transformative opportunity for objective histopathological assessment. This review provides a comprehensive, user-oriented overview of existing AI tools for the characterization of gynecological cancers, critically evaluating their clinical applicability and identifying key challenges for future development. Methods: A systematic literature search was conducted in PubMed and Web of Science for studies published up to 2025. The search focused on AI tools developed for the diagnosis, prognosis, or treatment prediction of gynecologic cancers based on histopathological images. After applying selection criteria, 36 studies were included for in-depth analysis, covering ovarian, uterine, cervical, and other gynecological cancers. Studies on cytopathology and pure tumor detection were excluded. Results: Our analysis identified AI tools addressing critical clinical tasks, including histopathologic subtyping, grading, staging, molecular subtyping, and prediction of therapy response (e.g., to platinum-based chemotherapy or PARP inhibitors). The performance of these tools varied significantly. While some demonstrated high accuracy and promising results in internal validation, many were limited by a lack of external validation, potential biases from training data, and performance that is not yet sufficient for routine clinical use. Direct comparison between studies was often hindered by the use of non-standardized evaluation metrics and evolving disease classifications over the past decade. Conclusions: AI tools for gynecologic cancers represent a promising field with the potential to significantly support pathological practice. 
However, their current development is heterogeneous, and many tools lack the robustness and validation required for clinical integration. There is a pressing need to invest in the creation of clinically driven, interpretable, and accurate AI tools that are rigorously validated on large, multicenter cohorts. Future efforts should focus on standardizing evaluation metrics and addressing unmet diagnostic needs, such as the molecular subtyping of rare tumors, to ensure these technologies can reliably benefit patient care. Full article
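The review notes that direct comparison between studies was hindered by non-standardized evaluation metrics. As a purely illustrative aside (not part of the review itself), a minimal sketch of computing one consistent metric set from a binary confusion matrix might look like this; the counts are made up for the example:

```python
# Hypothetical sketch: deriving a consistent set of evaluation metrics
# (accuracy, sensitivity, specificity) from a binary confusion matrix,
# so tools reported with different headline numbers can be compared.

def binary_metrics(tp, fp, fn, tn):
    """Return accuracy, sensitivity (recall), and specificity."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts for a hypothetical subtyping classifier
acc, sens, spec = binary_metrics(tp=80, fp=10, fn=20, tn=90)
print(acc, sens, spec)  # 0.85 0.8 0.9
```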
35 pages, 5316 KB  
Review
Machine Learning for Quality Control in the Food Industry: A Review
by Konstantinos G. Liakos, Vassilis Athanasiadis, Eleni Bozinou and Stavros I. Lalas
Foods 2025, 14(19), 3424; https://doi.org/10.3390/foods14193424 - 4 Oct 2025
Cited by 3 | Viewed by 5730
Abstract
The increasing complexity of modern food production demands advanced solutions for quality control (QC), safety monitoring, and process optimization. This review systematically explores recent advancements in machine learning (ML) for QC across six domains: Food Quality Applications; Defect Detection and Visual Inspection Systems; Ingredient Optimization and Nutritional Assessment; Packaging—Sensors and Predictive QC; Supply Chain—Traceability and Transparency and Food Industry Efficiency; and Industry 4.0 Models. Following a PRISMA-based methodology, a structured search of the Scopus database using thematic Boolean keywords identified 124 peer-reviewed publications (2005–2025), from which 25 studies were selected based on predefined inclusion and exclusion criteria, methodological rigor, and innovation. Neural networks dominated the reviewed approaches, with ensemble learning as a secondary method, and supervised learning prevailing across tasks. Emerging trends include hyperspectral imaging, sensor fusion, explainable AI, and blockchain-enabled traceability. Limitations in current research include domain coverage biases, data scarcity, and underexplored unsupervised and hybrid methods. Real-world implementation challenges involve integration with legacy systems, regulatory compliance, scalability, and cost–benefit trade-offs. The novelty of this review lies in combining a transparent PRISMA approach, a six-domain thematic framework, and Industry 4.0/5.0 integration, providing cross-domain insights and a roadmap for robust, transparent, and adaptive QC systems in the food industry. Full article
(This article belongs to the Special Issue Artificial Intelligence for the Food Industry)
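The PRISMA-based selection described above (database search, year window, inclusion/exclusion criteria) can be sketched as a simple screening script. The record fields, year range, and exclusion terms below are illustrative assumptions, not the review's actual protocol:

```python
# Hypothetical sketch of a PRISMA-style screening step: records retrieved
# from a database search are filtered by a year window and simple
# exclusion keywords. Fields and criteria here are illustrative only.

RECORDS = [
    {"title": "ML-based defect detection in apples", "year": 2021},
    {"title": "Blockchain traceability for dairy supply chains", "year": 2019},
    {"title": "An early survey of food chemistry", "year": 2003},
    {"title": "Sensor fusion for packaging QC", "year": 2024},
]

def screen(records, year_min=2005, year_max=2025, exclude_terms=("survey",)):
    kept = []
    for r in records:
        if not (year_min <= r["year"] <= year_max):
            continue  # outside the search window
        if any(t in r["title"].lower() for t in exclude_terms):
            continue  # matches an exclusion criterion
        kept.append(r)
    return kept

print(len(screen(RECORDS)))  # 3
```

A real screening pipeline would also deduplicate records and log exclusion reasons at each PRISMA stage; this sketch only shows the filtering idea.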
19 pages, 2063 KB  
Article
Multi-Task NoisyViT for Enhanced Fruit and Vegetable Freshness Detection and Type Classification
by Siavash Esfandiari Fard, Tonmoy Ghosh and Edward Sazonov
Sensors 2025, 25(19), 5955; https://doi.org/10.3390/s25195955 - 24 Sep 2025
Viewed by 1344
Abstract
Freshness is a critical indicator of fruit and vegetable quality, directly affecting nutrition, taste, and safety, and reducing waste across supply chains. Accurate detection is essential for quality control, supporting producers during harvesting and storage, and guiding consumers in purchasing decisions. Traditional manual assessment methods remain subjective, labor-intensive, and susceptible to inconsistencies, highlighting the need for automated, efficient, and scalable solutions, such as the use of imaging sensors and Artificial Intelligence (AI). In this study, the efficacy of the Noisy Vision Transformer (NoisyViT) model was evaluated for fruit and vegetable freshness detection from images. Across five publicly available datasets, the model achieved accuracies exceeding 97% (99.85%, 97.98%, 99.01%, 99.77%, and 98.96%). To enhance generalization, these five datasets were merged into a unified dataset encompassing 44 classes of 22 distinct fruit and vegetable types, named Freshness44. The NoisyViT architecture was further expanded into a multi-task configuration featuring two parallel classification heads: one for freshness detection (binary classification) and the other for fruit and vegetable type classification (22-class classification). The multi-task NoisyViT model, fine-tuned on the Freshness44 dataset, attained outstanding accuracies of 99.60% for freshness detection and 99.86% for type classification, surpassing the single-head NoisyViT model (99.59% accuracy) as well as conventional machine learning and CNN-based state-of-the-art methods. In practical terms, such a system can be deployed across supply chains, retail settings, or consumer applications to enable real-time, automated monitoring of fruit and vegetable quality. Overall, the findings underscore the effectiveness of the proposed multi-task NoisyViT model combined with the Freshness44 dataset, presenting a robust and scalable solution for the assessment of fruit and vegetable freshness. Full article
(This article belongs to the Section Sensors Development)
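The multi-task configuration described above (one shared backbone feeding two parallel classification heads) can be sketched in miniature. The linear heads, dimensions, and random weights below are illustrative stand-ins for the NoisyViT backbone and its trained heads, not the authors' implementation:

```python
# Hypothetical sketch of the multi-task idea: a shared feature vector
# (standing in for the backbone output) feeds two parallel heads --
# one binary (fresh vs. rotten) and one 22-way (produce type).

import random

random.seed(0)
FEAT_DIM, N_TYPES = 8, 22

# Two independent linear heads over the same shared features
w_fresh = [[random.uniform(-1, 1) for _ in range(FEAT_DIM)] for _ in range(2)]
w_type = [[random.uniform(-1, 1) for _ in range(FEAT_DIM)] for _ in range(N_TYPES)]

def linear(weights, features):
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

def predict(features):
    """Return (freshness_class, type_class) from one shared feature vector."""
    fresh_logits = linear(w_fresh, features)
    type_logits = linear(w_type, features)
    return fresh_logits.index(max(fresh_logits)), type_logits.index(max(type_logits))

fresh, kind = predict([random.uniform(0, 1) for _ in range(FEAT_DIM)])
assert fresh in (0, 1) and 0 <= kind < N_TYPES
```

Because both heads share the same features, one forward pass through the backbone yields both predictions, which is what makes the multi-task setup cheaper than running two separate models.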
