Search Results (6,681)

Search Parameters:
Keywords = automatic evaluation

13 pages, 630 KB  
Article
Quantitative Texture Analysis of Cervical Cytology Identifies Endometrial Lesions in Atypical Glandular Cells on Liquid-Based Cytology: A Pilot Study
by Toshimichi Onuma, Akiko Shinagawa, Makoto Orisaka and Yoshio Yoshida
Diagnostics 2026, 16(4), 531; https://doi.org/10.3390/diagnostics16040531 - 10 Feb 2026
Abstract
Background/Objectives: Within human papillomavirus (HPV)-based screening, cytology remains essential for cervical cancer detection while also potentially revealing endometrial pathology. This pilot study aimed to distinguish benign (normal) cases from atypical endometrial hyperplasia (AEH) and endometrial cancer (EC) within atypical glandular cell (AGC) cytology using quantitative analysis of liquid-based cervical cytology. Methods: SurePath and ThinPrep sets included 62 (37 normal, 25 AEH/EC) and 52 (24 normal, 28 AEH/EC) AGC cases, respectively. A semi-automatic QuPath analysis workflow detected cellular clusters; extracted texture, intensity, and geometric features; and produced case-level summaries. A random forest (RF) classifier was used to discriminate AEH/EC from normal cases. Feature subset selection was performed using a beam-search wrapper and joint hyperparameter tuning. Primary performance evaluation comprised stratified 5-fold cross-validation with metrics averaged across these folds. Results: Across both preparations, univariable analyses showed moderate discrimination overall, which improved post-menopause. For SurePath and ThinPrep, the ten highest areas under the curve (AUCs) were 0.701–0.773 (improving to 0.798–0.841 post-menopause) and 0.740–0.778 (improving to 0.832–0.884 post-menopause), respectively. Machine-learning RF models improved performance beyond univariable baselines. Cross-validated AUCs for SurePath and ThinPrep were 0.805 (95% confidence interval [CI], 0.683–0.927) and 0.887 (95% CI, 0.787–0.987), respectively. Features associated with higher AUCs differed between SurePath and ThinPrep, indicating platform-specific signals. Conclusions: Quantitative analysis of routine cervical cytology can augment expert reviews to help distinguish endometrial lesions among AGCs, particularly post-menopause.
These software-based readouts can fit within existing workflows and may improve triage when morphology is subtle, including scenarios with HPV-negative screening results. Full article
(This article belongs to the Section Pathology and Molecular Diagnostics)
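The abstract above evaluates its random forest with stratified 5-fold cross-validation. As an illustration of what "stratified" buys (each fold preserving the 37:25 normal-to-AEH/EC ratio of the SurePath set), here is a minimal pure-Python sketch of stratified fold assignment; it is illustrative only, not the authors' unpublished pipeline:

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Assign sample indices to k folds so each fold preserves the
    overall class proportions (round-robin within each class)."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

# Toy stand-in for the SurePath set: 37 normal (0) and 25 AEH/EC (1) cases.
labels = [0] * 37 + [1] * 25
folds = stratified_kfold(labels, k=5)
```

With 62 cases and k = 5, each fold carries 12 or 13 cases and exactly 5 of the 25 positives, so the per-fold AUCs that get averaged are computed on splits with matching class balance.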
23 pages, 2201 KB  
Article
Improving Flood Simulation Performance of Distributed Hydrological Model in the Plain–Hilly Transition Zone via DEM Stream Burning and PSO
by Zhiwei Huang, Yangbo Chen and Kai Wang
Remote Sens. 2026, 18(4), 555; https://doi.org/10.3390/rs18040555 - 10 Feb 2026
Abstract
Accurate flood simulation and forecasting in plain–hilly transition zones remain challenging due to limitations of medium- and low-resolution digital elevation models (DEMs), which often produce discontinuous drainage networks and misaligned confluence paths. This study evaluates an integrated improvement framework that combines DEM stream-burning and automatic parameter calibration to enhance the flood-simulation performance of a physically based distributed hydrological model (the Liuxihe Model). The framework was tested in the Beimiaoji Watershed (upper Huaihe River Basin) using 12 observed flood events: one event for parameter calibration via Particle Swarm Optimization (PSO) and 11 events for independent validation. Model performance was assessed using multiple metrics, including the Nash–Sutcliffe Efficiency (NSE), peak error (PE), and peak-timing error (PT). Results indicate that stream-burning substantially improves river-network extraction, and that the combined application of DEM correction and PSO-based calibration markedly enhances model performance. The findings suggest that the proposed, cost-effective correction–calibration pathway can improve operational flood simulations in terrain-sensitive regions without relying on costly high-resolution DEMs, and thus provides a practical reference for similar basins. Full article
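The flood-model abstract calibrates parameters with Particle Swarm Optimization (PSO). The sketch below shows the core PSO update rule on a toy 1-D error surface; the particle count, coefficients, and objective are placeholder assumptions, not the Liuxihe Model calibration:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a 1-D interval [lo, hi]."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                      # each particle's best-seen position
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # velocity update: inertia plus pulls toward personal and global bests
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = min(max(x[i] + v[i], lo), hi)   # clamp to the search bounds
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval

# Toy "calibration": recover the parameter value 3.2 minimizing a quadratic error.
best_x, best_err = pso_minimize(lambda p: (p - 3.2) ** 2, 0.0, 10.0)
```

In an actual calibration run, `f` would wrap a hydrological-model execution and return a misfit over the calibration event, e.g. 1 − NSE.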
12 pages, 11313 KB  
Article
Evaluation of the Diagnostic Accuracy of a Commercially Available AI-CAD Solution in Mammography Screening in Mexican Women (Mammo-MX Database)
by Blanca Murillo-Ortiz, Luis Carlos Padierna, Luis Fernando Parra-Sánchez, Samanta Medinilla-Orozco, Sergio Meza-Chavolla, Samuel Rivera-Rivera and Aura Rubiela Espejo-Fonseca
Diagnostics 2026, 16(4), 517; https://doi.org/10.3390/diagnostics16040517 - 9 Feb 2026
Abstract
Background/Objectives: The objective of this study was to evaluate the performance of Breast-SlimView®, a deep convolutional neural network for the automatic classification of BI-RADS and breast density in MLO (mediolateral oblique) and CC (craniocaudal) views. Methods: A total of 9560 mammographic images from 2390 Mexican women (age: 54.14 ± 8.72 years) were labeled according to ACR (American College of Radiology) density (A-D) and BI-RADS 1, 2, and 3 (low risk), and BI-RADS 4 and 5 (high risk). All mammograms in the test dataset were blinded and read by two radiologists, and the consensus was taken as the reference standard. The accuracy, sensitivity, and specificity of the automated AI-based classification system were evaluated against the consensus reached by expert radiologists. Results: The classification of MLO and CC projections had a mean sensitivity of 0.81 (95% CI: 0.797–0.829), a specificity of 0.70 (95% CI: 0.686–0.722), and an accuracy of 0.71 (95% CI: 0.698–0.734) in differentiating between low and high risk. Good agreement was observed with ACR breast density classifications A, B, C, and D. Agreement between AI and human readers was “substantial” (Pearson’s chi-square, p = 0.001). Conclusions: AI enables accurate, standardized, observer-independent classification. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
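The mammography abstract reports sensitivity, specificity, and accuracy against a radiologist consensus; together with PPV and NPV, these all follow mechanically from confusion-matrix counts. A quick sketch with toy counts (not the Mammo-MX data):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),            # recall on high-risk cases
        "specificity": tn / (tn + fp),            # recall on low-risk cases
        "ppv":         tp / (tp + fp),            # precision of high-risk calls
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fn + tn + fp),
    }

# Toy counts only: 80 true positives, 20 false negatives,
# 70 true negatives, 30 false positives.
m = diagnostic_metrics(tp=80, fn=20, tn=70, fp=30)
```

In the abstract's terms, "high risk" (BI-RADS 4-5) is the positive class; the reported pattern of good sensitivity with lower specificity would correspond to relatively frequent false-positive high-risk calls.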
17 pages, 2700 KB  
Article
Design of a Dual-Chain Synchronization Monitoring System for Scraper Conveyors Based on Magnetic Sensing
by Jiacheng Li, Xishuo Zhu, Han Tian, Junsheng Zhang, Hao Li, Haoting Liu and Junyuan Li
Designs 2026, 10(1), 18; https://doi.org/10.3390/designs10010018 - 9 Feb 2026
Abstract
Chain breakage in dual-chain scraper conveyors poses significant risks to the safe and efficient operation of coal mines. To address the challenges of harsh underground environments and the lack of effective synchronization monitoring, this paper presents the design and implementation of an intelligent monitoring system for conveyor integrity. The system integrates non-contact Hall-effect sensors with a custom-designed intrinsically safe data acquisition unit. A systematic algorithmic framework is designed, comprising an adaptive threshold and plateau seeking (ATPS) module and an adaptive clustering-based identification (ACCI) module, to enable high-accuracy automatic identification of chain elements. Furthermore, a novel synchronization evaluation design based on event correlation and statistical features is introduced to quantify inter-chain timing deviations. This leads to the construction of a Chain Synchronization Index (CSI) for desynchronization anomaly detection. Field experiments conducted under representative operating conditions, including normal operation and controlled single-chain disconnection scenarios, demonstrate that the proposed design achieves a chain element recognition accuracy of 98.2%. Under normal conditions, the CSI remains consistently high, while breakage faults are sensitively detected. The proposed system provides a practical engineering solution for synchronization-aware condition monitoring and anomaly warning of scraper conveyor chains in underground coal mines. Full article
33 pages, 4024 KB  
Article
A Study on Constructing a Dataset for Detecting VHF Signal Propagation Path Error
by Weichen Wang, Xiaoye Wang, Xiaowen Sun, Zhanpeng Yu and Qing Hu
Electronics 2026, 15(4), 726; https://doi.org/10.3390/electronics15040726 - 8 Feb 2026
Viewed by 44
Abstract
This paper presents a dedicated dataset for the measurement and prediction of VHF signal propagation path error, aiming to mitigate its adverse effects on the ranging and positioning accuracy of terrestrial navigation systems. The Automatic Identification System (AIS), as a critical maritime collision-avoidance technology, enables terrestrial-based positioning using coastal AIS stations, offering significant advantages in terms of deployment and maintenance costs. However, propagation path error remains one of the primary sources of positioning inaccuracies, and no specialized datasets have yet been developed to support its systematic measurement and prediction. To address this limitation, a comprehensive data acquisition and processing framework for AIS-related VHF-band propagation path error is proposed. Based on this framework, a multidimensional dataset is constructed, incorporating temperature, relative humidity, air pressure, instantaneous wind speed, salinity, and measured propagation path error. The measured propagation path error data are collected using a self-developed additional secondary phase correction system. Hydrometeorological parameters obtained from authoritative sources at the same time and location are integrated with the measured data to form experimental samples with rich feature representations. Data cleaning and preprocessing procedures are further applied to improve dataset quality. The final dataset comprises 1,296,000 samples and is suitable for training and evaluating machine learning and deep learning models for VHF signal propagation path error prediction, thereby supporting enhanced positioning accuracy and the improved reliability of maritime navigation systems. Full article
48 pages, 4188 KB  
Article
QUBO Formulation of the Pickup and Delivery Problem with Time Windows for Quantum Annealing
by Cosmin Ștefan Curuliuc and Florin Leon
Appl. Sci. 2026, 16(4), 1690; https://doi.org/10.3390/app16041690 - 8 Feb 2026
Viewed by 34
Abstract
This paper addresses the Pickup and Delivery Problem with Time Windows (PDPTW), an NP-hard combinatorial optimization problem with major practical relevance in logistics and transportation. The study focuses on a quadratic unconstrained binary optimization (QUBO) formulation for quantum annealing and benchmarks it against two classical optimization paradigms. A modular Python framework is developed that encodes PDPTW in three ways: a mixed-integer linear programming (MILP) model that serves as an exact reference, a genetic algorithm (GA) metaheuristic, and a QUBO model that is compatible with quantum annealers. The framework supports test scenarios with increasing structural complexity, with both feasible and intentionally infeasible instances. An additional contribution is the conceptual design and preliminary analysis of an automatic-penalty weight-tuning scheme for the QUBO model. Experimental results show that the proposed QUBO formulation can produce high-quality solutions for simpler PDPTW instances, but its performance strongly depends on the careful calibration of penalty weights. MILP provides optimal baselines on small instances but becomes intractable as problem size grows. The GA scales to the largest scenario and finds feasible solutions of reasonable quality, but they are not necessarily optimal. The evaluation also includes a large number of problem instances and runs on IBM Quantum hardware using the Quantum Approximate Optimization Algorithm (QAOA). Full article
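The PDPTW abstract notes that QUBO solution quality "strongly depends on the careful calibration of penalty weights." A minimal sketch of why: encoding a one-hot constraint (a step is assigned to exactly one node) as a quadratic penalty keeps the energy minimum feasible only while the penalty weight outweighs the cost gaps. The costs below are toy values, not the paper's PDPTW encoding:

```python
from itertools import product

def one_hot_penalty(bits, A=4.0):
    """Quadratic penalty A * (sum(x) - 1)^2: zero iff exactly one bit is set."""
    s = sum(bits)
    return A * (s - 1) ** 2

def qubo_energy(bits, cost, A=4.0):
    """Toy QUBO energy: linear assignment costs plus the one-hot penalty."""
    return sum(c * b for c, b in zip(cost, bits)) + one_hot_penalty(bits, A)

cost = [3.0, 1.0, 2.0]   # toy costs of assigning the step to node 0, 1, or 2
# Brute-force all 2^3 assignments; with A larger than any cost, the
# minimum-energy state is feasible (one-hot) and picks the cheapest node.
best = min(product((0, 1), repeat=3), key=lambda b: qubo_energy(b, cost))
```

Rerunning the brute force with a weak penalty (say A = 0.5) makes the infeasible all-zeros state the energy minimum, which is precisely the calibration failure mode the abstract describes.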
29 pages, 4173 KB  
Article
Comparing Cognitive and Psychological Factors in Virtual Reality and Real Environments: A Cave Automatic Virtual Environment Experimental Study
by Alexander C. Pogmore, Erica M. Vaz, Richard J. Davies and Neil J. Cooke
Appl. Sci. 2026, 16(4), 1688; https://doi.org/10.3390/app16041688 - 8 Feb 2026
Viewed by 44
Abstract
The emergence of Building Information Modelling, Internet of Things, and Cave Automatic Virtual Environments (CAVEs) has created new opportunities for remote monitoring and decision-making in the operational built environment, yet empirical evidence supporting their use as alternatives to on-site observation remains limited. This study evaluates task and human performance in a controlled experiment comparing a CAVE with a real-world setting (n = 26). Situation awareness, workload, anxiety, presence, usability, and user experience were measured across conditions. Participants in the CAVE demonstrated substantially higher situation awareness (M = 92.1%) than those in the real-world condition (M = 56.8%), alongside significantly lower overall workload (NASA-TLX weighted workload = 38.3 vs. 53.8). Anxiety remained consistently low in the CAVE (ΔSTAI = –1.0), whereas participants in the real-world condition exhibited higher baseline anxiety followed by a large reduction during task execution (ΔSTAI = –13.2). The CAVE also elicited high levels of spatial presence, involvement, and realism relative to comparable projection-based systems, while usability ratings (SUS) were above industry benchmarks (M = 74.2). Together, these findings indicate that controlled immersive representations of built environments can support sensemaking and reduce extraneous cognitive load relative to live, uncontrolled on-site observation, with important implications for remote facilities management and operational decision-making. Full article
(This article belongs to the Special Issue Advances in Virtual Reality Applications)
31 pages, 20786 KB  
Article
Multi-Scale Analysis of Ecosystem Service Trade-Off Intensity and Its Drivers Based on Wavelet Transform: A Case Study of the Plain–Mountain Transition Zone in China
by Congyi Li, Penggen Cheng, Xiaojian Wei, Bei Liu, Yunju Nie and Zhanhui Zhao
Land 2026, 15(2), 278; https://doi.org/10.3390/land15020278 - 7 Feb 2026
Viewed by 134
Abstract
Identifying the multi-scale drivers of ecosystem service (ES) trade-off intensity is essential for promoting regional sustainability. However, existing multi-scale ES studies typically rely on predefined administrative units or fixed grid sizes due to the absence of scientifically sound scale-partitioning approaches, which limits the identification of characteristic scales and obscures scale-dependent interactions. This study combines continuous wavelet transform (CWT) with the optimal parameter geographic detector (OPGD) to automatically identify the characteristic scales of trade-offs between ecosystem services, opening a new avenue for multi-scale studies. Taking China’s plain–mountain transition zone as a case study, we evaluate trade-off intensity among four key ecosystem services—water yield (WY), habitat quality (HQ), soil conservation (SC), and carbon storage (CS). The results show the following: (1) The identification of 36 characteristic scales (ranging from 5 km to 55 km) indicates that ecosystem service trade-offs operate across a wide range of spatial extents, implying that a single management scale cannot effectively address all ES interactions. (2) From 2000 to 2020, CS-HQ, SC-HQ, and WY-HQ trade-off intensities were jointly driven by both natural conditions and human activities, whereas CS-SC was predominantly influenced by natural and climatic factors. The trade-off intensities between CS-WY and WY-SC were mainly controlled by climatic forces. (3) The explanatory power (q value) of each factor varied distinctly with spatial scale, and the interaction effects between multiple factors were substantially stronger than their individual effects. This indicates that ecosystem service trade-offs are primarily governed by coupled processes rather than isolated drivers. Consequently, management strategies targeting single drivers are unlikely to be effective.
Instead, ecosystem management should be designed around combinations of drivers that operate at specific spatial scales and provide a concrete pathway for translating trade-off analyses into spatially differentiated management actions. Full article
39 pages, 5210 KB  
Review
An In-Depth Review of Speech Enhancement Algorithms: Classifications, Underlying Principles, Challenges, and Emerging Trends
by Nisreen Talib Abdulhusein and Basheera M. Mahmmod
Algorithms 2026, 19(2), 134; https://doi.org/10.3390/a19020134 - 7 Feb 2026
Viewed by 41
Abstract
Speech enhancement aims to improve speech quality and intelligibility in noisy environments and is important in applications such as hearing aids, mobile communications, and automatic speech recognition (ASR). This paper presents a structured review of speech enhancement techniques, classified according to channel configuration and signal processing framework. Both traditional and modern approaches are discussed, including classical signal processing methods, machine learning techniques, and recent deep learning-based models. Furthermore, common noise types, widely used speech datasets, and standard metrics for evaluating speech quality and intelligibility are reviewed. Key challenges such as non-stationary noise, data limitations, reverberation, and generalization to unseen noise conditions are highlighted. This review presents the advancements in speech enhancement and discusses the challenges and trends of the field, providing valuable insights for researchers, engineers, and practitioners in the area. The findings aid in the selection of suitable techniques for improved speech quality and intelligibility; we conclude that the trend in speech enhancement has shifted from standard algorithms to deep learning methods that can efficiently learn information regarding speech signals. Full article
27 pages, 9745 KB  
Article
A Novel Water-Flow Live-Insect Monitoring Device for Measuring the Light-Trap Attraction Rate of Insects
by Jiarui Fang, Lei Shu, Ru Han, Kailiang Li and Wei Lin
Electronics 2026, 15(3), 714; https://doi.org/10.3390/electronics15030714 - 6 Feb 2026
Viewed by 87
Abstract
The light-trap attraction rate (LTARI) is an important metric for characterizing diel activity patterns and supports studies in insect behavioral ecology and pest management. However, conventional automatic light-trap devices often rely on lethal methods (e.g., high-voltage grids or infrared heating), causing high mortality of non-target insects and severe image obstruction due to stacking of insect bodies. These issues disturb natural populations and bias attempts to quantify LTARI. Our primary objective is to develop and evaluate a non-lethal monitoring system as a methodological basis for future LTARI research, rather than to provide head-to-head quantitative comparisons with conventional traps. To address the above limitations, we propose a live-insect monitoring instrument that integrates a wind-suction trap with a Water-Flow Dispersion and Transport Structure (WF-DTS). The non-destructive trapping–dispersion–release process limits body stacking, allows captured insects to be released, and yields a community-level post-capture survival rate of 94% under the conditions tested. Experimental results show that the prototype maintains image integrity with clearly isolated single insects and achieves a detection performance of 95.6% (mAP@0.5) using the YOLOv8s model. At the inference stage, only the standard resizing and normalization operations of YOLOv8s are applied, without additional denoising, background subtraction, or data augmentation. These observations suggest that the WF-DTS generates images that are easier to segment and classify than those from conventional devices. The high detection accuracy is largely attributable to the physical dispersion of specimens and the uniform white matte background provided by the hardware design. Overall, the system constitutes a non-lethal hardware–software platform that may reduce backend processing complexity and provide a methodological basis for more accurate LTARI estimation in future, dedicated field studies. 
Full article
22 pages, 1944 KB  
Article
Automated Radiological Report Generation from Breast Ultrasound Images Using Vision and Language Transformers
by Shaheen Khatoon and Azhar Mahmood
J. Imaging 2026, 12(2), 68; https://doi.org/10.3390/jimaging12020068 - 6 Feb 2026
Viewed by 164
Abstract
Breast ultrasound imaging is widely used for the detection and characterization of breast abnormalities; however, generating detailed and consistent radiological reports remains a labor-intensive and subjective process. Recent advances in deep learning have demonstrated the potential of automated report generation systems to support clinical workflows, yet most existing approaches focus on chest X-ray imaging and rely on convolutional–recurrent architectures with limited capacity to model long-range dependencies and complex clinical semantics. In this work, we propose a multimodal Transformer-based framework for automatic breast ultrasound report generation that integrates visual and textual information through cross-attention mechanisms. The proposed architecture employs a Vision Transformer (ViT) to extract rich spatial and morphological features from ultrasound images. For textual embedding, pretrained language models (BERT, BioBERT, and GPT-2) are implemented in various encoder–decoder configurations to leverage both general linguistic knowledge and domain-specific biomedical semantics. A multimodal Transformer decoder is implemented to autoregressively generate diagnostic reports by jointly attending to visual features and contextualized textual embeddings. We conducted an extensive quantitative evaluation using standard report generation metrics, including BLEU, ROUGE-L, METEOR, and CIDEr, to assess lexical accuracy, semantic alignment, and clinical relevance. Experimental results demonstrate that BioBERT-based models consistently outperform general domain counterparts in clinical specificity, while GPT-2-based decoders improve linguistic fluency. Full article
(This article belongs to the Section AI in Imaging)
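The report-generation abstract evaluates with BLEU, ROUGE-L, METEOR, and CIDEr. As a minimal illustration of the simplest of these, here is unigram BLEU (clipped unigram precision times a brevity penalty) on hypothetical report snippets; the full metric combines higher-order n-grams and possibly multiple references, and this is not the paper's evaluation code:

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram BLEU: clipped unigram precision times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # clip each candidate word's count by its count in the reference
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = clipped / len(cand)
    # brevity penalty: 1 if the candidate is at least as long as the reference
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# Hypothetical generated vs. reference report fragments (not from the paper).
score = bleu1("irregular hypoechoic mass with posterior shadowing",
              "irregular hypoechoic mass with shadowing")
```

Here 5 of the candidate's 6 unigrams appear in the reference and the candidate is not shorter than the reference, so the score is 5/6.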
13 pages, 3297 KB  
Article
Effect of a Real-Time Artificial Intelligence-Assisted Ultrasound System on BI-RADS C4 Breast Lesions Based on Breast Density
by Jeeyeon Lee, Won Hwa Kim, Jaeil Kim, Byeongju Kang, Joon Suk Moon, Hye Jung Kim, Soo Jung Lee, In Hee Lee and Ho Yong Park
Cancers 2026, 18(3), 536; https://doi.org/10.3390/cancers18030536 - 6 Feb 2026
Viewed by 128
Abstract
Background: Artificial intelligence-based computer-aided diagnosis (AI-CAD) systems are increasingly used in breast ultrasonography; however, their diagnostic performance may vary with breast density. Given that dense breasts are highly prevalent among Asian women, understanding this relationship is essential for optimizing AI-assisted imaging strategies. Therefore, this study aims to evaluate the effect of breast density on the diagnostic accuracy of an AI-CAD ultrasound system in BI-RADS category 4 (C4) breast lesions. Methods: Overall, 110 consecutive BI-RADS C4 lesions were reviewed between January and December 2023. An AI-CAD ultrasound system automatically assigned BI-RADS categories and calculated the probability of malignancy (POM) using static ultrasound images. Histopathology served as the reference standard, with atypia and malignancy combined into a non-benign category. Diagnostic performance—including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and overall accuracy—was analyzed based on breast density (BI-RADS B–D), determined using AI-assisted mammography. Results: Overall, the sensitivity and NPV were 81.3% and 87.5%, respectively, while the specificity and PPV were lower at 53.8% and 41.9%. All diagnostic performance metrics improved with increasing breast density. In the density D category, sensitivity (92.3%), specificity (61.5%), NPV (96.0%), and accuracy (69.2%) were highest. Additionally, concordance between AI-assigned BI-RADS categories and histopathologic diagnoses increased with density (B: 50.0%, C: 57.5%, D: 67.3%). Across all density groups, non-benign lesions consistently demonstrated higher POM values. Conclusions: Breast density significantly affects the diagnostic performance of AI-CAD ultrasound in BI-RADS C4 lesions. The AI system demonstrates higher accuracy and concordance in dense breasts, suggesting more consistent lesion interpretation in high-density environments. 
These findings highlight the potential utility of AI-assisted ultrasound as a diagnostic adjunct, particularly for Asian women, who commonly have dense breast composition. Further multicenter, real-time validation studies are warranted to validate these findings. Full article
(This article belongs to the Special Issue Application of Ultrasound in Cancer Diagnosis and Treatment)
15 pages, 3161 KB  
Article
On the Suitability of Data Augmentation Techniques to Improve Parkinson’s Disease Detection with Speech Recordings
by Cristian David Ríos-Urrego, Tulio Andrés Ruiz-Romero, David Puerta-Lotero, Daniel Escobar-Grisales and Juan Rafael Orozco-Arroyave
Diagnostics 2026, 16(3), 498; https://doi.org/10.3390/diagnostics16030498 - 6 Feb 2026
Viewed by 145
Abstract
Background: Parkinson’s disease (PD) is a neurodegenerative disorder that affects millions of people worldwide. Speech analysis has emerged as a non-invasive tool for automatic PD detection; however, the scarcity and homogeneity of available datasets often limit the generalization capability of machine learning models, motivating the use of data augmentation strategies to improve robustness. Methods: This study presents a data augmentation-based methodology for speech-based classification between PD patients and healthy control subjects. A deep learning model trained from scratch on Mel spectrograms is evaluated using augmentation techniques applied at both the waveform and time–frequency levels. Multiple training and model selection strategies are analyzed, and model performance is assessed through internal validation as well as on an independent dataset. Results: Experimental results show that carefully selected data augmentation techniques improve classification performance with respect to the non-augmented counterpart, achieving gains of up to 3% in accuracy. However, when evaluated on an independent dataset, these improvements do not consistently translate into better generalization. Conclusions: These findings demonstrate that, while data augmentation can effectively enhance model performance within a single dataset, this apparent robustness is not sufficient to guarantee generalization on independent speech corpora for PD detection. Full article
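The PD abstract applies augmentation at both the waveform and time–frequency levels. One common time–frequency technique is SpecAugment-style time masking; the sketch below applies it to a toy spectrogram and is illustrative only, not the authors' augmentation code:

```python
import random

def time_mask(spec, max_width=3, seed=0):
    """SpecAugment-style time masking: zero out a random block of
    consecutive time frames in a (frames x mel_bins) spectrogram."""
    rng = random.Random(seed)
    n_frames = len(spec)
    width = rng.randint(1, max_width)
    start = rng.randint(0, n_frames - width)
    masked = [row[:] for row in spec]   # copy, leaving the input untouched
    for t in range(start, start + width):
        masked[t] = [0.0] * len(masked[t])
    return masked

# Toy 10-frame, 4-bin "Mel spectrogram" of constant energy.
spec = [[1.0] * 4 for _ in range(10)]
aug = time_mask(spec)
zeroed = sum(1 for row in aug if all(v == 0.0 for v in row))
```

Waveform-level analogues (additive noise, time stretching) follow the same pattern: perturb a copy of the training example while leaving evaluation data untouched.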

19 pages, 12003 KB  
Article
Low Latency and Multi-Target Camera-Based Safety System for Optical Wireless Power Transmission
by Chen Zuo and Tomoyuki Miyamoto
Photonics 2026, 13(2), 156; https://doi.org/10.3390/photonics13020156 - 6 Feb 2026
Abstract
Optical Wireless Power Transmission (OWPT) holds a significant position for enabling cable-free energy delivery in long-distance, high-energy, and mobile scenarios. However, ensuring human and equipment safety under high-power laser exposure remains a critical challenge. This study reports a vision-based OWPT safety system that implements the principle of automatic emission control (AEC), dynamically modulating laser emission in real time to prevent hazardous exposure. While camera-based OWPT safety systems have been proposed in concept, very few working implementations exist to date, and existing systems struggle with response speed and single-object assumptions. To address these gaps, this research presents a low-latency safety architecture based on a customized deep learning-based object detection framework, a dedicated OWPT dataset, and a multi-threaded control stack. The research also introduces a real-time risk factor (RF) metric that evaluates proximity and velocity for each detected intrusion object (IO), enabling dynamic prioritization among multiple threats. The system achieves a minimum response latency of 14 ms (average 29 ms) and maintains reliable performance in complex multi-object scenarios. This work establishes a new benchmark for OWPT safety systems and contributes a scalable reference for future development.
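A risk factor that weighs both proximity and approach velocity per intrusion object might be sketched as follows. The `Track` fields, the additive formula, and the `d_safe`/`v_ref` normalization constants are assumptions for illustration, not the paper's actual RF definition.

```python
from dataclasses import dataclass

@dataclass
class Track:
    distance_m: float    # current distance of the object from the beam path
    approach_mps: float  # closing speed toward the beam (negative = receding)

def risk_factor(t: Track, d_safe: float = 0.5, v_ref: float = 2.0) -> float:
    """Illustrative RF: the proximity term grows as the object nears the
    safety radius; the velocity term adds weight for fast approaches."""
    proximity = d_safe / max(t.distance_m, 1e-6)
    velocity = max(t.approach_mps, 0.0) / v_ref
    return proximity + velocity

def highest_risk(tracks: list[Track]) -> Track:
    """Pick the track that should drive emission control this frame."""
    return max(tracks, key=risk_factor)
```

Ranking by such a scalar lets the control loop gate the laser on the single most threatening object even when several objects are in view, which matches the multi-target prioritization the abstract describes.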

10 pages, 1705 KB  
Proceeding Paper
Low-Capital Expenditure AI-Assisted Zero-Trust Control Plane for Brownfield Ethernet Environments
by Hong-Sheng Wang and Reen-Cheng Wang
Eng. Proc. 2025, 120(1), 54; https://doi.org/10.3390/engproc2025120054 - 5 Feb 2026
Abstract
We developed an AI-assisted zero-trust control system at low capital expenditure to retrofit brownfield Ethernet environments without disruptive hardware upgrades or costly software-defined networking migration. Legacy network infrastructures in small and medium-sized enterprises (SMEs) lack the flexibility and programmability required by modern zero-trust architectures, creating a persistent security gap between static Layer-1 deployments and dynamic cyber threats. The developed system addresses this gap through a modular architecture that integrates genetic-algorithm-based virtual local area network (VLAN) optimization, large language model-guided firewall rule synthesis, threat-intelligence-driven policy automation, and telemetry-triggered adaptive isolation. Network assets are enumerated and evaluated through a risk-aware clustering model to enable micro-segmentation that aligns with the principle of least privilege. Optimized segmentation outputs are translated into pfSense firewall policies through structured prompt engineering and dual-stage validation, ensuring syntactic correctness and semantic consistency. A retrieval-augmented generation pipeline connects live telemetry with historical vulnerability intelligence, enabling rapid policy adjustments and automated containment responses. The system operates as an overlay on existing managed switches, orchestrating configuration changes through standards-compliant interfaces such as the simple network management protocol and the network configuration protocol. Experimental evaluation in a representative SME testbed demonstrates substantial improvements in segmentation granularity, refining seven flat subnets into thirty-four purpose-specific VLANs. Compliance scores improved significantly, with the International Organization for Standardization/International Electrotechnical Commission 27001 score rising from 62.3% to 94.7% and the National Institute of Standards and Technology Cybersecurity Framework alignment increasing from 58.9% to 91.2%. All 851 automatically generated firewall rules passed dual-agent validation, ensuring reliable enforcement and enhanced auditability. The results indicate that the developed system provides an operationally feasible pathway for legacy networks to achieve zero-trust segmentation with minimal cost and disruption. Future extensions will explore adaptive learning mechanisms and hybrid cloud support to further enhance scalability and contextual responsiveness.
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
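The genetic-algorithm VLAN optimization described above could be sketched as a GA that assigns assets to VLANs while minimizing the risk-level spread within each segment. Everything here is a hypothetical toy, not the system's actual model: the fitness function, the integer risk scores, and all GA parameters are assumptions.

```python
import random

def fitness(assign, risks):
    """Penalty: sum over VLANs of the risk-score spread inside the VLAN.
    Lower is better (risk-homogeneous segments approximate least privilege)."""
    groups = {}
    for asset, vlan in enumerate(assign):
        groups.setdefault(vlan, []).append(risks[asset])
    return sum(max(g) - min(g) for g in groups.values())

def ga_segment(risks, n_vlans=4, pop=30, gens=200, seed=0):
    """Evolve VLAN assignments (one integer per asset) with an elitist GA."""
    rng = random.Random(seed)
    popn = [[rng.randrange(n_vlans) for _ in risks] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: fitness(a, risks))
        parents = popn[:pop // 2]                  # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(risks))
            child = a[:cut] + b[cut:]              # one-point crossover
            i = rng.randrange(len(risks))
            child[i] = rng.randrange(n_vlans)      # point mutation
            children.append(child)
        popn = parents + children
    return min(popn, key=lambda a: fitness(a, risks))
```

In a real deployment the fitness function would also encode traffic relationships and compliance constraints, and the resulting assignment would be translated into switch and firewall configuration rather than returned as a list.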
