Search Results (591)

Search Parameters:
Keywords = visual surveillance

12 pages, 400 KB  
Review
Narrow-Band Imaging for the Detection of Oral Potentially Malignant Disorders and Early-Stage Oral Squamous Cell Carcinoma
by Agata Świątek, Adrian Maj and Aida Kusiak
J. Clin. Med. 2026, 15(9), 3382; https://doi.org/10.3390/jcm15093382 - 28 Apr 2026
Abstract
Background: Early detection of oral potentially malignant disorders (OPMDs) and early-stage oral squamous cell carcinoma (OSCC) remains a major clinical challenge, as initial lesions often present with subtle or nonspecific findings during conventional white-light examination. Narrow-band imaging (NBI) enhances visualization of mucosal microvasculature and may improve the identification of dysplastic and malignant transformation. Methods: A narrative review of the literature was conducted in the PubMed, Scopus and Google Scholar databases. Studies published between January 2012 and January 2025 evaluating clinical applications of NBI in oral mucosal lesions, OPMDs, or OSCC were included. Results: NBI enhances visualization of intraepithelial papillary capillary loops (IPCLs), whose morphological alterations correlate with epithelial dysplasia and malignant transformation. Evidence suggests high diagnostic sensitivity (up to 87–100%) and specificity (approximately 83–96%) for detecting high-grade dysplasia and early OSCC. NBI also improves biopsy site selection, reduces sampling error, and supports surveillance of high-risk patients. Conclusions: NBI represents a valuable adjunctive diagnostic tool in oral medicine and dentistry. Although it does not replace histopathological examination, its integration into clinical assessment may enhance early cancer detection and improve management of patients with OPMDs. Full article
24 pages, 29548 KB  
Article
DEMC: A Diffusion-Enhanced Mutual Consistency Framework for Cross-Domain Object Detection in Optical and SAR Imagery
by Cheng Luo, Yueting Zhang, Jiayi Guo, Guangyao Zhou, Hongjian You, Peifeng Li and Xia Ning
Remote Sens. 2026, 18(9), 1358; https://doi.org/10.3390/rs18091358 - 28 Apr 2026
Abstract
Cross-domain object detection from optical to Synthetic Aperture Radar (SAR) imagery addresses the challenges of SAR data scarcity and high annotation costs, enabling crucial capabilities for persistent maritime surveillance and reconnaissance. However, the substantial modality gap resulting from distinct imaging mechanisms and severe coherent speckle noise significantly hampers knowledge transfer. Existing Unsupervised Domain Adaptation (UDA) methods, which primarily rely on adversarial feature alignment or static pseudo-labeling, struggle to replicate the physical backscattering properties of SAR data and often fall prey to confirmation bias due to intense background clutter. To overcome these limitations, this paper introduces the Diffusion-Enhanced Mutual Consistency (DEMC) framework. DEMC introduces a novel two-stage adaptation paradigm. The first stage, the Diffusion-Based Domain Alignment (DBDA) module, generates a physics-aware intermediate domain. By integrating step-efficient diffusion generation with physical refinement, this module effectively reduces the cross-modal visual discrepancy while preserving the semantic structure of the optical source. In the second stage, this paper tackles the pervasive issue of pseudo-label noise with the Dual-Student Mutual Verification (DSMV) mechanism. Guided by Cross-Agent Spatial Consensus (CASC) and Adaptive Thresholding (AIT), this mechanism dynamically refines pseudo-labels through geometric overlap validation, effectively recovering faint, low-contrast targets that would typically be discarded by standard thresholds. Extensive evaluations across four benchmark tasks (HRSC2016/ShipRSImageNet to SSDD/HRSID) demonstrate that DEMC establishes a new state-of-the-art. Notably, the framework significantly enhances detection recall and reduces omission errors in complex coastal environments, offering a robust solution for zero-tolerance, all-weather surveillance tasks. Full article
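The abstract's Cross-Agent Spatial Consensus step validates pseudo-labels by geometric overlap between two student detectors. Below is a minimal sketch of that idea; the function names and the 0.5 IoU threshold are illustrative assumptions, not the paper's released code.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def spatial_consensus(boxes_s1, boxes_s2, iou_thresh=0.5):
    """Keep a student-1 box as a pseudo-label only if some student-2
    box agrees with it geometrically (cross-agent overlap check)."""
    return [b1 for b1 in boxes_s1
            if any(iou(b1, b2) >= iou_thresh for b2 in boxes_s2)]
```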
41 pages, 16618 KB  
Article
Multi-Type Ship Detection in Complex Marine Backgrounds Using an Enhanced YOLO-Based Network
by Anran Du, Huiqi Xu and Wenqiang Yao
Sensors 2026, 26(9), 2718; https://doi.org/10.3390/s26092718 - 28 Apr 2026
Abstract
Accurate detection of ship targets in complex marine environments is fundamental to ensuring maritime security and safeguarding maritime rights. With the increasing diversity of vessel types and configurations, achieving precise identification of multiple ship classes amidst dynamic interference and cluttered backgrounds has emerged as a formidable challenge in marine surveillance. To address three pervasive issues in ship target detection—namely, high false-negative rates for small targets, inadequate feature discrimination, and imprecise localization—this paper proposes AK-DSAM-YOLOv13, a multi-scale detection algorithm specifically tailored for complex marine scenarios. Built upon the YOLOv13n architecture, the proposed algorithm implements integrated optimizations across the backbone network, neck structure, and loss function. First, a lightweight cross-scale feature extraction module, AKC3k2, is constructed by incorporating Alterable Kernel Convolutions (AKConv) to reconstruct the feature extraction path, thereby significantly enhancing the representation of multi-scale targets. Second, a Dynamic Up-Sampling Dual-Stream Attention Merging (DyDSAM) structure is designed, which integrates the DySample operator with a Dual-Stream Attention Mechanism (DSAM) to effectively suppress background clutter and improve feature fusion accuracy. Third, an Accuracy-Intersection-over-Union (AIoU) loss function is introduced to jointly optimize overlap area, center distance, and aspect ratio, enhancing localization robustness for small-scale objects. Experimental results on the self-built CM-Ships dataset, as well as the public SeaShips and McShips datasets, demonstrate that AK-DSAM-YOLOv13 significantly outperforms baseline models in detection accuracy, recall, and generalization capability while maintaining a low computational overhead. This research provides an efficient and reliable technical framework for intelligent maritime visual monitoring in complex environments. Full article
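The AIoU loss is described only at a high level. For orientation, the sketch below implements the standard Complete-IoU (CIoU) loss, which penalizes the same three quantities the abstract names — overlap area, center distance, and aspect ratio; it is a reference point, not the paper's AIoU.

```python
import math

def ciou_loss(pred, gt):
    """CIoU-style loss on [x1, y1, x2, y2] boxes: combines overlap,
    normalized center distance, and aspect-ratio consistency."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter + 1e-9)
    # Squared center distance, normalized by the enclosing-box diagonal
    cpx, cpy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cgx, cgy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    ex, ey = max(pred[2], gt[2]) - min(pred[0], gt[0]), max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = ex ** 2 + ey ** 2 + 1e-9
    rho2 = (cpx - cgx) ** 2 + (cpy - cgy) ** 2
    # Aspect-ratio consistency term
    wp, hp = pred[2] - pred[0], pred[3] - pred[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    v = (4 / math.pi ** 2) * (math.atan(wg / (hg + 1e-9)) - math.atan(wp / (hp + 1e-9))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v
```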
59 pages, 49544 KB  
Article
DeepLayer-ID: A Lightweight Multi-Domain Forensic Framework for Real-Time Deepfake Detection in Resource-Constrained UAV Sensor Platforms
by Nayef H. Alshammari and Sami Aziz Alshammari
Sensors 2026, 26(9), 2705; https://doi.org/10.3390/s26092705 - 27 Apr 2026
Abstract
Unmanned aerial vehicle (UAV) imaging systems are increasingly deployed in surveillance, infrastructure monitoring, and smart-city applications, where the integrity of captured visual data is critical. Recent advances in generative models enable highly realistic deepfake manipulations that can compromise aerial sensor streams, particularly under real-world degradations such as motion blur, sensor noise, and compression artifacts. This paper introduces DeepLayer-ID, a degradation-aware multi-domain forensic framework specifically designed for UAV sensing environments. The proposed architecture decomposes forensic evidence into complementary spatial, frequency, and residual domains. A discrete wavelet transform module captures sub-band energy inconsistencies, while high-pass residual filtering isolates sensor pattern anomalies. A lightweight transformer-based fusion mechanism adaptively integrates cross-domain representations to enhance robustness under heterogeneous acquisition conditions. To emulate operational UAV pipelines, we construct a balanced dataset of 1096 aerial frames derived from the VisDrone2019-DET validation subset, incorporating synthetic manipulations and physics-consistent degradations. The experimental results show that DeepLayer-ID achieves 97.8% accuracy and 0.991 AUC, outperforming ResNet-50 (90.9%, 0.942 AUC), XceptionNet (92.4%, 0.957 AUC), and Noiseprint CNN (93.1%, 0.964 AUC). Notably, the model maintains real-time feasibility, with only 5.4 M parameters and 9.8 ms inference latency. These findings demonstrate that structured multi-domain signal decomposition combined with attention-guided fusion provides a robust and computationally efficient solution for deepfake detection in degraded UAV sensing systems. Full article
(This article belongs to the Section Internet of Things)
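As an illustration of the wavelet component, the sketch below computes one-level DWT sub-band energies with PyWavelets — the kind of sub-band energy feature the abstract describes, not the paper's actual module.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_subband_energies(gray_frame: np.ndarray) -> dict:
    """One-level 2-D Haar DWT; return the mean energy of each sub-band.
    Inconsistent LH/HL/HH energy profiles are a common forensic cue
    for synthesized or manipulated imagery."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_frame.astype(np.float64), "haar")
    return {name: float(np.mean(band ** 2))
            for name, band in [("LL", cA), ("LH", cH), ("HL", cV), ("HH", cD)]}

# Example on a random stand-in for an aerial frame
print(dwt_subband_energies(np.random.rand(256, 256)))
```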
9 pages, 2188 KB  
Case Report
Recurrent Conjunctival Melanoma Managed with Long-Term Eye-Preserving Treatment Followed by Delayed Eyelid Metastasis: A Case Report
by Lidiya Zaduryan, Gabriela Vasileva, Elitsa Hristova, Mladena Radeva, Igor Resnick and Zornitsa Zlatarova
J. Clin. Med. 2026, 15(9), 3334; https://doi.org/10.3390/jcm15093334 - 27 Apr 2026
Abstract
Background: Conjunctival melanoma is a rare but potentially aggressive ocular surface malignancy characterized by frequent local recurrence and risk of metastatic spread. In carefully selected cases, depending on tumor extent, clinical course, and patient condition, management may require balancing oncologic control with preservation of the globe, visual function, and quality of life. Case Presentation: We report the case of a 78-year-old woman with amelanotic conjunctival melanoma of the left eye. Initial treatment consisted of wide local excision using a no-touch technique, conjunctival autograft reconstruction, and adjuvant topical Mitomycin C (MMC). During a 10-year follow-up period, the patient developed multiple local recurrences requiring repeated surgical excisions and additional MMC therapy. Despite the chronic relapsing course, useful visual function and globe preservation were maintained. In December 2024, a subcutaneous lesion of the upper eyelid was detected and histopathologically confirmed as locoregional metastasis from the primary conjunctival melanoma. Given the patient’s advanced age, preserved visual function, absence of documented distant metastatic disease, and overall clinical context, management continued with a conservative, eye-preserving approach. Conclusions: This case illustrates that prolonged eye preservation may be achievable in carefully selected patients with recurrent conjunctival melanoma through repeated conservative management. However, this strategy does not eliminate the risk of delayed progression and requires individualized decision-making together with long-term surveillance. Full article
(This article belongs to the Section Ophthalmology)
11 pages, 269 KB  
Review
Conservative Management of Upper Tract Urothelial Carcinoma: A Narrative Review
by Silvia Proietti, Cristian Axel Hernández-Gaytán, Federico De Leonardis, Stefano Gisone, Riccardo Scalia, Franco Gaboardi and Guido Giusti
J. Clin. Med. 2026, 15(9), 3304; https://doi.org/10.3390/jcm15093304 - 26 Apr 2026
Abstract
Upper tract urothelial carcinoma (UTUC) accounts for approximately 5–10% of urothelial malignancies and represents a clinically challenging disease due to its frequent presentation at advanced stages and its association with significant morbidity. Radical nephroureterectomy (RNU) with bladder cuff excision remains the standard treatment for high-risk disease; however, this approach inevitably results in loss of renal function and may significantly affect eligibility for cisplatin-based chemotherapy. In patients with imperative indications for renal preservation—including a solitary kidney, bilateral disease, or advanced chronic kidney disease—Kidney-Sparing Surgery (KSS) represents an essential therapeutic strategy. Technological advances in flexible ureteroscopy, improved visualization systems, and laser energy sources have significantly expanded the feasibility of conservative management. Ureteroscopic tumor ablation has become the cornerstone of KSS, allowing local disease control while preserving renal function. Although recurrence rates remain relatively high, repeated endoscopic treatment combined with strict surveillance protocols can achieve acceptable oncological outcomes in carefully selected patients. This narrative review summarizes the current evidence regarding conservative management of UTUC in imperative clinical situations, with particular emphasis on patient selection, endoscopic treatment modalities, laser technologies, economic implications, patient counselling, and follow-up strategies. Full article
(This article belongs to the Special Issue Novel Diagnostic and Therapeutic Approaches to Urologic Oncology)
20 pages, 1844 KB  
Article
AI-Enhanced Prognostic Model for Predicting Polyp Recurrence and Guiding Post-Polypectomy Surveillance Intervals Using the ERCPMP-V5 Dataset
by Sri Harsha Boppana, Sachin Sravan Kumar Komati, Ritwik Raj, Gautam Maddineni, Raja Chandra Chakinala, Pradeep Yarra, Venkata C. K. Sunkesula and Cyrus David Mintz
J. Clin. Med. 2026, 15(9), 3303; https://doi.org/10.3390/jcm15093303 - 26 Apr 2026
Abstract
Introduction: Colorectal cancer remains a leading cause of cancer-related morbidity and mortality, with adenomatous polyps representing a common precursor. Post-polypectomy polyp recurrence represents a significant risk of colorectal cancer, driving periodic colonoscopy surveillance and polypectomy as needed. In this study, we explore a multimodal machine learning approach that integrates endoscopic imaging with clinical and pathology data to improve recurrence risk prediction and support individualized surveillance planning. Methods: We developed and evaluated a multimodal artificial intelligence (AI) model to predict post-polypectomy colorectal polyp recurrence using the ERCPMP-v5 dataset. The cohort included 217 patients with 796 high-resolution endoscopic RGB images and 21 endoscopic videos; video data were converted to still frames at 2 frames per second. Images and frames were resized to 224 × 224 pixels and normalized. Patient-level demographic, morphological (Paris, Kudo Pit, JNET), anatomical, and pathological variables were encoded using standard scaling for continuous features and one-hot encoding for categorical features. Visual representations were extracted using a pretrained Vision Transformer backbone (ViT-Base-Patch16-224) with frozen weights. Structured metadata (79 variables) was encoded using a multilayer perceptron. A late fusion framework used image and metadata representations to generate a recurrence probability via a sigmoid classifier; probabilities were thresholded at 0.5 for binary prediction. Model performance was evaluated on a held-out test set using accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). We additionally compared fusion performance with image-only and metadata-only baselines. Predicted probabilities were translated to surveillance recommendations using risk tiers: low risk (0.00 ≤ p < 0.20), moderate risk (0.20 ≤ p < 0.50), and high risk (p ≥ 0.50). Results: On the test set, the multimodal fusion model achieved 90.4% accuracy, 86.7% precision, 83.1% recall, 84.9% F1-score, and an AUC of 0.920. The image-only model achieved 84.6% accuracy (AUC 0.880), and the metadata-only model achieved 81.9% accuracy (AUC 0.850), indicating improved performance with multimodal fusion. Risk stratification enabled surveillance recommendations of 1–3 years for low risk, 6–12 months for moderate risk, and 3–6 months for high risk. Conclusions: A late-fusion multimodal model integrating endoscopic imaging with structured clinical and pathology variables demonstrated excellent performance for predicting post-polypectomy recurrence and generated actionable risk-based surveillance intervals. This approach may support individualized follow-up planning and more efficient allocation of surveillance resources, while prioritizing timely evaluation for patients at higher predicted risk. Full article
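The reported probability-to-interval mapping is simple enough to state directly; this helper reproduces the risk tiers and surveillance windows given in the abstract.

```python
def surveillance_interval(p: float) -> str:
    """Map a predicted recurrence probability to the risk tiers and
    follow-up intervals reported in the abstract."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if p < 0.20:
        return "low risk: colonoscopy in 1-3 years"
    if p < 0.50:
        return "moderate risk: colonoscopy in 6-12 months"
    return "high risk: colonoscopy in 3-6 months"

print(surveillance_interval(0.37))  # moderate risk: colonoscopy in 6-12 months
```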
30 pages, 1431 KB  
Article
Feasibility Analysis of Static-Image-Based Traffic Accident Detection Under Domain Shift for Edge-AI Surveillance Systems
by Chien-Chung Wu and Wei-Cheng Chen
Electronics 2026, 15(9), 1803; https://doi.org/10.3390/electronics15091803 - 23 Apr 2026
Abstract
Traffic accident detection is a critical component of intelligent transportation systems (ITS), enabling timely incident response and traffic management. While most existing approaches rely on temporal information from video sequences, such methods are not always applicable in resource-constrained surveillance environments. This study investigates the feasibility of detecting traffic accidents from single static images by formulating the task as a binary classification problem. Representative architectures, including Vision Transformer (ViT), Swin Transformer, and ResNet-50, are systematically evaluated on the Car Crash Dataset (CCD) under multiple training configurations. To assess generalization capability, cross-domain evaluation is conducted using an external crash video dataset (ECVD) constructed to approximate real-world deployment conditions. Experimental results show that all models achieve strong performance under in-domain evaluation. However, cross-domain testing reveals substantial performance degradation, particularly in recall, indicating limited generalization capability under domain shift. Qualitative analysis further shows that missed detections are associated with weak visual cues, occlusion, and complex traffic environments, while false positives are caused by visually ambiguous patterns resembling accident scenarios. Unlike prior studies that primarily report performance improvements, this work provides empirical evidence that model behavior in static-image-based accident detection is governed by dataset composition rather than architectural design. Therefore, static-image-based accident detection should be interpreted as a coarse-level screening tool rather than a fully reliable decision-making system. This study highlights the importance of data-centric design and cross-domain evaluation for improving real-world applicability. Full article
(This article belongs to the Section Computer Science & Engineering)
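The study frames accident detection as single-image binary classification with standard backbones. A minimal sketch of one such baseline, assuming a PyTorch/torchvision setup (not the authors' code):

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-50 backbone with a single-logit head: accident vs. no accident
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
x = torch.randn(4, 3, 224, 224)        # batch of static frames
y = torch.tensor([1., 0., 0., 1.])     # 1 = accident present
loss = criterion(model(x).squeeze(1), y)
loss.backward()
```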
20 pages, 6708 KB  
Article
Nighttime Image Dehazing for Urban Monitoring via a Mixed-Norm Variational Model
by Xianglei Liu, Yahao Wu, Runjie Wang and Yuhang Liu
Appl. Sci. 2026, 16(8), 3929; https://doi.org/10.3390/app16083929 - 17 Apr 2026
Abstract
As modern urban systems advance, video surveillance has become indispensable for ensuring high-quality urban development. Nighttime images acquired in urban monitoring scenarios are often degraded by haze and non-uniform illumination, resulting in reduced visibility, color distortion, and blurred structural boundaries. To address these issues, this paper proposes a nighttime image dehazing framework that combines mixed-norm variational atmospheric-light estimation with adaptive boundary-constrained transmission refinement. Specifically, an L2–Lp mixed-norm regularization model is introduced to improve atmospheric-light estimation under complex nighttime illumination and suppress halo diffusion and color distortion around strong light sources. In addition, an adaptive boundary-constrained transmission refinement strategy with weighted soft-threshold shrinkage is developed to reduce residual artifacts while preserving structural edges. Experimental results on synthetic and real nighttime haze datasets demonstrate that the proposed method consistently outperforms representative state-of-the-art methods in both visual quality and quantitative metrics, showing superior robustness and restoration performance for nighttime urban monitoring applications. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
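The transmission refinement relies on weighted soft-threshold shrinkage, a standard proximal operator. A minimal numpy version follows; the paper's exact weighting scheme is not given in the abstract, so the weights here are illustrative.

```python
import numpy as np

def weighted_soft_threshold(x: np.ndarray, lam: float, w: np.ndarray) -> np.ndarray:
    """Element-wise weighted soft-threshold shrinkage: coefficients
    with larger weights are shrunk more aggressively toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam * w, 0.0)

coeffs = np.array([-0.8, 0.05, 0.3, -0.02])
weights = np.array([1.0, 2.0, 0.5, 2.0])   # e.g., shrink less near edges
print(weighted_soft_threshold(coeffs, 0.1, weights))
```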
28 pages, 4829 KB  
Article
OH-MEMA: An Integrated One Health Mixed-Effects Modeling Approach for Syndromic Surveillance
by Aseel Basheer, Parisa Masnadi Khiabani, Wolfgang Jentner, Aaron Wendelboe, Jason R. Vogel, Katrin Gaardbo Kuhn, Michael C. Wimberly, Dean Hougen and David Ebert
J. Clin. Med. 2026, 15(8), 2966; https://doi.org/10.3390/jcm15082966 - 14 Apr 2026
Abstract
Background/Objectives: Integrating heterogeneous One Health time series into transparent and usable surveillance workflows remains difficult because data preparation, modeling, and interpretation are often separated across tools. In this paper, we introduce OH-MEMA (One Health Mixed-Effects Modeling and Analytics), an interactive visual analytics framework that integrates heterogeneous One Health data streams, including human clinical outcomes, environmental factors, and wastewater surveillance data, to support syndromic surveillance and pandemic preparedness. Methods: The system enables users to upload and analyze multi-source datasets through an interactive web-based interface. The modeling component supports fixed effects for multi-source predictors, random effects for spatial, temporal, and demographic grouping variables, optional random slopes, and rolling time-series validation. Model results are visualized as time series comparing observed and predicted outcomes, with evaluation metrics including Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and correlation. To support iterative exploration, the system incorporates analytic provenance through a visual model tree that records prior configurations. Results: OH-MEMA was validated through both quantitative and qualitative evaluations. Quantitatively, mixed-effects models were assessed across multiple counties and outcomes using RMSE, MAE, and correlation, demonstrating robust predictive performance. Qualitatively, expert users, including epidemiologists and disease surveillance analysts, evaluated the system using the NASA Task Load Index and open-ended interviews, indicating improved interpretability, manageable cognitive workload, and effective workflow integration. Conclusions: OH-MEMA provides an interpretable, human-in-the-loop platform for exploratory forecasting and comparative model analysis in syndromic surveillance. The framework effectively bridges data integration, modeling, and interpretation, supporting user-centered analytical reasoning and decision-making in One Health applications. Full article
(This article belongs to the Special Issue New Advances of Infectious Disease Epidemiology)
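The modeling core — fixed effects for multi-source predictors plus random effects for grouping variables — maps directly onto a standard mixed-effects fit. A sketch with statsmodels on hypothetical county-week data (all column names are assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical weekly panel: one row per county-week
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "county": np.repeat(["A", "B", "C"], 52),
    "week": np.tile(np.arange(52), 3),
    "temp": rng.normal(15, 8, 156),            # environmental predictor
    "ww_signal": rng.normal(0, 1, 156),        # wastewater surveillance signal
})
df["cases"] = 20 + 0.5 * df["temp"] + 3 * df["ww_signal"] + rng.normal(0, 2, 156)

# Fixed effects for the predictors, random intercept per county
model = smf.mixedlm("cases ~ temp + ww_signal", df, groups=df["county"])
result = model.fit()
print(result.summary())
```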
13 pages, 2452 KB  
Article
A Robust Zn-MOF Integrating Selective Luminescence Detection and On-Site Visual Monitoring of PNP and BNPP in Water
by Jie Dong, Xiang Xiong, Xin-Yu Tian, Man Yu, Ning Wang and Jie-Zheng Li
Inorganics 2026, 14(4), 108; https://doi.org/10.3390/inorganics14040108 - 11 Apr 2026
Abstract
p-Nitrophenol (PNP) and bis(4-nitrophenyl) phosphate (BNPP), as typical persistent and toxic organic contaminants, present significant risks to both ecological systems and human health. Accurately quantifying these compounds using luminescent sensors remains a formidable task. In this study, we successfully synthesized a zinc-based metal–organic framework (Zn-MOF) that functions as a luminescent sensing material. The synthesized Zn-MOF demonstrates exceptional dual-response luminescent detection toward PNP and BNPP, with detection limits as low as 3.49 × 10⁻⁶ and 8.43 × 10⁻⁶ mol/L, respectively. The sensor maintains high selectivity and functionality even in the presence of various potentially interfering substances commonly found in complex environmental samples. Moreover, the material can be fabricated into a visual sensing film, greatly facilitating its application in on-site rapid detection scenarios. Overall, this work introduces a novel luminescent sensor platform that enables fast and reliable monitoring of PNP and BNPP in environmental contexts, demonstrating strong potential for integration into real-time surveillance and early warning systems. Full article
(This article belongs to the Section Coordination Chemistry)
22 pages, 3840 KB  
Article
An Integrated Vision–Mobile Fusion Framework for Real-Time Smart Parking Navigation
by Oleksandr Laptiev, Ananthakrishnan Thuruthel Murali, Nathalie Saab, Nihad Soltanov and Agnė Paulauskaitė-Tarasevičienė
Logistics 2026, 10(4), 84; https://doi.org/10.3390/logistics10040084 - 9 Apr 2026
Abstract
Background: Efficient parking navigation in large and dynamic parking areas requires systems that can adapt to real-time conditions and provide precise vehicle localization. Methods: This paper presents a smart car parking navigation module that integrates camera-based vehicle perception, homography-based ground-plane localization, mobile GNSS positioning, and dynamic route planning into a unified framework. Instance segmentation (YOLOv8n-seg) is used to detect vehicles and extract ground-contact regions, which are associated with parking slots defined in a GeoJSON-based site model. Mobile GNSS data are fused with visual observations via spatio-temporal proximity scoring to enable robust user–vehicle matching without optical identification. An A* routing algorithm dynamically computes and updates navigation paths, adapting to lane obstructions and slot availability in real time. Results: Experimental evaluation on a real six-camera parking facility shows that the proposed segmentation-based localization reduces mean error from 0.732 m to 0.283 m (61.3% improvement), with the 95th-percentile error dropping from 1.892 m to 0.908 m, and outperforms the bounding-box baseline in 85.3% of detections. Conclusions: These results demonstrate that sub-meter vehicle localization and reliable user–vehicle association are achievable using standard surveillance cameras without specialized infrastructure, offering a scalable and cost-effective solution for intelligent parking navigation. Full article
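Homography-based ground-plane localization, as named in the abstract, is a standard OpenCV operation. A minimal sketch with hypothetical calibration points (not the deployed system's values):

```python
import numpy as np
import cv2

# Four image points (pixels) and their known ground coordinates (meters)
# for one camera -- hypothetical calibration correspondences.
img_pts = np.float32([[100, 500], [1180, 480], [900, 160], [300, 170]])
gnd_pts = np.float32([[0, 0], [20, 0], [20, 30], [0, 30]])

H, _ = cv2.findHomography(img_pts, gnd_pts)

# Project a detected vehicle's ground-contact point onto the plane
contact_px = np.float32([[[640, 420]]])          # shape (1, 1, 2)
ground_xy = cv2.perspectiveTransform(contact_px, H)
print(ground_xy.reshape(-1))                     # [x_m, y_m] on the lot
```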
17 pages, 1372 KB  
Article
GastroMalign: Vision Transformer-Based Framework for Early Detection and Malignancy-Risk Stratification for High-Risk Gastrointestinal Lesions
by Sri Harsha Boppana, Sachin Sravan Kumar Komati, Medha Sharath, Aditya Chandrashekar, Gautam Maddineni, Raja Chandra Chakinala, Pradeep Yarra and C. David Mintz
J. Clin. Med. 2026, 15(7), 2701; https://doi.org/10.3390/jcm15072701 - 2 Apr 2026
Abstract
Background: Current artificial intelligence (AI) systems in gastrointestinal (GI) endoscopy primarily emphasize binary detection or static classification, providing limited support for the graded assessment of malignant potential that underpins clinical decision-making. We developed GastroMalign, a transformer-based framework designed to stratify GI lesions according to ordinal disease severity while maintaining clinical interpretability, addressing this unmet need in endoscopic risk assessment. Methods: This retrospective development and validation study used the publicly available GastroVision dataset, comprising 8000 de-identified endoscopic still images from the upper and lower gastrointestinal tract, including the esophagus, stomach, duodenum, colon, rectum, and terminal ileum. GastroMalign integrates a Vision Transformer (ViT) encoder with a Sequential Feature Learner that explicitly models ordinal disease severity along a benign-to-malignant spectrum. The framework produces both categorical risk classification and a continuous malignancy risk score. Images were stratified into training (80%), validation (10%), and test (10%) sets. Performance was compared with convolutional neural network (CNN) baselines and a Swin Transformer. Interpretability was assessed using Score-CAM visualizations reviewed by blinded expert endoscopists. Results: On the held-out test set (n = 800 images), GastroMalign achieved an overall accuracy of 80.06%, precision of 79.65%, recall of 80.06%, and F1-score of 79.17%, with a micro-averaged AUC of 0.98. In comparison, ResNet-50 and DenseNet-121 achieved accuracies of 32.42% and 36.77%, respectively, while the Swin Transformer achieved 60.56% accuracy (AUC = 0.93). Ablation analyses demonstrated a 17% absolute reduction in High-Risk lesion recall when the progression-aware module was removed. Continuous malignancy risk scores increased monotonically across ordinal classes, with mean values < 0.18 for Benign and >0.72 for High-Risk/Malignant lesions. Score-CAM visualizations demonstrated 92% overlap with clinician-annotated lesion regions. Conclusions: GastroMalign delivers an interpretable, progression-aware AI framework for GI lesion risk stratification that outperforms existing CNN- and transformer-based models. Clinically, GastroMalign is intended as an adjunct decision-support tool during endoscopic review to standardize lesion risk stratification (benign to malignant spectrum), support management decisions (biopsy vs. resection vs. surveillance), and reduce operator-dependent variability by pairing ordinal risk outputs with interpretable visual explanations. Full article
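The Sequential Feature Learner is not described in implementable detail. One common way to obtain both an ordinal class and a continuous risk score from ViT features is a CORAL-style ordinal head, sketched below purely as a stand-in under that assumption.

```python
import torch
import torch.nn as nn

class OrdinalHead(nn.Module):
    """CORAL-style ordinal head: one shared logit plus K-1 learned
    thresholds, which encourages rank-consistent severity predictions
    along a benign-to-malignant spectrum."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_dim, 1, bias=False)
        self.biases = nn.Parameter(torch.zeros(num_classes - 1))

    def forward(self, feats):
        logit = self.fc(feats)                       # (B, 1)
        probs = torch.sigmoid(logit - self.biases)   # (B, K-1): P(y > k)
        risk_score = probs.mean(dim=1)               # continuous 0-1 score
        severity = (probs > 0.5).sum(dim=1)          # ordinal class index
        return risk_score, severity

head = OrdinalHead(in_dim=768, num_classes=4)        # e.g., ViT-Base features
score, cls = head(torch.randn(2, 768))
```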
13 pages, 3076 KB  
Article
A Rapid Visual Detection Method for Fasciola hepatica Based on RAA-CRISPR/Cas12b
by Jiangying Li, Tao Zhang, Jingkai Ai, Zijuan Zhao, Zhi Li, Yong Fu, Dan Jia, Hong Duo, Xiuying Shen, Ru Meng, Yingna Jian and Xueyong Zhang
Animals 2026, 16(7), 1093; https://doi.org/10.3390/ani16071093 - 2 Apr 2026
Abstract
Fascioliasis, a globally prevalent zoonosis, severely threatens public health and livestock security. Current diagnostic approaches, hindered by the need for sophisticated instrumentation and specialized expertise, are inadequate for on-site surveillance in resource-constrained settings. This study developed a rapid, visual detection assay for Fasciola hepatica via recombinase-aided amplification (RAA) integrated with CRISPR/Cas12b, addressing critical equipment and operational constraints. Targeting a specific mitochondrial DNA fragment of F. hepatica, recombinant plasmid standards were constructed, RAA primers and sgRNA optimized, and three detection modalities (real-time fluorescence, UV lamp, test strip) integrated. Clinical validation against PCR demonstrated 45 min turnaround time, F. hepatica-specific positivity, and real-time fluorescence sensitivity of 2.6 copies/μL. Results showed high concordance with PCR and qPCR, with substantially reduced assay duration and streamlined workflow. This highly sensitive, specific, multi-visualized method overcomes limitations of conventional techniques, offering an efficient, field-deployable tool for fascioliasis surveillance and control in grassroots and pastoral regions. Full article
(This article belongs to the Section Veterinary Clinical Studies)
21 pages, 6938 KB  
Article
IllumiSIFT: A Cascade Framework for DoG Pyramid Learning in Darkness
by Dewan Fahim Noor, Mohammed Rashid Chowdhury and Sadia Sikder
Sensors 2026, 26(7), 2147; https://doi.org/10.3390/s26072147 - 31 Mar 2026
Abstract
In visual object recognition problems, low light exposure and low-quality images present significant challenges in navigation, surveillance, and image retrieval applications, where reliable feature detection is critical. Although recent deep learning–based image enhancement methods improve visual quality in the pixel domain, these improvements often do not translate to downstream machine vision performance, as important local gradient structures required for stable key point detection are frequently suppressed. In this work, we propose IllumiSIFT, a task-driven dark image enhancement framework that focuses on preserving Scale-Invariant Feature Transform (SIFT) key points by directly learning the Difference-of-Gaussian (DoG) pyramid from low-light image inputs. Unlike conventional pixel-level recovery approaches, the proposed method employs a cascaded residual learning architecture to predict Gaussian-blurred representations at multiple scales, enabling the generation of enhanced DoG images that are inherently aligned with the SIFT detection process. Extensive experiments conducted on the CDVS, Oxford Buildings, and Paris datasets demonstrate that the proposed approach consistently outperforms state-of-the-art enhancement methods in downstream SIFT matching performance under severe low-light conditions. These results confirm that gradient-domain, task-aligned enhancement provides a more effective and practical solution for recognition-centric low-light imaging applications. Full article
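For reference, the DoG pyramid that IllumiSIFT learns to predict is conventionally computed by differencing Gaussian blurs at geometrically spaced scales. A minimal OpenCV sketch, with a random array standing in for a low-light frame:

```python
import cv2
import numpy as np

def dog_pyramid(gray: np.ndarray, sigma0: float = 1.6, levels: int = 6):
    """Blur at geometrically spaced sigmas; adjacent differences form
    the Difference-of-Gaussian stack that SIFT scans for extrema."""
    k = 2 ** (1.0 / 3.0)  # three scales per octave, as in SIFT
    blurred = [cv2.GaussianBlur(gray, (0, 0), sigma0 * k ** i)
               for i in range(levels)]
    return [blurred[i + 1] - blurred[i] for i in range(levels - 1)]

gray = np.random.rand(256, 256).astype(np.float32)  # stand-in frame
dogs = dog_pyramid(gray)
print(len(dogs), dogs[0].shape)                     # 5 (256, 256)
```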