Search Results (3,713)

Search Parameters:
Keywords = video image

21 pages, 1535 KB  
Article
Nighttime Image Dehazing for Urban Monitoring via a Mixed-Norm Variational Model
by Xianglei Liu, Yahao Wu, Runjie Wang and Yuhang Liu
Appl. Sci. 2026, 16(8), 3929; https://doi.org/10.3390/app16083929 - 17 Apr 2026
Abstract
As modern urban systems advance, video surveillance has become indispensable for ensuring high-quality urban development. Nighttime images acquired in urban monitoring scenarios are often degraded by haze and non-uniform illumination, resulting in reduced visibility, color distortion, and blurred structural boundaries. To address these issues, this paper proposes a nighttime image dehazing framework that combines mixed-norm variational atmospheric-light estimation with adaptive boundary-constrained transmission refinement. Specifically, an L2−Lp mixed-norm regularization model is introduced to improve atmospheric-light estimation under complex nighttime illumination and suppress halo diffusion and color distortion around strong light sources. In addition, an adaptive boundary-constrained transmission refinement strategy with weighted soft-threshold shrinkage is developed to reduce residual artifacts while preserving structural edges. Experimental results on synthetic and real nighttime haze datasets demonstrate that the proposed method consistently outperforms representative state-of-the-art methods in both visual quality and quantitative metrics, showing superior robustness and restoration performance for nighttime urban monitoring applications.
(This article belongs to the Section Computing and Artificial Intelligence)
21 pages, 5042 KB  
Article
Real-Time Traffic Data Analysis on Resource-Constrained Edge Devices
by Dušan Bogićević, Dragan Stojanović, Milan Gnjatović, Ivan Tot and Boriša Jovanović
Electronics 2026, 15(8), 1703; https://doi.org/10.3390/electronics15081703 - 17 Apr 2026
Abstract
This paper evaluates the feasibility of real-time traffic data analysis on resource-constrained edge devices using a hybrid processing approach. The proposed architecture integrates an LF Edge eKuiper complex event processing engine, deployed within Docker containers, with a native YOLO deep learning model for pedestrian detection. The model processes video frames at 480 × 240 resolution on CPU-only Raspberry Pi devices, achieving up to 30 FPS. The research specifically investigates the performance limits of Raspberry Pi 3 and Raspberry Pi 4 platforms when simultaneously processing high-throughput simulated traffic data from the SUMO simulator (Belgrade scenario, with vehicle distributions and densities adjusted for small, medium, and large traffic volumes) and live video streams, respectively. Experimental results indicate that while both platforms can process up to 2600 messages per second in the settings without image processing, the introduction of a camera sensor reveals a significant hardware bottleneck. The Raspberry Pi 4 maintains robust real-time performance with an average complex event detection latency of less than 500 ms. In contrast, the Raspberry Pi 3 exhibits severe performance degradation, with image processing delays exceeding 8 s, rendering it unsuitable for real-time safety alerts. The findings demonstrate that with appropriate hardware selection, edge-based complex event processing can successfully detect critical safety events, such as sudden vehicle acceleration near pedestrians, without relying on cloud infrastructure.
29 pages, 3416 KB  
Article
Enhancing Collaborative AI Learning: A Blockchain-Secured, Edge-Enabled Platform for Multimodal Education in IIoT Environments
by Ahsan Rafiq, Eduard Melnik, Alexey Samoylov, Alexander Kozlovskiy and Irina Safronenkova
Big Data Cogn. Comput. 2026, 10(4), 123; https://doi.org/10.3390/bdcc10040123 - 17 Apr 2026
Abstract
As industries deploy more connected devices in factories, warehouses, and smart facilities, the need for artificial intelligence (AI) systems that can operate securely in distributed, data-intensive environments is growing. Traditional centralized learning and online education platforms struggle when students and systems have to process real-time streams (sensors, video, text) with strict latency and privacy requirements. To address this challenge, a blockchain-secured, edge-enabled multimodal federated learning framework tailored for Industrial IoT (IIoT) environments is proposed. The model integrates four key layers: (i) a blockchain layer that provides credentialing, transparency, and token-based incentives; (ii) a multimodal community layer that supports group formation, peer consensus, and cross-modal learning across text, images, audio, and sensor data; (iii) an edge computing layer that enables low-latency task offloading and secure training within Intel SGX enclaves; and (iv) a data layer that applies pre-processing, differential privacy, and synthetic augmentation to safeguard sensitive information. Experiments on industrial multimodal datasets demonstrate 42% faster model aggregation, 78.9% multimodal accuracy, and 1.9% accuracy loss under ε = 1.0 differential privacy. This shows a scalable and practical path for decentralized AI training in next-generation IIoT systems, confirming the possibility of technical support for educational processes. However, the conducted research requires a validation of pedagogical effectiveness.
11 pages, 500 KB  
Proceeding Paper
The Role of Visual Education in Training Processes: A Systematic Review of the Use of Visual Tools to Enhance Learning and Promote the Development of Soft Skills
by Valentina Berardinetti
Proceedings 2026, 139(1), 6; https://doi.org/10.3390/proceedings2026139006 - 17 Apr 2026
Abstract
In recent years, Visual Education has emerged as an innovative and interdisciplinary teaching approach aimed at promoting meaningful learning through the conscious use of visual tools and languages. This educational paradigm helps to facilitate the understanding of complex concepts, translating them into clear and intuitive visual representations, while enhancing memorisation skills, critical information processing and the practical application of acquired knowledge. This systematic review, conducted according to the PRISMA (2020) protocol, analyses the most recent empirical evidence on the effectiveness of Visual Education in educational contexts. The main objective is to assess how the intentional use of visual tools—images, concept maps, educational videos, interactive digital materials, and virtual manipulatives—contributes to enhancing learning processes and developing transversal skills. Through a comparative analysis of fourteen international contributions published between 2020 and 2025, selected from the Scopus, Web of Science and EBSCO databases, the research highlights how Visual Education significantly influences the improvement of academic performance, motivation and cognitive and emotional engagement of students. The results also confirm the inclusive function of visual teaching, which can encourage participation, self-esteem and cooperation even in individuals with special educational needs. The discussion emphasises the need for the systematic integration of Visual Education into school curricula as a strategy to enhance soft skills and promote more equitable, effective learning geared towards the integral development of the individual.
18 pages, 1586 KB  
Article
Fractal Duffing Oscillators with Two Degrees of Freedom and Cubic–Quintic Nonlinear Stiffness
by Guozhong Xiu, Jihuan He, Yusry O. El-Dib and Haifa A. Alyousef
Fractal Fract. 2026, 10(4), 265; https://doi.org/10.3390/fractalfract10040265 - 17 Apr 2026
Abstract
The harmonic equivalent method is a non-perturbative approach to nonlinear vibration issues, aiming to create linearly coupled systems from coupled vibrations. However, there is still much to be discovered about managing interconnected nonlinear components. This paper examines the nonlinear components of a fractal-connected system and offers suggestions. This paper explores insights into the principles and uses of nonlinear systems in science and engineering by investigating the dynamic behavior of a connected cubic–quintic damping fractal system analytically using an innovative approach to analytical examination. A two-scale transformation and reformulation of the system into fractal form simplify its governing equations for dynamic and stability analysis. Two analytical scopes are presented: one decouples nonlinear systems using weighted averaging functions, and the other converts even nonlinearities into odd terms using El-Dib’s frequency formulas for linear representation, enabling an equivalent linear representation of the system. The resilience of the decoupled system is verified by numerical simulations using Mathematica, which shows high agreement and minimal relative errors. It also accurately reflects dynamic behavior. Additionally, the work uses the bridging techniques of El-Dib and Elgazery to convert a linear damping fractal coupled system into a classical continuous-space form. A scaling fractal factor is made possible by re-expressing the fractal structure using pseudo-dimensional parameters. The linearly linked damping system has an exact analytical solution. The paper provides valuable insights into the design and control of coupled nonlinear oscillatory systems by validating analytical solutions through numerical simulations using Mathematica.
(This article belongs to the Section Mathematical Physics)
6 pages, 1809 KB  
Proceeding Paper
Real-Time Classification of Guinea Pig Using You Only Look Once Version 9-Small and Raspberry Pi 5
by Jethro Ray P. Antiojo, John Patrick B. Bonilla and John Paul T. Cruz
Eng. Proc. 2026, 134(1), 59; https://doi.org/10.3390/engproc2026134059 - 17 Apr 2026
Abstract
We developed a real-time guinea pig breed classification system using You Only Look Once Version 9 (YOLOv9)-small, deployed on a Raspberry Pi 5 with Camera Module 3 and Hailo-8L acceleration module. The system targeted three breeds, Abyssinian, American, and Peruvian, using a dataset of 4500 images split into a 70:20:10 ratio for training, validation, and testing. After optimization for Hailo-8L, the model was tested on live samples, with hamsters included as an unknown class. A total of 600 frame blocks were extracted from the video input and analyzed using a multi-class confusion matrix. Results showed an 89% overall accuracy (94.67% for Abyssinian, 94.33% for American, 98.67% for Peruvian, and 90.33% for unknown classification accuracy). The results showed the feasibility of deploying YOLOv9-small on embedded devices for accurate and real-time animal classification.
12 pages, 991 KB  
Review
Artificial Intelligence in Cardiac Amyloidosis: A State-of-the-Art Review
by Syed Bukhari
J. Clin. Med. 2026, 15(8), 3037; https://doi.org/10.3390/jcm15083037 - 16 Apr 2026
Abstract
Cardiac amyloidosis (CA) remains underrecognized due to overlapping features with other cardiovascular conditions, including hypertrophic cardiomyopathy and hypertensive heart disease. Certain ‘red flag’ features across the clinical and imaging spectrum help identify CA. However, these features are often absent, subtle, or inconsistently recognized, particularly in early disease and in atypical phenotypes. This leads to frequent delays in diagnosis and presentation at advanced stages. Artificial intelligence (AI) offers a promising approach to detect subtle disease signatures by integrating multimodal and longitudinal data beyond human pattern recognition. AI-enhanced electrocardiography has emerged as a scalable screening tool, demonstrating high diagnostic performance and enabling earlier detection. In parallel, echocardiographic AI has evolved toward video-based analysis, improving standardization and reducing inter-reader variability. Similarly, AI applications in cardiac magnetic resonance and nuclear scintigraphy allow for automated quantification and more reproducible assessment of amyloid burden. Beyond diagnosis, emerging models support disease phenotyping, risk stratification, and treatment monitoring. This review synthesizes current applications of AI across multimodal testing in the evaluation and diagnosis of CA.
(This article belongs to the Special Issue Symptoms and Treatment of Cardiac Amyloidosis)
20 pages, 10357 KB  
Article
A Comparative Benchmark of Face Detection Models for Noisy and Dynamic Online Class Environments
by Cesar Isaza, Pamela Rocío Ibarra Tapia, Cristian Felipe Ramirez-Gutierrez, Jonny Paul Zavala de Paz, Jose Amilcar Rizzo Sierra and Karina Anaya
Future Internet 2026, 18(4), 208; https://doi.org/10.3390/fi18040208 - 15 Apr 2026
Abstract
Monitoring students’ on-screen availability is increasingly critical for analyzing participation patterns in synchronous online learning, especially under videoconferencing conditions characterized by compressed video streams, low-resolution face regions, fluctuating bandwidth, and dynamically reconfigured grid layouts. This study introduces a practical computer vision pipeline that integrates deep learning-based face detection, lightweight embedding-based identity matching, and frame-level temporal aggregation to estimate students’ visual presence (VP) during live online classes. A real-world dataset comprising 27 participants and 16,200 frames was collected under authentic conditions, including codec compression, variable image quality, and dynamic layout changes. Four widely used face detection models (Haar Cascade, DSFD, MTCNN, and YuNet) were benchmarked on noisy and low-quality images. Quantitative evaluation on a manually annotated subset of 270 frames demonstrates that MTCNN and YuNet yield lower average VP estimation errors (27.63% and 22.20%, respectively) compared to Haar Cascade (75.34%) and DSFD (47.14%), with YuNet also achieving the shortest average processing time of 0.069 s per frame. While the pipeline is intentionally streamlined to facilitate practical use by instructors, the study provides clearly defined steps and parameter settings, establishing a reproducible procedure for benchmarking face detection performance in synchronous online class environments.
11 pages, 1444 KB  
Article
Bubbles of the Dying: Geography and Displacement, History and Erasure
by Nikos Papastergiadis
Arts 2026, 15(4), 80; https://doi.org/10.3390/arts15040080 - 14 Apr 2026
Abstract
In this article, I will use the ecological approach to explore the recent videos of Pinar Öğrenci. I will focus on two works: Agit (2022) and Cemetery of the Nameless (2025). In the latter work, there is a complex examination of the interplay between the precarious paths taken by refugees and the climate change crisis. She also explores the multiple layers of history and memorialization in sites that have been scarred by genocide. In Cemetery of the Nameless (2025), Pinar establishes an analogy between missing bodies and the contamination of the water of Lake Van. However, this connection is not linear and there is no direct cause and effect; Lake Van was meant to be a transit zone for the refugees, not a cemetery. I will argue that the function of analogy is in its suggestion of comparisons, rather than the establishment of equivalence. Öğrenci thereby puts the analogy to work in a dual manner—it both amplifies and concentrates our attention. We listen to the narratives of migration while looking at the scenes caused by climate change. The image broadens the horizon of the narrative, and the voice sucks the gaze into a dark hole. In this manner, Öğrenci’s art of witnessing, which both combines and separates voice and image, amplifies and concentrates the transfer of information. I will also frame this commentary on the artworks with a broader discussion on the politics of care and memorialization.
(This article belongs to the Special Issue Rethinking Art History and Culture: Defining an Ecological Approach)
17 pages, 3800 KB  
Article
Ureteral Orifice Detection in Ureteroscopic Images Based on Large-Kernel Convolutional Neural Networks and Attention-Based Feature Fusion
by Liang Li, Chen-Yi Jiang, Xing-Jie Wang, Yuan-Jun Wang and Jian Zhuo
Bioengineering 2026, 13(4), 459; https://doi.org/10.3390/bioengineering13040459 - 14 Apr 2026
Abstract
Objective: To enhance the information modeling capacity of large-kernel convolutional neural networks and to build a ureteral orifice detection framework for ureteroscopic imaging. Methods: A retrospective dataset of ureteroscopic images from 222 patients was collected. The patients were randomly divided into training and testing sets at a ratio of 7:3. Initially, video files were converted into image frames, and feature-relevant images were manually labeled by physicians. Subsequently, a ConvNeXt-based backbone augmented with squeeze-and-excitation (SE) modules was employed to extract diverse deep features. SCConv modules were incorporated across stages to strengthen the network’s feature extraction performance. Lastly, enhanced spatial excitation attention mechanisms were cascaded to achieve superior feature fusion and detection accuracy. Comparative experiments were conducted against baseline models, including ConvNeXt, assessing accuracy, computational overhead, and inference latency. Results: On a test set of 491 ureteroscopic images, all models achieved mAP@50 values above 0.75, whereas the proposed network achieved 0.890, markedly exceeding baseline performance. The model operated at 20 ms per frame, achieving a frame rate of 50 FPS. Conclusions: We developed an improved deep learning framework based on large-kernel convolutional networks for real-time ureteral orifice detection in endoscopic scenarios. This system achieves a favorable balance between detection accuracy and real-time efficiency. The method demonstrates significant potential as a training and feedback tool for residents and junior urologists in clinical environments.
26 pages, 4138 KB  
Article
Self-Supervised Cascade Denoising Auto-Encoder for Accurate Spatial Positioning of Target by Fusing Uncalibrated Video and Low-Cost GNSS
by Xiaofei Zeng, Ruliang He, Songchen Han, Wei Li, Menglong Yang and Binbin Liang
Remote Sens. 2026, 18(8), 1161; https://doi.org/10.3390/rs18081161 - 13 Apr 2026
Abstract
Accurate measurement of the spatial position of targets in a fixed camera is critical in remote sensing applications. Visual spatial positioning methods that rely solely on images are susceptible to adverse factors such as inaccurate camera calibration, imprecise image target detection, and incorrect feature point selection. Complementary to images, the ubiquitous Global Navigation Satellite System (GNSS) data can provide spatial positions of targets, but most of them are low-cost GNSSs with significant positioning noise. In order to fuse these two valuable but flawed positioning measurements to improve the accuracy and stability of spatial positioning, we propose a deep learning multi-modal spatial positioning method by fusing sequential uncalibrated video images and low-cost GNSSs. Firstly, a self-supervised cascade denoising auto-encoder (SCDAE) architecture is built to endow the auto-encoder with robustness to noise in the raw inputs. Then, based on the SCDAE and Bayesian optimal estimation, a Bayesian self-supervised multi-modal fusion positioning method SCDAE-MFP is presented to achieve accurate and stable spatial positioning by self-supervised manifold learning. Specifically, to provide visual self-supervision to the SCDAE-MFP, a visual position denoising auto-encoder module based on dual unsupervised learning is proposed. Extensive experimental results on public datasets showed that SCDAE-MFP outperformed five other classical and state-of-the-art baseline methods by an average of 56.79% in reducing positioning errors.
(This article belongs to the Special Issue GNSS and Multi-Sensor Integrated Precise Positioning and Applications)
24 pages, 571 KB  
Article
Color Transformations Resulting in Loss of Performance in Modern Video Compression Software Systems
by Marek Domański, Adam Grzelka and Olgierd Stankiewicz
Information 2026, 17(4), 366; https://doi.org/10.3390/info17040366 - 13 Apr 2026
Abstract
Modern video compression is implemented in complex software systems that reuse software modules from various sources. This is particularly evident in experimental software systems designed for researching and standardizing new compression technologies. These systems often incorporate software modules operating in different color spaces. For example, AI-based techniques are often used in video coding experiments. The corresponding software modules often operate on RGB representations, while other modules operate on YCbCr components. In this study, we demonstrate that the quality loss resulting from color transformations is comparable to the respective quantization noise. Consecutive cycles of color transformations do not result in significant additional degradation. However, for image compression, very different results are obtained in different color representations. This aspect must be carefully considered in compression research. This paper supports these considerations with extensive experimental results in the context of ITU Recommendations BT.709 and BT.2020, as well as AVC and HEVC compression.
(This article belongs to the Special Issue Signal Processing and Machine Learning, 2nd Edition)
20 pages, 1491 KB  
Systematic Review
Digital Imaging Technologies for Forensic Orofacial Identification: A Systematic Review and Research Agenda
by Sofia Viegas, Rodrigo Azenha-Gomes, João Abreu, Tiago Nunes and Ana Corte-Real
Appl. Sci. 2026, 16(8), 3766; https://doi.org/10.3390/app16083766 - 12 Apr 2026
Abstract
This systematic review critically examines the use of 2D and 3D digital imaging technologies of the face and teeth, with and without integration of artificial intelligence, for human identification in forensic and medicolegal contexts. Following PRISMA 2020 guidelines, Scopus, PubMed and Web of Science were systematically searched, identifying 26 studies published between 2011 and 2025 that met predefined eligibility criteria framed by a PECO-style question. Eighteen studies focused on facial imaging, six on dental imaging and two on integrated orofacial workflows, using digital photography, CCTV/video, 3D surface imaging, intraoral scanners, and three-dimensional superimposition methods, sometimes combined with classical algorithms and deep learning models. In controlled or semi-controlled settings, state-of-the-art facial algorithms often reported very high accuracy, with values up to 99.85%. By contrast, studies using real CCTV or other challenging forensic imagery showed more variable performance, with accuracies ranging from about 72.8% to 96.6%. Dental and orofacial studies reported 100% correct identifications for 3D superimposition of intraoral scans in small samples, and around 83% accuracy for automated AI-based dental identification. Crucially, fulfilling the promise of a true orofacial approach, this review proposes a structured research agenda focused on creating realistic multi-modal databases, standardizing protocols, and implementing probabilistic reporting (likelihood ratios) to guide future validation and legal admissibility.
(This article belongs to the Special Issue State-of-the-Art Digital Dentistry)
15 pages, 1467 KB  
Article
Effects of Ripasudil Hydrochloride Hydrate-Brimonidine Tartrate Fixed-Dose Combination Using Rho Kinase Inhibitor and Alpha2 Adrenergic Receptor Agonist on Aqueous Column in the Episcleral Vein: A Randomized, Double-Masked, Crossover Clinical Trial (ROCK Alpha-Aqua Study)
by Marie Suzuki, Shogo Arimura, Kentaro Iwasaki, Yusuke Orii, Hiroshi Kakimoto, Ryohei Komori, Shigeo Yamamura and Masaru Inatani
J. Clin. Med. 2026, 15(8), 2880; https://doi.org/10.3390/jcm15082880 - 10 Apr 2026
Abstract
Background/Objectives: Rho-associated protein kinase inhibitors reduce intraocular pressure (IOP) by enhancing aqueous humor outflow through the trabecular meshwork–Schlemm’s canal pathway. However, it remains unclear whether the fixed-dose combination of ripasudil hydrochloride hydrate and brimonidine tartrate (GLAALPHA) enhances conventional aqueous outflow in vivo. Methods: This single-center randomized clinical trial included healthy adult volunteers who received GLAALPHA, a brimonidine tartrate–brinzolamide fixed-dose combination (Ailamide), or brimonidine tartrate monotherapy (Aiphagan) in a crossover sequence. The aqueous column width in the episcleral veins was assessed at baseline and at 2 h (primary outcome) and 8 h using hemoglobin video imaging. Results: Among 24 participants, analyses included 23 GLAALPHA-treated eyes, 21 Ailamide-treated eyes, and 22 Aiphagan-treated eyes. Two hours after instillation, the aqueous column width significantly increased from baseline only in the GLAALPHA group (p = 0.002). The percent increase in the aqueous column width at 2 h was significantly greater with GLAALPHA than with Ailamide (p = 0.039) and not significantly different between GLAALPHA and Aiphagan (p = 0.114). At 8 h, the aqueous column width did not differ from the baseline in any group. Conclusions: In healthy adult eyes, GLAALPHA significantly increased the aqueous column width in the episcleral veins 2 h after instillation, indicating enhanced conventional aqueous outflow. These findings provide evidence that GLAALPHA promotes trabecular outflow beyond the effects of brimonidine tartrate-containing comparators and offer mechanistic insights into its action.
(This article belongs to the Section Ophthalmology)
21 pages, 1320 KB  
Article
Adaptive Decision Fusion in Probability Space for Pedestrian Gender Recognition
by Lei Cai, Huijie Zheng, Fang Ruan, Feng Chen, Wenjie Xiang, Qi Lin and Yifan Shi
Appl. Sci. 2026, 16(8), 3640; https://doi.org/10.3390/app16083640 - 8 Apr 2026
Abstract
Pedestrian gender recognition plays an important role in pedestrian analysis and intelligent video applications, for example, in demographic statistics, soft biometric analysis, and context-aware person retrieval. However, it remains a challenging task owing to viewpoint variations, illumination changes, occlusions, and low image quality in real-world imagery. To address these issues, an effective adaptive decision fusion framework, termed the Decision Fusion Learning Network (DFLN), is proposed in this paper. The key novel aspect of DFLN is that it effectively explores both an appearance-centered view that emphasizes detailed texture and clothing information and a structure-centered view that captures rich contour and structural information for pedestrian gender recognition. To realize DFLN, a Parallel CNN Prediction Probability Learning Module (PCNNM) is first constructed to independently learn modality-specific probabilities from color images and edge maps. Subsequently, a learnable Decision Fusion Module (DFM) is designed to fuse the modality-specific probabilities and explore their complementary merits for realizing accurate pedestrian gender recognition. The DFM can be easily coupled with the PCNNM, forming an end-to-end decision fusion learning framework that simultaneously learns the feature representations and carries out adaptive decision fusion. Experiments on two pedestrian benchmark datasets, named PETA and PA-100K, show that DFLN achieves competitive or superior performance compared with several state-of-the-art pedestrian gender recognition methods. Extensive experimental analysis further confirms the effectiveness of the proposed decision fusion strategy and its favorable generalization ability under domain shift.