Search Results (133)

Search Parameters:
Keywords = image editing models

19 pages, 1017 KB  
Review
Advancements in Hematopoietic Stem Cell Therapy: From Biological Pathways to Emerging Therapeutic Strategies
by Viviana Cortiana, Harshal Chorya, Rabab Hunaid Abbas, Jade Gambill, Adhith Theyver, Chandler H. Park and Yan Leyfman
Therapeutics 2025, 2(4), 18; https://doi.org/10.3390/therapeutics2040018 - 15 Oct 2025
Viewed by 336
Abstract
Hematopoietic stem cell (HSC) therapy remains essential in treating blood disorders, autoimmune diseases, neurodegenerative conditions, and cancers. Despite its potential, challenges arise from the inherent heterogeneity of HSCs and the complexity of their regulatory niche. Recent advancements in single-cell RNA sequencing and chromatin accessibility sequencing have provided deeper insights into HSC markers and chromatin dynamics, highlighting the intricate balance between intrinsic and extrinsic regulatory mechanisms. Zebrafish models have emerged as valuable tools in HSC research, particularly through live imaging and cellular barcoding techniques. These models have allowed us to describe critical interactions between HSCs and embryonic macrophages, involving reactive oxygen species and calreticulin signaling. These interactions are essential for ensuring HSC quality and proper differentiation, with implications for improving HSC transplant outcomes. Furthermore, this review examines clonal hematopoiesis, with a focus on mutations in epigenetic regulators such as DNMT3A, TET2, and ASXL1, which elevate the risk of myelodysplastic syndromes and acute myeloid leukemia. Emerging technologies, including in vivo cellular barcoding and CRISPR-Cas9 gene editing, are being investigated to enhance clonal diversity and target specific mutations, offering potential strategies to mitigate these risks. Additionally, macrophages play a pivotal role in maintaining HSC clonality and ensuring niche localization. Interactions mediated by factors such as VCAM-1 and CXCL12/CXCR4 signaling are crucial for HSC homing and the stress response, opening new therapeutic avenues for enhancing HSC transplantation success and addressing clonal hematopoiesis. This review synthesizes findings from zebrafish models, cutting-edge sequencing technologies, and novel therapeutic strategies, offering a comprehensive framework for advancing HSC biology and improving clinical outcomes in stem cell therapy and the treatment of hematologic diseases.

24 pages, 1149 KB  
Review
Shaping Architecture with Generative Artificial Intelligence: Deep Learning Models in Architectural Design Workflow
by Socrates Yiannoudes
Architecture 2025, 5(4), 94; https://doi.org/10.3390/architecture5040094 - 10 Oct 2025
Viewed by 1407
Abstract
Deep-learning generative AI promises to transform architectural design, yet its practical applicability and readiness for professional workflows remain unclear. This study presents a systematic review conducted in accordance with PRISMA 2020 guidelines, synthesizing peer-reviewed work from 2015 to 2025 to assess how GenAI methods align with architectural practice. A total of 1566 records were initially retrieved across databases, of which 42 studies met eligibility criteria after structured screening and selection. Each was evaluated using five indicators with a three-tier rubric: Output Representation Type, Pipeline Integration, Workflow Standardization, Tool Readiness, and Technical Skillset. Results show that most outputs are raster images or non-editable objects, with only a minority producing CAD/BIM-ready geometry. Workflow pipelines are often fragmented with manual hand-offs, and most GenAI methods map only onto the early conceptual design stage. Prototypes frequently require bespoke coding and advanced expertise. These findings indicate a persistent gap between experimentation with ideation-oriented GenAI and the pragmatism of CAD/BIM-centered delivery. By framing the proposed rubric as a workflow maturity model, this review contributes a replicable benchmark for assessing practice readiness and identifying pathways toward mainstream adoption. For GenAI to move from prototypes to mainstream architectural design practice, it is essential to address not only technical barriers, but also cultural issues such as professional skepticism and reliability concerns, as well as ecosystem challenges of data sharing, authorship, and liability.
(This article belongs to the Special Issue Shaping Architecture with Computation)
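
Purely as an illustration of how a five-indicator, three-tier rubric of this kind could be operationalized, the sketch below scores one study and averages its tiers into a single workflow-maturity value. The tier labels and the averaging scheme are assumptions of this sketch, not the review's own method.

```python
RUBRIC = ["Output Representation Type", "Pipeline Integration",
          "Workflow Standardization", "Tool Readiness", "Technical Skillset"]
TIERS = {"low": 0, "medium": 1, "high": 2}  # assumed three-tier scale

def maturity(scores: dict[str, str]) -> float:
    """Aggregate one study's five tier ratings into a [0, 1] maturity score."""
    return sum(TIERS[scores[k]] for k in RUBRIC) / (2 * len(RUBRIC))

# A study rated "medium" on every indicator lands mid-scale.
example = maturity({k: "medium" for k in RUBRIC})  # -> 0.5
```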

19 pages, 29304 KB  
Article
Generating Synthetic Facial Expression Images Using EmoStyle
by Clément Gérard Daniel Darne, Changqin Quan and Zhiwei Luo
Appl. Sci. 2025, 15(19), 10636; https://doi.org/10.3390/app151910636 - 1 Oct 2025
Viewed by 496
Abstract
Synthetic data has emerged as a significant alternative to more costly and time-consuming data collection methods. This assertion is particularly salient in the context of training facial expression recognition (FER) and generation models. The EmoStyle model represents a state-of-the-art method for editing images of facial expressions in the latent space of StyleGAN2, using a continuous valence–arousal (VA) representation of emotions. While the model has demonstrated promising results in terms of high-quality image generation and strong identity preservation, its accuracy in reproducing facial expressions across the VA space remains to be systematically examined. To address this gap, the present study proposes a systematic evaluation of EmoStyle's ability to generate facial expressions across the full VA space, including four levels of emotional intensity. While prior work on expression manipulation has mainly focused its evaluations on perceptual quality, diversity, identity preservation, or classification accuracy, to the best of our knowledge, no study to date has systematically evaluated the accuracy of generated expressions across the VA space. The evaluation reveals a consistent weakness in the VA direction range of 242–329°, where EmoStyle fails to produce distinct expressions. Building on these findings, we outline recommendations for enhancing the generation pipeline and release an open-source EmoStyle-based toolkit that integrates fixes to the original EmoStyle repository, an API wrapper, and our experiment scripts. Collectively, these contributions furnish both novel insights into the model's capacities and practical resources for further research.
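
As a rough illustration of how such an evaluation can sweep the VA space, the sketch below parameterizes valence–arousal targets by direction and intensity on the unit disc. The sampling grid and the `va_target` helper are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def va_target(direction_deg: float, intensity: float) -> tuple[float, float]:
    """Map a direction (degrees) and intensity in [0, 1] to a
    valence-arousal pair on the unit disc."""
    theta = np.deg2rad(direction_deg)
    return intensity * np.cos(theta), intensity * np.sin(theta)

# Sample the full VA space at four intensity levels, as in the evaluation.
directions = np.arange(0, 360, 15)       # 24 directions around the disc
intensities = [0.25, 0.5, 0.75, 1.0]     # four levels of emotional intensity
targets = [va_target(d, r) for d in directions for r in intensities]

# Flag targets falling inside the reported weak range (242-329 degrees).
weak = [(d, r) for d in directions for r in intensities if 242 <= d <= 329]
```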

21 pages, 4397 KB  
Article
Splatting the Cat: Efficient Free-Viewpoint 3D Virtual Try-On via View-Decomposed LoRA and Gaussian Splatting
by Chong-Wei Wang, Hung-Kai Huang, Tzu-Yang Lin, Hsiao-Wei Hu and Chi-Hung Chuang
Electronics 2025, 14(19), 3884; https://doi.org/10.3390/electronics14193884 - 30 Sep 2025
Viewed by 468
Abstract
As Virtual Try-On (VTON) technology matures, 2D VTON methods based on diffusion models can now rapidly generate diverse and high-quality try-on results. However, with rising user demands for realism and immersion, many applications are shifting towards 3D VTON, which offers superior geometric and spatial consistency. Existing 3D VTON approaches commonly face challenges such as barriers to practical deployment, substantial memory requirements, and cross-view inconsistencies. To address these issues, we propose an efficient 3D VTON framework with robust multi-view consistency, whose core design decouples the monolithic 3D editing task into a four-stage cascade as follows: (1) We first reconstruct an initial 3D scene using 3D Gaussian Splatting, integrating the SMPL-X model at this stage as a strong geometric prior. By computing a normal-map loss and a geometric consistency loss, we ensure the structural stability of the initial human model across different views. (2) We employ the lightweight CatVTON to generate 2D try-on images that provide visual guidance for the subsequent personalized fine-tuning tasks. (3) To accurately represent garment details from all angles, we partition the 2D dataset into three subsets (front, side, and back) and train a dedicated LoRA module for each subset on a pre-trained diffusion model. This strategy effectively mitigates the blurred details that can occur when a single model attempts to learn global features. (4) An iterative optimization process then uses the generated 2D VTON images and specialized LoRA modules to edit the 3DGS scene, achieving 360-degree free-viewpoint VTON results. All our experiments were conducted on a single consumer-grade GPU with 24 GB of memory, a significant reduction from the 32 GB or more typically required by previous studies under similar data and parameter settings. Our method balances quality and memory requirements, significantly lowering the adoption barrier for 3D VTON technology.
(This article belongs to the Special Issue 2D/3D Industrial Visual Inspection and Intelligent Image Processing)
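
As a rough sketch of the view-decomposition step, the function below buckets camera azimuths into the three subsets that receive dedicated LoRA modules. The angular thresholds are illustrative assumptions; the paper does not specify its partition boundaries.

```python
def view_bucket(azimuth_deg: float) -> str:
    """Assign a camera azimuth (0 = facing the subject) to one of the
    three view subsets used to train per-view LoRA modules.
    Thresholds are illustrative assumptions, not the paper's values."""
    a = azimuth_deg % 360
    if a <= 60 or a >= 300:
        return "front"
    if 120 <= a <= 240:
        return "back"
    return "side"

# Bucket a ring of 12 cameras spaced 30 degrees apart.
views = {d: view_bucket(d) for d in range(0, 360, 30)}
```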

25 pages, 12087 KB  
Article
MSHEdit: Enhanced Text-Driven Image Editing via Advanced Diffusion Model Architecture
by Mingrui Yang, Jian Yuan, Jiahui Xu and Weishu Yan
Electronics 2025, 14(19), 3758; https://doi.org/10.3390/electronics14193758 - 23 Sep 2025
Viewed by 456
Abstract
To address limitations in structural preservation and detail fidelity in existing text-driven image editing methods, we propose MSHEdit, a novel editing framework built upon a pre-trained diffusion model. MSHEdit is designed to achieve high semantic alignment during image editing without the need for additional training or fine-tuning. The framework integrates two key components: the High-Order Stable Diffusion Sampler (HOS-DEIS) and the Multi-Scale Window Residual Bridge Attention Module (MS-WRBA). HOS-DEIS enhances sampling precision and detail recovery by employing high-order integration and dynamic error compensation, while MS-WRBA improves editing region localization and edge blending through multi-scale window partitioning and dual-path normalization. Extensive experiments on public datasets, including DreamBench-v2 and DreamBench++, demonstrate that, compared to recent mainstream models, MSHEdit reduces structural distance by 2% and background LPIPS by 1.2%. These results confirm its ability to achieve natural transitions between edited regions and backgrounds in complex scenes while effectively mitigating object edge blurring. MSHEdit exhibits excellent structural preservation, semantic consistency, and detail restoration, providing an efficient and generalizable solution for high-quality text-driven image editing.
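
The abstract does not give MS-WRBA's internals, but the general mechanics of multi-scale window partitioning can be sketched as below: a feature map is split into non-overlapping windows at several sizes, and each window would then be processed by attention. The window sizes here are arbitrary choices for illustration.

```python
import numpy as np

def window_partition(feat: np.ndarray, win: int) -> np.ndarray:
    """Split an (H, W, C) feature map into non-overlapping win x win
    windows, returning (num_windows, win, win, C)."""
    H, W, C = feat.shape
    feat = feat[: H - H % win, : W - W % win]   # crop to a multiple of win
    h, w = feat.shape[0] // win, feat.shape[1] // win
    feat = feat.reshape(h, win, w, win, C).swapaxes(1, 2)
    return feat.reshape(h * w, win, win, C)

# A multi-scale pyramid: the same features windowed at three scales.
feat = np.random.rand(64, 64, 8)
pyramid = {w: window_partition(feat, w) for w in (8, 16, 32)}
```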

17 pages, 23379 KB  
Article
FreeMix: Personalized Structure and Appearance Control Without Finetuning
by Mingyu Kang and Yong Suk Choi
Appl. Sci. 2025, 15(18), 9889; https://doi.org/10.3390/app15189889 - 9 Sep 2025
Viewed by 510
Abstract
Personalized image generation has gained significant attention with the advancement of text-to-image diffusion models. However, existing methods face challenges in effectively mixing multiple visual attributes, such as structure and appearance, from separate reference images. Finetuning-based methods are time-consuming and prone to overfitting, while finetuning-free approaches often suffer from feature entanglement, leading to distortions. To address these challenges, we propose FreeMix, a finetuning-free approach for multi-concept mixing in personalized image generation. Given separate references for structure and appearance, FreeMix generates a new image that integrates both. This is achieved through Disentangle-Mixing Self-Attention (DMSA). DMSA first disentangles the two concepts by applying spatial normalization to remove residual appearance from structure features, and then selectively injects appearance details via self-attention, guided by a cross-attention-derived mask to prevent background leakage. This mechanism ensures precise structural preservation and faithful appearance transfer. Extensive qualitative and quantitative experiments demonstrate that our method achieves superior structural consistency and appearance transfer compared to existing approaches. In addition to personalization, FreeMix can be adapted to exemplar-based image editing.
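
A minimal stand-in for DMSA's disentangling step: spatial normalization strips global appearance statistics from structure features, after which appearance is re-injected under a mask. The AdaIN-style statistic transfer below is a deliberate simplification of the paper's attention-based injection, offered only to make the normalize-then-inject idea concrete.

```python
import numpy as np

def spatial_norm(f: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each channel of an (H, W, C) feature map to zero mean and
    unit variance, stripping global appearance statistics."""
    mu = f.mean(axis=(0, 1), keepdims=True)
    sigma = f.std(axis=(0, 1), keepdims=True)
    return (f - mu) / (sigma + eps)

def inject_appearance(structure, appearance, mask):
    """Re-colorize normalized structure features with the appearance
    reference's statistics, only inside the (H, W, 1) mask."""
    s = spatial_norm(structure)
    mu = appearance.mean(axis=(0, 1), keepdims=True)
    sigma = appearance.std(axis=(0, 1), keepdims=True)
    stylized = s * sigma + mu
    return mask * stylized + (1 - mask) * structure

H, W, C = 32, 32, 8
s_feat, a_feat = np.random.rand(H, W, C), np.random.rand(H, W, C)
mask = np.ones((H, W, 1))   # in the paper this mask is cross-attention-derived
out = inject_appearance(s_feat, a_feat, mask)
```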

27 pages, 14347 KB  
Data Descriptor
Chu-Style Lacquerware Dataset: A Dataset for Digital Preservation and Inheritance of Chu-Style Lacquerware
by Haoming Bi, Yelei Chen, Chanjuan Chen and Lei Shu
Sensors 2025, 25(17), 5558; https://doi.org/10.3390/s25175558 - 5 Sep 2025
Viewed by 1328
Abstract
The Chu-style lacquerware (CSL) dataset is a digital resource specifically developed for the digital preservation and inheritance of Chu-style lacquerware, which constitutes an important component of global intangible handicraft heritage. The dataset systematically integrates on-site photographic images from the Hubei Provincial Museum and official digital resources from the same institution, comprising 582 high-resolution images of Chu-style lacquerware, 72 videos of artifacts, and 37 images of traditional Chinese patterns. It comprehensively demonstrates the artistic characteristics of Chu-style lacquerware and provides support for academic research and cultural dissemination. The construction process of the dataset includes data screening, image standardization, Photoshop-based editing and adjustment, image inpainting, and image annotation. Based on this dataset, this study employs the Low-Rank Adaptation (LoRA) technique to train three core models and five style models, and systematically verifies the usability of the CSL dataset from five aspects. Experimental results show that the CSL dataset not only improves the accuracy and detail restoration of Artificial Intelligence (AI)-generated images of Chu-style lacquerware, but also optimizes the generative effect of innovative patterns, thereby validating its application value. This study represents the first dedicated dataset developed for AI generative models of Chu-style lacquerware. It not only provides a new technological pathway for the digital preservation and inheritance of cultural heritage, but also supports interdisciplinary research in archeology, art history, and cultural communication, highlighting the importance of cross-disciplinary collaboration in safeguarding and transmitting Intangible Cultural Heritage (ICH).
(This article belongs to the Section Cross Data)
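
For readers unfamiliar with the LoRA technique used here, a minimal adapter looks like the sketch below: the pretrained weight stays frozen while two small low-rank factors are trained. Rank, scaling, and initialization follow common LoRA practice rather than the paper's training configuration.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA adapter: y = x @ W + (alpha / r) * x @ A @ B, with a
    frozen base weight W (d x k) and small trainable factors A (d x r), B (r x k)."""
    def __init__(self, W: np.ndarray, r: int = 4, alpha: float = 4.0):
        d, k = W.shape
        self.W = W                              # frozen pretrained weight
        self.A = np.random.randn(d, r) * 0.01   # trainable down-projection
        self.B = np.zeros((r, k))               # trainable up-projection, init 0
        self.scale = alpha / r

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return x @ self.W + self.scale * (x @ self.A @ self.B)

W = np.random.randn(768, 768)        # stand-in for a frozen attention weight
layer = LoRALinear(W, r=8)
y = layer(np.random.randn(2, 768))   # only A and B would receive gradients
```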

22 pages, 4355 KB  
Article
Deriving the A/B Cells Policy as a Robust Multi-Object Cell Pipeline for Time-Lapse Microscopy
by Ilya Larin, Egor Panferov, Maria Dodina, Diana Shaykhutdinova, Sofia Larina, Ekaterina Minskaia and Alexander Karabelsky
Int. J. Mol. Sci. 2025, 26(17), 8455; https://doi.org/10.3390/ijms26178455 - 30 Aug 2025
Viewed by 707
Abstract
Time-lapse microscopy of mesenchymal stem cell (MSC) cultures allows for the quantitative observation of their self-renewal, proliferation, and differentiation. However, the rigorous comparison of two conditions, baseline (A) versus perturbation (B) (the addition of molecular factors, environmental shifts, genetic modification, etc.), remains difficult because morphology, division timing, and migratory behavior are highly heterogeneous at the single-cell scale. MSCs can serve as an in vitro model for studying cell morphology and kinetics, for example to assess the effects of gene therapy and prime editing. By combining static, frame-wise morphology with dynamic descriptors, we can obtain weight profiles that highlight which morphological and behavioral dimensions drive divergence. In this study, we present A/B Cells Policy: a modular, open-source Python package implementing a robust cell tracking pipeline. It integrates a YOLO-based architecture into a two-stage assignment framework with fallback and recovery passes, re-identification of lost tracks, and lineage reconstruction. The framework links descriptive statistics to a transferable system by providing an interpretable, measurement-based bridge between in vitro imaging and in silico intervention strategy planning, opening up avenues for regenerative medicine, pharmacology, and early translational pipelines.
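
The two-stage assignment idea can be sketched with a standard IoU-cost Hungarian matcher: a first pass matches confident track-detection pairs, and leftovers would go to a fallback pass. The threshold is an assumption, and the actual pipeline's recovery, re-identification, and lineage steps are omitted here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def assign(tracks, detections, thresh=0.3):
    """First pass of a two-stage matcher: Hungarian assignment on IoU cost;
    detections below `thresh` overlap are left for a fallback/recovery pass."""
    cost = np.array([[1 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matched = [(r, c) for r, c in zip(rows, cols) if 1 - cost[r, c] >= thresh]
    taken = {c for _, c in matched}
    unmatched = [c for c in range(len(detections)) if c not in taken]
    return matched, unmatched
```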

22 pages, 6785 KB  
Article
Spatiality–Frequency Domain Video Forgery Detection System Based on ResNet-LSTM-CBAM and DCT Hybrid Network
by Zihao Liao, Sheng Hong and Yu Chen
Appl. Sci. 2025, 15(16), 9006; https://doi.org/10.3390/app15169006 - 15 Aug 2025
Viewed by 763
Abstract
As information technology advances, digital content has become widely adopted across diverse fields such as news broadcasting, entertainment, commerce, and forensic investigation. However, the availability of sophisticated multimedia editing tools has significantly increased the risk of video and image forgery, raising serious concerns about content authenticity at both societal and individual levels. To address the growing need for robust and accurate detection methods, this study proposes a novel video forgery detection model that integrates both spatial and frequency-domain features. The model is built on a ResNet-LSTM framework enhanced by a Convolutional Block Attention Module (CBAM) for spatial feature extraction, and further incorporates Discrete Cosine Transform (DCT) to capture frequency domain information. Comprehensive experiments were conducted on several mainstream benchmark datasets, encompassing a wide range of forgery scenarios. The results demonstrate that the proposed model achieves superior performance in distinguishing between authentic and manipulated videos. Additional ablation and comparative studies confirm the contribution of each component in the architecture, offering deeper insight into the model's capacity. Overall, the findings support the proposed approach as a promising solution for enhancing the reliability of video authenticity analysis under complex conditions.
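
A frequency-domain branch of the kind described can be approximated by taking a 2D DCT of each frame and keeping the low-frequency coefficients as a compact descriptor sequence for the temporal model. The block size below is an arbitrary illustrative choice, not the paper's configuration.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(frame_gray: np.ndarray, keep: int = 8) -> np.ndarray:
    """2D DCT of a grayscale frame; keep the top-left `keep` x `keep`
    low-frequency block as a compact frequency-domain descriptor."""
    coeffs = dctn(frame_gray.astype(np.float64), norm="ortho")
    return coeffs[:keep, :keep].ravel()

# A clip becomes a (num_frames, keep*keep) sequence, e.g. for an LSTM branch.
clip = np.random.rand(16, 224, 224)
seq = np.stack([dct_features(f) for f in clip])
```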

14 pages, 2032 KB  
Article
Surface Reading Model via Haptic Device: An Application Based on Internet of Things and Cloud Environment
by Andreas P. Plageras, Christos L. Stergiou, Vasileios A. Memos, George Kokkonis, Yutaka Ishibashi and Konstantinos E. Psannis
Electronics 2025, 14(16), 3185; https://doi.org/10.3390/electronics14163185 - 11 Aug 2025
Viewed by 556
Abstract
In this research paper, we implemented a computer program in XML that senses differences in image color depth through haptic/tactile devices. With the use of "Bump Map" and tools such as "Autodesk's 3D Studio Max", "Adobe Photoshop", and "Adobe Illustrator", we were able to obtain the desired results. The haptic devices used for the experiments were the "PHANTOM Touch" and the "PHANTOM Omni R" of "3D Systems". To model the surfaces, run the experiments, and achieve the desired goal, we installed and configured "H3D Api", "Geomagic_OpenHaptics", and "OpenHaptics_Developer_Edition". The purpose of this project was to feel different textures, shapes, and objects in images by using a haptic device. The primary objective was to create a system from the ground up to render visuals on the screen and facilitate interaction with them via the haptic device. The main focus of this work is to propose a novel pattern of images that can be classified as different textures, so that they can be identified by people with reduced vision.
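
As a simple illustration of the bump-map idea, an image's luminance can be read as a height field for the haptic device to trace: darker regions feel lower, lighter regions higher. The mapping below is a generic convention assumed for illustration, not the authors' exact encoding.

```python
import numpy as np
from PIL import Image

def bump_heights(path: str, scale: float = 1.0) -> np.ndarray:
    """Convert an image to a height field for haptic rendering: darker
    pixels map lower, lighter pixels higher (a simple bump-map convention)."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return scale * gray / 255.0   # heights in [0, scale]
```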

20 pages, 21076 KB  
Article
Domain-Aware Reinforcement Learning for Prompt Optimization
by Mengqi Gao, Bowen Sun, Tong Wang, Ziyu Fan, Tongpo Zhang and Zijun Zheng
Mathematics 2025, 13(16), 2552; https://doi.org/10.3390/math13162552 - 9 Aug 2025
Viewed by 1440
Abstract
Prompt engineering provides an efficient way to adapt large language models (LLMs) to downstream tasks without retraining model parameters. However, designing effective prompts can be challenging, especially when model gradients are unavailable and human expertise is required. Existing automated methods based on gradient optimization or heuristic search exhibit inherent limitations under black box or limited-query conditions. We propose Domain-Aware Reinforcement Learning for Prompt Optimization (DA-RLPO), which treats prompt editing as a sequential decision process and leverages structured domain knowledge to constrain candidate edits. Our experimental results show that DA-RLPO achieves higher accuracy than baselines on text classification tasks and maintains robust performance with limited API calls, while also demonstrating effectiveness on text-to-image and reasoning tasks.
(This article belongs to the Special Issue Multi-Criteria Decision Making Under Uncertainty)
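
DA-RLPO's internals are not given in the abstract, but "prompt editing as a sequential decision process over a constrained edit set" can be sketched with an epsilon-greedy loop. The edit operations, value update, and scoring interface below are all assumptions of this sketch, not the paper's algorithm.

```python
import random

# Illustrative domain-constrained edit operations (assumed, not the paper's).
EDITS = {
    "add_role":    lambda p: "You are a domain expert. " + p,
    "add_format":  lambda p: p + " Answer with a single label.",
    "add_example": lambda p: p + " For example: 'great movie' -> positive.",
}

def optimize_prompt(base, score, steps=20, eps=0.2):
    """Epsilon-greedy sketch of prompt editing as a sequential decision
    process; `score` is a black-box evaluator (e.g., dev-set accuracy)."""
    q = {name: 0.0 for name in EDITS}
    prompt, best = base, score(base)
    for _ in range(steps):
        name = (random.choice(list(EDITS)) if random.random() < eps
                else max(q, key=q.get))
        candidate = EDITS[name](prompt)
        reward = score(candidate) - best
        q[name] += 0.5 * (reward - q[name])    # incremental value update
        if reward > 0:                         # keep only improving edits
            prompt, best = candidate, best + reward
    return prompt, best
```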

19 pages, 12069 KB  
Article
Evaluation of StyleGAN-CLIP Models in Text-to-Image Generation of Faces
by Asma Fejjari, Aaron Abela, Marc Tanti and Adrian Muscat
Appl. Sci. 2025, 15(15), 8692; https://doi.org/10.3390/app15158692 - 6 Aug 2025
Viewed by 2220
Abstract
In this paper, we explore the generation of face images conditioned on a textual description, as well as the capabilities of the models in editing a machine-generated image on the basis of additional text prompts. We leverage open source state-of-the-art face image generators, StyleGAN models, and couple these with the open source multimodal embedding space, CLIP, in an optimisation loop using the method in StyleCLIP to set up our experimental system. We make use of automatic metrics and human ratings to evaluate the results and, in addition, obtain insight into how much automatic metrics are correlated with human ratings. We found compelling evidence that both the text-to-image and editing models based on StyleGAN2 stand out as the better options. In addition, the automatic evaluation metrics are only weakly correlated with human ratings.
(This article belongs to the Section Computing and Artificial Intelligence)
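
The StyleCLIP-style optimisation loop the authors build on can be sketched as follows, assuming a hypothetical pretrained generator G that maps a latent to images in [-1, 1]. This is a minimal sketch: the real method also uses identity and latent-distance regularizers, and CLIP's own input normalisation is skipped here for brevity.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP (pip install git+https://github.com/openai/CLIP.git)

def style_clip_edit(G, w_init, prompt, steps=100, lr=0.05, device="cuda"):
    """Sketch of a StyleCLIP-style loop: nudge a StyleGAN latent so the
    rendered image's CLIP embedding moves toward the text prompt."""
    model, _ = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        text = model.encode_text(clip.tokenize([prompt]).to(device))
        text = text / text.norm(dim=-1, keepdim=True)
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img = (G(w) + 1) / 2                        # map to [0, 1]
        img = F.interpolate(img, size=(224, 224), mode="bilinear")
        emb = model.encode_image(img)               # NB: skips CLIP's own
        emb = emb / emb.norm(dim=-1, keepdim=True)  # channel normalisation
        loss = 1 - (emb * text).sum()               # cosine distance
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach()
```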

58 pages, 1238 KB  
Review
The Collapse of Brain Clearance: Glymphatic-Venous Failure, Aquaporin-4 Breakdown, and AI-Empowered Precision Neurotherapeutics in Intracranial Hypertension
by Matei Șerban, Corneliu Toader and Răzvan-Adrian Covache-Busuioc
Int. J. Mol. Sci. 2025, 26(15), 7223; https://doi.org/10.3390/ijms26157223 - 25 Jul 2025
Cited by 1 | Viewed by 2714
Abstract
Although intracranial hypertension (ICH) has traditionally been framed as simply a numerical escalation of intracranial pressure (ICP), and usually dealt with in its clinical form rather than in terms of its complex underlying pathophysiology, an emerging body of evidence indicates that ICH is not simply an elevated-ICP process but a complex process of molecular dysregulation, glymphatic dysfunction, and neurovascular insufficiency. Our aim in this paper is to provide a complete synthesis of the new thinking in this space, primarily on the intersection of glymphatic dysfunction and cerebral venous physiology. The aspiration is to review how glymphatic dysfunction, largely secondary to aquaporin-4 (AQP4) dysfunction, can lead to delayed cerebrospinal fluid (CSF) clearance and thus the accumulation of extravascular fluid, resulting in elevated ICP. A range of other factors, such as oxidative stress, endothelin-1, and neuroinflammation, seem to significantly impair cerebral autoregulation, making ICH challenging to manage. Combining recent studies, we intend to provide a revised conceptualization of ICH that recognizes the nuance and complexity understated by previous models. We also wish to address novel diagnostics aimed at better capturing the dynamic nature of ICH. Recent advances in non-invasive imaging (i.e., 4D flow MRI and dynamic contrast-enhanced MRI; DCE-MRI) allow for better visualization of dynamic changes to the glymphatic and cerebral blood flow (CBF) system. Finally, wearable ICP monitors and AI-assisted diagnostics will create opportunities for continuous, real-time assessments, especially in limited-resource settings. Our goal is to provide examples of opportunities that might augment early recognition and improve personalized care, while acknowledging practical challenges and limitations. We also consider what may be therapeutically possible now and in the future. Therapeutic opportunities discussed include CRISPR-based gene editing aimed at restoring AQP4 function, nano-robotics for drug targeting, and bioelectronic devices for ICP modulation. These proposals are innovative in nature but will require ethically responsible confirmation of long-term safety and availability, particularly in low- and middle-income countries (LMICs), where the burden of secondary ICH remains greatest. Throughout the review, we balance the pursuit of innovative ideas with ethical considerations, with a view to global health equity. It is not our intent to provide unequivocal answers, but instead to encourage informed discussions at the intersections of research, clinical practice, and public health. We hope this review may stimulate further discussion about ICH and highlight opportunities to conduct translational research in modern neuroscience with real, approachable, and patient-centered care.
(This article belongs to the Special Issue Latest Review Papers in Molecular Neurobiology 2025)

32 pages, 16988 KB  
Article
From Photogrammetry to Virtual Reality: A Framework for Assessing Visual Fidelity in Structural Inspections
by Xiangxiong Kong, Terry F. Pettijohn and Hovhannes Torikyan
Sensors 2025, 25(14), 4296; https://doi.org/10.3390/s25144296 - 10 Jul 2025
Viewed by 2179
Abstract
Civil structures carry significant service loads over long periods but are prone to deterioration due to various natural impacts. Traditionally, these structures are inspected in situ by qualified engineers, a method that is high-cost, risky, time-consuming, and prone to error. Recently, researchers have explored innovative practices using virtual reality (VR) technologies as inspection platforms. Despite such efforts, a critical question remains: can VR models accurately reflect real-world structural conditions? This study presents a comprehensive framework for assessing the visual fidelity of VR models for structural inspection. We first introduce a novel workflow that integrates UAV-based photogrammetry, computer graphics, and web-based VR editing to establish interactive VR user interfaces. We then propose a visual fidelity assessment methodology that quantitatively evaluates the accuracy of the VR models through image alignment, histogram matching, and pixel-level deviation mapping between images rendered from the VR models and UAV-captured images under matched viewpoints. The proposed frameworks are validated using two case studies: a historic stone arch bridge and a campus steel building. Overall, this study contributes to the growing body of knowledge on VR-based structural inspections, providing a foundation for further research in this field.
(This article belongs to the Section Sensing and Imaging)
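
The histogram-matching and deviation-mapping steps can be sketched with scikit-image, assuming the render and photo have already been aligned to matched viewpoints, as the framework's earlier alignment stage provides. The aggregation into a single score is an illustrative choice.

```python
import numpy as np
from skimage.exposure import match_histograms

def deviation_map(vr_render: np.ndarray, uav_photo: np.ndarray) -> np.ndarray:
    """Match the render's histogram to the UAV photo (removing global
    exposure/tone differences), then return a per-pixel absolute deviation
    map averaged over color channels. Inputs are aligned (H, W, 3) images."""
    matched = match_histograms(vr_render, uav_photo, channel_axis=-1)
    diff = np.abs(matched.astype(np.float64) - uav_photo.astype(np.float64))
    return diff.mean(axis=-1)

# Mean deviation as one possible fidelity score for a viewpoint pair.
score = lambda render, photo: deviation_map(render, photo).mean()
```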

35 pages, 2865 KB  
Article
eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification
by Michael Barz, Omair Shahzad Bhatti, Hasan Md Tusfiqur Alam, Duy Minh Ho Nguyen, Kristin Altmeyer, Sarah Malone and Daniel Sonntag
J. Eye Mov. Res. 2025, 18(4), 27; https://doi.org/10.3390/jemr18040027 - 7 Jul 2025
Viewed by 960
Abstract
Mobile eye tracking is an important tool in psychology and human-centered interaction design for understanding how people process visual scenes and user interfaces. However, analyzing recordings from head-mounted eye trackers, which typically include an egocentric video of the scene and a gaze signal, is a time-consuming and largely manual process. To address this challenge, we develop eyeNotate, a web-based annotation tool that enables semi-automatic data annotation and learns to improve from corrective user feedback. Users can manually map fixation events to areas of interest (AOIs) in a video-editing-style interface (baseline version). Further, our tool can generate fixation-to-AOI mapping suggestions based on a few-shot image classification model (IML-support version). We conduct an expert study with trained annotators (n = 3) to compare the baseline and IML-support versions. We measure the perceived usability, annotations' validity and reliability, and efficiency during a data annotation task. We asked our participants to re-annotate data from a single individual using an existing dataset (n = 48). Further, we conducted a semi-structured interview to understand how participants used the provided IML features and assessed our design decisions. In a post hoc experiment, we investigate the performance of three image classification models in annotating data of the remaining 47 individuals.
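
The abstract does not specify the few-shot classifier, but a common baseline for fixation-to-AOI suggestions is nearest-centroid matching in an embedding space: a handful of labeled fixation crops per AOI define class centroids, and each new fixation is assigned to the closest one. The embeddings below are random placeholders for illustration.

```python
import numpy as np

def nearest_centroid(support: dict[str, np.ndarray], query: np.ndarray) -> str:
    """Few-shot AOI suggestion via nearest class centroid: `support` maps
    an AOI label to an (n_shots, d) array of example embeddings."""
    def dist(label):
        centroid = support[label].mean(axis=0)
        return np.linalg.norm(query - centroid)
    return min(support, key=dist)

# Example with random 512-d embeddings, two AOIs, three shots each.
rng = np.random.default_rng(0)
support = {"whiteboard": rng.normal(size=(3, 512)),
           "laptop": rng.normal(size=(3, 512))}
print(nearest_centroid(support, rng.normal(size=512)))
```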