Search Results (9,561)

Search Parameters:
Keywords = image modulation

22 pages, 3101 KB  
Article
A Real-Time Pedestrian Situation Detection Method Using CNN and DeepSORT with Rule-Based Analysis for Autonomous Mobility
by Yun Hee Lee and Manbok Park
Electronics 2026, 15(3), 532; https://doi.org/10.3390/electronics15030532 - 26 Jan 2026
Abstract
This paper presents a real-time pedestrian situation detection framework for autonomous mobility platforms. The proposed approach extracts pedestrians from images acquired by a camera mounted on an autonomous mobility system, classifies their postures, tracks their trajectories, and subsequently detects pedestrian situations. A convolutional neural network (CNN) is employed for pedestrian detection and posture classification, where the YOLOv12 model is fine-tuned via transfer learning for this purpose. To improve detection and classification performance, a region of interest (ROI) is defined using camera calibration data, enabling robust detection of small-scale pedestrians over long distances. Using a custom-labeled dataset, the proposed method achieves a precision of 96.6% and a recall of 97.0% for pedestrian detection and posture classification. The detected pedestrians are tracked using the DeepSORT algorithm, and their situations are inferred through a rule-based analysis module. Experimental results demonstrate that the proposed system operates at an execution speed of 58.11 ms per frame, corresponding to 17.2 fps, thereby satisfying the real-time requirements for autonomous mobility applications. These results confirm that the proposed framework enables reliable real-time pedestrian extraction and situation awareness in real-world autonomous mobility environments. Full article
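As a rough illustration of the detect-track-then-analyze stage this abstract describes, the sketch below applies rule-based logic to a short per-pedestrian track history and converts the reported per-frame latency into frames per second. The posture labels, thresholds, and function names are hypothetical assumptions for illustration, not the authors' code.

```python
# Illustrative sketch of the rule-based situation stage applied after
# detection (CNN) and tracking (DeepSORT). Labels and thresholds are
# hypothetical examples only.

def situation_from_track(postures, dx_per_frame, dist_to_road_m):
    """Infer a pedestrian situation from a short posture/position history."""
    if postures[-1] == "lying":
        return "fallen"  # immediate hazard regardless of motion
    moving_toward_road = dx_per_frame > 0 and postures[-1] == "walking"
    if moving_toward_road and dist_to_road_m < 2.0:  # 2 m threshold (assumed)
        return "crossing_intent"
    return "normal"

def fps_from_latency(ms_per_frame):
    """Convert per-frame latency in milliseconds to frames per second."""
    return 1000.0 / ms_per_frame

print(situation_from_track(["walking", "walking", "lying"], 0.0, 5.0))
print(round(fps_from_latency(58.11), 1))  # paper reports 17.2 fps
```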
28 pages, 32574 KB  
Article
CauseHSI: Counterfactual-Augmented Domain Generalization for Hyperspectral Image Classification via Causal Disentanglement
by Xin Li, Zongchi Yang and Wenlong Li
J. Imaging 2026, 12(2), 57; https://doi.org/10.3390/jimaging12020057 - 26 Jan 2026
Abstract
Cross-scene hyperspectral image (HSI) classification under single-source domain generalization (DG) is a crucial yet challenging task in remote sensing. The core difficulty lies in generalizing from a limited source domain to unseen target scenes. We formalize this through the causal theory, where different sensing scenes are viewed as distinct interventions on a shared physical system. This perspective reveals two fundamental obstacles: interventional distribution shifts arising from varying acquisition conditions, and confounding biases induced by spurious correlations driven by domain-specific factors. Taking the above considerations into account, we propose CauseHSI, a causality-inspired framework that offers new insights into cross-scene HSI classification. CauseHSI consists of two key components: a Counterfactual Generation Module (CGM) that perturbs domain-specific factors to generate diverse counterfactual variants, simulating cross-domain interventions while preserving semantic consistency, and a Causal Disentanglement Module (CDM) that separates invariant causal semantics from spurious correlations through structured constraints under a structural causal model, ultimately guiding the model to focus on domain-invariant and generalizable representations. By aligning model learning with causal principles, CauseHSI enhances robustness against domain shifts. Extensive experiments on the Pavia, Houston, and HyRANK datasets demonstrate that CauseHSI outperforms existing DG methods. Full article
(This article belongs to the Special Issue Multispectral and Hyperspectral Imaging: Progress and Challenges)

25 pages, 4900 KB  
Article
Multimodal Feature Fusion and Enhancement for Function Graph Data
by Yibo Ming, Lixin Bai, Jialu Zhao and Yanmin Chen
Appl. Sci. 2026, 16(3), 1246; https://doi.org/10.3390/app16031246 - 26 Jan 2026
Abstract
Recent years have witnessed performance improvements in Multimodal Large Language Models (MLLMs) on downstream natural image understanding tasks. However, when applied to the function graph reasoning task, which is highly information-dense and abundant in fine-grained structural details, these models face pronounced performance degradation. The challenges are primarily characterized by several core issues: the static projection bottleneck, inadequate cross-modal interaction, and insufficient visual context in text embeddings. To address these problems, this study proposes a multimodal feature fusion enhancement method for function graph reasoning and constructs the FuncFusion-Math model. The core innovation of this model resides in its design of a dual-path feature fusion mechanism for both image and text. Specifically, the image fusion module adopts cross-attention and self-attention mechanisms to optimize visual feature representations under the guidance of textual semantics, effectively mitigating fine-grained information loss. The text fusion module, through feature concatenation and Transformer encoding layers, deeply integrates structured mathematical information from the image into the textual embedding space, significantly reducing semantic deviation. Furthermore, this study utilizes a four-stage progressive training strategy and incorporates the LoRA technique for parameter-efficient optimization. Experimental results demonstrate that the FuncFusion-Math model, with 3B parameters, achieves an accuracy of 43.58% on the FunctionQA subset of the MathVista test set, outperforming a 7B-scale baseline model by 13.15%, which validates the feasibility and effectiveness of the proposed method. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

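The text-guided cross-attention step this abstract describes can be sketched in minimal form: text tokens act as queries over image-patch features, so textual semantics re-weight the visual representation. Shapes, dimensions, and names below are illustrative assumptions, not the FuncFusion-Math implementation.

```python
# Minimal NumPy sketch of a cross-attention fusion step: text queries
# attend over image-patch keys/values. Dimensions are illustrative only.
import numpy as np

def cross_attention(text_q, img_kv, d):
    """text_q: (Lt, d) query tokens; img_kv: (Li, d) keys/values."""
    scores = text_q @ img_kv.T / np.sqrt(d)            # (Lt, Li) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ img_kv                            # (Lt, d) fused features

rng = np.random.default_rng(0)
text = rng.normal(size=(4, 8))      # 4 text tokens, width 8
patches = rng.normal(size=(16, 8))  # 16 image-patch features
fused = cross_attention(text, patches, d=8)
print(fused.shape)  # (4, 8)
```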
26 pages, 1806 KB  
Review
CXCR4: A Promising Novel Strategy for Lung Cancer Treatment
by Mengting Liao, Jianmin Wu, Tengkun Dai, Guiyan Liu, Jiayi Zhang, Yiling Zhu, Lin Xu and Juanjuan Zhao
Biomolecules 2026, 16(2), 188; https://doi.org/10.3390/biom16020188 - 26 Jan 2026
Abstract
Lung cancer remains a major public health challenge due to high incidence and mortality. The chemokine receptor CXCR4 and its ligand CXCL12 (SDF-1) constitute a critical axis in tumor biology, influencing tumor cell proliferation, invasion, angiogenesis, and immune evasion. Aberrant CXCR4 expression is frequently observed in lung cancer and is closely associated with adverse prognosis, enhanced metastatic potential, and therapeutic resistance. Mechanistically, CXCR4 activates signaling pathways including PI3K/AKT, MAPK/ERK, JAK/STAT, and FAK/Src, promoting epithelial–mesenchymal transition, stemness, and survival. The CXCL12/CXCR4 axis also orchestrates interactions with the tumor microenvironment, facilitating chemotaxis toward CXCL12-rich niches (e.g., bone marrow and brain) and modulating anti-tumor immunity via regulatory cells. Regulation of CXCR4 occurs at transcriptional, epigenetic, and post-transcriptional levels, with modulation by hypoxia, inflammatory signals, microRNAs, and post-translational modifications. Clinically, high CXCR4 expression correlates with metastasis, poor prognosis, and reduced response to certain therapies, underscoring its potential as a prognostic biomarker and therapeutic target. Therapeutic strategies targeting CXCR4 include small-molecule antagonists (e.g., AMD3100/plerixafor; balixafortide), anti-CXCR4 antibodies, and CXCL12 decoys, as well as imaging probes for patient selection and response monitoring (e.g., 68Ga-pentixafor PET). Preclinical and early clinical studies suggest that CXCR4 blockade can impair tumor growth, limit metastatic spread, and enhance chemotherapy and immunotherapy efficacy, although hematopoietic side effects and infection risk necessitate careful therapeutic design. This review synthesizes the molecular features, regulatory networks, and translational potential of CXCR4 in lung cancer and discusses future directions for precision therapy and biomarker-guided intervention. Full article
(This article belongs to the Section Biomacromolecules: Proteins, Nucleic Acids and Carbohydrates)

13 pages, 2027 KB  
Article
An Improved Diffusion Model for Generating Images of a Single Category of Food on a Small Dataset
by Zitian Chen, Zhiyong Xiao, Dinghui Wu and Qingbing Sang
Foods 2026, 15(3), 443; https://doi.org/10.3390/foods15030443 - 26 Jan 2026
Abstract
In the era of the digital food economy, high-fidelity food images are critical for applications ranging from visual e-commerce presentation to automated dietary assessment. However, developing robust computer vision systems for food analysis is often hindered by data scarcity for long-tail or regional dishes. To address this challenge, we propose a novel high-fidelity food image synthesis framework as an effective data augmentation tool. Unlike generic generative models, our method introduces an Ingredient-Aware Diffusion Model based on the Masked Diffusion Transformer (MaskDiT) architecture. Specifically, we design a Label and Ingredients Encoding (LIE) module and a Cross-Attention (CA) mechanism to explicitly model the relationship between food composition and visual appearance, simulating the “cooking” process digitally. Furthermore, to stabilize training on limited data samples, we incorporate a linear interpolation strategy into the diffusion process. Extensive experiments on the Food-101 and VireoFood-172 datasets demonstrate that our method achieves state-of-the-art generation quality even in data-scarce scenarios. Crucially, we validate the practical utility of our synthetic images: utilizing them for data augmentation improved the accuracy of downstream food classification tasks from 95.65% to 96.20%. This study provides a cost-effective solution for generating diverse, controllable, and realistic food data to advance smart food systems. Full article

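One common form of the linear interpolation strategy this abstract mentions blends the clean sample with noise along a straight path in the diffusion process; whether this matches the paper's exact scheme is an assumption of the sketch below.

```python
# Sketch of a linear (straight-path) interpolation between a clean sample
# x0 and noise at time t in [0, 1]. Whether this is the paper's exact
# formulation is an assumption; it illustrates the general idea only.
import numpy as np

def interpolate(x0, noise, t):
    """Return the linearly interpolated state at time t."""
    return (1.0 - t) * x0 + t * noise

x0 = np.ones((2, 2))    # toy "clean image"
eps = np.zeros((2, 2))  # toy noise stand-in
print(interpolate(x0, eps, 0.25))  # 75% data, 25% noise
```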
16 pages, 3393 KB  
Article
Far-Field Super-Resolution via Longitudinal Nano-Optical Field: A Combined Theoretical and Numerical Investigation
by Aiqin Zhang, Kunyang Li and Jianying Zhou
Photonics 2026, 13(2), 114; https://doi.org/10.3390/photonics13020114 - 26 Jan 2026
Abstract
We present a theoretical and numerical investigation of a far-field super-resolution dark-field microscopy technique based on longitudinal nano-optical field excitation and detection. This method is implemented by integrating vector optical field modulation into a back-scattering confocal laser scanning microscope. A complete forward theoretical imaging framework that rigorously accounts for light–matter interactions is adopted and validated. The weak interaction model and general model are both considered. For the weak interaction model, e.g., multiple discrete dipole sources with a uniform or modulated responding intensity are utilized to fundamentally demonstrate the relationship between the sample and the imaging information. For continuous nanostructures, the finite-difference time-domain simulation results of the interaction-induced optical fields in the imaging model show that the captured image information is not determined solely by system resolution and sample geometry, but also arises from a combination of sample-dependent factors, including material composition, the local density of optical states, and intrinsic physical properties such as the complex refractive index. Unlike existing studies, which predominantly focus on system design or rely on simplified assumptions of weak interactions, this paper achieves quantitative characterization and precise regulation of nanoscale vector optical fields and samples under strong interactions through a comprehensive analytical–numerical imaging model based on rigorous vector diffraction theory and strong near-field coupling interactions, thereby overcoming the limitations of traditional methods. Full article
(This article belongs to the Special Issue Optical Imaging Innovations and Applications)

30 pages, 8651 KB  
Article
Disease-Seg: A Lightweight and Real-Time Segmentation Framework for Fruit Leaf Diseases
by Liying Cao, Donghui Jiang, Yunxi Wang, Jiankun Cao, Zhihan Liu, Jiaru Li, Xiuli Si and Wen Du
Agronomy 2026, 16(3), 311; https://doi.org/10.3390/agronomy16030311 - 26 Jan 2026
Abstract
Accurate segmentation of fruit tree leaf diseases is critical for yield protection and precision crop management, yet it is challenging due to complex field conditions, irregular leaf morphology, and diverse lesion patterns. To address these issues, Disease-Seg, a lightweight real-time segmentation framework, is proposed. It integrates CNN and Transformer with a parallel fusion architecture to capture local texture and global semantic context. The Extended Feature Module (EFM) enlarges the receptive field while retaining fine details. A Deep Multi-scale Attention mechanism (DM-Attention) allocates channel weights across scales to reduce redundancy, and a Feature-weighted Fusion Module (FWFM) optimizes integration of heterogeneous feature maps, enhancing multi-scale representation. Experiments show that Disease-Seg achieves 90.32% mIoU and 99.52% accuracy, outperforming representative CNN, Transformer, and hybrid-based methods. Compared with HRNetV2, it improves mIoU by 6.87% and FPS by 31, while using only 4.78 M parameters. It maintains 69 FPS on 512 × 512 crops and requires approximately 49 ms per image on edge devices, demonstrating strong deployment feasibility. On two grape leaf diseases from the PlantVillage dataset, it achieves 91.19% mIoU, confirming robust generalization. These results indicate that Disease-Seg provides an accurate, efficient, and practical solution for fruit leaf disease segmentation, enabling real-time monitoring and smart agriculture applications. Full article

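The mIoU figures reported in this abstract are the standard mean intersection-over-union for segmentation; a minimal sketch of how the metric is computed on toy 2×2 masks (not the paper's data):

```python
# Standard mean IoU: per-class intersection/union, averaged over the
# classes present. Toy masks below are illustrative, not from the paper.
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union, skipping classes absent from both masks."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 0], [1, 0]])    # one pixel of class 1 mislabeled
print(round(mean_iou(pred, gt, 2), 3))  # 0.583
```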
25 pages, 7286 KB  
Article
High-Altitude UAV-Based Detection of Rice Seedlings in Large-Area Paddy Fields
by Zhenhua Li, Xinfeng Yao, Songtao Ban, Dong Hu, Minglu Tian, Tao Yuan and Linyi Li
Agriculture 2026, 16(3), 307; https://doi.org/10.3390/agriculture16030307 - 26 Jan 2026
Abstract
Accurate quantification of field-grown rice seedlings is essential for evaluating yield potential and guiding precision field management. Unmanned aerial vehicle (UAV)-based remote sensing, with its high spatial resolution and broad coverage, provides a robust basis for accurate seedling detection and population density estimation. However, in previous studies, UAVs were typically employed at relatively low altitudes, which provided high-resolution imagery and facilitated seedling recognition but limited efficiency. To enable large-area monitoring, higher flight altitudes are required, which reduces image resolution and adversely affects rice seedling recognition accuracy. In this study, UAVs were flown at a height of 30 m, and the resulting lower-resolution imagery, combined with the small size of seedlings, their dense spatial distribution, and the complex field background, necessitated algorithmic improvements for accurate detection. To address these challenges, we propose an enhanced You Only Look Once version 8 nano (YOLOv8n)-based detection model specifically designed to improve seedling recognition under high-altitude UAV imagery. The model incorporates an improved Bidirectional Feature Pyramid Network (BiFPN) for multi-scale feature fusion and small-object detection, a Global-to-Local Spatial Aggregation (GLSA) module for enriched spatial context modeling, and a Content-Guided Attention Fusion (CGAFusion) module to enhance discriminative feature learning. Experiments on high-altitude UAV imagery demonstrate that the proposed model achieves an mAP@0.5 of 94.7%, a precision of 91.0%, and a recall of 91.2%, representing a 2.3% improvement over the original YOLOv8n. These results highlight the model’s innovation in handling high-altitude UAV imagery for large-area rice seedling detection, demonstrating its effectiveness and practical potential under complex field conditions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

22 pages, 4317 KB  
Article
Non-Contact Temperature Monitoring in Dairy Cattle via Thermal Infrared Imaging and Environmental Parameters
by Kaixuan Zhao, Shaojuan Ge, Yinan Chen, Qianwen Li, Mengyun Guo, Yue Nian and Wenkai Ren
Agriculture 2026, 16(3), 306; https://doi.org/10.3390/agriculture16030306 - 26 Jan 2026
Abstract
Core body temperature is a critical physiological indicator for assessing and diagnosing animal health status. In bovines, continuously monitoring this metric enables accurate evaluation of their physiological condition; however, traditional rectal measurements are labor-intensive and cause stress in animals. To achieve intelligent, contactless temperature monitoring in cattle, we proposed a non-invasive method based on thermal imaging combined with environmental data fusion. First, thermal infrared images of the cows’ faces were collected, and the You Only Look Once (YOLO) object detection model was used to locate the head region. Then, the YOLO segmentation network was enhanced with the Online Convolutional Re-parameterization (OREPA) and High-level Screening-feature Fusion Pyramid Network (HS-FPN) modules to perform instance segmentation of the eye socket area. Finally, environmental variables—ambient temperature, humidity, wind speed, and light intensity—were integrated to compensate for eye socket temperature, and a random forest algorithm was used to construct a predictive model of rectal temperature. The experiments were conducted using a thermal infrared image dataset comprising 33,450 frontal-view images of dairy cows with a resolution of 384 × 288 pixels, along with 1471 paired samples combining thermal and environmental data for model development. The proposed method achieved a segmentation accuracy (mean average precision, mAP50–95) of 86.59% for the eye socket region, ensuring reliable temperature extraction. The rectal temperature prediction model demonstrated a strong correlation with the reference rectal temperature (R2 = 0.852), confirming its robustness and predictive reliability for practical applications. These results demonstrate that the proposed method is practical for non-contact temperature monitoring of cattle in large-scale farms, particularly those operating under confined or semi-confined housing conditions. Full article
(This article belongs to the Section Farm Animal Production)

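The compensation idea this abstract describes (predicting rectal temperature from eye-socket temperature plus environmental variables) can be sketched as a regression on a stacked feature vector. The paper fits a random forest; for a dependency-free sketch, ordinary least squares stands in here, and all numbers are synthetic.

```python
# Dependency-free sketch of temperature compensation: predict rectal
# temperature from eye-socket temperature + environment. The paper uses a
# random forest; least squares stands in so this runs with NumPy alone.
# All data are synthetic, not the paper's measurements.
import numpy as np

rng = np.random.default_rng(1)
n = 200
eye_t = rng.normal(36.0, 0.8, n)    # eye-socket temperature (deg C)
amb_t = rng.normal(15.0, 5.0, n)    # ambient temperature (deg C)
humid = rng.uniform(30, 90, n)      # relative humidity (%)
wind = rng.uniform(0, 4, n)         # wind speed (m/s)
# Synthetic ground truth: eye temp dominates, wind cools the reading.
rectal = 2.0 + eye_t + 0.02 * (amb_t - 15) - 0.05 * wind + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), eye_t, amb_t, humid, wind])
coef, *_ = np.linalg.lstsq(X, rectal, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((rectal - pred) ** 2) / np.sum((rectal - rectal.mean()) ** 2)
print(round(r2, 2))
```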
17 pages, 2939 KB  
Article
Industrial-Grade Differential Interference Contrast Inspection System for Unpatterned Wafers
by Youwei Huang, Kangjun Zhao, Lu Chen, Long Zhang, Yingjian Liu, Yanming Zhu, Jianlong Wang, Ji Zhang, Xiaojun Tian, Guangrui Wen and Zihao Lei
Electronics 2026, 15(3), 518; https://doi.org/10.3390/electronics15030518 - 26 Jan 2026
Abstract
In the field of optical inspection for unpatterned wafer surfaces, this paper presents a novel inspection system designed to meet the semiconductor industry’s growing demand for high efficiency and cost-effectiveness. The system is built around the principles of simplicity, stability, speed, and low cost. Its core is a low-speed stepping rotary line-scan architecture. This architecture is integrated with a two-step phase-shifting algorithm. The combination leverages line-scan differential interference contrast (DIC) technology. This aims to transform DIC technology—traditionally used for detailed observation—into an industrialized solution capable of rapid, accurate quantitative measurement. Experimental validation on an equivalent platform confirms strong performance. The system achieves an imaging uniformity exceeding 85% across dual channels. Its Modulation Transfer Function (MTF) value is greater than 0.55 at 71.8 lp/mm. The vertical detection clearly resolves 3 nm standard height steps. Additionally, the throughput exceeds 80 wafers per hour. The proposed line-scan DIC system achieves both high inspection accuracy and industrial-grade scanning speed, delivering robust performance and reliable operation. Full article

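A minimal illustration of two-step phase-shifting demodulation, related to the algorithm this abstract names: with a π/2 shift and a calibrated background level A (both assumptions of this sketch, since the paper's exact formulation is not given here), the two frames are I1 = A + B·cos(φ) and I2 = A − B·sin(φ), so φ = atan2(A − I2, I1 − A).

```python
# Two-step (pi/2) phase-shift demodulation sketch. Assumes the background
# level A is known/calibrated; the paper's exact algorithm may differ.
import numpy as np

def two_step_phase(i1, i2, background):
    """Recover phase from two pi/2-shifted intensity frames."""
    # i1 - A = B*cos(phi), A - i2 = B*sin(phi)  =>  atan2 recovers phi
    return np.arctan2(background - i2, i1 - background)

phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 7)
A, B = 1.0, 0.4
i1 = A + B * np.cos(phi_true)
i2 = A - B * np.sin(phi_true)
print(np.allclose(two_step_phase(i1, i2, A), phi_true))  # True
```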
19 pages, 778 KB  
Review
Hepatic Sinusoidal Obstruction Syndrome Induced by Pyrrolizidine Alkaloids from Gynura segetum: Mechanisms and Therapeutic Advances
by Zheng Zhou, Dongfan Yang, Tong Chu, Dayuan Zheng, Kuanyun Zhang, Shaokui Liang, Lu Yang, Yanchao Yang and Wenzhe Ma
Molecules 2026, 31(3), 410; https://doi.org/10.3390/molecules31030410 - 25 Jan 2026
Abstract
The traditional Chinese medicinal herb Gynura segetum is increasingly recognized for its hepatotoxic potential, primarily attributed to its pyrrolizidine alkaloid (PA) content. PAs are a leading cause of herb-induced liver injury (HILI) in China and are strongly linked to hepatic sinusoidal obstruction syndrome (HSOS). This review systematically summarizes the pathogenesis, diagnostic advancements, and therapeutic strategies for PA-induced HSOS. Molecular mechanisms of PA metabolism are detailed, encompassing cytochrome P450-mediated bioactivation and the subsequent formation of pyrrole–protein adducts, which trigger sinusoidal endothelial cell injury and hepatocyte apoptosis. Advances in diagnostic criteria, including the Nanjing Criteria and the Roussel Uclaf Causality Assessment Method (RUCAM)-integrated Drum Tower Severity Scoring System, are discussed. Furthermore, emerging biomarkers, such as circulating microRNAs and pyrrole–protein adducts, are examined. Imaging modalities, such as contrast-enhanced computed tomography (CT) and gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA) magnetic resonance imaging (MRI), have evolved from descriptive tools into quantitative and prognostic instruments. Therapeutic approaches have evolved from supportive care to precision interventions, including anticoagulation, transjugular intrahepatic portosystemic shunt (TIPS), and autophagy-modulating agents. A comprehensive literature review, utilizing databases such as PubMed and Web of Science, was conducted to summarize progress since the introduction of the “Nanjing Guidelines”. Ultimately, this review underscores the critical need for integrated diagnostic and therapeutic frameworks, alongside enhanced public awareness and regulatory oversight, to effectively mitigate PA-related liver injury. Full article
14 pages, 2030 KB  
Article
A Modular AI Workflow for Architectural Facade Style Transfer: A Deep-Style Synergy Approach Based on ComfyUI and Flux Models
by Chong Xu and Chongbao Qu
Buildings 2026, 16(3), 494; https://doi.org/10.3390/buildings16030494 - 25 Jan 2026
Abstract
This study focuses on the transfer of architectural facade styles. Using the node-based visual deep learning platform ComfyUI, the system integrates the Flux Redux and Flux Depth models to establish a modular workflow. This workflow achieved style transfer of building facades guided by deep perception, encompassing key stages such as style feature extraction, depth information extraction, positive prompt input, and style image generation. The core innovation of this study lies in two aspects: Methodologically, a modular low-code visual workflow has been established. Through the coordinated operation of different modules, it ensures the visual stability of architectural forms during style conversion. In response to the novel challenges posed by generative AI in altering architectural forms, the evaluation framework innovatively introduces a “semantic inheritance degree” assessment system. This elevates the evaluation perspective beyond traditional “geometric similarity” to a new level of “semantic and imagery inheritance.” It should be clarified that the framework proposed by this research primarily provides innovative tools for architectural education, early design exploration, and visualization analysis. This workflow introduces an efficient “style-space” cognitive and generative tool for teaching architectural design. Students can use this tool to rapidly conduct comparative experiments to generate multiple stylistic facades, intuitively grasping the intrinsic relationships among different styles and architectural volumes/spatial structures. This approach encourages bold formal exploration and deepens understanding of architectural formal language. Full article

30 pages, 12207 KB  
Article
Automatic Identification and Segmentation of Diffuse Aurora from Untrimmed All-Sky Auroral Videos
by Qian Wang, Peiqi Hao and Han Pan
Remote Sens. 2026, 18(3), 402; https://doi.org/10.3390/rs18030402 - 25 Jan 2026
Abstract
Diffuse aurora is a widespread and long-lasting auroral emission that plays an important role in diagnosing magnetosphere-ionosphere coupling and magnetospheric plasma transport. Despite its scientific significance, diffuse aurora remains challenging to identify automatically in all-sky imager (ASI) observations due to its weak optical intensity, indistinct boundaries, and gradual temporal evolution. These characteristics, together with frequent cloud contamination, limit the effectiveness of conventional keogram-based or morphology-driven detection approaches and hinder large-scale statistical analyses based on long-term optical datasets. In this study, we propose an automated framework for the identification and temporal segmentation of diffuse aurora from untrimmed all-sky auroral videos. The framework consists of a frame-level coarse identification module that combines weak morphological information with inter-frame temporal dynamics to detect candidate diffuse-auroral intervals, and a snippet-level segmentation module that dynamically aggregates temporal information to capture the characteristic gradual onset-plateau-decay evolution of diffuse aurora. Bidirectional temporal modeling is employed to improve boundary localization, while an adaptive mixture-of-experts mechanism reduces redundant temporal variations and enhances discriminative features relevant to diffuse emission. The proposed method is evaluated using multi-year 557.7 nm ASI observations acquired at the Arctic Yellow River Station. Quantitative experiments demonstrate state-of-the-art performance, achieving 96.3% frame-wise accuracy and an Edit score of 87.7%. Case studies show that the method effectively distinguishes diffuse aurora from cloud-induced pseudo-diffuse structures and accurately resolves gradual transition boundaries that are ambiguous in keograms. 
Based on the automated identification results, statistical distributions of diffuse aurora occurrence, duration, and diurnal variation are derived from continuous observations spanning 2003–2009. The proposed framework enables robust and fully automated processing of large-scale all-sky auroral images, providing a practical tool for remote sensing-based auroral monitoring and supporting objective statistical studies of diffuse aurora and related magnetospheric processes. Full article
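The coarse-to-fine pipeline the abstract describes (frame-level scoring of candidate diffuse-auroral intervals, followed by snippet-level temporal segmentation with bidirectional smoothing) can be caricatured with a toy sketch. The function below is a hypothetical simplification for illustration only: a centered moving average stands in for bidirectional temporal modeling, and simple gap bridging plus minimum-duration filtering stand in for the snippet-level segmentation module. It is not the paper's actual model, and the function name and parameters are invented.

```python
def segment_intervals(frame_scores, threshold=0.5, min_len=3, max_gap=2):
    """Toy coarse-to-fine temporal segmentation: smooth frame-level
    scores with a centered (bidirectional) moving average, threshold
    them, bridge short inactive gaps, and keep only intervals of at
    least min_len frames. A hypothetical simplification, not the
    paper's actual module."""
    n = len(frame_scores)
    # Centered 3-frame average; edges are implicitly zero-padded
    smoothed = [sum(frame_scores[max(0, i - 1):i + 2]) / 3.0 for i in range(n)]
    active = [s >= threshold for s in smoothed]
    intervals, start, gap = [], None, 0
    for i, a in enumerate(active):
        if a:
            if start is None:
                start = i          # open a new candidate interval
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:      # gap too long: close the interval
                end = i - gap
                if end - start + 1 >= min_len:
                    intervals.append((start, end))
                start, gap = None, 0
    if start is not None:          # close a trailing interval
        end = n - 1
        while end >= start and not active[end]:
            end -= 1
        if end - start + 1 >= min_len:
            intervals.append((start, end))
    return intervals
```

Smoothing also bridges single-frame dropouts before thresholding, which loosely mirrors how temporal aggregation suppresses transient score fluctuations around the gradual onset and decay boundaries.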
12 pages, 434 KB  
Article
Beyond Improvement of Motor Symptoms: Central Effects of Botulinum Toxin on Anxiety and Depression in Focal Dystonia, Hemifacial Spasm, and Blepharospasm
by Tihana Gilman Kuric, Zvonimir Popovic, Sara Matosa, Eleonora Strujic, Ivana Gacic, Tea Mirosevic Zubonja, Stjepan Juric, Melita Pecek Prpic, Vera Jelusic, Dubravka Biuk and Svetlana Tomic
Toxins 2026, 18(2), 62; https://doi.org/10.3390/toxins18020062 - 25 Jan 2026
Abstract
Cervical dystonia (CD), blepharospasm (BSP), and idiopathic hemifacial spasm (HFS) are focal hyperkinetic movement disorders with distinct underlying mechanisms. While CD and BSP involve central network dysfunctions within the basal ganglia-thalamo-cortical and cerebellar circuits, HFS primarily results from peripheral facial nerve hyperexcitability. Nevertheless, patients with all three conditions frequently experience depression and anxiety, which may arise both from the burden of illness and from underlying neurobiological changes. We studied 61 patients (CD, n = 30; BSP, n = 9; HFS, n = 22) and assessed depression and anxiety before and three weeks after botulinum neurotoxin type A (BoNT-A) therapy, considering injection site and dose. BoNT-A significantly reduced depressive and anxiety symptoms across all groups, regardless of disease type, dose, or glabellar injection. These psychiatric improvements were not associated with the degree of motor symptom reduction, suggesting a partially independent mechanism of mood modulation. Our findings indicate that BoNT-A’s mood benefits may extend beyond local motor effects, possibly involving broader sensorimotor-limbic interactions. These results highlight the therapeutic potential of BoNT-A for addressing non-motor symptoms in both dystonic and non-dystonic hyperkinetic disorders. Future studies employing imaging and neurophysiological methods are needed to elucidate the neural pathways underlying these effects. Full article
(This article belongs to the Section Bacterial Toxins)

28 pages, 5166 KB  
Article
Hyperspectral Image Classification Using SIFANet: A Dual-Branch Structure Combining CNN and Transformer
by Yuannan Gui, Lu Xu, Dongping Ming, Yanfei Wei and Ming Huang
Remote Sens. 2026, 18(3), 398; https://doi.org/10.3390/rs18030398 - 24 Jan 2026
Abstract
The hyperspectral image (HSI) is rich in spectral information and has important applications in the field of ground object classification. However, HSI data have high dimensionality and variable spatial–spectral features, which make it difficult for some models to adequately extract the effective features. Recent studies have shown that fusing spatial and spectral features can significantly improve accuracy by exploiting multi-dimensional correlations. Based on this, this article proposes a spectral integration and focused attention network (SIFANet) with a two-branch structure. SIFANet captures local spatial features and global spectral dependencies through the parallel-designed spatial feature extractor (SFE) and spectral sequence Transformer (SST), respectively. A cross-module attention fusion (CMAF) mechanism dynamically integrates features from both branches before final classification. Experiments on the Salinas dataset and the Xiong’an hyperspectral dataset show overall accuracies of 99.89% and 99.79%, respectively, higher than those of all compared models. The proposed method also achieved the lowest standard deviation of per-class accuracy and the best computational efficiency metrics, demonstrating robust spatial–spectral feature integration for improved classification. Full article
