Search Results (16,602)

Search Parameters:
Keywords = image maps

24 pages, 3732 KB  
Review
The Elias University Hospital Approach: A Visual Guide to Ultrasound-Guided Botulinum Toxin Injection in Spasticity, Part IV—Distal Lower Limb Muscles
by Marius Nicolae Popescu, Claudiu Căpeț, Cristina Popescu and Mihai Berteanu
Toxins 2025, 17(10), 508; https://doi.org/10.3390/toxins17100508 (registering DOI) - 16 Oct 2025
Abstract
Spasticity of the distal lower limb substantially impairs stance, gait, and quality of life in patients with upper motor neuron lesions. Although ultrasound-guided botulinum toxin A (BoNT-A) injections are increasingly employed, structured, muscle-specific visual guidance for the distal lower limb remains limited. This study provides a comprehensive guide for ultrasound-guided BoNT-A injections across ten key distal lower limb muscles: gastrocnemius, soleus, tibialis posterior, flexor hallucis longus, flexor digitorum longus, tibialis anterior, extensor hallucis longus, flexor digitorum brevis, flexor hallucis brevis, and extensor digitorum longus. For each muscle, we present (1) Anatomical positioning relative to osseous landmarks; (2) Sonographic identification cues and dynamic features; (3) Zones of intramuscular neural arborization optimal for injection; (4) Practical injection protocols derived from literature and clinical experience. High-resolution ultrasound images and dynamic videos illustrate real-life muscle behavior and guide injection site selection. This guide facilitates precise targeting by correlating sonographic signs with optimal injection zones, addresses common spastic patterns—including equinus, varus, claw toe, and hallux deformities—and integrates fascial anatomy with motor-point mapping. This article completes the Elias University Hospital visual series, providing clinicians with a unified framework for effective spasticity management to improve gait, posture, and patient autonomy. Full article

25 pages, 34242 KB  
Article
ImbDef-GAN: Defect Image-Generation Method Based on Sample Imbalance
by Dengbiao Jiang, Nian Tao, Kelong Zhu, Yiming Wang and Haijian Shao
J. Imaging 2025, 11(10), 367; https://doi.org/10.3390/jimaging11100367 - 16 Oct 2025
Abstract
In industrial settings, defect detection using deep learning typically requires large numbers of defective samples. However, defective products are rare on production lines, creating a scarcity of defect samples and an overabundance of samples that contain only background. We introduce ImbDef-GAN, a sample imbalance generative framework, to address three persistent limitations in defect image generation: unnatural transitions at defect background boundaries, misalignment between defects and their masks, and out-of-bounds defect placement. The framework operates in two stages: (i) background image generation and (ii) defect image generation conditioned on the generated background. In the background image-generation stage, a lightweight StyleGAN3 variant jointly generates the background image and its segmentation mask. A Progress-coupled Gated Detail Injection module uses global scheduling driven by training progress and per-pixel gating to inject high-frequency information in a controlled manner, thereby enhancing background detail while preserving training stability. In the defect image-generation stage, the design augments the background generator with a residual branch that extracts defect features. By blending defect features with a smoothing coefficient, the resulting defect boundaries transition more naturally and gradually. A mask-aware matching discriminator enforces consistency between each defect image and its mask. In addition, an Edge Structure Loss and a Region Consistency Loss strengthen morphological fidelity and spatial constraints within the valid mask region. Extensive experiments on the MVTec AD dataset demonstrate that ImbDef-GAN surpasses existing methods in both the realism and diversity of generated defects. When the generated data are used to train a downstream detector, YOLOv11 achieves a 5.4% improvement in mAP@0.5, indicating that the proposed approach effectively improves detection accuracy under sample imbalance. Full article
(This article belongs to the Section Image and Video Processing)
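The gradual defect-to-background transition via a smoothing coefficient that the ImbDef-GAN abstract describes can be pictured as a soft-mask alpha blend. This is a minimal sketch under stated assumptions: the box-blur mask softening, the `alpha`/`sigma_px` parameters, and the function name are illustrative, not the paper's actual residual-branch mechanism.

```python
import numpy as np

def blend_defect(background, defect, mask, alpha=0.8, sigma_px=2):
    """Alpha-blend a defect patch into a background image.

    `mask` is a binary defect mask; softening its edges before blending
    gives a gradual defect/background boundary. The box blur below is a
    crude stand-in for a Gaussian; all parameters are assumptions.
    """
    soft = mask.astype(float)
    k = 2 * sigma_px + 1
    pad = np.pad(soft, sigma_px, mode="edge")
    # Box blur: average k*k shifted copies of the padded mask.
    soft = np.mean(
        [pad[i:i + soft.shape[0], j:j + soft.shape[1]]
         for i in range(k) for j in range(k)], axis=0)
    w = alpha * soft                      # per-pixel blend weight in [0, alpha]
    return (1 - w) * background + w * defect

bg = np.zeros((8, 8))
df = np.ones((8, 8))
m = np.zeros((8, 8)); m[2:6, 2:6] = 1
out = blend_defect(bg, df, m)
```

The softened mask keeps the defect strongest at its center and fades it toward the boundary instead of pasting a hard edge.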
18 pages, 1898 KB  
Article
Computer Vision-Based Deep Learning Modeling for Salmon Part Segmentation and Defect Identification
by Chunxu Zhang, Yuanshan Zhao, Wude Yang, Liuqian Gao, Wenyu Zhang, Yang Liu, Xu Zhang and Huihui Wang
Foods 2025, 14(20), 3529; https://doi.org/10.3390/foods14203529 - 16 Oct 2025
Abstract
Accurate cutting of salmon parts and surface defect detection are the key steps to enhance the added value of its processing. At present, mainstream manual inspection methods have low accuracy and efficiency, making it difficult to meet the demands of industrialized production. A machine vision inspection method based on a two-stage fusion network is proposed in this paper, aiming to achieve accurate cutting of salmon parts and efficient recognition of defects. The fish body image is collected by building a visual inspection system, and the dataset is constructed by preprocessing and data enhancement. For the part cutting, the improved U-Net model that introduces the CBAM attention mechanism is used to strengthen the extraction ability of the fish body texture features. For defect detection, the two-stage fusion architecture is designed to quickly locate the defective region by adding the YOLOv5 of the P2 small target detection layer first, and then the cropped region is fed into the improved U-Net for accurate cutting. The experimental results demonstrate that the improved U-Net achieves a mean average precision (mAP) of 96.87% and a mean intersection over union (mIoU) of 94.33% in part cutting, representing improvements of 2.44% and 1.06%, respectively, over the base model. In defect detection, the fusion model attains an mAP of 94.28% with a processing speed of 7.30 fps, outperforming the single U-Net by 28.02% in accuracy and 236.4% in efficiency. This method provides a high-precision, high-efficiency solution for intelligent salmon processing, offering significant value for advancing automation in the aquatic product processing industry. Full article
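The two-stage fusion the salmon abstract describes, a detector locates a defect region, then the cropped region is segmented precisely, can be sketched as a crop-and-paste pipeline. The bounding box, margin, and `segment_fn` stub here are illustrative stand-ins for the paper's YOLOv5 + improved U-Net, not its actual models.

```python
import numpy as np

def detect_then_segment(image, bbox, segment_fn, margin=4):
    """Stage 1 gives a coarse box; stage 2 segments the crop; the
    crop-level mask is pasted back into a full-size mask."""
    x0, y0, x1, y1 = bbox
    h, w = image.shape[:2]
    # Expand the box by a margin so the segmenter sees some context.
    x0, y0 = max(0, x0 - margin), max(0, y0 - margin)
    x1, y1 = min(w, x1 + margin), min(h, y1 + margin)
    crop = image[y0:y1, x0:x1]
    crop_mask = segment_fn(crop)          # stage two: fine segmentation
    full_mask = np.zeros((h, w), dtype=crop_mask.dtype)
    full_mask[y0:y1, x0:x1] = crop_mask
    return full_mask

# Toy example: a thresholding lambda stands in for the U-Net.
img = np.zeros((32, 32)); img[10:20, 12:22] = 1.0
mask = detect_then_segment(img, (12, 10, 22, 20),
                           lambda c: (c > 0.5).astype(np.uint8))
```

Running the heavy segmenter only on detector crops is what buys the efficiency gain the abstract reports over a single full-image U-Net.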

13 pages, 1718 KB  
Review
Are We Underestimating Zygomaticus Variability in Midface Surgery?
by Ingrid C. Landfald and Łukasz Olewnik
J. Clin. Med. 2025, 14(20), 7311; https://doi.org/10.3390/jcm14207311 (registering DOI) - 16 Oct 2025
Abstract
The zygomaticus major and minor (ZMa/ZMi) are key determinants of smile dynamics and midface contour, yet they exhibit substantial morphological variability—including bifid or multibellied bellies, accessory slips, and atypical insertions. Such variants can alter force vectors, fat-compartment boundaries, and SMAS planes, increasing the risk of asymmetry, contour irregularities, or “joker smile” following facelifts, fillers, thread lifts, and smile reconstruction. To our knowledge, this is the first review to integrate the Landfald classification of ZMa/ZMi variants with a standardized dynamic imaging-based workflow for aesthetic and reconstructive midface procedures. We conducted a narrative literature synthesis of anatomical and imaging studies. Bifid or multibellied variants have been reported in up to 35% of cadaveric specimens. We synthesize anatomical, biomechanical, and imaging evidence (MRI, dynamic US, 3D analysis) to propose a practical protocol: (1) focused history and dynamic examination, (2) US/EMG mapping of contraction vectors, (3) optional high-resolution MRI for complex cases, and (4) individualized adjustment of surgical vectors, injection planes, and dosing. Procedure-specific adaptations are outlined for deep-plane releases, thread-lift trajectories, filler depth selection, and muscle-transfer orientation. We emphasize that standardizing preoperative dynamic mapping and adopting a “patient-specific mimetic profile” can enhance safety, predictability, and preservation of authentic expression, ultimately improving patient satisfaction across diverse midface interventions. Full article

21 pages, 2309 KB  
Review
Joint Acidosis and Acid-Sensing Receptors and Ion Channels in Osteoarthritis Pathobiology and Therapy
by William N. Martin, Colette Hyde, Adam Yung, Ryan Taffe, Bhakti Patel, Ajay Premkumar, Pallavi Bhattaram, Hicham Drissi and Nazir M. Khan
Cells 2025, 14(20), 1605; https://doi.org/10.3390/cells14201605 - 16 Oct 2025
Abstract
Osteoarthritis (OA) lacks disease-modifying therapies, in part because key features of the joint microenvironment remain underappreciated. One such feature is localized acidosis, characterized by sustained reductions in extracellular pH within the cartilage, meniscus, and the osteochondral interface despite near-neutral bulk synovial fluid. We synthesize current evidence on the origins, sensing, and consequences of joint acidosis in OA. Metabolic drivers include hypoxia-biased glycolysis in avascular cartilage, cytokine-driven reprogramming in the synovium, and limits in proton/lactate extrusion (e.g., monocarboxylate transporters (MCTs)), with additional contributions from fixed-charge matrix chemistry and osteoclast-mediated acidification at the osteochondral junction. Acidic niches shift proteolysis toward cathepsins, suppress anabolic control, and trigger chondrocyte stress responses (calcium overload, autophagy, senescence, apoptosis). In the nociceptive axis, protons engage ASIC3 and sensitize TRPV1, linking acidity to pain. Joint cells detect pH through two complementary sensor classes: proton-sensing GPCRs (GPR4, GPR65/TDAG8, GPR68/OGR1, GPR132/G2A), which couple to Gs, Gq/11, and G12/13 pathways converging on MAPK, NF-κB, CREB, and RhoA/ROCK; and proton-gated ion channels (ASIC1a/3, TRPV1), which convert acidity into electrical and Ca2+ signals. Therapeutic implications include inhibition of acid-enabled proteases (e.g., cathepsin K), pharmacologic modulation of pH-sensing receptors (with emerging interest in GPR68 and GPR4), ASIC/TRPV1-targeted analgesia, metabolic control of lactate generation, and pH-responsive intra-articular delivery systems. We outline research priorities for pH-aware clinical phenotyping and imaging, cell-type-resolved signaling maps, and targeted interventions in ‘acidotic OA’ endotypes. Framing acidosis as an actionable component of OA pathogenesis provides a coherent basis for mechanism-anchored, locality-specific disease modification. Full article
(This article belongs to the Special Issue Molecular Mechanisms Underlying Inflammatory Pain)

15 pages, 4651 KB  
Article
Improvement of Construction Workers’ Drowsiness Detection and Classification via Text-to-Image Augmentation and Computer Vision
by Daegyo Jung, Yejun Lee, Kihyun Jeong, Jeehee Lee, Jinwoo Kim, Hyunjung Park and Jungho Jeon
Sustainability 2025, 17(20), 9158; https://doi.org/10.3390/su17209158 (registering DOI) - 16 Oct 2025
Abstract
Detecting and classifying construction workers’ drowsiness is critical in the construction safety management domain. Research efforts to increase the reliability of drowsiness detection through image augmentation and computer vision approaches face two key challenges: the related size constraints and the number of manual tasks associated with creating input images necessary for training vision algorithms. Although text-to-image (T2I) has emerged as a promising alternative, the dynamic relationship between T2I-driven image characteristics (e.g., contextual relevance), different computer vision algorithms, and the resulting performance remains lacking. To address the gap, this study proposes T2I-centered computer vision approaches for enhanced drowsiness detection by creating four separate image sets (e.g., construction vs. non-construction) labeled using the polygon method, developing two detection models (YOLOv8 and YOLO11), and comparing the performance. The results showed that the use of construction domain-specific images for training both YOLOv8 and YOLO11 led to higher mAP@50 of 68.2% and 56.6%, respectively, compared to those trained using non-construction images (53.4% and 53.5%). Also, increasing the number of T2I-generated training images improved mAP@50 from 68.2% (baseline) to 95.3% for YOLOv8 and 56.6% to 93.3% for YOLO11. The findings demonstrate the effectiveness of leveraging the T2I augmentation approach for improved construction workers’ drowsiness detection. Full article
(This article belongs to the Special Issue Advances in Sustainable Construction Engineering and Management)

21 pages, 6062 KB  
Article
Apple Orchard Mapping in China Based on an Automatic Sample Generation Algorithm and Random Forest
by Chunxiao Wu, Jianyu Yang, Han Zhou, Shuoji Zhang, Xiangyi Xiao, Kaixuan Tang, Xinyi Zhang, Nannan Zhang and Dongping Ming
Remote Sens. 2025, 17(20), 3449; https://doi.org/10.3390/rs17203449 - 16 Oct 2025
Abstract
Accurate apple orchard mapping plays a vital role in managing agricultural resources. However, national-scale apple orchard mapping faces challenges such as the “same spectrum with different objects” phenomenon between apple trees and other crops, as well as difficulties in sample collection. To address the above issues, this study proposes a knowledge-assisted apple mapping framework that automatically generates samples using agronomic knowledge and employs a random forest algorithm for classification. Firstly, an apple mapping composite index (AMCI) was developed by integrating the chlorophyll content and leaf structural characteristics of apple trees. In a single Sentinel-2 image, a novel natural vegetation phenolic compounds index was applied to systematically exclude natural vegetation, and based on this, the AMCI was used to generate an initial apple distribution map. Using this initial map, apple samples were obtained through random point selection and visual interpretation, and other samples were constructed based on land cover products. Finally, a 10 m-resolution apple orchard map of China was generated with the random forest algorithm. The results show an overall accuracy of 90.7% and a kappa of 0.814. Moreover, the extracted area shows an 82.11% consistency with official statistical data, demonstrating the effectiveness of the proposed method. This simple and robust framework provides a valuable reference for large-scale crop mapping. Full article
(This article belongs to the Special Issue Innovations in Remote Sensing Image Analysis)
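The knowledge-assisted sample generation the apple-mapping abstract describes, first exclude natural vegetation with a phenolic-compounds index, then keep pixels whose composite index falls in an apple-like range, reduces to masked thresholding. The thresholds and index definitions below are assumptions for illustration; the paper's AMCI formula is not reproduced here.

```python
import numpy as np

def candidate_apple_mask(index_map, veg_index, veg_thresh, lo, hi):
    """Keep pixels that (a) are not flagged as natural vegetation and
    (b) have a composite (AMCI-like) index inside [lo, hi]. All
    thresholds are illustrative, not the paper's calibrated values."""
    not_natural = veg_index < veg_thresh      # step 1: exclude natural vegetation
    in_range = (index_map >= lo) & (index_map <= hi)
    return not_natural & in_range

# Toy 2x2 scene: only the bottom-right pixel passes both tests.
idx = np.array([[0.2, 0.5], [0.7, 0.5]])
veg = np.array([[0.1, 0.9], [0.1, 0.1]])
m = candidate_apple_mask(idx, veg, veg_thresh=0.5, lo=0.4, hi=0.6)
```

Points sampled from such a mask would then be visually checked and fed to the random forest as training labels, per the abstract's workflow.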

23 pages, 4965 KB  
Article
Direct Estimation of Electric Field Distribution in Circular ECT Sensors Using Graph Convolutional Networks
by Robert Banasiak, Zofia Stawska and Anna Fabijańska
Sensors 2025, 25(20), 6371; https://doi.org/10.3390/s25206371 - 15 Oct 2025
Abstract
The Electrical Capacitance Tomography (ECT) imaging pipeline relies on accurate estimation of electric field distributions to compute electrode capacitances and reconstruct permittivity maps. Traditional ECT forward model methods based on the Finite Element Method (FEM) offer high accuracy but are computationally intensive, limiting their use in real-time applications. In this proof-of-concept study, we investigate the use of Graph Convolutional Networks (GCNs) for direct, one-step prediction of electric field distributions associated with a circular ECT sensor numerical model. The network is trained on FEM-simulated data and outputs of full 2D electric field maps for all excitation patterns. To evaluate physical fidelity, we compute capacitance matrices using both GCN-predicted and FEM-based fields. Our results show strong agreement in both direct field prediction and derived quantities, demonstrating the feasibility of replacing traditional solvers with fast, learned approximators. This approach has significant implications for further real-time ECT imaging and control applications. Full article
(This article belongs to the Section Sensing and Imaging)
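The core operation of the graph convolutional networks used for the ECT field prediction can be sketched in a few lines: each layer propagates node features over the mesh graph with symmetric normalization. The tiny two-node graph and feature choices below are illustrative; the paper's actual architecture and FEM mesh are not reproduced.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    For an ECT forward model, nodes would be mesh points and H per-node
    inputs (permittivity, excitation); that mapping is an assumption."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalisation
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0.0, 1.0], [1.0, 0.0]])        # two connected nodes
H = np.array([[1.0], [0.0]])                  # feature only on node 0
W = np.array([[1.0]])
out = gcn_layer(A, H, W)                      # feature diffuses to both nodes
```

Stacking such layers lets the network emit a full per-node field map in one forward pass, which is what replaces the iterative FEM solve.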

29 pages, 5388 KB  
Article
Bio-Inspired Structural Design for Enhanced Crashworthiness of Electric Vehicles’ Battery Frame
by Arefeh Salimi Beni and Hossein Taheri
Appl. Sci. 2025, 15(20), 11052; https://doi.org/10.3390/app152011052 - 15 Oct 2025
Abstract
The increasing reliance on lithium-ion batteries (LIBs) in electric vehicles (EVs) has intensified the need for structurally resilient and lightweight protective enclosures that can withstand mechanical abuse during crashes. This study addresses the challenge by drawing inspiration from the hierarchical geometry of bighorn sheep horns to design a bio-inspired battery frame with improved crashworthiness. A multilayered structure, replicating both the internal and external features of the horn, was fabricated using Fused Deposition Modeling (FDM) with Acrylonitrile Butadiene Styrene (ABS) and carbon fiber composite (CFC) materials. The experimental evaluation involved tensile and compression testing, Izod impact tests, digital image correlation (DIC), and acoustic emission (AE) monitoring for full-field strain mapping, aiming to assess structural performance under various loading scenarios. Results demonstrate that the bioinspired designs exhibit enhanced energy absorption, mechanical strength, and strain distribution compared to conventional configurations. The improved vibration response and damage tolerance observed in structured samples suggest their potential for application in battery protection systems. This work underscores the feasibility of leveraging natural design principles to engineer robust, lightweight enclosures for advanced energy storage systems, contributing to safer and more reliable EV technologies. Full article

15 pages, 2232 KB  
Article
Image-Based Deep Learning for Brain Tumour Transcriptomics: A Benchmark of DeepInsight, Fotomics, and Saliency-Guided CNNs
by Ali Alyatimi, Vera Chung, Muhammad Atif Iqbal and Ali Anaissi
Mach. Learn. Knowl. Extr. 2025, 7(4), 119; https://doi.org/10.3390/make7040119 - 15 Oct 2025
Abstract
Classifying brain tumour transcriptomic data is crucial for precision medicine but remains challenging due to high dimensionality and limited interpretability of conventional models. This study benchmarks three image-based deep learning approaches, DeepInsight, Fotomics, and a novel saliency-guided convolutional neural network (CNN), for transcriptomic classification. DeepInsight utilises dimensionality reduction to spatially arrange gene features, while Fotomics applies Fourier transforms to encode expression patterns into structured images. The proposed method transforms each single-cell gene expression profile into an RGB image using PCA, UMAP, or t-SNE, enabling CNNs such as ResNet to learn spatially organised molecular features. Gradient-based saliency maps are employed to highlight gene regions most influential in model predictions. Evaluation is conducted on two biologically and technologically different datasets: single-cell RNA-seq from glioblastoma GSM3828672 and bulk microarray data from medulloblastoma GSE85217. Outcomes demonstrate that image-based deep learning methods, particularly those incorporating saliency guidance, provide a robust and interpretable framework for uncovering biologically meaningful patterns in complex high-dimensional omics data. For instance, ResNet-18 achieved the highest accuracies, 97.25% on the GSE85217 dataset and 91.02% on GSM3828672, outperforming other baseline models across multiple metrics. Full article
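The DeepInsight-style step this abstract builds on, laying genes out in 2D by dimensionality reduction and writing each profile's expression values into the resulting pixel grid, can be sketched as follows. The grid size, PCA-via-SVD projection, and max-pooling of colliding genes are assumptions for illustration, not the benchmarked pipeline.

```python
import numpy as np

def expression_to_image(X, side=16):
    """Map profiles (rows = cells, columns = genes) to side x side images:
    PCA places each gene at a 2D position shared by all cells, then each
    cell's expression is written into its genes' pixels."""
    Xc = X - X.mean(axis=0)
    # PCA on genes: embed the transposed (gene x cell) matrix.
    U, S, Vt = np.linalg.svd(Xc.T, full_matrices=False)
    coords = U[:, :2] * S[:2]                       # gene positions in 2D
    mins, maxs = coords.min(0), coords.max(0)
    pix = ((coords - mins) / (maxs - mins + 1e-12) * (side - 1)).astype(int)
    imgs = np.zeros((X.shape[0], side, side))
    for g, (i, j) in enumerate(pix):                # one pixel per gene;
        imgs[:, i, j] = np.maximum(imgs[:, i, j], X[:, g])  # max on collision
    return imgs

X = np.random.default_rng(0).random((3, 50))        # 3 cells, 50 genes
imgs = expression_to_image(X)
```

Once profiles are images, an off-the-shelf CNN (and gradient saliency over its input) applies directly, which is the premise of the benchmark.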

18 pages, 6196 KB  
Article
MSIMG: A Density-Aware Multi-Channel Image Representation Method for Mass Spectrometry
by Fengyi Zhang, Boyong Gao, Yinchu Wang, Lin Guo, Wei Zhang and Xingchuang Xiong
Sensors 2025, 25(20), 6363; https://doi.org/10.3390/s25206363 - 15 Oct 2025
Abstract
Extracting key features for phenotype classification from high-dimensional and complex mass spectrometry (MS) data presents a significant challenge. Conventional data representation methods, such as traditional peak lists or grid-based imaging strategies, are often hampered by information loss and compromised signal integrity, thereby limiting the performance of downstream deep learning models. To address this issue, we propose a novel data representation framework named MSIMG. Inspired by object detection in computer vision, MSIMG introduces a data-driven, “density-peak-centric” patch selection strategy. This strategy employs density map estimation and non-maximum suppression algorithms to locate the centers of signal-dense regions, which serve as anchors for dynamic, content-aware patch extraction. This process transforms raw mass spectrometry data into a multi-channel image representation with higher information fidelity. Extensive experiments conducted on two public clinical mass spectrometry datasets demonstrate that MSIMG significantly outperforms both the traditional peak list method and the grid-based MetImage approach. This study confirms that the MSIMG framework, through its content-aware patch selection, provides a more information-dense and discriminative data representation paradigm for deep learning models. Our findings highlight the decisive impact of data representation on model performance and successfully demonstrate the immense potential of applying computer vision strategies to analytical chemistry data, paving the way for the development of more robust and precise clinical diagnostic models. Full article
(This article belongs to the Section Chemical Sensors)
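The density-peak-centric selection the MSIMG abstract describes, locate signal-dense centers with a density map plus non-maximum suppression, then extract patches around them, can be sketched with a greedy NMS over a given density map. The suppression radius and patch count are assumptions; the real pipeline also estimates the density map from raw MS signals, which is skipped here.

```python
import numpy as np

def select_patch_centers(density, k=3, suppress=2):
    """Greedily take the k highest-density pixels, zeroing a
    (2*suppress+1)^2 neighbourhood around each pick (crude NMS)."""
    d = density.copy()
    centers = []
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(d), d.shape)
        centers.append((int(i), int(j)))
        i0, j0 = max(0, i - suppress), max(0, j - suppress)
        d[i0:i + suppress + 1, j0:j + suppress + 1] = -np.inf  # suppress neighbours
    return centers

# Two close peaks and one far one: NMS keeps one pick per dense region.
density = np.zeros((10, 10))
density[2, 2], density[2, 3], density[8, 8] = 5.0, 4.0, 3.0
centers = select_patch_centers(density, k=2)
```

Patches cut around these anchors then become the channels of the multi-channel image representation the abstract describes.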

18 pages, 1126 KB  
Article
Generative Implicit Steganography via Message Mapping
by Yangjie Zhong, Jia Liu, Peng Luo, Yan Ke and Mingshu Zhang
Appl. Sci. 2025, 15(20), 11041; https://doi.org/10.3390/app152011041 - 15 Oct 2025
Abstract
Generative steganography (GS) generates stego-media via secret messages, but existing GS only targets single-type multimedia data with poor universality. The generator and extractor sizes are highly coupled with resolution. Message mapping converts secret messages and noise, yet current GS schemes based on it use gridded data, failing to generate diverse multimedia universally. Inspired by implicit neural representation (INR), we propose generative implicit steganography via message mapping (GIS). We designed single-bit and multi-bit message mapping schemes in function domains. The scheme’s function generator eliminates the coupling between model and gridded data sizes, enabling diverse multimedia generation and breaking resolution limits. A dedicated point cloud extractor is trained for adaptability. Through a literature review, this scheme is the first to perform message mapping in the functional domain. During the experiment, taking images as an example, methods such as PSNR, StegExpose, and neural pruning were used to demonstrate that the generated image quality is almost indistinguishable from the real image. At the same time, the generated image is robust. The accuracy of message extraction can reach 96.88% when the embedding capacity is 1 bpp, 89.84% when the embedding capacity is 2 bpp, and 82.21% when the pruning rate is 0.3. Full article
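The single-bit message mapping the GIS abstract mentions, converting secret bits into the noise a generator consumes, can be illustrated with a sign-coding toy: each bit fixes the sign of one latent sample, and extraction reads the sign back. The interval choice and sign coding are assumptions, not the paper's function-domain scheme.

```python
import numpy as np

def map_bits_to_noise(bits, rng):
    """Each bit selects the sign of one noise sample; magnitudes stay
    random so the stego latent still looks like ordinary noise."""
    u = rng.uniform(0.1, 1.0, len(bits))      # magnitudes bounded away from 0
    return np.where(np.asarray(bits) == 1, u, -u)

def extract_bits(noise):
    """Recover the message by reading back the signs."""
    return (np.asarray(noise) >= 0).astype(int).tolist()

rng = np.random.default_rng(42)
z = map_bits_to_noise([1, 0, 1, 1, 0], rng)   # latent carrying 5 bits
```

In the paper's setting the generator is an implicit neural representation consuming such latents, so the same mapping is resolution-independent; this sketch only shows the bit/noise conversion itself.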

23 pages, 11567 KB  
Article
Georeferenced UAV Localization in Mountainous Terrain Under GNSS-Denied Conditions
by Inseop Lee, Chang-Ky Sung, Hyungsub Lee, Seongho Nam, Juhyun Oh, Keunuk Lee and Chansik Park
Drones 2025, 9(10), 709; https://doi.org/10.3390/drones9100709 - 14 Oct 2025
Abstract
In Global Navigation Satellite System (GNSS)-denied environments, unmanned aerial vehicles (UAVs) relying on Vision-Based Navigation (VBN) in high-altitude, mountainous terrain face severe challenges due to geometric distortions in aerial imagery. This paper proposes a georeferenced localization framework that integrates orthorectified aerial imagery with Scene Matching (SM) to achieve robust positioning. The method employs a camera projection model combined with Digital Elevation Model (DEM) to orthorectify UAV images, thereby mitigating distortions from central projection and terrain relief. Pre-processing steps enhance consistency with reference orthophoto maps, after which template matching is performed using normalized cross-correlation (NCC). Sensor fusion is achieved through extended Kalman filters (EKFs) incorporating Inertial Navigation System (INS), GNSS (when available), barometric altimeter, and SM outputs. The framework was validated through flight tests with an aircraft over 45 km trajectories at altitudes of 2.5 km and 3.5 km in mountainous terrain. The results demonstrate that orthorectification improves image similarity and significantly reduces localization error, yielding lower 2D RMSE compared to conventional rectification. The proposed approach enhances VBN by mitigating terrain-induced distortions, providing a practical solution for UAV localization in GNSS-denied scenarios. Full article
(This article belongs to the Special Issue Autonomous Drone Navigation in GPS-Denied Environments)
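The template-matching step the UAV-localization abstract names, normalized cross-correlation between the orthorectified UAV image and the reference orthophoto, can be sketched directly. Brute-force loops are used for clarity; the image sizes here are toy values, and production systems would use an FFT-based NCC.

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` and return the top-left offset with
    the highest zero-mean normalized cross-correlation score."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_ij = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_ij = score, (i, j)
    return best_ij, best

ref = np.random.default_rng(1).random((20, 20))   # reference orthophoto (toy)
tmpl = ref[5:12, 8:15]                            # UAV view cut from it
pos, score = ncc_match(ref, tmpl)                 # recovers offset (5, 8)
```

The recovered offset, mapped through the orthophoto's georeference, is the SM position fix that the EKF fuses with INS and barometric measurements.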

19 pages, 12813 KB  
Article
Remote Sensing of American Revolutionary War Fortification at Butts Hill (Portsmouth, Rhode Island)
by James G. Keppeler, Marcus Rodriguez, Samuel Koontz, Alexander Wise, Philip Mink, George Crothers, Paul R. Murphy, John K. Robertson, Hugo Reyes-Centeno and Alexandra Uhl
Heritage 2025, 8(10), 430; https://doi.org/10.3390/heritage8100430 - 14 Oct 2025
Abstract
The Battle of Rhode Island in 1778 was an important event in the Revolutionary War leading to the international recognition of American independence following the 1776 declaration. It culminated in a month-long campaign against British forces occupying Aquidneck Island, serving as the first combined operation of the newly formed Franco-American alliance. The military fortification at Butts Hill in Portsmouth, Rhode Island, served as a strategic point during the conflict and remains well-conserved today. While LiDAR has assisted in the geospatial surface reconstruction of the site’s earthwork fortifications, it is unknown whether other historically documented buildings within the fort remain preserved underground. We therefore conducted a ground-penetrating radar (GPR) survey to ascertain the presence or absence of architectural features, hypothesizing that GPR imaging could reveal structural remnants from the military barracks constructed in 1777. To test this hypothesis, we used public satellite and LiDAR imagery alongside historical maps to target the location of the historical barracks, creating a grid to survey the area with a GPR module in 0.5 m transects. Our results, superimposing remote sensing imagery with historical maps, indicate that the remains of a barracks building are likely present between circa 5–50 cm beneath today’s surface, warranting future investigations. Full article
(This article belongs to the Section Archaeological Heritage)
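Depth estimates like the circa 5–50 cm band reported above come from converting GPR two-way travel time to depth: depth = v·t/2, since the pulse travels down and back. The velocity used here (0.1 m/ns, a common textbook value for dry soils) is an assumption; the survey's actual calibrated velocity would differ.

```python
import numpy as np

def twt_to_depth(twt_ns, velocity_m_per_ns=0.1):
    """Convert GPR two-way travel time (ns) to depth (m): depth = v*t/2.
    The factor 2 accounts for the round trip of the radar pulse."""
    return velocity_m_per_ns * np.asarray(twt_ns) / 2.0

depths = twt_to_depth([1.0, 10.0])   # reflections at 1 ns and 10 ns
```

At this assumed velocity, reflections between roughly 1 ns and 10 ns correspond to 0.05–0.5 m, i.e. the 5–50 cm range the abstract cites.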

24 pages, 2634 KB  
Article
Supervised Focused Feature Network for Steel Strip Surface Defect Detection
by Wentao Liu and Weiqi Yuan
Mathematics 2025, 13(20), 3285; https://doi.org/10.3390/math13203285 - 14 Oct 2025
Abstract
Accurate detection of strip steel surface defects is a critical step to ensure product quality and prevent potential safety hazards. In practical inspection scenarios, defects on strip steel surfaces typically exhibit sparse distributions, diverse morphologies, and irregular shapes, while background regions dominate the images, exhibiting highly similar texture characteristics. These characteristics pose challenges for detection algorithms to efficiently and accurately localize and extract defect features. To address these challenges, this study proposes a Supervised Focused Feature Network for steel strip surface defect detection. Firstly, the network constructs a supervised range based on annotation information and introduces supervised convolution operations in the backbone network, limiting feature extraction within the supervised range to improve feature learning effectiveness. Secondly, a supervised deformable convolution layer is designed to achieve adaptive feature extraction within the supervised range, enhancing the detection capability for irregularly shaped defects. Finally, a supervised region proposal strategy is proposed to optimize the sample allocation process using the supervised range, improving the quality of candidate regions. Experimental results demonstrate that the proposed method achieves a mean Average Precision (mAP) of 81.2% on the NEU-DET dataset and 72.5% mAP on the GC10-DET dataset. Ablation studies confirm the contribution of each proposed module to feature extraction efficiency and detection accuracy. Results indicate that the proposed network effectively enhances the efficiency of sparse defect feature extraction and improves detection accuracy. Full article
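The "supervised range" idea above, restricting feature extraction to a neighbourhood of the annotated defects, can be pictured as masking the feature map before convolution. The padded hard mask below is a guess at the mechanism for illustration only; the paper's supervised convolution and deformable variant are more involved.

```python
import numpy as np

def supervised_feature_mask(features, boxes, pad=1):
    """Build a binary mask from annotation boxes (slightly padded) and
    zero features outside it, so later layers spend capacity only near
    annotated defects. Hard masking and `pad` are assumptions."""
    mask = np.zeros(features.shape[-2:], dtype=features.dtype)
    for x0, y0, x1, y1 in boxes:
        mask[max(0, y0 - pad):y1 + pad, max(0, x0 - pad):x1 + pad] = 1
    return features * mask

feat = np.ones((2, 8, 8))                        # 2 channels, 8x8 map
out = supervised_feature_mask(feat, [(2, 2, 4, 4)])
```

Because defects are sparse while background texture dominates, suppressing off-range activations concentrates learning on the regions that matter, which is the efficiency argument the abstract makes.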
