Search Results (116)

Search Parameters:
Keywords = panoramic mapping

17 pages, 3743 KB  
Article
Porcine Skeletal Muscle-Specific lncRNA-ssc.37456 Regulates Myoblast Proliferation and Differentiation
by Xia He, Yangshuo Hu, Yangli Pei, Yilong Yao and Shen Liu
Animals 2026, 16(3), 361; https://doi.org/10.3390/ani16030361 - 23 Jan 2026
Abstract
Long non-coding RNAs (lncRNAs) play important regulatory roles in the growth and development of skeletal muscle, but systematic identification and functional studies of lncRNAs related to porcine skeletal muscle development remain limited. Based on a previously constructed panoramic map of porcine skeletal muscle lncRNAs, lncRNA-ssc.37456 was identified as differentially expressed in porcine skeletal muscle before and after birth. Its function and potential mechanisms were investigated using a porcine skeletal muscle regeneration model, a primary skeletal muscle cell differentiation model, and knockdown and overexpression experiments in vitro. lncRNA-ssc.37456 was upregulated on day 7 of regeneration, with expression positively correlated with the muscle differentiation marker MYHC and negatively correlated with the proliferation marker PAX7. During differentiation of porcine primary myoblasts, expression continuously increased, peaking on day 4. Knockdown of lncRNA-ssc.37456 by small interfering RNA (siRNA) significantly increased cell proliferation, upregulated mRNA and protein levels of the proliferation-related genes KI67 and PCNA, and increased the proportion of EdU-positive cells. Conversely, expression of the differentiation-related genes MYOG and MYHC decreased, and immunofluorescence analysis revealed reduced myotube formation and a lower differentiation index. Overexpression of lncRNA-ssc.37456 promoted differentiation and inhibited proliferation, showing effects opposite to those observed in the knockdown experiments. Nucleocytoplasmic fractionation indicated predominant cytoplasmic localization, suggesting potential function through a ceRNA mechanism. An interaction network with miRNAs was constructed based on the miRDB database, indicating a potential miRNA “sponge” regulatory mechanism.
These results indicate that lncRNA-ssc.37456 participates in porcine skeletal muscle development by regulating the transition of muscle cells from proliferation to differentiation, providing molecular insights and potential targets for muscle biology research and the molecular breeding of growth traits.
(This article belongs to the Section Pigs)

22 pages, 7096 KB  
Article
An Improved ORB-KNN-Ratio Test Algorithm for Robust Underwater Image Stitching on Low-Cost Robotic Platforms
by Guanhua Yi, Tianxiang Zhang, Yunfei Chen and Dapeng Yu
J. Mar. Sci. Eng. 2026, 14(2), 218; https://doi.org/10.3390/jmse14020218 - 21 Jan 2026
Abstract
Underwater optical images often exhibit severe color distortion, weak texture, and uneven illumination due to light absorption and scattering in water. These issues result in unstable feature detection and inaccurate image registration. To address these challenges, this paper proposes an underwater image stitching method that integrates ORB (Oriented FAST and Rotated BRIEF) feature extraction with a fixed-ratio constraint matching strategy. First, lightweight color and contrast enhancement techniques are employed to restore color balance and improve local texture visibility. Then, ORB descriptors are extracted and matched via a KNN (k-nearest neighbors) search, and Lowe’s ratio test is applied to eliminate false matches caused by weak texture similarity. Finally, the geometric transformation between image frames is estimated with robust optimization, ensuring stable homography computation. Experimental results on real underwater datasets show that the proposed method significantly improves stitching continuity and structural consistency, achieving 40–120% improvements in SSIM (structural similarity index) and PSNR (peak signal-to-noise ratio) over conventional Harris–ORB + KNN, SIFT (scale-invariant feature transform) + BF (brute force), SIFT + KNN, and AKAZE (accelerated KAZE) + BF methods while maintaining processing times within one second. These results indicate that the proposed method is well suited for real-time underwater environment perception and panoramic mapping on low-cost, micro-sized underwater robotic platforms.
(This article belongs to the Section Ocean Engineering)
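The KNN-plus-ratio-test filtering step described in this abstract can be sketched in a few lines of plain Python. This is an illustrative sketch only: the function name, the tuple layout, and the 0.75 threshold are assumptions of mine (Lowe's commonly cited default), not details taken from the paper.

```python
def ratio_test(knn_pairs, ratio=0.75):
    """Lowe's ratio test over KNN match candidates.

    Each element of `knn_pairs` is (query_idx, best_dist, second_dist):
    the descriptor distances from a query feature to its nearest and
    second-nearest neighbors. A match is kept only when the best
    neighbor is clearly closer than the runner-up, which rejects the
    ambiguous matches that weak underwater texture tends to produce.
    """
    return [q for q, d1, d2 in knn_pairs if d1 < ratio * d2]

# Distinctive match (distances 10 vs 40) survives;
# ambiguous match (30 vs 32) is discarded.
kept = ratio_test([(0, 10.0, 40.0), (1, 30.0, 32.0)])
```

In an OpenCV-based pipeline the `knn_pairs` would typically come from a 2-nearest-neighbor descriptor search; the surviving matches then feed the robust homography estimation mentioned in the abstract.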

14 pages, 3527 KB  
Article
Robust Intraoral Image Stitching via Deep Feature Matching: Framework Development and Acquisition Parameter Optimization
by Jae-Seung Jeong, Dong-Jun Seong and Seong Wook Choi
Appl. Sci. 2026, 16(2), 1064; https://doi.org/10.3390/app16021064 - 20 Jan 2026
Abstract
Low-cost RGB intraoral cameras are accessible alternatives to intraoral scanners; however, generating panoramic images is challenging due to narrow fields of view, textureless surfaces, and specular highlights. This study proposes a robust stitching framework and identifies optimal acquisition parameters to overcome these limitations. All experiments were conducted exclusively on a mandibular dental phantom model. Geometric consistency was further validated using repeated physical measurements of mandibular arch dimensions as ground-truth references. We employed a deep learning-based approach using SuperPoint and SuperGlue to extract and match features in texture-poor environments, enhanced by a central-reference stitching strategy to minimize cumulative drift errors. To validate the feasibility in a controlled setting, we conducted experiments on dental phantoms varying working distances (1.5–3.0 cm) and overlap ratios. The proposed method detected approximately 19–20 times more valid inliers than SIFT, significantly improving matching stability. Experimental results indicated that a working distance of 2.5 cm offers the optimal balance between stitching success rate and image detail for handheld operation, while a 1/3 overlap ratio yielded superior geometric integrity. This system demonstrates that robust 2D dental mapping is achievable with consumer-grade sensors when combined with advanced deep feature matching and optimized acquisition protocols.
(This article belongs to the Special Issue AI for Medical Systems: Algorithms, Applications, and Challenges)
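The central-reference strategy mentioned above can be illustrated with a dependency-free sketch. To stay self-contained it uses pure translations (dx, dy) as a stand-in for full homographies; the function name and this simplification are mine, not the paper's.

```python
def to_reference(shifts, ref):
    """Compose pairwise alignments into a common reference frame.

    `shifts[i]` is a translation (dx, dy) mapping frame i into frame
    i+1 (a translation-only stand-in for a homography). Returns, for
    every frame, the cumulative shift into frame `ref`. Anchoring at a
    central frame composes at most about half as many transforms per
    frame as chaining everything back to frame 0, so registration
    error accumulates over fewer hops.
    """
    n = len(shifts) + 1                  # number of frames
    offsets = [(0.0, 0.0)] * n
    for i in range(ref - 1, -1, -1):     # frames left of the reference
        offsets[i] = (offsets[i + 1][0] + shifts[i][0],
                      offsets[i + 1][1] + shifts[i][1])
    for i in range(ref + 1, n):          # frames right: invert the shift
        offsets[i] = (offsets[i - 1][0] - shifts[i - 1][0],
                      offsets[i - 1][1] - shifts[i - 1][1])
    return offsets

# Five frames, each shifted 1 px right of its predecessor, anchored at
# the middle frame: offsets fan out symmetrically from frame 2.
offs = to_reference([(1.0, 0.0)] * 4, ref=2)
```

With real homographies the additions become 3x3 matrix products and the negations become matrix inverses, but the drift argument is the same.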

26 pages, 5287 KB  
Article
Low-Altitude UAV-Based Recognition of Porcine Facial Expressions for Early Health Monitoring
by Zhijiang Wang, Ruxue Mi, Haoyuan Liu, Mengyao Yi, Yanjie Fan, Guangying Hu and Zhenyu Liu
Animals 2025, 15(23), 3426; https://doi.org/10.3390/ani15233426 - 27 Nov 2025
Abstract
Pigs’ facial regions encode a wealth of biological trait information; detecting their facial poses can provide robust support for individual identification and behavioral analysis. However, in large-scale swine-herd settings, variable lighting within pigsties and the close proximity of animals impose significant challenges on facial-pose detection. This study adopts an aerial-inspection approach, distinct from conventional ground or hoist inspections, leveraging the high-efficiency, panoramic coverage of unmanned aerial vehicles (UAVs). UAV-captured video frames from real herding environments, involving a total of 600 pigs across 50 pens, serve as the data source. The final dataset comprises 2800 original images, expanded to 5600 after augmentation, to dissect how different facial expressions reflect pig physiological states. (1) The proposed Detect_FASFF detection head achieves a mean average precision at 50% IoU (mAP50) of 95.5% for large targets and 95.9% for small targets, effectively overcoming cross-scale feature loss and the accuracy shortcomings of the baseline YOLOv8s in detecting pig facial targets. (2) To address the excessive computation and sluggish inference of standard YOLOv8, we incorporate a Partial_Conv module that maintains mAP while substantially reducing runtime. (3) We introduce an improved exponential moving-average scheme (iEMA) with second-order attention to improve small-target accuracy and mitigate interference from the piggery environment, yielding an mAP50 of 96.4%. (4) For a comprehensive comparison, the refined YOLOv8 is benchmarked against traditional YOLO variants (YOLOv5s, YOLOv8s, YOLOv11s, YOLOv12s, YOLOv13s), RT-DETR, and Faster R-CNN; relative to these models, the enhanced YOLOv8 shows a statistically significant increase in overall mAP. These results highlight the potential of the upgraded model to transform pig facial-expression recognition accuracy, advancing more humane and informed livestock-management practices.
(This article belongs to the Section Pigs)

24 pages, 3816 KB  
Article
Geomorphodynamic Controls on the Distribution and Abundance of the Federally Threatened Puritan Tiger Beetle (Ellipsoptera puritana) Along the Maryland Chesapeake Bay Coast and Implications for Conservation
by Michael S. Fenster and C. Barry Knisley
Geosciences 2025, 15(12), 444; https://doi.org/10.3390/geosciences15120444 - 22 Nov 2025
Abstract
The federally threatened Puritan tiger beetle (Ellipsoptera puritana; PTB) inhabits Upper Chesapeake Bay bluffs and beaches and Connecticut River point bars. This study focuses on Maryland’s Chesapeake Bay population (Calvert County and Sassafras River), where adult PTBs prey on beach arthropods but establish larval habitat on the adjacent bluffs. Panoramic photography, GIS mapping, and field and laboratory measurements of sedimentological and ecological characteristics were combined across 17 high- and low-density Maryland beetle sites to identify the geologic and biological controls on population distribution and abundance. Results indicate that temporal and spatial fluctuations in PTB abundance are governed by bluff face quality, which, in turn, is shaped by antecedent geology (medium-compacted, fine-to-medium, well-sorted sands) and bluff dynamics. We present a four-stage, multi-decadal geomorphodynamic conceptual model in which long-term bluff recession and short-term storm-driven colluvium removal periodically expose the fresh bluff surfaces required for larval establishment. By integrating geomorphic, geologic, and ecological perspectives, this study highlights the role of sedimentary processes in maintaining critical estuarine habitats and provides a framework for predicting species persistence in dynamic coastal landscapes.
(This article belongs to the Section Biogeosciences)

19 pages, 8615 KB  
Article
Panoramic Radiograph-Based Deep Learning Models for Diagnosis and Clinical Decision Support of Furcation Lesions in Primary Molars
by Nevra Karamüftüoğlu, Ayşe Bulut, Murat Akın and Şeref Sağıroğlu
Children 2025, 12(11), 1517; https://doi.org/10.3390/children12111517 - 9 Nov 2025
Abstract
Background/Aim: Furcation lesions in primary molars are critical in pediatric dentistry, often guiding treatment decisions between root canal therapy and extraction. This study introduces a deep learning-based clinical decision-support system that directly maps radiographic lesion characteristics to corresponding treatment recommendations, a novel contribution in pediatric dental imaging; it also represents the first integration of panoramic radiographic classification of primary molar furcation lesions with treatment planning in pediatric dentistry. Materials and Methods: A total of 387 anonymized panoramic radiographs from children aged 3–13 were labeled into five distinct bone lesion categories. Three object detection models (YOLOv12x, RT-DETR-L, and RT-DETR-X) were trained and evaluated using stratified train-validation-test splits. Diagnostic performance was assessed using precision, recall, mAP@0.5, and mAP@0.5–0.95, and qualitative accuracy was evaluated with expert-annotated samples. Results: Among the models, RT-DETR-X achieved the highest performance (mAP@0.5 = 0.434, Recall = 0.483, Precision = 0.440), representing modest but clinically promising diagnostic capability despite the limitations of a relatively small, single-center dataset. It was followed by YOLOv12x (mAP@0.5 = 0.397, Precision = 0.442) and RT-DETR-L (mAP@0.5 = 0.326). All models successfully identified lesion types and supported corresponding clinical decisions. The system reduced diagnostic ambiguity and showed promise in supporting clinicians with varying levels of experience. Conclusions: The proposed models have potential for standardizing diagnostic outcomes, especially in resource-limited settings and mobile clinical environments.
(This article belongs to the Section Pediatric Dentistry & Oral Medicine)
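The mAP@0.5 and mAP@0.5–0.95 figures reported throughout these detection studies rest on an intersection-over-union matching criterion. A minimal sketch of that criterion (illustrative code of mine, not from any of the papers):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# mAP@0.5 counts a detection as a true positive when its IoU with a
# same-class ground-truth box reaches 0.5; mAP@0.5-0.95 averages the
# resulting AP over thresholds 0.5, 0.55, ..., 0.95.
hit = iou((0, 0, 10, 10), (1, 1, 11, 11)) >= 0.5
```

The stricter averaged metric is why mAP@0.5–0.95 values (e.g., 0.780 in the axle-recognition study below) are always lower than the corresponding mAP@0.5.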

22 pages, 59687 KB  
Article
Multi-View Omnidirectional Vision and Structured Light for High-Precision Mapping and Reconstruction
by Qihui Guo, Maksim A. Grigorev, Zihan Zhang, Ivan Kholodilin and Bing Li
Sensors 2025, 25(20), 6485; https://doi.org/10.3390/s25206485 - 20 Oct 2025
Abstract
Omnidirectional vision systems enable panoramic perception for autonomous navigation and large-scale mapping, but physical testbeds are costly, resource-intensive, and carry operational risks. We develop a virtual simulation platform for multi-view omnidirectional vision that supports flexible camera configuration and cross-platform data streaming for efficient processing. Building on this platform, we propose and validate a reconstruction and ranging method that fuses multi-view omnidirectional images with structured-light projection. The method achieves high-precision obstacle contour reconstruction and distance estimation without extensive physical calibration or rigid hardware setups. Experiments in simulation and the real world demonstrate distance errors within 8 mm and robust performance across diverse camera configurations, highlighting the practicality of the platform for omnidirectional vision research.
(This article belongs to the Section Navigation and Positioning)

21 pages, 5019 KB  
Article
Real-Time Parking Space Detection Based on Deep Learning and Panoramic Images
by Wu Wei, Hongyang Chen, Jiayuan Gong, Kai Che, Wenbo Ren and Bin Zhang
Sensors 2025, 25(20), 6449; https://doi.org/10.3390/s25206449 - 18 Oct 2025
Abstract
In the domain of automatic parking systems, parking space detection and localization are fundamental challenges: as a core research focus of intelligent automatic parking, accurate and effective detection of parking spaces is the essential prerequisite for fully autonomous parking. In this study, building upon existing public parking space datasets, a comprehensive panoramic parking space dataset named PSEX (Parking Slot Extended), with complex environmental diversity, was constructed by applying GAN (Generative Adversarial Network)-based image style transfer. Meanwhile, an improved algorithm based on PP-Yoloe (Paddle-Paddle Yoloe) detects the state (free or occupied) and angle (T-shaped or L-shaped) of the parking space in real time. To handle the numerous small parking space labels, the ResSpp block is replaced by a ResSimSppf module, a SimSppf structure is introduced at the neck, SiLU is replaced by ReLU in the basic CBS (Conv-BN-SiLU) structure, and an auxiliary detection head is added at the prediction head. Experimental results show that the proposed SimSppf_mepre-Yoloe model achieves an average improvement of 4.5% in mAP50 and 2.95% in mAP50:95 over the baseline PP-Yoloe across various parking space detection tasks. In terms of efficiency, the model maintains inference latency comparable to the baseline, reaching up to 33.7 FPS on the Jetson AGX Xavier platform under TensorRT optimization, and the improved augmentation algorithm greatly enriches the diversity of parking space data. These results demonstrate that the proposed model achieves a better balance between detection accuracy and real-time performance, making it suitable for deployment in intelligent vehicle and robotic perception systems.
(This article belongs to the Special Issue Robot Swarm Collaboration in the Unstructured Environment)

16 pages, 1340 KB  
Article
Artificial Intelligence-Aided Tooth Detection and Segmentation on Pediatric Panoramic Radiographs in Mixed Dentition Using a Transfer Learning Approach
by Serena Incerti Parenti, Giorgio Tsiotas, Alessandro Maglioni, Giulia Lamberti, Andrea Fiordelli, Davide Rossi, Luciano Bononi and Giulio Alessandri-Bonetti
Diagnostics 2025, 15(20), 2615; https://doi.org/10.3390/diagnostics15202615 - 16 Oct 2025
Abstract
Background/Objectives: Accurate identification of deciduous and permanent teeth on panoramic radiographs (PRs) during mixed dentition is fundamental for early detection of eruption disturbances, yet relies heavily on clinician experience due to developmental variability. This study aimed to develop a deep learning model for automated tooth detection and segmentation in pediatric PRs during mixed dentition. Methods: A retrospective dataset of 250 panoramic radiographs from patients aged 6–13 years was analyzed. A customized YOLOv11-based model was developed using a novel hybrid pre-annotation strategy leveraging transfer learning from 650 publicly available adult radiographs, followed by expert manual refinement. Performance evaluation utilized mean average precision (mAP), F1-score, precision, and recall metrics. Results: The model demonstrated robust performance with mAP0.5 = 0.963 [95%CI: 0.944–0.983] and macro-averaged F1-score = 0.953 [95%CI: 0.922–0.965] for detection. Segmentation achieved mAP0.5 = 0.890 [95%CI: 0.857–0.923]. Stratified analysis revealed excellent performance for permanent teeth (F1 = 0.977) and clinically acceptable accuracy for deciduous teeth (F1 = 0.884). Conclusions: The automated system achieved near-expert accuracy in detecting and segmenting teeth during mixed dentition using an innovative transfer learning approach. This framework establishes reliable infrastructure for AI-assisted diagnostic applications targeting eruption or developmental anomalies, potentially facilitating earlier detection while reducing clinician-dependent variability in mixed dentition evaluation.
(This article belongs to the Special Issue Advances in Diagnosis and Treatment in Pediatric Dentistry)

24 pages, 16680 KB  
Article
Research on Axle Type Recognition Technology for Under-Vehicle Panorama Images Based on Enhanced ORB and YOLOv11
by Xiaofan Feng, Lu Peng, Yu Tang, Chang Liu and Huazhen An
Sensors 2025, 25(19), 6211; https://doi.org/10.3390/s25196211 - 7 Oct 2025
Abstract
With the strict requirements of national policies on truck dimensions, axle loads, and weight limits, along with the implementation of tolls based on vehicle types, rapid and accurate identification of vehicle axle types has become essential for toll station management. To address the limitations of existing methods in distinguishing between drive and driven axles, complex equipment setup, and image evidence retention, this article proposes a panoramic image detection technology for vehicle chassis based on enhanced ORB and YOLOv11. A portable vehicle chassis image acquisition system based on area array cameras was developed for rapid on-site deployment within 20 min, eliminating the requirement for embedded installation. The FeatureBooster (FB) module was employed to optimize the ORB algorithm’s feature matching and combined with keyframe technology to achieve high-quality panoramic image stitching. After fine-tuning the FB model on a domain-specific area scan dataset, the number of feature matches increased to 151 ± 18, substantially outperforming both the pre-trained FB model and the baseline ORB. Experimental results on axle type recognition using the YOLOv11 algorithm combined with ORB and FB features demonstrated that the integrated approach achieved superior performance. On the overall test set, the model attained an mAP@50 of 0.989 and an mAP@50:95 of 0.780, along with a precision (P) of 0.98 and a recall (R) of 0.99. In nighttime scenarios, it maintained an mAP@50 of 0.977 and an mAP@50:95 of 0.743, with precision and recall consistently at 0.98 and 0.99, respectively. Field verification shows that the system’s real-time performance and accuracy can support axle type recognition at toll stations.
(This article belongs to the Section Sensing and Imaging)

25 pages, 35400 KB  
Article
Detection and Continuous Tracking of Breeding Pigs with Ear Tag Loss: A Dual-View Synergistic Method
by Weijun Duan, Fang Wang, Honghui Li, Na Liu and Xueliang Fu
Animals 2025, 15(19), 2787; https://doi.org/10.3390/ani15192787 - 24 Sep 2025
Abstract
The loss of ear tags in breeding pigs can lead to the loss or confusion of individual identity information. Timely and accurate detection, along with continuous tracking, of breeding pigs that have lost their ear tags is crucial for improving the precision of farm management. However, considering the real-time requirements for the detection of ear tag-lost breeding pigs, coupled with tracking challenges such as similar appearances, clustered occlusion, and rapid movements, this paper proposes a dual-view synergistic method for detecting and continuously tracking ear tag-lost breeding pigs. First, a lightweight ear tag loss detector was developed by combining the Cascade-TagLossDetector with a channel pruning algorithm. Second, a synergistic architecture was designed that integrates a localized top-down view with a panoramic oblique view, where the detection results of ear tag-lost breeding pigs from the localized top-down view were mapped to the panoramic oblique view for precise localization. Finally, an enhanced tracker incorporating Motion Attention was proposed to continuously track the localized ear tag-lost breeding pigs. Experimental results indicated that, during the ear tag loss detection stage, the pruned detector achieved a mean average precision of 94.03% for bounding box detection and 90.16% for instance segmentation, with a parameter count of 28.04 million and a detection speed of 37.71 fps. Compared to the unpruned model, the parameter count was reduced by 20.93 million and the detection speed increased by 12.38 fps while maintaining detection accuracy. In the tracking stage, the success rate, normalized precision, and precision of the proposed tracker reached 86.91%, 92.68%, and 89.74%, respectively, representing improvements of 4.39, 3.22, and 4.77 percentage points over the baseline model.
These results validated the advantages of the proposed method in terms of detection timeliness, tracking continuity, and feasibility of deployment on edge devices, providing significant reference value for managing livestock identity in breeding farms.

28 pages, 8325 KB  
Article
Tunnel Rapid AI Classification (TRaiC): An Open-Source Code for 360° Tunnel Face Mapping, Discontinuity Analysis, and RAG-LLM-Powered Geo-Engineering Reporting
by Seyedahmad Mehrishal, Junsu Leem, Jineon Kim, Yulong Shao, Il-Seok Kang and Jae-Joon Song
Remote Sens. 2025, 17(16), 2891; https://doi.org/10.3390/rs17162891 - 20 Aug 2025
Cited by 1
Abstract
Accurate and efficient rock mass characterization is essential in geotechnical engineering, yet traditional tunnel face mapping remains time-consuming, subjective, and potentially hazardous. Recent advances in digital technologies and AI offer automation opportunities, but many existing solutions are hindered by slow 3D scanning, computationally intensive processing, and limited integration flexibility. This paper presents Tunnel Rapid AI Classification (TRaiC), an open-source MATLAB-based platform for rapid and automated tunnel face mapping. TRaiC integrates single-shot 360° panoramic photography, AI-powered discontinuity detection, 3D textured digital twin generation, rock mass discontinuity characterization, and Retrieval-Augmented Generation with Large Language Models (RAG-LLM) for automated geological interpretation and standardized reporting. The modular eight-stage workflow includes simplified 3D modeling, trace segmentation, 3D joint network analysis, and rock mass classification using RMR, with outputs optimized for Geo-BIM integration. Initial evaluations indicate substantial reductions in processing time and expert assessment workload. By producing a lightweight yet high-fidelity digital twin, TRaiC enables computational efficiency, transparency, and reproducibility, serving as a foundation for future AI-assisted geotechnical engineering research. Its graphical user interface and well-structured open-source code make it accessible to users ranging from beginners to advanced researchers.

20 pages, 2316 KB  
Article
Detection of Dental Anomalies in Digital Panoramic Images Using YOLO: A Next Generation Approach Based on Single Stage Detection Models
by Uğur Şevik and Onur Mutlu
Diagnostics 2025, 15(15), 1961; https://doi.org/10.3390/diagnostics15151961 - 5 Aug 2025
Cited by 1
Abstract
Background/Objectives: The diagnosis of pediatric dental conditions from panoramic radiographs is uniquely challenging due to the dynamic nature of the mixed dentition phase, which can lead to subjective and inconsistent interpretations. This study aims to develop and rigorously validate an advanced deep learning model to enhance diagnostic accuracy and efficiency in pediatric dentistry, providing an objective tool to support clinical decision-making. Methods: An initial comparative study of four state-of-the-art YOLO variants (YOLOv8, v9, v10, and v11) was conducted to identify the optimal architecture for detecting four common findings: Dental Caries, Deciduous Tooth, Root Canal Treatment, and Pulpotomy. A stringent two-tiered validation strategy was employed: a primary public dataset (n = 644 images) was used for training and model selection, while a completely independent external dataset (n = 150 images) was used for final testing. All annotations were validated by a dual-expert team comprising a board-certified pediatric dentist and an experienced oral and maxillofacial radiologist. Results: Based on its leading performance on the internal validation set, YOLOv11x was selected as the optimal model, achieving a mean Average Precision (mAP50) of 0.91. When evaluated on the independent external test set, the model demonstrated robust generalization, achieving an overall F1-Score of 0.81 and a mAP50 of 0.82. It yielded clinically valuable recall rates for therapeutic interventions (Root Canal Treatment: 88%; Pulpotomy: 86%) and other conditions (Deciduous Tooth: 84%; Dental Caries: 79%). Conclusions: Validated through a rigorous dual-dataset and dual-expert process, the YOLOv11x model demonstrates its potential as an accurate and reliable tool for automated detection in pediatric panoramic radiographs. 
This work suggests that such AI-driven systems can serve as valuable assistive tools for clinicians by supporting diagnostic workflows and contributing to the consistent detection of common dental findings in pediatric patients. Full article
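The F1-Score and recall figures reported above follow from standard detection counts. As a minimal illustration (not from the paper), the metrics can be computed from true positives, false positives, and false negatives per class:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Return (precision, recall, f1) from detection counts.

    A detection counts as a true positive when it matches a ground-truth
    box above the chosen IoU threshold (0.5 for the mAP50 reported here).
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

mAP50 additionally averages the area under the precision-recall curve over classes; the sketch above covers only the point estimates quoted in the abstract.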
23 pages, 7371 KB  
Article
A Novel Method for Estimating Building Height from Baidu Panoramic Street View Images
by Shibo Ge, Jiping Liu, Xianghong Che, Yong Wang and Haosheng Huang
ISPRS Int. J. Geo-Inf. 2025, 14(8), 297; https://doi.org/10.3390/ijgi14080297 - 30 Jul 2025
Abstract
Building height information plays an important role in many urban-related applications, such as urban planning, disaster management, and environmental studies. With the rapid development of real scene maps, street view images are becoming a new data source for building height estimation, considering their easy collection and low cost. However, existing studies on building height estimation primarily utilize remote sensing images, with little exploration of height estimation from street-view images. In this study, we proposed a deep learning-based method for estimating the height of a single building in Baidu panoramic street view imagery. Firstly, the Segment Anything Model was used to extract the region of interest image and location features of individual buildings from the panorama. Subsequently, a cross-view matching algorithm was proposed by combining Baidu panorama and building footprint data with height information to generate building height samples. Finally, a Two-Branch feature fusion model (TBFF) was constructed to combine building location features and visual features, enabling accurate height estimation for individual buildings. The experimental results showed that the TBFF model had the best performance, with an RMSE of 5.69 m, MAE of 3.97 m, and MAPE of 0.11. Compared with two state-of-the-art methods, the TBFF model exhibited robustness and higher accuracy. The Random Forest model had an RMSE of 11.83 m, MAE of 4.76 m, and MAPE of 0.32, and the Pano2Geo model had an RMSE of 10.51 m, MAE of 6.52 m, and MAPE of 0.22. The ablation analysis demonstrated that fusing building location and visual features can improve the accuracy of height estimation by 14.98% to 69.99%. Moreover, the accuracy of the proposed method meets the LOD1 level 3D modeling requirements defined by the OGC (height error ≤ 5 m), which can provide data support for urban research. Full article
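The RMSE, MAE, and MAPE values used to compare the TBFF, Random Forest, and Pano2Geo models are standard regression errors. A short sketch (an illustration, not the authors' code) of how they are computed over predicted versus reference building heights:

```python
import math

def height_errors(y_true: list[float], y_pred: list[float]) -> tuple[float, float, float]:
    """Return (RMSE, MAE, MAPE) for predicted building heights in metres.

    MAPE is expressed as a fraction (e.g. 0.11 = 11%), matching the
    values reported in the abstract.
    """
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mape = sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / n
    return rmse, mae, mape
```

Under the OGC LOD1 criterion cited above, a model passes when its typical height error stays within 5 m, which the TBFF MAE of 3.97 m satisfies.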
23 pages, 20932 KB  
Article
Robust Small-Object Detection in Aerial Surveillance via Integrated Multi-Scale Probabilistic Framework
by Youyou Li, Yuxiang Fang, Shixiong Zhou, Yicheng Zhang and Nuno Antunes Ribeiro
Mathematics 2025, 13(14), 2303; https://doi.org/10.3390/math13142303 - 18 Jul 2025
Abstract
Accurate and efficient object detection is essential for aerial airport surveillance, playing a critical role in aviation safety and the advancement of autonomous operations. Although recent deep learning approaches have achieved notable progress, significant challenges persist, including severe object occlusion, extreme scale variation, dense panoramic clutter, and the detection of very small targets. In this study, we introduce a novel and unified detection framework designed to address these issues comprehensively. Our method integrates a Normalized Gaussian Wasserstein Distance loss for precise probabilistic bounding box regression, Dilation-wise Residual modules for improved multi-scale feature extraction, a Hierarchical Screening Feature Pyramid Network for effective hierarchical feature fusion, and DualConv modules for lightweight yet robust feature representation. Extensive experiments conducted on two public airport surveillance datasets, ASS1 and ASS2, demonstrate that our approach yields substantial improvements in detection accuracy. Specifically, the proposed method achieves an improvement of up to 14.6 percentage points in mean Average Precision (mAP@0.5) compared to state-of-the-art YOLO variants, with particularly notable gains in challenging small-object categories such as personnel detection. These results highlight the effectiveness and practical value of the proposed framework in advancing aviation safety and operational autonomy in airport environments. Full article
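The Normalized Gaussian Wasserstein Distance loss mentioned above replaces IoU with a similarity derived from modeling each bounding box as a 2D Gaussian. A hedged sketch of the commonly used closed form for axis-aligned boxes follows; the constant `c` is a dataset-dependent normalizer, and this is an illustration of the general NWD formulation, not the paper's exact implementation:

```python
import math

def nwd(box_a: tuple, box_b: tuple, c: float = 12.8) -> float:
    """Normalized Gaussian Wasserstein Distance similarity in (0, 1].

    Boxes are (cx, cy, w, h). Each box is treated as a Gaussian with mean
    (cx, cy) and diagonal covariance (w/2, h/2); for that case the squared
    2-Wasserstein distance has a simple closed form.
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w2_sq = (ax - bx) ** 2 + (ay - by) ** 2 \
        + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2
    return math.exp(-math.sqrt(w2_sq) / c)
```

Unlike IoU, this similarity stays non-zero for non-overlapping boxes, which is why it behaves better as a regression target for very small objects such as the personnel category highlighted in the results.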