Search Results (7)

Search Parameters:
Keywords = automatic panorama generation

26 pages, 12177 KiB  
Article
An Efficient Hybrid 3D Computer-Aided Cephalometric Analysis for Lateral Cephalometric and Cone-Beam Computed Tomography (CBCT) Systems
by Laurine A. Ashame, Sherin M. Youssef, Mazen Nabil Elagamy and Sahar M. El-Sheikh
Computers 2025, 14(6), 223; https://doi.org/10.3390/computers14060223 - 7 Jun 2025
Viewed by 545
Abstract
Lateral cephalometric analysis is commonly used in orthodontics for skeletal classification to ensure an accurate and reliable diagnosis for treatment planning. However, most current research depends on analyzing different types of radiographs, which requires more computational time than 3D analysis. Consequently, this study addresses fully automatic orthodontic tracing based on artificial intelligence (AI) applied to 2D and 3D images, by designing a cephalometric system that analyzes the significant landmarks and regions of interest (ROI) needed in orthodontic tracing, especially for the mandibular and maxillary teeth. In this research, a computerized system is developed to automate the tasks of orthodontic evaluation for both 2D and Cone-Beam Computed Tomography (CBCT, i.e., 3D) measurements. The work was tested on a dataset containing images of males and females obtained from dental hospitals with patient-informed consent. The dataset consists of 2D lateral cephalometric, panoramic, and CBCT radiographs. Several scenarios were applied to test the proposed system in landmark prediction and detection. Moreover, the study integrates the Grad-CAM (Gradient-Weighted Class Activation Mapping) technique to generate heat maps, providing transparent visualization of the regions the model focuses on during its decision-making process. By enhancing the interpretability of deep learning predictions, Grad-CAM strengthens clinical confidence in the system’s outputs, ensuring that ROI detection aligns with orthodontic diagnostic standards. This explainability is crucial in medical AI applications, where understanding model behavior is as important as achieving high accuracy. The experimental results achieved an accuracy exceeding 98.9%. The research evaluates and differentiates between the two-dimensional and three-dimensional tracing analyses, with measurements based on the practices of the European Board of Orthodontics. The results demonstrate the proposed methodology’s robustness when applied to cephalometric images. Furthermore, the evaluation of 3D analysis provides a clear understanding of the significance of integrated deep-learning techniques in orthodontics.
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
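As an illustration of the Grad-CAM heat maps the abstract describes, below is a minimal PyTorch sketch. The backbone, target layer, and random input are placeholders (the paper’s actual architecture and cephalometric data are not given here); it only shows the standard recipe of weighting a convolutional layer’s activations by its pooled gradients.

```python
# Minimal Grad-CAM sketch; model, layer, and input are illustrative stand-ins.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()    # placeholder backbone
target_layer = model.layer4                     # assumed last conv block

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                 # stand-in for a cephalometric image
scores = model(x)
scores[0, scores[0].argmax()].backward()        # gradient of the top class score

# Channel weights = global-average-pooled gradients; CAM = ReLU of weighted sum
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalized heat map
```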
18 pages, 9754 KiB  
Article
Bridge Surface Defect Localization Based on Panoramic Image Generation and Deep Learning-Assisted Detection Method
by Tao Yin, Guodong Shen, Liang Yin and Guigang Shi
Buildings 2024, 14(9), 2964; https://doi.org/10.3390/buildings14092964 - 19 Sep 2024
Cited by 4 | Viewed by 1793
Abstract
Applying unmanned aerial vehicles (UAVs) and vision-based analysis methods to detect bridge surface damage significantly improves inspection efficiency, but existing techniques have difficulty locating damage accurately, which makes it hard to use the results to assess a bridge’s degree of deterioration. Therefore, this study proposes a method that generates panoramic bridge surface images from multi-view images captured by UAVs in order to automatically identify and locate damage. The main contributions are as follows: (1) we propose a UAV-based image-capturing method for various bridge sections to collect close-range, multi-angle, and overlapping images of the surface; (2) we propose a 3D reconstruction method based on multi-view images to reconstruct a textured bridge model, from which an ultra-high-resolution panoramic unfolded image of the bridge surface can be obtained by projecting from multiple angles; (3) we apply the Swin Transformer to optimize the YOLOv8 network and improve detection accuracy for small-scale damage on the established bridge damage dataset, and employ sliding-window segmentation to detect damage in the ultra-high-resolution panoramic image. The proposed method was applied to detect surface damage on a three-span concrete bridge. The results indicate that the method automatically generates panoramic images of the bridge bottom, deck, and sides with hundreds of millions of pixels and recognizes damage in the panoramas. In addition, the damage detection accuracy reached 98.7%, an improvement of 13.6% over the original network.
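The sliding-window step can be sketched as below: tile the panorama with overlap, run a detector per tile, and shift the boxes back to panorama coordinates. `detect_tile`, the tile size, and the overlap are hypothetical placeholders, not the paper’s Swin-augmented YOLOv8 pipeline.

```python
# Sliding-window detection over an ultra-high-resolution panorama (sketch).
import numpy as np

def detect_tile(tile: np.ndarray) -> list[tuple[float, float, float, float]]:
    """Placeholder detector returning boxes as (x1, y1, x2, y2) in tile coordinates."""
    return []

def sliding_window_detect(pano: np.ndarray, tile: int = 1024, overlap: int = 128):
    h, w = pano.shape[:2]
    step = tile - overlap
    detections = []
    for y0 in range(0, max(h - overlap, 1), step):
        for x0 in range(0, max(w - overlap, 1), step):
            window = pano[y0:y0 + tile, x0:x0 + tile]
            for (x1, y1, x2, y2) in detect_tile(window):
                # Shift tile-local boxes back into panorama coordinates
                detections.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0))
    return detections
```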
18 pages, 42289 KiB  
Article
Automatic Sequential Stitching of High-Resolution Panorama for Android Devices Using Precapture Feature Detection and the Orientation Sensor
by Yaseen, Oh-Jin Kwon, Jinhee Lee, Faiz Ullah, Sonain Jamil and Jae Soo Kim
Sensors 2023, 23(2), 879; https://doi.org/10.3390/s23020879 - 12 Jan 2023
Cited by 2 | Viewed by 4040
Abstract
Image processing on smartphones, which are resource-limited devices, is challenging. Panorama generation on modern mobile phones is a requirement of most mobile phone users. This paper presents an automatic sequential image stitching algorithm that generates high-resolution panoramas and addresses the issue of stitching failure on smartphone devices. A robust method is used to automatically control the events involved in panorama generation, from image capture to image stitching, on the Android operating system. The image frames are captured at fixed spatial intervals using the orientation sensor included in smartphone devices. A feature-based stitching algorithm is used for panorama generation, with a novel modification that addresses stitching failure (caused by an inability to find local features) when performing sequential stitching on mobile devices. We also address the issue of distortion in sequential stitching. Ultimately, in this study, we built an Android application that constructs a high-resolution panorama sequentially with automatic frame capture based on the orientation sensor and device rotation. We present a novel research methodology (called “Sense-Panorama”) for panorama construction along with a development guide for smartphone developers. Based on our experiments, performed on a Samsung Galaxy SM-N960N with a Qualcomm Snapdragon 845 system on chip (SoC) and a 4 × 2.8 GHz Kryo 385 CPU, our method can generate a high-resolution panorama. Compared to existing methods, the results show an improvement in visual quality in both subjective and objective evaluations.
(This article belongs to the Topic Lightweight Deep Neural Networks for Video Analytics)
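The feature-based pairwise stitching this abstract builds on can be sketched with OpenCV as below. ORB features, brute-force matching, and a RANSAC homography are illustrative choices under assumed defaults, not the authors’ “Sense-Panorama” implementation, and the sensor-driven capture logic is omitted.

```python
# Pairwise feature-based stitching sketch with OpenCV.
import cv2
import numpy as np

def stitch_pair(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(left, None)
    k2, d2 = orb.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        # Too few local features: this is the failure mode sequential stitching must handle
        raise RuntimeError("Not enough matches to estimate a homography")

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image into the left image's frame on a wider canvas
    canvas = cv2.warpPerspective(right, H, (left.shape[1] + right.shape[1], left.shape[0]))
    canvas[:left.shape[0], :left.shape[1]] = left
    return canvas
```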
5 pages, 234 KiB  
Proceeding Paper
A QFT Approach to Data Streaming in Natural and Artificial Neural Networks
by Gianfranco Basti and Giuseppe Vitiello
Proceedings 2022, 81(1), 106; https://doi.org/10.3390/proceedings2022081106 - 19 Sep 2021
Cited by 1 | Viewed by 1396
Abstract
In the current panorama of machine learning (ML) algorithms, the issue of real-time information extraction/classification/manipulation/analysis of data streams (DS) is acquiring ever-growing relevance. Such streams generally arrive at high speed and always require unsupervised real-time analysis to identify long-range and higher-order correlations among data that are continuously changing over time (phase transitions). This emphasizes the infinitary character of the issue, i.e., the continuous change of the significant number of degrees of freedom characterizing the statistical representation function, which challenges classical ML algorithms, in both their classical and quantum versions, insofar as all of them are based on the (stochastic) search for the global minimum of some cost/energy function. The physical analogue must be studied in the realm of quantum field theory (QFT) for dissipative systems, such as biological and neural systems, which are able to map between different phases of quantum fields using the formalism of the Bogoliubov transform (BT). By applying the BT in a reversed way to the energetically balanced states of the system-thermal bath pair, it is possible to define the powerful computational tool of the “doubling of the degrees of freedom” (DDF), making the choice of the significant finite number of degrees of freedom dynamic and hence automatic, suggesting a different class of unsupervised ML algorithms for solving the DS issue.
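For readers unfamiliar with the formalism, a minimal sketch of the bosonic Bogoliubov transformation in standard thermo-field-dynamics notation, where a and ã are a system mode and its doubled “tilde” copy; this is the textbook form, assumed here for illustration, and not necessarily the authors’ exact conventions.

```latex
% Standard two-mode (thermal) Bogoliubov transformation mixing the system mode a
% with its doubled "tilde" partner; \theta parametrizes the transformation
% (textbook form, assumed for illustration).
\begin{aligned}
  a(\theta)         &= a\cosh\theta - \tilde{a}^{\dagger}\sinh\theta, \\
  \tilde{a}(\theta) &= \tilde{a}\cosh\theta - a^{\dagger}\sinh\theta,
\end{aligned}
\qquad
a(\theta)\,\lvert 0(\theta)\rangle = \tilde{a}(\theta)\,\lvert 0(\theta)\rangle = 0.
```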
26 pages, 124329 KiB  
Article
Performance Evaluation of Bundle Adjustment with Population Based Optimization Algorithms Applied to Panoramic Image Stitching
by Maria Júlia R. Aguiar, Tiago da Rocha Alves, Leonardo M. Honório, Ivo C. S. Junior and Vinícius F. Vidal
Sensors 2021, 21(15), 5054; https://doi.org/10.3390/s21155054 - 26 Jul 2021
Cited by 8 | Viewed by 3595
Abstract
The image stitching process is based on the alignment and composition of multiple images that represent parts of a 3D scene. The automatic construction of panoramas from multiple digital images is a technique of great importance, with applications in different areas such as remote sensing and inspection and maintenance in many work environments. In traditional automatic image stitching, image alignment is generally performed by the numerical Levenberg–Marquardt method. Although these traditional approaches present only minor flaws in the final reconstruction, the result is not adequate for industrial-grade applications. To improve the final stitching quality, this work uses an RGBD robot capable of precise image positioning. To optimize the final adjustment, this paper proposes the use of bio-inspired algorithms such as the Bat Algorithm, Grey Wolf Optimizer, Arithmetic Optimization Algorithm, Salp Swarm Algorithm, and Particle Swarm Optimization in order to verify the efficiency and competitiveness of metaheuristics against the classical Levenberg–Marquardt method. The results show that the metaheuristics found better solutions than the traditional approach.
(This article belongs to the Section Sensing and Imaging)
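A compact particle swarm optimization (PSO) sketch of the kind the paper pits against Levenberg–Marquardt is shown below. The quadratic cost is a toy placeholder; in the stitching context it would be the total reprojection error over the camera or homography parameters, which is not reproduced here.

```python
# Minimal PSO sketch minimizing a generic cost function.
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()          # global best

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# Toy usage: minimize a quadratic stand-in for the reprojection error
best_params, best_cost = pso(lambda p: float(np.sum(p ** 2)), dim=8)
```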
22 pages, 12662 KiB  
Article
Automatic 360° Mono-Stereo Panorama Generation Using a Cost-Effective Multi-Camera System
by Hayat Ullah, Osama Zia, Jun Ho Kim, Kyungjin Han and Jong Weon Lee
Sensors 2020, 20(11), 3097; https://doi.org/10.3390/s20113097 - 30 May 2020
Cited by 24 | Viewed by 6851
Abstract
In recent years, 360° videos have gained the attention of researchers due to their versatility and applications to real-world problems. Easy access to different visual sensor kits and easily deployable image acquisition devices has also played a vital role in the growth of interest in this area within the research community. Recently, several 360° panorama generation systems have demonstrated generated panoramas of reasonable quality. However, these systems rely on expensive image sensor networks in which multiple cameras are mounted on a circular rig with specific overlapping gaps. In this paper, we propose an economical 360° panorama generation system that generates both mono and stereo panoramas. For mono panorama generation, we present a drone-mounted image acquisition sensor kit that consists of six cameras placed in a circular fashion with an optimal overlapping gap. The hardware of the proposed image acquisition system is configured in such a way that no user input is required to stitch multiple images. For stereo panorama generation, we propose a lightweight, cost-effective visual sensor kit that uses only three cameras to cover 360° of the surroundings. We also developed stitching software that generates both mono and stereo panoramas using a single image stitching pipeline, where the panorama generated by the proposed system is automatically straightened and free of visible seams. Furthermore, we compared the proposed system with existing mono and stereo content generation systems from both qualitative and quantitative perspectives, and the comparative measurements verified its effectiveness against existing mono and stereo generation systems.
(This article belongs to the Special Issue Image Sensors: Systems and Applications)
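As a baseline illustration of stitching a ring of overlapping camera images into a mono panorama, the sketch below uses OpenCV’s high-level stitcher. The file names and six-camera assumption are hypothetical; the paper’s own software additionally straightens the result and exploits the fixed rig geometry so that no user input is needed.

```python
# Stitch six ring-camera images into a mono panorama with OpenCV (sketch).
import cv2

paths = [f"cam_{i}.jpg" for i in range(6)]          # assumed six ring-mounted cameras
images = [cv2.imread(p) for p in paths]             # images must exist and overlap

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("mono_panorama.jpg", pano)
else:
    print(f"Stitching failed with status {status}")
```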
23 pages, 6118 KiB  
Article
Generating a Cylindrical Panorama from a Forward-Looking Borehole Video for Borehole Condition Analysis
by Zhaopeng Deng, Maoyong Cao, Yushui Geng and Laxmisha Rai
Appl. Sci. 2019, 9(16), 3437; https://doi.org/10.3390/app9163437 - 20 Aug 2019
Cited by 15 | Viewed by 4011
Abstract
Geological exploration plays a fundamental and crucial role in geological engineering. The most frequently used method is to obtain borehole videos using an axial-view borehole camera system (AVBCS) in a pre-drilled borehole. This approach to surveying the internal structure of a borehole relies on video playback and video screenshot analysis. One of the drawbacks of AVBCS is that it provides only a qualitative description of borehole information from a forward-looking borehole video; quantitative analysis of the borehole data, such as the width and dip angle of fractures, is unavailable. In this paper, we propose a new approach to create a whole-borehole-wall cylindrical panorama from the borehole video acquired by AVBCS, which opens the possibility of further analysis of borehole information. Firstly, based on the Otsu and region-labeling algorithms, a borehole center location algorithm is proposed to extract the borehole center of each video image automatically. Afterwards, based on coordinate mapping (CM), a virtual coordinate graph (VCG) is designed for the unwrapping process of the front-view borehole-wall image sequence, generating the corresponding unfolded image sequence and reducing the computational cost. Subsequently, based on the sum of absolute differences (SAD), a projection transformation SAD (PTSAD), which considers the gray-level similarity of candidate images, is proposed to achieve the matching of the unfolded image sequence. Finally, an image filtering module is introduced to filter out invalid frames, and the remaining frames are stitched into a complete cylindrical panorama. Experiments on two real-world borehole videos demonstrate that the proposed method can generate panoramic borehole-wall unfolded images from videos with a satisfying visual effect for follow-up geological condition analysis. From the resulting image, borehole information, including rock mechanical properties, the distribution and width of fractures, fault distribution, and seam thickness, can be further obtained and analyzed.
(This article belongs to the Special Issue Advanced Intelligent Imaging Technology)
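The coordinate-mapping idea behind the unwrapping step can be sketched as below: each annular borehole-wall region of a forward-looking frame is remapped to a rectangular strip by sampling along radius and angle. The center, radii, and output width are illustrative; the paper estimates the center automatically via Otsu thresholding and region labeling, and uses its own VCG construction rather than this generic remap.

```python
# Unwrap an annular borehole-wall region into a rectangular strip (sketch).
import cv2
import numpy as np

def unwrap_annulus(frame, cx, cy, r_inner, r_outer, out_w=720):
    out_h = int(r_outer - r_inner)
    # Each output column is an angle, each output row a radius
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_inner, r_outer, out_h)
    rr, tt = np.meshgrid(radius, theta, indexing="ij")
    map_x = (cx + rr * np.cos(tt)).astype(np.float32)
    map_y = (cy + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```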