Search Results (31)

Search Parameters:
Keywords = 360° panoramic image

23 pages, 6429 KB  
Article
An Improved Map Information Collection Tool Using 360° Panoramic Images for Indoor Navigation Systems
by Kadek Suarjuna Batubulan, Nobuo Funabiki, I Nyoman Darma Kotama, Komang Candra Brata and Anak Agung Surya Pradhana
Appl. Sci. 2026, 16(3), 1499; https://doi.org/10.3390/app16031499 - 2 Feb 2026
Abstract
At present, pedestrian navigation systems using smartphones have become common in daily activities. For their ubiquitous, accurate, and reliable services, map information collection is essential for constructing comprehensive spatial databases. Previously, we developed a map information collection tool to extract building information using Google Maps, optical character recognition (OCR), geolocation, and web scraping with smartphones. However, indoor navigation often suffers from inaccurate localization due to degraded GPS signals inside buildings and Simultaneous Localization and Mapping (SLAM) estimation errors, causing position errors and confusing augmented reality (AR) guidance. In this paper, we present an improved map information collection tool to address this problem. It captures 360° panoramic images to build 3D models, applies photogrammetry-based mesh reconstruction to correct geometry, and georeferences point clouds to refine latitude–longitude coordinates. For evaluation, experiments were conducted in various indoor scenarios. The results demonstrate that the proposed method effectively mitigates positional errors, with an average drift correction of 3.15 m calculated via the Haversine formula. Geometric validation using point cloud analysis showed high registration accuracy, which translated to a 100% task completion rate and an average navigation time of 124.5 s among participants. Furthermore, usability testing using the System Usability Scale (SUS) yielded an average score of 96.5, categorizing the user interface as ‘Best Imaginable’. These quantitative findings substantiate that the integration of 360° imaging and photogrammetric correction significantly enhances navigation reliability and user satisfaction compared with previous sensor fusion approaches.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
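
The drift figure above is a great-circle distance; the Haversine formula the authors cite is standard. A minimal sketch in Python (coordinates and names are illustrative, not from the paper):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_000.0):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_m * math.asin(math.sqrt(a))

# Hypothetical example: drift between a raw SLAM position and its corrected position.
print(round(haversine_m(34.6551, 133.9195, 34.6551, 133.9198), 1))  # ≈ 27.4 m
```
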
16 pages, 64671 KB  
Article
A Dual-UNet Diffusion Framework for Personalized Panoramic Generation
by Jing Shen, Leigang Huo, Chunlei Huo and Shiming Xiang
J. Imaging 2026, 12(1), 40; https://doi.org/10.3390/jimaging12010040 - 11 Jan 2026
Viewed by 215
Abstract
While text-to-image and customized generation methods demonstrate strong capabilities in single-image generation, they fall short in supporting immersive applications that require coherent 360° panoramas. Conversely, existing panorama generation models lack customization capabilities. In panoramic scenes, reference objects often appear as minor background elements and may occur multiple times, while reference images across different views exhibit weak correlations. To address these challenges, we propose a diffusion-based framework for customized multi-view image generation. Our approach introduces a decoupled feature injection mechanism within a dual-UNet architecture to handle weakly correlated reference images, effectively integrating spatial information by concurrently feeding both reference images and noise into the denoising branch. A hybrid attention mechanism enables deep fusion of reference features and multi-view representations. Furthermore, a data augmentation strategy facilitates viewpoint-adaptive pose adjustments, and panoramic coordinates are employed to guide multi-view attention. The experimental results demonstrate our model’s effectiveness in generating coherent, high-quality customized multi-view images.
(This article belongs to the Section AI in Imaging)
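
The paper’s hybrid attention mechanism is only summarized here; as a loose, hypothetical sketch of the general pattern (view tokens self-attend, then cross-attend to reference-image tokens), one might write:

```python
import torch
import torch.nn as nn

class HybridAttentionFusion(nn.Module):
    """Illustrative only: fuse reference features into multi-view representations."""
    def __init__(self, dim=320, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, view_tokens, ref_tokens):
        # view_tokens: (B, N_view, dim); ref_tokens: (B, N_ref, dim)
        v = self.norm1(view_tokens)
        x = view_tokens + self.self_attn(v, v, v)[0]                       # intra-view attention
        x = x + self.cross_attn(self.norm2(x), ref_tokens, ref_tokens)[0]  # reference injection
        return x

fusion = HybridAttentionFusion()
print(fusion(torch.randn(2, 256, 320), torch.randn(2, 77, 320)).shape)  # (2, 256, 320)
```
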

20 pages, 8493 KB  
Article
Low-Cost Panoramic Photogrammetry: A Case Study on Flat Textures and Poor Lighting Conditions
by Ondrej Benko, Marek Fraštia, Marián Marčiš and Adrián Filip
Geomatics 2026, 6(1), 2; https://doi.org/10.3390/geomatics6010002 - 3 Jan 2026
Viewed by 286
Abstract
The article addresses the issue of panoramic photogrammetry for the reconstruction of interior spaces. Such environments often present challenges for photogrammetric scanning, including poor lighting conditions and surfaces with variable texture. In this case study, we reconstruct the interior spaces of the historical house of Samuel Mikovíni, which represents these unfavorable conditions. The 3D reconstruction of the interior spaces is performed using the Ricoh Theta Z1 spherical camera (Ricoh Company, Ltd.; Tokyo, Japan) in six variants, each employing a different number of images and a different camera network. Scale is introduced into the reconstructions based on significant dimensions measured with a measuring tape. A comparison is carried out against a point cloud obtained from terrestrial laser scanning, and difference point clouds are generated for each variant. Based on the results, reconstructions produced from a reduced number of spherical images can serve as a basic source for simple documentation with an accuracy of up to 0.15 m. When the number of spherical images is increased and images from different height levels are included, the reconstruction accuracy improves markedly, achieving a positional accuracy of up to 0.05 m, even in areas affected by poor lighting conditions or low-texture surfaces. The results confirm that, for interior reconstruction, a higher number of images not only increases the density of the reconstructed point cloud but also enhances its positional accuracy.
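
The difference point clouds described above amount to cloud-to-cloud distances against the TLS reference. A minimal sketch with Open3D, assuming already-registered clouds and illustrative file names:

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: one reconstruction variant and the TLS reference,
# both scaled and registered in a common coordinate frame.
recon = o3d.io.read_point_cloud("variant_spherical.ply")
tls_ref = o3d.io.read_point_cloud("tls_reference.ply")

# Distance from each reconstructed point to its nearest TLS neighbor.
d = np.asarray(recon.compute_point_cloud_distance(tls_ref))
print(f"mean {d.mean():.3f} m, RMS {np.sqrt((d ** 2).mean()):.3f} m, "
      f"within 0.05 m: {(d < 0.05).mean():.1%}")
```
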

24 pages, 8595 KB  
Article
Integrated Geomatic Approaches for the 3D Documentation and Analysis of the Church of Saint Andrew in Orani, Sardinia
by Giuseppina Vacca and Enrica Vecchi
Remote Sens. 2025, 17(19), 3376; https://doi.org/10.3390/rs17193376 - 7 Oct 2025
Viewed by 962
Abstract
Documenting cultural heritage sites through 3D reconstruction is crucial and can be accomplished using various geomatic techniques, such as Terrestrial Laser Scanners (TLS), Close-Range Photogrammetry (CRP), and UAV photogrammetry. Each method comes with different levels of complexity, accuracy, field time, post-processing requirements, and cost, making them suitable for different types of restitution. Recently, research has increasingly focused on user-friendly and faster techniques, while also considering the cost–benefit balance between accuracy, time, and cost. In this scenario, photogrammetry using images captured with 360-degree cameras and the LiDAR sensors integrated into Apple devices have gained significant popularity. This study proposes the application of various techniques for the geometric reconstruction of a complex cultural heritage site, the Church of Saint Andrew in Orani, Sardinia. Datasets acquired from different geomatic techniques have been evaluated in terms of quality and usability for documenting various aspects of the site. The TLS provided an accurate model of both the interior and exterior of the church, serving as the ground truth for the validation process. UAV photogrammetry offered a broader view of the exterior, while panoramic photogrammetry from a 360° camera was applied to survey the bell tower’s interior. Additionally, CRP and Apple LiDAR were compared in the context of a detailed survey.
(This article belongs to the Section Remote Sensing Image Processing)

22 pages, 17160 KB  
Article
Visual Perception Element Evaluation of Suburban Local Landscapes: Integrating Multiple Machine Learning Methods
by Suning Gong, Jie Zhang and Yuxi Duan
Buildings 2025, 15(18), 3312; https://doi.org/10.3390/buildings15183312 - 12 Sep 2025
Cited by 1 | Viewed by 846
Abstract
Comprehensive evaluation of suburban landscape perception is essential for improving environmental quality and fostering integrated urban–rural development. Despite its importance, limited research has systematically extracted local visual features and analyzed influencing factors in suburban landscapes using multi-source data and machine learning. This study investigated Chongming District, a suburban area of Shanghai. Using Baidu Street View 360° panoramic images, local visual features were extracted through semantic segmentation of street view imagery, spatial multi-clustering, and random forest classification. A geographic detector model was employed to explore the relationships between landscape characteristics and their driving factors. The findings indicate: (1) significant spatial variations in green visibility, sky openness, building density, road width, facility diversity, and enclosure integrity; (2) an intertwined spatial pattern of blue, green, and gray spaces; and (3) natural environment factors as the primary drivers influencing the spatial configuration. In the suburban industrial dimension, the interaction between GDP and commercial vitality exhibits the highest level of synergy. Based on these findings, targeted strategies are proposed to enhance the distinctive landscape features of Chongming Island. This research framework and methodology are applied to Chongming District as a case study; future studies should adapt the algorithms and index systems to other study areas to ensure the validity and precision of the results.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
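
The geographic detector’s factor detector measures how much of an indicator’s spatial variance a categorical factor explains through the q-statistic, q = 1 − Σ_h N_h σ_h² / (N σ²). A small sketch with synthetic data (names illustrative):

```python
import numpy as np

def factor_detector_q(y, strata):
    """q-statistic: share of the variance of y explained by a stratification
    (0 = explains nothing, 1 = explains everything)."""
    y, strata = np.asarray(y, float), np.asarray(strata)
    sst = len(y) * y.var()                                   # N * sigma^2
    ssw = sum((strata == h).sum() * y[strata == h].var()     # sum of N_h * sigma_h^2
              for h in np.unique(strata))
    return 1.0 - ssw / sst

# Synthetic example: green visibility stratified by a land-use class.
rng = np.random.default_rng(0)
land_use = rng.integers(0, 3, 500)
green_visibility = 0.1 * land_use + rng.normal(0, 0.05, 500)
print(round(factor_detector_q(green_visibility, land_use), 3))
```
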

32 pages, 3256 KB  
Review
AI and Generative Models in 360-Degree Video Creation: Building the Future of Virtual Realities
by Nicolay Anderson Christian, Jason Turuwhenua and Mohammad Norouzifard
Appl. Sci. 2025, 15(17), 9292; https://doi.org/10.3390/app15179292 - 24 Aug 2025
Viewed by 4560
Abstract
The generation of 360° video is gaining prominence in immersive media, virtual reality (VR), gaming projects, and the emerging metaverse. Traditional methods for panoramic content creation often rely on specialized hardware and dense video capture, which limits scalability and accessibility. Recent advances in generative artificial intelligence, particularly diffusion models and neural radiance fields (NeRFs), are examined in this research for their potential to generate immersive panoramic video content from minimal input, such as a sparse set of narrow-field-of-view (NFoV) images. To investigate this, a structured literature review of over 70 recent papers in panoramic image and video generation was conducted. We analyze key contributions from models such as 360DVD, Imagine360, and PanoDiff, focusing on their approaches to motion continuity, spatial realism, and conditional control. Our analysis highlights that achieving seamless motion continuity remains the primary challenge, as most current models struggle with temporal consistency when generating long sequences. Based on these findings, a research direction has been proposed that aims to generate 360° video from as few as 8–10 static NFoV inputs, drawing on techniques from image stitching, scene completion, and view bridging. This review also underscores the potential for creating scalable, data-efficient, and near-real-time panoramic video synthesis, while emphasizing the critical need to address temporal consistency for practical deployment.

26 pages, 42046 KB  
Article
High-Resolution Wide-Beam Millimeter-Wave ArcSAR System for Urban Infrastructure Monitoring
by Wenjie Shen, Wenxing Lv, Yanping Wang, Yun Lin, Yang Li, Zechao Bai and Kuai Yu
Remote Sens. 2025, 17(12), 2043; https://doi.org/10.3390/rs17122043 - 13 Jun 2025
Viewed by 1105
Abstract
Arc scanning synthetic aperture radar (ArcSAR) can achieve high-resolution panoramic imaging and retrieve submillimeter-level deformation information. To monitor buildings in a city scenario, ArcSAR must be lightweight and cost-effective, with high resolution, mid-range coverage (around a hundred meters), and low power consumption. In this study, a novel high-resolution wide-beam single-chip millimeter-wave (mmwave) ArcSAR system, together with an imaging algorithm, is presented. First, to handle the non-uniform azimuth sampling caused by motor motion, a high-accuracy angular coder is used in the system design. The coder sends the radar a hardware trigger signal when rotated to a specific angle, so that uniform angular sampling is achieved even under unstable motor rotation. Second, the ArcSAR’s maximum azimuth sampling angle that avoids aliasing is derived from the Nyquist theorem. This mathematical relation allows the proposed ArcSAR system to acquire data with an appropriate sampling angle interval. Third, the range cell migration (RCM) phenomenon is severe because mmwave radar has a wide azimuth beamwidth and a high frequency, and ArcSAR has a curved synthetic aperture. Therefore, the fourth-order RCM model based on the range-Doppler (RD) algorithm is reformulated in terms of the uniform azimuth angle to suit the system and implemented. The proposed system uses the TI 6843 module as the radar sensor, whose azimuth beamwidth is 64°. The performance of the system and the corresponding imaging algorithm is thoroughly analyzed and validated via simulations and real-data experiments. The output image covers 360° out to a range of 180 m at an azimuth resolution of 0.2°. The results show that the proposed system has good application prospects, and its design principles can support the improvement of current ArcSARs.
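
The aliasing bound mentioned above follows from the Nyquist theorem; in generic wide-beam SAR terms (a sketch, not the paper’s exact fourth-order derivation), the highest azimuth spatial frequency for a beamwidth $\theta_{bw}$ is $2\sin(\theta_{bw}/2)/\lambda$, so sampling must satisfy:

```latex
\Delta x \le \frac{\lambda}{4\sin(\theta_{bw}/2)},
\qquad
\Delta\theta = \frac{\Delta x}{r} \le \frac{\lambda}{4\,r\sin(\theta_{bw}/2)},
```

where $r$ is the rotation-arm radius. For example, $\lambda \approx 5$ mm (60 GHz band) and $\theta_{bw} = 64°$ give $\Delta x \lesssim 2.4$ mm along the arc.
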

13 pages, 7359 KB  
Article
Tabletop 3D Display with Large Radial Viewing Angle Based on Panoramic Annular Lens Array
by Min-Yang He, Cheng-Bo Zhao, Xue-Rui Wen, Yi-Jian Liu, Qiong-Hua Wang and Yan Xing
Photonics 2025, 12(5), 515; https://doi.org/10.3390/photonics12050515 - 21 May 2025
Viewed by 926
Abstract
Tabletop 3D display is an emerging display form that enables multiple users to share viewing around a central tabletop, making it promising for collaborative work applications. However, achieving an ideal ring-shaped viewing zone with a large radial viewing angle remains a challenge for most current tabletop 3D displays. This paper presents a tabletop 3D display based on a panoramic annular lens array to realize a large radial viewing angle. Each panoramic annular lens in the array is designed with a block-structured panoramic front unit and a relay lens system, enabling the formation of a ring-shaped viewing zone and increasing the radial angle of the outgoing light. Additionally, the diffusion characteristics of the optical diffusing screen component are analyzed under large angles of incidence after light passes through the panoramic annular lens array. Then, a method for generating the corresponding elemental image array is presented. The simulation results demonstrate that the viewing range is improved to −78.4° to −42.2° and 42.6° to 78.9°, giving a total radial viewing angle of up to 72.5°, and that the proposed 3D display can present a 360° viewable 3D image with correct perspective and parallax.
(This article belongs to the Special Issue Research on Optical Materials and Components for 3D Displays)

25 pages, 9788 KB  
Article
Visual Geo-Localization Based on Spatial Structure Feature Enhancement and Adaptive Scene Alignment
by Yifan Ping, Jun Lu, Haitao Guo, Lei Ding and Qingfeng Hou
Electronics 2025, 14(7), 1269; https://doi.org/10.3390/electronics14071269 - 24 Mar 2025
Viewed by 1777
Abstract
The task of visual geo-localization based on street-view images estimates the geographical location of a query image by recognizing the nearest reference image in a geo-tagged database. This task holds considerable practical significance in domains such as autonomous driving and outdoor navigation. Current approaches typically use perspective street-view images as reference images. However, the lack of scene content resulting from the restricted field of view (FOV) in such images is the main cause of inaccuracies when matching and localizing query and reference images with the same global positioning system (GPS) labels. To address this issue, we propose a perspective-to-panoramic image visual geo-localization framework. This framework employs 360° panoramic images as references, thereby eliminating the scene content mismatch caused by the restricted FOV. Moreover, we propose the structural feature enhancement (SFE) module and integrate it into LskNet to enhance the feature extraction network’s ability to capture long-term stable structural features. Furthermore, we propose the adaptive scene alignment (ASA) strategy to address the asymmetry in data capacity and information content between perspective and panoramic images, thereby facilitating initial scene alignment. In addition, a lightweight feature aggregation module, MixVPR, which considers spatial structure relationships, is introduced to aggregate the scene-aligned region features into robust global feature descriptors for matching and localization. Experimental results demonstrate that the proposed model outperforms current state-of-the-art methods, achieving R@1 scores of 72.5% on the Pitts250k-P2E dataset and 58.4% on the YQ360 dataset, indicating the efficacy of this approach in practical visual geo-localization applications.
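
The R@1 metric reported above counts a query as correct when its nearest reference descriptor carries the same place label. A tiny evaluation sketch with cosine similarity (synthetic arrays, illustrative names):

```python
import numpy as np

def recall_at_1(q_desc, r_desc, q_labels, r_labels):
    """Fraction of queries whose most similar reference shares the place label."""
    q = q_desc / np.linalg.norm(q_desc, axis=1, keepdims=True)
    r = r_desc / np.linalg.norm(r_desc, axis=1, keepdims=True)
    nearest = (q @ r.T).argmax(axis=1)   # best-matching reference per query
    return (r_labels[nearest] == q_labels).mean()

rng = np.random.default_rng(1)
q, r = rng.normal(size=(100, 512)), rng.normal(size=(1000, 512))
print(recall_at_1(q, r, rng.integers(0, 50, 100), rng.integers(0, 50, 1000)))
```
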

15 pages, 14361 KB  
Article
Precision Monitoring of Dead Chickens and Floor Eggs with a Robotic Machine Vision Method
by Xiao Yang, Jinchang Zhang, Bidur Paneru, Jiakai Lin, Ramesh Bahadur Bist, Guoyu Lu and Lilong Chai
AgriEngineering 2025, 7(2), 35; https://doi.org/10.3390/agriengineering7020035 - 3 Feb 2025
Cited by 3 | Viewed by 3616
Abstract
Modern poultry and egg production is facing challenges such as dead chickens and floor eggs in cage-free housing. Precision poultry management strategies are needed to address those challenges. In this study, convolutional neural network (CNN) models and an intelligent bionic quadruped robot were used to detect floor eggs and dead chickens in cage-free housing environments. A dataset comprising 1200 images was used to develop the detection models; it was split into training, testing, and validation sets in a 3:1:1 ratio. Five different CNN models were developed based on YOLOv8 and the robot’s 360° panoramic depth perception camera. The final results indicated that YOLOv8m exhibited the highest performance, achieving a precision of 90.59%. The application of the optimal model facilitated the detection of floor eggs in dimly lit areas, such as below the feeder and in corners, as well as the detection of dead chickens within the flock. This research underscores the utility of bionic robotics and convolutional neural networks for poultry management and precision livestock farming.
(This article belongs to the Section Livestock Farming Technology)
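
As a rough sketch of how such a YOLOv8 detector is typically trained and run with the Ultralytics API (the dataset YAML, weights, and image names are illustrative, not the authors’ files):

```python
from ultralytics import YOLO

# 'poultry.yaml' would list the classes (e.g., dead_chicken, floor_egg)
# and the train/val/test splits of the 1200-image dataset.
model = YOLO("yolov8m.pt")                     # pretrained medium model
model.train(data="poultry.yaml", epochs=100, imgsz=640)

# Inference on a frame from the robot's panoramic camera.
results = model.predict("barn_frame.jpg", conf=0.25)
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```
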

15 pages, 1955 KB  
Article
CAPDepth: 360 Monocular Depth Estimation by Content-Aware Projection
by Xu Gao, Yongqiang Shi, Yaqian Zhao, Yanan Wang, Jin Wang and Gang Wu
Appl. Sci. 2025, 15(2), 769; https://doi.org/10.3390/app15020769 - 14 Jan 2025
Cited by 1 | Viewed by 2995
Abstract
Solving the depth estimation problem in the 360° image space, which offers holistic scene perception, has become a trend in recent years. However, depth estimation from common 360° images is prone to geometric distortion. Therefore, this study proposes a new method, CAPDepth, to address the geometric-distortion problem of 360° monocular depth estimation. We reduce the tangential projections through an optimized content-aware projection (CAP) and a geometric embedding module to capture more features for global depth consistency. Additionally, we adopt an index map and a de-blocking scheme to improve the inference efficiency and output quality of our CAPDepth model. Our experiments show that CAPDepth greatly alleviates the distortion problem, producing smoother and more accurate predicted depth, and improves performance in panoramic depth estimation.

18 pages, 29460 KB  
Article
A Deep Learning Approach of Intrusion Detection and Tracking with UAV-Based 360° Camera and 3-Axis Gimbal
by Yao Xu, Yunxiao Liu, Han Li, Liangxiu Wang and Jianliang Ai
Drones 2024, 8(2), 68; https://doi.org/10.3390/drones8020068 - 18 Feb 2024
Cited by 6 | Viewed by 4265
Abstract
Intrusion detection is often used in scenarios such as airports and essential facilities. Based on UAVs equipped with optical payloads, intrusion detection from an aerial perspective can be realized. However, due to the limited field of view of the camera, it is difficult to achieve large-scale continuous tracking of intrusion targets. In this study, we proposed an intrusion target detection and tracking method based on the fusion of a 360° panoramic camera and a 3-axis gimbal, and designed a detection model covering five types of intrusion targets. During the research, a multi-rotor UAV platform was built. Then, based on field flight tests, 3043 images taken by the 360° panoramic camera and the 3-axis gimbal in various environments were collected, and an intrusion dataset was produced. Subsequently, considering the applicability of the YOLO model to intrusion target detection, this paper proposes an improved YOLOv5s-360ID model based on the original YOLOv5-s model. The model improves the anchor boxes of YOLOv5-s according to the characteristics of intrusion targets, using the K-Means++ clustering algorithm to regenerate anchor boxes matched to the small-target detection task. It also introduces the EIoU loss function to replace the original CIoU bounding box regression loss, making the intrusion target detection model more efficient while maintaining high detection accuracy. The performance of the UAV platform was assessed using the detection model in a test flight in an actual scene. The experimental results showed that the mean average precision (mAP) of YOLOv5s-360ID was 75.2%, compared with 72.4% for the original YOLOv5-s model, and the real-time detection frame rate of the intrusion detection was 31 FPS, which validated the real-time performance of the detection model. The gimbal tracking control algorithm for intrusion targets was also validated. The experimental results demonstrate that the system can enhance the detection and tracking range for intrusion targets.
(This article belongs to the Section Drone Design and Development)
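
The anchor regeneration step, clustering ground-truth box sizes with K-Means++, can be sketched as follows (scikit-learn for illustration; the authors’ implementation may differ):

```python
import numpy as np
from sklearn.cluster import KMeans

# wh: (N, 2) ground-truth box widths/heights in pixels (a synthetic stand-in
# here for the sizes labeled in the intrusion dataset).
rng = np.random.default_rng(0)
wh = np.abs(rng.normal([30, 45], [15, 20], size=(3000, 2)))

# YOLOv5 uses 9 anchors (3 per detection scale); k-means++ is sklearn's default init.
km = KMeans(n_clusters=9, init="k-means++", n_init=10, random_state=0).fit(wh)
anchors = km.cluster_centers_[np.argsort(km.cluster_centers_.prod(axis=1))]
print(np.round(anchors, 1))   # (w, h) pairs sorted small to large
```
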

25 pages, 85034 KB  
Article
360° Map Establishment and Real-Time Simultaneous Localization and Mapping Based on Equirectangular Projection for Autonomous Driving Vehicles
by Bo-Hong Lin, Vinay M. Shivanna, Jiun-Shiung Chen and Jiun-In Guo
Sensors 2023, 23(12), 5560; https://doi.org/10.3390/s23125560 - 14 Jun 2023
Cited by 3 | Viewed by 3835
Abstract
This paper proposes the design of a 360° map establishment and real-time simultaneous localization and mapping (SLAM) algorithm based on equirectangular projection. The proposed system supports any equirectangular projection image with an aspect ratio of 2:1 as input, allowing an unlimited number and arrangement of cameras. Firstly, the proposed system uses dual back-to-back fisheye cameras to capture 360° images, then applies a perspective transformation at any given yaw angle to shrink the feature extraction area, reducing computational time while retaining the 360° field of view. Secondly, the oriented FAST and rotated BRIEF (ORB) feature points extracted from the perspective images with GPU acceleration are used for tracking, mapping, and camera pose estimation. The 360° binary map supports saving, loading, and online updating to enhance the flexibility, convenience, and stability of the system. The proposed system is implemented on an nVidia Jetson TX2 embedded platform with a 1% accumulated RMS error over 250 m. It achieves an average of 20 frames per second (FPS) with a single fisheye camera at a resolution of 1024 × 768, while simultaneously performing panoramic stitching and blending at 1416 × 708 resolution from the dual-fisheye camera.
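
The yaw-parameterized perspective transformation of an equirectangular frame can be sketched as a longitude/latitude lookup rendered with cv2.remap (parameters and file name are illustrative):

```python
import cv2
import numpy as np

def equirect_to_perspective(equi, fov_deg=90.0, yaw_deg=0.0, out_hw=(480, 640)):
    """Render a pinhole view at a given yaw from a 2:1 equirectangular image."""
    h, w = out_hw
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)
    x, y = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    rays = np.stack([x, y, np.full(x.shape, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays around the vertical axis by the requested yaw.
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    rays = rays @ np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]).T

    lon = np.arctan2(rays[..., 0], rays[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    H, W = equi.shape[:2]
    map_x = ((lon / np.pi + 1) / 2 * W).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1) / 2 * H).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)

view = equirect_to_perspective(cv2.imread("pano_2to1.jpg"), yaw_deg=45)
```
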

16 pages, 5560 KB  
Article
Free-Viewpoint Navigation of Indoor Scene with 360° Field of View
by Hang Xu, Qiang Zhao, Yike Ma, Shuai Wang, Chenggang Yan and Feng Dai
Electronics 2023, 12(8), 1954; https://doi.org/10.3390/electronics12081954 - 21 Apr 2023
Cited by 3 | Viewed by 2587
Abstract
By providing a 360° field of view, spherical panoramas can convey vivid visual impressions. Thus, they are widely used in virtual reality systems and street view services. However, due to bandwidth or storage limitations, existing systems only provide sparsely captured panoramas and support limited interaction modes. Although there are methods that can synthesize novel views from captured panoramas, the generated views all lie on the lines connecting existing views; therefore, these methods do not support free-viewpoint navigation. In this paper, we propose a new panoramic image-based rendering method for novel view generation. Our method represents each input panorama with a set of spherical superpixels and warps each superpixel individually, so it can handle occlusion and disocclusion. The warping is governed by a two-term constraint that preserves the shape of each superpixel and ensures it is warped to the correct position determined by the 3D reconstruction of the scene. Our method can generate novel views far from the input camera positions and thus supports freely exploring the scene with a 360° field of view. We compare our method with three previous methods on datasets captured by ourselves and by others; experiments show that our method obtains better results.

11 pages, 2398 KB  
Article
Rapidly Quantifying Interior Greenery Using 360° Panoramic Images
by Junzhiwei Jiang, Cris Brack, Robert Coe and Philip Gibbons
Forests 2022, 13(4), 602; https://doi.org/10.3390/f13040602 - 12 Apr 2022
Cited by 3 | Viewed by 2716
Abstract
Many people spend the majority of their time indoors, and there is emerging evidence that interior greenery contributes to human well-being. Accurately capturing the amount of interior greenery is an important first step in studying this contribution. In this study, we evaluated the accuracy of interior greenery captured using 360° panoramic images taken within a range of different interior spaces. We developed an Interior Green View Index (iGVI) based on a K-means clustering algorithm to estimate interior greenery from 360° panoramic images taken within 66 interior spaces, and compared these estimates with interior greenery measured manually from the same panoramic images. Interior greenery estimated using the automated method ranged from 0% to 34.19% of image pixels within the sampled interior spaces and was highly correlated (r = 0.99) with the manual measurements, although the accuracy of the automated method declined with the volume and illuminance of interior spaces. The results suggest that our automated method for extracting interior greenery from 360° panoramic images is a useful tool for rapidly estimating interior greenery in all but very large and highly illuminated interior spaces.
(This article belongs to the Special Issue Urban Forestry Measurements)
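
A rough sketch of the kind of K-means greenery estimation described above, clustering pixel colors and counting clusters whose center reads as green (thresholds and names are illustrative, not the authors’ exact pipeline):

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def igvi(panorama_bgr, k=8, max_pixels=100_000):
    """Approximate interior greenery fraction of a 360° panorama."""
    hsv = cv2.cvtColor(panorama_bgr, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(float)
    pix = hsv[:: max(1, len(hsv) // max_pixels)]          # subsample for speed
    labels = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(pix)
    centers = np.array([pix[labels == i].mean(axis=0) for i in range(k)])
    # OpenCV hue runs 0-179; hues near 35-85 with some saturation read as green.
    green = (centers[:, 0] >= 35) & (centers[:, 0] <= 85) & (centers[:, 1] > 40)
    return np.isin(labels, np.flatnonzero(green)).mean()

print(f"iGVI ≈ {igvi(cv2.imread('room_pano.jpg')):.1%}")
```
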