Search Results (25)

Search Parameters:
Keywords = 360° panoramic images

26 pages, 42046 KiB  
Article
High-Resolution Wide-Beam Millimeter-Wave ArcSAR System for Urban Infrastructure Monitoring
by Wenjie Shen, Wenxing Lv, Yanping Wang, Yun Lin, Yang Li, Zechao Bai and Kuai Yu
Remote Sens. 2025, 17(12), 2043; https://doi.org/10.3390/rs17122043 - 13 Jun 2025
Viewed by 315
Abstract
Arc scanning synthetic aperture radar (ArcSAR) can achieve high-resolution panoramic imaging and retrieve submillimeter-level deformation information. To monitor buildings in a city scenario, ArcSAR must be lightweight; have a high resolution, a mid-range (around a hundred meters), and low power consumption; and be cost-effective. In this study, a novel high-resolution wide-beam single-chip millimeter-wave (mmwave) ArcSAR system, together with an imaging algorithm, is presented. First, to handle the non-uniform azimuth sampling caused by motor motion, a high-accuracy angular coder is used in the system design. The coder can send the radar a hardware trigger signal when rotated to a specific angle so that uniform angular sampling can be achieved under the unstable rotation of the motor. Second, the ArcSAR’s maximum azimuth sampling angle that can avoid aliasing is derived based on the Nyquist theorem. This relation lets the proposed ArcSAR system set its sampling angle interval for data acquisition. Third, the range cell migration (RCM) phenomenon is severe because mmwave radar has a wide azimuth beamwidth and a high frequency, and ArcSAR has a curved synthetic aperture. Therefore, the fourth-order RCM model based on the range-Doppler (RD) algorithm is reformulated in terms of a uniform azimuth angle to suit the system and implemented. The proposed system uses the TI 6843 module as the radar sensor, and its azimuth beamwidth is 64°. The performance of the system and the corresponding imaging algorithm are thoroughly analyzed and validated via simulations and real data experiments. The output image covers a 360° field of view out to 180 m at an azimuth resolution of 0.2°. The results show that the proposed system has good application prospects, and the design principles can support the improvement of current ArcSARs. Full article
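
The sampling-angle bound mentioned above follows from the classical Nyquist criterion for azimuth sampling along a synthetic aperture. A minimal sketch of that textbook relation, assuming an illustrative 60 GHz carrier and a 0.25 m rotation arm (the paper derives the exact relation for its own system and fourth-order model):

```python
import math

# Illustrative assumptions, not the paper's exact parameters:
c = 3e8                       # speed of light, m/s
f_c = 60e9                    # carrier frequency, Hz (TI 6843 family is 60-64 GHz)
wavelength = c / f_c          # ~5 mm
beamwidth = math.radians(64)  # azimuth beamwidth reported in the abstract
arm_radius = 0.25             # rotation arm length, m (assumed)

# Classical Nyquist criterion: the spatial sampling step along the
# aperture must satisfy step <= lambda / (4 * sin(beamwidth / 2)).
max_step = wavelength / (4 * math.sin(beamwidth / 2))

# On a circular (arc) aperture the step is arm_radius * delta_theta, so
# the maximum angular sampling interval that avoids aliasing is:
max_angle = max_step / arm_radius
print(f"max spatial step: {max_step * 1e3:.2f} mm")            # ~2.36 mm
print(f"max angular step: {math.degrees(max_angle):.3f} deg")  # ~0.54 deg
```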

13 pages, 7359 KiB  
Article
Tabletop 3D Display with Large Radial Viewing Angle Based on Panoramic Annular Lens Array
by Min-Yang He, Cheng-Bo Zhao, Xue-Rui Wen, Yi-Jian Liu, Qiong-Hua Wang and Yan Xing
Photonics 2025, 12(5), 515; https://doi.org/10.3390/photonics12050515 - 21 May 2025
Viewed by 390
Abstract
Tabletop 3D display is an emerging display form that enables multiple users to share viewing around a central tabletop, making it promising for collaborative work applications. However, achieving an ideal ring-shaped viewing zone with a large radial viewing angle remains a challenge for most current tabletop 3D displays. This paper presents a tabletop 3D display based on a panoramic annular lens array to realize a large radial viewing angle. Each panoramic annular lens in the array is designed with a block-structured panoramic front unit and a relay lens system, enabling the formation of a ring-shaped viewing zone and increasing the radial angle of the outgoing light. Additionally, the diffusion characteristics of the optical diffusing screen component are analyzed under large angles of incidence after light passes through the panoramic annular lens array. Then, a method for generating the corresponding elemental image array is presented. The results of the simulation experiments demonstrate that the viewing range is improved to −78.4° to −42.2° and 42.6° to 78.9°, resulting in a total radial viewing angle of up to 72.5°, and the proposed 3D display can present a 360° viewable 3D image with correct perspective and parallax. Full article
(This article belongs to the Special Issue Research on Optical Materials and Components for 3D Displays)
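
A quick arithmetic check of the reported viewing ranges: the total radial viewing angle is the sum of the widths of the two ring-shaped zones.

```python
lower = 78.4 - 42.2             # width of the -78.4 to -42.2 degree zone
upper = 78.9 - 42.6             # width of the 42.6 to 78.9 degree zone
print(round(lower + upper, 1))  # 72.5 degrees, matching the abstract
```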

25 pages, 9788 KiB  
Article
Visual Geo-Localization Based on Spatial Structure Feature Enhancement and Adaptive Scene Alignment
by Yifan Ping, Jun Lu, Haitao Guo, Lei Ding and Qingfeng Hou
Electronics 2025, 14(7), 1269; https://doi.org/10.3390/electronics14071269 - 24 Mar 2025
Viewed by 686
Abstract
The task of visual geo-localization based on street-view images estimates the geographical location of a query image by recognizing the nearest reference image in a geo-tagged database. This task holds considerable practical significance in domains such as autonomous driving and outdoor navigation. Current approaches typically use perspective street-view images as reference images. However, the lack of scene content resulting from the restricted field of view (FOV) in such images is the main cause of inaccuracies in matching and localizing the query and reference images with the same global positioning system (GPS) labels. To address this issue, we propose a perspective-to-panoramic image visual geo-localization framework. This framework employs 360° panoramic images as references, thereby eliminating the issue of scene content mismatch due to the restricted FOV. Moreover, we propose the structural feature enhancement (SFE) module and integrate it into LskNet to enhance the ability of the feature extraction network to capture and extract long-term stable structural features. Furthermore, we propose the adaptive scene alignment (ASA) strategy to address the issue of data capacity and information content asymmetry between perspective and panoramic images, thereby facilitating initial scene alignment. In addition, a lightweight feature aggregation module, MixVPR, which considers spatial structure relationships, is introduced to aggregate the scene-aligned region features into robust global feature descriptors for matching and localization. Experimental results demonstrate that the proposed model outperforms current state-of-the-art methods and achieves R@1 scores of 72.5% on the Pitts250k-P2E dataset and 58.4% on the YQ360 dataset, indicating the efficacy of this approach in practical visual geo-localization applications. Full article
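
For readers unfamiliar with the R@1 metric quoted here, the sketch below shows generic descriptor-based retrieval with Recall@1 scoring. The descriptor dimension, distance threshold, and data are illustrative assumptions, not the paper's pipeline (which builds descriptors from LskNet features aggregated by MixVPR).

```python
import numpy as np

def recall_at_1(query_desc, ref_desc, query_pos, ref_pos, radius_m=25.0):
    """Generic Recall@1 for descriptor-based place retrieval.

    query_desc: (Q, D) L2-normalized global descriptors of query images.
    ref_desc:   (R, D) L2-normalized descriptors of the geo-tagged database.
    *_pos:      (N, 2) positions in a local metric frame (meters).
    A query counts as correct if its top-1 match lies within radius_m.
    """
    sims = query_desc @ ref_desc.T      # cosine similarity
    top1 = sims.argmax(axis=1)          # nearest reference per query
    dists = np.linalg.norm(query_pos - ref_pos[top1], axis=1)
    return float((dists <= radius_m).mean())

# Toy usage with random data:
rng = np.random.default_rng(0)
q = rng.normal(size=(10, 256)); q /= np.linalg.norm(q, axis=1, keepdims=True)
r = rng.normal(size=(100, 256)); r /= np.linalg.norm(r, axis=1, keepdims=True)
print(recall_at_1(q, r, rng.uniform(0, 500, (10, 2)), rng.uniform(0, 500, (100, 2))))
```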

15 pages, 14361 KiB  
Article
Precision Monitoring of Dead Chickens and Floor Eggs with a Robotic Machine Vision Method
by Xiao Yang, Jinchang Zhang, Bidur Paneru, Jiakai Lin, Ramesh Bahadur Bist, Guoyu Lu and Lilong Chai
AgriEngineering 2025, 7(2), 35; https://doi.org/10.3390/agriengineering7020035 - 3 Feb 2025
Cited by 1 | Viewed by 1842
Abstract
Modern poultry and egg production is facing challenges such as dead chickens and floor eggs in cage-free housing. Precision poultry management strategies are needed to address those challenges. In this study, convolutional neural network (CNN) models and an intelligent bionic quadruped robot were used to detect floor eggs and dead chickens in cage-free housing environments. A dataset comprising 1200 images, split into training, testing, and validation sets in a 3:1:1 ratio, was used to develop the detection models. Five different CNN models were developed based on YOLOv8 and the robot’s 360° panoramic depth perception camera. The final results indicated that YOLOv8m exhibited the highest performance, achieving a precision of 90.59%. The application of the optimal model facilitated the detection of floor eggs in dimly lit areas, such as below the feeder area and in corner spaces, as well as the detection of dead chickens within the flock. This research underscores the utility of bionic robotics and convolutional neural networks for poultry management and precision livestock farming. Full article
(This article belongs to the Section Livestock Farming Technology)
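
A minimal sketch of how such a YOLOv8 detector is typically trained and run with the Ultralytics API; the dataset file, image size, and epoch count are placeholders rather than the paper's configuration.

```python
from ultralytics import YOLO

model = YOLO("yolov8m.pt")           # YOLOv8m performed best in the study
model.train(data="poultry.yaml",     # dataset yaml listing the 3:1:1 splits
            epochs=100, imgsz=640)   # assumed hyperparameters

results = model("barn_frame.jpg")    # placeholder image from the robot camera
for box in results[0].boxes:         # detected floor eggs / dead birds
    print(int(box.cls), float(box.conf))
```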

15 pages, 1955 KiB  
Article
CAPDepth: 360 Monocular Depth Estimation by Content-Aware Projection
by Xu Gao, Yongqiang Shi, Yaqian Zhao, Yanan Wang, Jin Wang and Gang Wu
Appl. Sci. 2025, 15(2), 769; https://doi.org/10.3390/app15020769 - 14 Jan 2025
Cited by 1 | Viewed by 1518
Abstract
Solving the depth estimation problem in a 360° image space, which has holistic scene perception, has become a trend in recent years. However, depth estimation in common 360° images is prone to geometric distortion. Therefore, this study proposes a new method, CAPDepth, to address the geometric-distortion problem of 360° monocular depth estimation. We reduce the tangential projections by an optimized content-aware projection (CAP) and a geometric embedding module to capture more features for global depth consistency. Additionally, we adopt an index map and a de-blocking scheme to improve the inference efficiency and quality of our CAPDepth model. Our experiments show that CAPDepth greatly alleviates the distortion problem, producing smoother, more accurate predicted depth results, and improves performance in panoramic depth estimation. Full article

18 pages, 29460 KiB  
Article
A Deep Learning Approach of Intrusion Detection and Tracking with UAV-Based 360° Camera and 3-Axis Gimbal
by Yao Xu, Yunxiao Liu, Han Li, Liangxiu Wang and Jianliang Ai
Drones 2024, 8(2), 68; https://doi.org/10.3390/drones8020068 - 18 Feb 2024
Cited by 3 | Viewed by 3263
Abstract
Intrusion detection is often used in scenarios such as airports and essential facilities. Based on UAVs equipped with optical payloads, intrusion detection from an aerial perspective can be realized. However, due to the limited field of view of the camera, it is difficult to achieve large-scale continuous tracking of intrusion targets. In this study, we proposed an intrusion target detection and tracking method based on the fusion of a 360° panoramic camera and a 3-axis gimbal, and designed a detection model covering five types of intrusion targets. During the research process, the multi-rotor UAV platform was built. Then, based on a field flight test, 3043 flight images taken by a 360° panoramic camera and a 3-axis gimbal in various environments were collected, and an intrusion data set was produced. Subsequently, considering the applicability of the YOLO model in intrusion target detection, this paper proposes an improved YOLOv5s-360ID model based on the original YOLOv5-s model. This model improved and optimized the anchor boxes of the YOLOv5-s model according to the characteristics of the intrusion targets, using the K-Means++ clustering algorithm to re-derive anchor boxes matched to the small-target detection task. It also introduced the EIoU loss function to replace the original CIoU loss function as the bounding box regression loss, making the intrusion detection model more efficient while maintaining high detection accuracy. The performance of the UAV platform was assessed using the detection model to complete the test flight verification in an actual scene. The experimental results showed that the mean average precision (mAP) of YOLOv5s-360ID was 75.2%, compared with 72.4% for the original YOLOv5-s model, and the real-time detection frame rate of the intrusion detection was 31 FPS, which validated the real-time performance of the detection model. The gimbal tracking control algorithm for intrusion targets was also validated. The experimental results demonstrate that the system can enhance the detection and tracking range of intrusion targets. Full article
(This article belongs to the Section Drone Design and Development)
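
The K-Means++ anchor step can be sketched with scikit-learn, whose KMeans uses k-means++ initialization by default. Note that YOLO anchor clustering is often done with an IoU-based distance; this simplified sketch clusters width/height pairs in Euclidean space.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeanspp_anchors(wh, n_anchors=9, seed=0):
    """Cluster (width, height) pairs of labeled boxes into anchor shapes."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10,
                random_state=seed).fit(wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by area

# Toy usage: boxes skewed small, as with distant intrusion targets.
rng = np.random.default_rng(0)
boxes = rng.uniform([8, 8], [120, 90], size=(500, 2))
print(kmeanspp_anchors(boxes).round(1))
```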

25 pages, 85034 KiB  
Article
360° Map Establishment and Real-Time Simultaneous Localization and Mapping Based on Equirectangular Projection for Autonomous Driving Vehicles
by Bo-Hong Lin, Vinay M. Shivanna, Jiun-Shiung Chen and Jiun-In Guo
Sensors 2023, 23(12), 5560; https://doi.org/10.3390/s23125560 - 14 Jun 2023
Cited by 2 | Viewed by 2673
Abstract
This paper proposes the design of a 360° map establishment and real-time simultaneous localization and mapping (SLAM) algorithm based on equirectangular projection. The proposed system accepts any equirectangular projection image with a 2:1 aspect ratio as input, allowing an unlimited number and arrangement of cameras. Firstly, the proposed system uses dual back-to-back fisheye cameras to capture 360° images, followed by a perspective transformation at any given yaw angle to shrink the feature extraction area in order to reduce the computational time, as well as retain the 360° field of view. Secondly, the oriented FAST and rotated BRIEF (ORB) feature points extracted from perspective images with GPU acceleration are used for tracking, mapping, and camera pose estimation in the system. The 360° binary map supports the functions of saving, loading, and online updating to enhance the flexibility, convenience, and stability of the 360° system. The proposed system is also implemented on an NVIDIA Jetson TX2 embedded platform, achieving a 1% accumulated RMS error over 250 m. The average performance of the proposed system achieves 20 frames per second (FPS) in the case with a single-fisheye camera of resolution 1024 × 768, and the system performs panoramic stitching and blending under 1416 × 708 resolution from a dual-fisheye camera at the same time. Full article
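
The perspective transformation at a given yaw, which shrinks the feature extraction area while the full panorama remains available, can be sketched as an equirectangular-to-pinhole reprojection. This is a generic formulation with assumed orientation conventions and nearest-neighbor sampling, not the paper's GPU implementation.

```python
import numpy as np

def equirect_to_perspective(equi, fov_deg, yaw_deg, pitch_deg, out_w, out_h):
    """Sample a pinhole view from a 2:1 equirectangular image."""
    H, W = equi.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # focal length, px
    x, y = np.meshgrid(np.arange(out_w) - out_w / 2,
                       np.arange(out_h) - out_h / 2)
    # Rays in camera coordinates (z forward), rotated by pitch then yaw.
    rays = np.stack([x, y, np.full_like(x, f, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    cp, sp = np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))
    cy, sy = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rays = rays @ (Ry @ Rx).T
    lon = np.arctan2(rays[..., 0], rays[..., 2])      # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))     # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return equi[v, u]

# Toy usage on a synthetic 2:1 panorama:
pano = np.random.randint(0, 255, (512, 1024, 3), dtype=np.uint8)
view = equirect_to_perspective(pano, fov_deg=90, yaw_deg=30, pitch_deg=0,
                               out_w=640, out_h=480)
print(view.shape)  # (480, 640, 3)
```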

16 pages, 5560 KiB  
Article
Free-Viewpoint Navigation of Indoor Scene with 360° Field of View
by Hang Xu, Qiang Zhao, Yike Ma, Shuai Wang, Chenggang Yan and Feng Dai
Electronics 2023, 12(8), 1954; https://doi.org/10.3390/electronics12081954 - 21 Apr 2023
Cited by 1 | Viewed by 1992
Abstract
By providing a 360° field of view, spherical panoramas can convey vivid visual impressions. Thus, they are widely used in virtual reality systems and street view services. However, due to bandwidth or storage limitations, existing systems only provide sparsely captured panoramas and have limited interaction modes. Although there are methods that can synthesize novel views based on captured panoramas, the generated novel views all lie on the lines connecting existing views. Therefore, these methods do not support free-viewpoint navigation. In this paper, we propose a new panoramic image-based rendering method for novel view generation. Our method represents each input panorama with a set of spherical superpixels and warps each superpixel individually, so the method can deal with the occlusion and disocclusion problem. The warping is dominated by a two-term constraint, which preserves the shape of each superpixel and ensures it is warped to the correct position determined by the 3D reconstruction of the scene. Our method can generate novel views that are far from the input camera positions. Thus, it supports freely exploring the scene with a 360° field of view. We compare our method with three previous methods on datasets captured by ourselves and by others. Experiments show that our method obtains better results. Full article
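
A generic form of such a two-term warping energy, in our own notation rather than the paper's, is:

```latex
E(\hat{V}) \;=\; \sum_{p}\bigl\|\hat{v}_p - t_p\bigr\|^2
\;+\; \lambda \sum_{(i,j)\in\mathcal{E}} \bigl\|(\hat{v}_i - \hat{v}_j) - (v_i - v_j)\bigr\|^2
```

where the v_i are original superpixel vertex positions, the v̂_i their warped positions, the t_p target positions dictated by the 3D reconstruction, ℰ the vertex adjacency within a superpixel, and λ balances reaching the reconstructed positions (first term) against preserving superpixel shape (second term).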

11 pages, 2398 KiB  
Article
Rapidly Quantifying Interior Greenery Using 360° Panoramic Images
by Junzhiwei Jiang, Cris Brack, Robert Coe and Philip Gibbons
Forests 2022, 13(4), 602; https://doi.org/10.3390/f13040602 - 12 Apr 2022
Cited by 3 | Viewed by 2410
Abstract
Many people spend the majority of their time indoors, and there is emerging evidence that interior greenery contributes to human wellbeing. Accurately capturing the amount of interior greenery is an important first step in studying its contribution to human wellbeing. In this study, we evaluated the accuracy of interior greenery captured using 360° panoramic images taken within a range of different interior spaces. We developed an Interior Green View Index (iGVI) based on a K-means clustering algorithm to estimate interior greenery from 360° panoramic images taken within 66 interior spaces and compared these estimates with interior greenery measured manually from the same panoramic images. Interior greenery estimated using the automated method ranged from 0% to 34.19% of image pixels within the sampled interior spaces. Interior greenery estimated using the automated method was highly correlated (r = 0.99) with interior greenery measured manually, although the accuracy of the automated method compared with the manual method declined with the volume and illuminance of interior spaces. The results suggest that our automated method for extracting interior greenery from 360° panoramic images is a useful tool for rapidly estimating interior greenery in all but very large and highly illuminated interior spaces. Full article
(This article belongs to the Special Issue Urban Forestry Measurements)
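
A simplified stand-in for the iGVI computation: cluster pixel colors with K-means and count the pixels falling in green-dominant clusters. The cluster count and the green-dominance rule are illustrative assumptions, not the paper's calibrated pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def green_view_index(img_rgb, k=8, seed=0):
    """Fraction of pixels in clusters whose mean color is green-dominant."""
    pixels = img_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    centers = km.cluster_centers_
    # Assumed rule: a cluster is "greenery" if G clearly dominates R and B.
    green = (centers[:, 1] > 1.2 * centers[:, 0]) & \
            (centers[:, 1] > 1.2 * centers[:, 2])
    return float(np.isin(km.labels_, np.where(green)[0]).mean())

# Toy usage on a random image (a real panorama would be loaded with
# e.g. imageio or OpenCV):
img = np.random.randint(0, 255, (128, 256, 3), dtype=np.uint8)
print(green_view_index(img))
```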

19 pages, 5200 KiB  
Article
Performance Enhancement of Functional Delay and Sum Beamforming for Spherical Microphone Arrays
by Yang Zhao, Zhigang Chu and Linyong Li
Electronics 2022, 11(7), 1132; https://doi.org/10.3390/electronics11071132 - 2 Apr 2022
Cited by 3 | Viewed by 2773
Abstract
Functional delay and sum (FDAS) beamforming for spherical microphone arrays can achieve 360° panoramic acoustic source identification, thus having broad application prospects for identifying interior noise sources. However, its acoustic imaging suffers from severe sidelobe contamination under a low signal-to-noise ratio (SNR), which deteriorates the sound source identification performance. In order to overcome this issue, the cross-spectral matrix (CSM) of the measured sound pressure signal is reconstructed with diagonal reconstruction (DRec), robust principal component analysis (RPCA), and probabilistic factor analysis (PFA). Correspondingly, three enhanced FDAS methods, namely EFDAS-DRec, EFDAS-RPCA, and EFDAS-PFA, are established. Simulations show that the three methods can significantly enhance the sound source identification performance of FDAS under low SNRs. Compared with FDAS at SNR = 0 dB and the number of snapshots = 1000, the average maximum sidelobe levels of EFDAS-DRec, EFDAS-RPCA, and EFDAS-PFA are reduced by 6.4 dB, 21.6 dB, and 53.1 dB, respectively, and the mainlobes of sound sources are shrunk by 43.5%, 69.0%, and 80.0%, respectively. Moreover, when the number of snapshots is sufficient, the three EFDAS methods can improve both the quantification accuracy and the weak source localization capability. Among the three EFDAS methods, EFDAS-DRec has the highest quantification accuracy, and EFDAS-PFA has the best localization ability for weak sources. The effectiveness of the established methods and the correctness of the simulation conclusions are verified by the acoustic source identification experiment in an ordinary room, and the findings provide a more advanced test and analysis tool for noise source identification in low-SNR cabin environments. Full article
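
The common starting point of the three reconstructions is the snapshot-averaged cross-spectral matrix. The sketch below builds a CSM and zeroes its noise-dominated diagonal, the simplest manipulation of this kind; note that the paper's DRec method reconstructs the diagonal rather than discarding it, and RPCA/PFA perform low-rank decompositions not shown here.

```python
import numpy as np

def csm_from_snapshots(P):
    """Cross-spectral matrix from P of shape (S, M): S snapshots of
    M microphone pressure spectra at one frequency bin."""
    return P.conj().T @ P / P.shape[0]   # (M, M) Hermitian CSM

def zero_diagonal(C):
    """Zero the CSM diagonal, where uncorrelated sensor self-noise piles up."""
    C = C.copy()
    np.fill_diagonal(C, 0)
    return C

# Toy usage: one coherent source plus independent noise, 32 mics, 1000 snapshots.
rng = np.random.default_rng(0)
steer = np.exp(1j * rng.uniform(0, 2 * np.pi, 32))   # random steering vector
sig = rng.normal(size=(1000, 1)) * steer
noise = (rng.normal(size=(1000, 32)) + 1j * rng.normal(size=(1000, 32))) / np.sqrt(2)
C = csm_from_snapshots(sig + noise)
print(np.abs(np.diag(C)).mean(), np.abs(np.diag(zero_diagonal(C))).mean())
```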

20 pages, 125434 KiB  
Article
Development of a Hybrid Method to Generate Gravito-Inertial Cues for Motion Platforms in Highly Immersive Environments
by Jose V. Riera, Sergio Casas, Marcos Fernández, Francisco Alonso and Sergio A. Useche
Sensors 2021, 21(23), 8079; https://doi.org/10.3390/s21238079 - 2 Dec 2021
Cited by 2 | Viewed by 2760
Abstract
Motion platforms have been widely used in Virtual Reality (VR) systems for decades to simulate motion in virtual environments, and they have several applications in emerging fields such as driving assistance systems, vehicle automation and road risk management. Currently, the development of new VR immersive systems faces unique challenges to respond to the user’s requirements, such as introducing high-resolution 360° panoramic images and videos. With this type of visual information, it is much more complicated to apply the traditional methods of generating motion cues, since it is generally not possible to calculate the necessary corresponding motion properties that are needed to feed the motion cueing algorithms. For this reason, this paper aims to present a new method for generating non-real-time gravito-inertial cues with motion platforms using a system fed both with computer-generated—simulation-based—images and video imagery. It is a hybrid method where part of the gravito-inertial cues—those with acceleration information—are generated using a classical approach through the application of physical modeling in a VR scene utilizing washout filters, and part of the gravito-inertial cues—the ones coming from recorded images and video, without acceleration information—were generated ad hoc in a semi-manual way. The resulting motion cues generated were further modified according to the contributions of different experts based on a successive approximation—Wideband Delphi-inspired—method. The subjective evaluation of the proposed method showed that the motion signals refined with this method were significantly better than the original non-refined ones in terms of user perception. The final system, developed as part of an international road safety education campaign, could be useful for developing further VR-based applications for key fields such as driving assistance, vehicle automation and road crash prevention. Full article
(This article belongs to the Special Issue Object Tracking and Motion Analysis)
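
The classical, physics-based part of the pipeline hinges on washout filtering: high-pass filtering simulated accelerations so the platform renders motion onsets and then drifts back to neutral. A minimal sketch of that high-pass stage, with assumed cutoff and order rather than the paper's tuning:

```python
import numpy as np
from scipy.signal import butter, lfilter

def washout_highpass(accel, fs, cutoff_hz=0.2, order=2):
    """High-pass stage of a classical washout filter."""
    b, a = butter(order, cutoff_hz, btype="highpass", fs=fs)
    return lfilter(b, a, accel)

# Toy usage: a 2 m/s^2 step in simulated vehicle acceleration.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
accel = np.where(t > 1.0, 2.0, 0.0)
cue = washout_highpass(accel, fs)
print(cue.max(), cue[-1])  # onset is preserved; steady state decays toward 0
```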

21 pages, 7272 KiB  
Article
Generation of a Panorama Compatible with the JPEG 360 International Standard Using a Single PTZ Camera
by Faiz Ullah, Oh-Jin Kwon and Seungcheol Choi
Appl. Sci. 2021, 11(22), 11019; https://doi.org/10.3390/app112211019 - 21 Nov 2021
Cited by 7 | Viewed by 3109
Abstract
Recently, the JPEG working group (ISO/IEC JTC1 SC29 WG1) developed an international standard, JPEG 360, that specifies the metadata and functionalities for saving and sharing 360-degree images efficiently to create a more realistic environment in various virtual reality services. We surveyed the metadata formats of existing 360-degree images and compared them to the JPEG 360 metadata format. We found that existing omnidirectional cameras and stitching software packages use formats that are incompatible with the JPEG 360 standard to embed metadata in JPEG image files. This paper proposes an easy-to-use tool for embedding JPEG 360 standard metadata for 360-degree images in JPEG image files using a JPEG-defined box format: the JPEG universal metadata box format (JUMBF). The proposed implementation will help 360-degree camera and software vendors provide immersive services to users in a standardized manner for various markets, such as entertainment, education, professional training, navigation, and virtual and augmented reality applications. We also propose and develop an economical, JPEG 360-compatible panoramic image acquisition system based on a single PTZ camera, with a special use case: a wide field-of-view image of a conference or meeting. A remote attendee of the conference/meeting can see a realistic and immersive environment through our PTZ panorama in virtual reality. Full article
(This article belongs to the Special Issue Advances in Intelligent Control and Image Processing)
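
JPEG 360 metadata travels in JUMBF boxes carried by APP11 marker segments. A minimal sketch of splicing an APP11 segment in after the SOI marker; the payload here is schematic bytes rather than a conformant JUMBF superbox, and the filenames are placeholders.

```python
import struct

def insert_app11(jpeg_bytes: bytes, payload: bytes) -> bytes:
    """Insert a custom APP11 (0xFFEB) segment right after the SOI marker."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    length = len(payload) + 2           # segment length field includes itself
    assert length <= 0xFFFF, "payload too large for a single segment"
    segment = b"\xff\xeb" + struct.pack(">H", length) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

# Toy usage with a schematic payload and placeholder filenames:
with open("pano.jpg", "rb") as f:
    data = f.read()
tagged = insert_app11(data, b"JUMBF metadata would go here")
with open("pano_tagged.jpg", "wb") as f:
    f.write(tagged)
```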

17 pages, 5576 KiB  
Article
Scale-Variant Flight Planning for the Creation of 3D Geovisualization and Augmented Reality Maps of Geosites: The Case of Voulgaris Gorge, Lesvos, Greece
by Ermioni-Eirini Papadopoulou, Apostolos Papakonstantinou, Nikolaos Zouros and Nikolaos Soulakellis
Appl. Sci. 2021, 11(22), 10733; https://doi.org/10.3390/app112210733 - 13 Nov 2021
Cited by 10 | Viewed by 2531
Abstract
The purpose of this paper was to study the influence of cartographic scale and flight design on data acquisition using unmanned aerial systems (UASs) to create augmented reality 3D geovisualizations of geosites. The relationships among geographical and cartographic scales, the spatial resolution of UAS-acquired images, and the produced 3D models of geosites were investigated. Additionally, the lighting of the produced 3D models was examined as a key visual variable in the 3D space. Furthermore, the adaptation of 360° panoramas as environmental lighting parameters was considered. The geosite selected as a case study was the gorge of the river Voulgaris in the western part of the island of Lesvos, which is located in the northeastern part of the Aegean Sea in Greece. The methodology applied consisted of four pillars: (i) scale-variant flight planning, (ii) data acquisition, (iii) data processing, and (iv) AR 3D geovisualization. Based on the geographic and cartographic scales, the flight design calculates the most appropriate flight parameters (height, speed, and image overlaps) to achieve the desired spatial resolution (3 cm) capable of illustrating all the scale-variant details of the geosite when mapped in 3D. High-resolution oblique aerial images and 360° panoramic aerial images were acquired using scale-variant flight plans. The data were processed using image processing algorithms to produce 3D models and create mosaic panoramas. The 3D geovisualization of the selected geosite was created using the textured 3D model produced from the aerial images. The panoramic images were converted to high-dynamic-range image (HDRI) panoramas and used as a background to the 3D model. The geovisualization was transferred and displayed in the virtual space, where the panoramas were used as a light source, thus illuminating the model. Data acquisition and flight planning were crucial scale-variant steps in the 3D geovisualization. These two processes comprised the most important factors in 3D geovisualization creation embedded in the virtual space, as they designated the geometry of the 3D model. The use of panoramas as the illumination parameter of an outdoor 3D scene of a geosite contributed significantly to its photorealistic performance in the 3D augmented reality and virtual space. Full article
(This article belongs to the Special Issue Autonomous Flying Robots: Recent Developments and Future Prospects)
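
The flight-height side of the calculation rests on the standard photogrammetric ground-sampling-distance relation GSD = sensor_width × height / (focal_length × image_width). A minimal sketch; the camera parameters are assumed (a typical 1-inch-sensor UAS camera), not the paper's platform.

```python
def flight_height_for_gsd(gsd_m, focal_mm, sensor_width_mm, image_width_px):
    """Height (m) at which a nadir image reaches the requested GSD (m/px)."""
    return gsd_m * focal_mm * image_width_px / sensor_width_mm

# Target spatial resolution from the abstract: 3 cm per pixel.
# Assumed camera: 8.8 mm focal length, 13.2 mm sensor width, 5472 px wide.
print(flight_height_for_gsd(0.03, 8.8, 13.2, 5472))  # ~109 m
```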

17 pages, 782 KiB  
Review
Panoramic Street-Level Imagery in Data-Driven Urban Research: A Comprehensive Global Review of Applications, Techniques, and Practical Considerations
by Jonathan Cinnamon and Lindi Jahiu
ISPRS Int. J. Geo-Inf. 2021, 10(7), 471; https://doi.org/10.3390/ijgi10070471 - 9 Jul 2021
Cited by 38 | Viewed by 7643
Abstract
The release of Google Street View in 2007 inspired several new panoramic street-level imagery platforms including Apple Look Around, Bing StreetSide, Baidu Total View, Tencent Street View, Naver Street View, and Yandex Panorama. The ever-increasing global capture of cities in 360° provides considerable new opportunities for data-driven urban research. This paper provides the first comprehensive, state-of-the-art review on the use of street-level imagery for urban analysis in five research areas: built environment and land use; health and wellbeing; natural environment; urban modelling and demographic surveillance; and area quality and reputation. Panoramic street-level imagery provides advantages in comparison to remotely sensed imagery and conventional urban data sources, whether manual, automated, or machine learning data extraction techniques are applied. Key advantages include low-cost, rapid, high-resolution, and wide-scale data capture, enhanced safety through remote presence, and a unique pedestrian/vehicle point of view for analyzing cities at the scale and perspective in which they are experienced. However, several limitations are evident, including limited ability to capture attribute information, unreliability for temporal analyses, limited use for depth and distance analyses, and the role of corporations as image-data gatekeepers. Findings provide detailed insight for those interested in using panoramic street-level imagery for urban research. Full article

25 pages, 20051 KiB  
Article
Planar-Equirectangular Image Stitching
by Muhammad-Firdaus Syawaludin, Seungwon Kim and Jae-In Hwang
Electronics 2021, 10(9), 1126; https://doi.org/10.3390/electronics10091126 - 10 May 2021
Cited by 3 | Viewed by 4988
Abstract
360° cameras have served as a convenient tool for recording special moments and everyday life. The panoramic view they support allows for an immersive experience with a virtual reality (VR) headset, adding viewer enjoyment. Nevertheless, they cannot match the angular resolution that a perspective camera can deliver. We put forward a solution by placing the perspective camera planar image onto the pertinent 360° camera equirectangular image region of interest (ROI) through planar-equirectangular image stitching. The proposed method includes (1) a tangent image-based stitching pipeline to solve the equirectangular image spherical distortion, (2) a feature matching scheme to increase the correct feature match count, (3) ROI detection to find the relevant ROI on the equirectangular image, and (4) human visual system (HVS)-based image alignment to tackle the parallax error. The qualitative and quantitative experiments showed improvement of the proposed planar-equirectangular image stitching over existing approaches on a collected dataset: (1) less distortion in the stitching result, (2) a 29.0% increase in correct matches, (3) a 5.72° ROI position error from the ground truth, and (4) a lower aggregated alignment-distortion error than existing alignment approaches. We discuss possible improvement points and future research directions. Full article
(This article belongs to the Special Issue LifeXR: Concepts, Technology and Design for Everyday XR)
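
The feature matching scheme builds on standard descriptor correspondence. A minimal OpenCV sketch of ORB matching with a ratio test; the filenames are placeholders, and the tangent-image reprojection the paper applies beforehand is omitted.

```python
import cv2

def orb_match(img_a, img_b, n_features=4000, ratio=0.75):
    """ORB features plus ratio-test matching between two grayscale images."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp_a, kp_b, good

# Toy usage (placeholder filenames):
planar = cv2.imread("perspective.jpg", cv2.IMREAD_GRAYSCALE)
equi = cv2.imread("equirect_roi.jpg", cv2.IMREAD_GRAYSCALE)
print(len(orb_match(planar, equi)[2]), "putative matches")
```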