Search Results (7)

Search Parameters:
Keywords = omnidirectional depth estimation

25 pages, 28511 KB  
Article
A Method for Estimating the Distribution of Trachinotus ovatus in Marine Cages Based on Omnidirectional Scanning Sonar
by Yu Hu, Jiazhen Hu, Pengqi Sun, Guohao Zhu, Jialong Sun, Qiyou Tao, Taiping Yuan, Gen Li, Guoliang Pang and Xiaohua Huang
J. Mar. Sci. Eng. 2024, 12(9), 1571; https://doi.org/10.3390/jmse12091571 - 6 Sep 2024
Cited by 2 | Viewed by 1144
Abstract
In order to accurately estimate the distribution of Trachinotus ovatus in marine cages, a novel method was developed using omnidirectional scanning sonar and deep-learning techniques. This method involved differentiating water layers and clustering data layer by layer to achieve precise location estimation. The approach comprised two main components: fish identification and fish clustering. Firstly, omnidirectional scanning sonar was employed to perform spiral detection within marine cages, capturing fish image data. These images were then labeled to construct a training dataset for an enhanced CS-YOLOv8s model. After training, the CS-YOLOv8s model was used to identify and locate fish within the images. Secondly, the cages were divided into water layers with depth intervals of 40 cm. The identification coordinate data for each water layer were clustered using the DBSCAN method to generate location coordinates for the fish in each layer. Finally, the coordinate data from all water layers were consolidated to determine the overall distribution of fish within the cage. This method was shown, through multiple experimental results, to effectively estimate the distribution of Trachinotus ovatus in marine cages, closely matching the distributions detected manually.
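The pipeline described in the abstract — detect fish per frame, bin detections into 40 cm water layers, then cluster each layer with DBSCAN — can be sketched as follows. The coordinates, layer height, and clustering parameters are illustrative, and the compact DBSCAN below is a stand-in for whatever library routine the authors used:

```python
from collections import defaultdict
import math

def assign_layers(detections, layer_height=0.4):
    """Bin (x, y, depth) detections into water layers of 40 cm (0.4 m)."""
    layers = defaultdict(list)
    for x, y, d in detections:
        layers[int(d // layer_height)].append((x, y))
    return layers

def dbscan_2d(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN over 2D points; returns one label per point (-1 = noise)."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # provisional noise; may become a border point
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                  # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border point: claim, do not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:
                seeds.extend(jn)
        cluster += 1
    return labels
```

Per layer, one would then report a cluster centroid (or count) and consolidate across layers to get the cage-wide distribution.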
(This article belongs to the Special Issue New Techniques and Equipment in Large Offshore Aquaculture Platform)

14 pages, 9099 KB  
Article
CasOmniMVS: Cascade Omnidirectional Depth Estimation with Dynamic Spherical Sweeping
by Pinzhi Wang, Ming Li, Jinghao Cao, Sidan Du and Yang Li
Appl. Sci. 2024, 14(2), 517; https://doi.org/10.3390/app14020517 - 6 Jan 2024
Cited by 2 | Viewed by 2517
Abstract
Estimating 360° depth from multiple cameras has been a challenging problem. However, existing methods often adopt a fixed-step spherical sweeping approach with densely sampled spheres and use numerous 3D convolutions in networks, which limits the speed of algorithms in practice. Additionally, obtaining high-precision depth maps of real scenes poses a challenge for the existing algorithms. In this paper, we design a cascade architecture using a dynamic spherical sweeping method that progressively refines the depth estimation from coarse to fine over multiple stages. The proposed method adaptively adjusts sweeping intervals and ranges based on the predicted depth and the uncertainty from the previous stage, resulting in more efficient cost aggregation. The experimental results demonstrated that our method achieved state-of-the-art accuracy with reduced GPU memory usage and time consumption compared to the other methods. Furthermore, we illustrate that our method achieved satisfactory performance on real-world data, despite being trained on synthetic data, indicating its generalization potential and practical applicability.
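The dynamic sweeping idea — centering the next stage's depth hypotheses on the previous stage's prediction and scaling the range by its uncertainty — can be sketched like this. The inverse-depth parametrization and every parameter name here are assumptions for illustration, not the paper's exact scheme:

```python
def next_stage_hypotheses(d_pred, sigma, n=8, k=1.5, d_min=0.5, d_max=100.0):
    """Place n depth hypotheses for the next cascade stage, centered on the
    previous prediction d_pred, with a sweep range scaled by its uncertainty
    sigma (given here in inverse-depth units). Hypothetical parametrization."""
    inv_pred = 1.0 / d_pred
    half = k * sigma                          # half-width of the sweep, inverse depth
    lo = max(1.0 / d_max, inv_pred - half)    # clamp to the global depth range
    hi = min(1.0 / d_min, inv_pred + half)
    step = (hi - lo) / (n - 1)
    return [1.0 / (lo + i * step) for i in range(n)]  # depths, far to near
```

A confident previous stage (small sigma) yields a tight sweep, so the same number of hypotheses resolves finer depth steps — the source of the coarse-to-fine efficiency the abstract describes.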
(This article belongs to the Special Issue Applications of Computer Vision in 3D Perception)

19 pages, 11179 KB  
Article
New Structural Complexity Metrics for Forests from Single Terrestrial Lidar Scans
by Jonathan L. Batchelor, Todd M. Wilson, Michael J. Olsen and William J. Ripple
Remote Sens. 2023, 15(1), 145; https://doi.org/10.3390/rs15010145 - 27 Dec 2022
Cited by 6 | Viewed by 4506
Abstract
We developed new measures of structural complexity using single point terrestrial laser scanning (TLS) point clouds. These metrics are depth, openness, and isovist. Depth is a three-dimensional, radial measure of the visible distance in all directions from plot center. Openness is the percent of scan pulses in the near-omnidirectional view without a return. Isovists are a measurement of the area visible from the scan location, a quantified measurement of the viewshed within the forest canopy. A total of 243 scans were acquired in 27 forested stands in the Pacific Northwest region of the United States, in different ecoregions representing a broad gradient in structural complexity. All stands were designated natural areas with little to no human perturbation. We created “structural signatures” from depth and openness metrics that can be used to qualitatively visualize differences in forest structures and quantitatively distinguish the structural composition of a forest at differing height strata. In most cases, the structural signatures of stands were effective at providing statistically significant metrics differentiating forests from various ecoregions and growth patterns. Isovists were less effective at differentiating between forested stands across multiple ecoregions, but they still quantify the ecologically important metric of occlusion. These new metrics appear to capture the structural complexity of forests with a high level of precision and low observer bias, and they have great potential for quantifying structural change to forest ecosystems, quantifying the effects of forest management activities, and describing habitat for organisms. Our measures of structure can be used to ground-truth data obtained from aerial lidar to develop models estimating forest structure.
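As described, the depth and openness metrics reduce to simple statistics over a scan's pulses; a minimal sketch (function and variable names assumed, not from the paper):

```python
import statistics

def openness(return_count, total_pulses):
    """Openness: percent of emitted scan pulses with no return, i.e. the
    sky/gap fraction of the near-omnidirectional view from plot center."""
    if total_pulses <= 0:
        raise ValueError("total_pulses must be positive")
    return 100.0 * (total_pulses - return_count) / total_pulses

def depth_metric(ranges):
    """Depth: mean radial visible distance (m) over all pulse directions
    that produced a return, one range reading per direction."""
    return statistics.mean(ranges)
```

Binning the same quantities by height stratum gives the "structural signature" the abstract describes.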
(This article belongs to the Special Issue New Tools or Trends for Large-Scale Mapping and 3D Modelling)

19 pages, 7888 KB  
Article
A Dataset of Annotated Omnidirectional Videos for Distancing Applications
by Giuseppe Mazzola, Liliana Lo Presti, Edoardo Ardizzone and Marco La Cascia
J. Imaging 2021, 7(8), 158; https://doi.org/10.3390/jimaging7080158 - 21 Aug 2021
Cited by 7 | Viewed by 4871
Abstract
Omnidirectional (or 360°) cameras are acquisition devices that, in the next few years, could have a big impact on video surveillance applications, research, and industry, as they can record a spherical view of a whole environment from every perspective. This paper presents two new contributions to the research community: the CVIP360 dataset, an annotated dataset of 360° videos for distancing applications, and a new method to estimate the distances of objects in a scene from a single 360° image. The CVIP360 dataset includes 16 videos acquired outdoors and indoors, annotated by adding information about the pedestrians in the scene (bounding boxes) and the distances to the camera of some points in the 3D world by using markers at fixed and known intervals. The proposed distance estimation algorithm is based on geometry facts regarding the acquisition process of the omnidirectional device, and is uncalibrated in practice: the only required parameter is the camera height. The proposed algorithm was tested on the CVIP360 dataset, and empirical results demonstrate that the estimation error is negligible for distancing applications.
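The uncalibrated geometry the abstract alludes to — distance from a single 360° frame given only the camera height — can be illustrated with the usual ground-plane relation for an equirectangular image. The linear row-to-angle mapping is the standard equirectangular model; the published method may differ in detail:

```python
import math

def ground_distance(v, img_height, cam_height):
    """Distance (m) to a ground point seen at pixel row v of an
    equirectangular frame, camera mounted level at height cam_height.
    Rows map linearly to elevation; for a point phi radians below the
    horizon, d = cam_height / tan(phi). Illustrative geometry only."""
    phi = math.pi * (v / img_height - 0.5)   # angle below the horizon (rad)
    if phi <= 0:
        raise ValueError("point is at or above the horizon")
    return cam_height / math.tan(phi)
```

For example, a ground point 45° below the horizon sits at exactly one camera-height of horizontal distance, which is why the camera height is the only parameter needed.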
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)

13 pages, 5011 KB  
Article
Paquitop.arm, a Mobile Manipulator for Assessing Emerging Challenges in the COVID-19 Pandemic Scenario
by Giovanni Colucci, Luigi Tagliavini, Luca Carbonari, Paride Cavallone, Andrea Botta and Giuseppe Quaglia
Robotics 2021, 10(3), 102; https://doi.org/10.3390/robotics10030102 - 14 Aug 2021
Cited by 9 | Viewed by 5085
Abstract
The use of automation and robotics technologies for caregiving and assistance has become a very interesting research topic in the field of robotics. The spread of COVID-19 has highlighted the importance of social distancing in hospitals and health centers, and collaborative robotics can bring substantial improvements by sparing health workers basic, routine operations. Thus, researchers from Politecnico di Torino are working on Paquitop.arm, a mobile robot for assistive tasks. The purpose of this paper is to present a system composed of an omnidirectional mobile platform, a 6-DOF robot arm, and a depth camera. Task-oriented considerations are made to estimate a set of mounting parameters that represents a trade-off between the exploitation of the robot arm workspace and the compactness of the entire system. To this end, dexterity and force transmission indexes are introduced to study both the kinematic and the static behavior of the manipulator as a function of the mounting parameters. Finally, to avoid singularities during the execution of the task, the platform's approach to the task workspaces is studied.
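A dexterity index of the kind used to score mounting parameters can be illustrated on the simplest case, a planar two-link arm, where Yoshikawa's manipulability |det(J)| has a closed form. This is a generic stand-in, not the paper's actual index:

```python
import math

def manipulability_2link(q1, q2, l1=0.4, l2=0.3):
    """Yoshikawa manipulability w = |det(J)| for a planar 2-link arm with
    link lengths l1, l2 (m) and joint angles q1, q2 (rad). For this arm
    det(J) reduces to l1*l2*sin(q2): w vanishes at the stretched/folded
    singularities (q2 = 0 or pi) and peaks at q2 = pi/2."""
    return l1 * l2 * abs(math.sin(q2))
```

Sweeping such an index over candidate mounting poses is one way to quantify the workspace-versus-compactness trade-off the abstract mentions.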
(This article belongs to the Special Issue Service Robotics against COVID-2019 Pandemic)

24 pages, 3051 KB  
Article
Visual Information Fusion through Bayesian Inference for Adaptive Probability-Oriented Feature Matching
by David Valiente, Luis Payá, Luis M. Jiménez, Jose M. Sebastián and Óscar Reinoso
Sensors 2018, 18(7), 2041; https://doi.org/10.3390/s18072041 - 26 Jun 2018
Cited by 26 | Viewed by 4475
Abstract
This work presents a visual information fusion approach for robust probability-oriented feature matching. It is sustained by omnidirectional imaging, and it is tested in a visual localization framework in mobile robotics. General visual localization methods have been extensively studied and optimized in terms of performance. However, one of the main threats that jeopardizes the final estimation is the presence of outliers. In this paper, we present several contributions to deal with that issue. First, 3D information data, associated with SURF (Speeded-Up Robust Feature) points detected on the images, is inferred under the Bayesian framework established by Gaussian processes (GPs). Such information represents a probability distribution for the feature points’ existence, which is successively fused and updated throughout the robot’s poses. Second, this distribution can be properly sampled and projected onto the next 2D image frame in t+1, by means of a filter-motion prediction. This strategy permits obtaining relevant areas in the image reference system, from which probable matches could be detected, in terms of the accumulated probability of feature existence. This approach entails an adaptive probability-oriented matching search, which accounts for significant areas of the image, but it also considers unseen parts of the scene, thanks to an internal modulation of the probability distribution domain, computed in terms of the current uncertainty of the system. The main outcomes confirm a robust feature matching, which permits producing consistent localization estimates, aided by the odometer’s prior to estimate the scale factor. Publicly available datasets have been used to validate the design and operation of the approach. Moreover, the proposal has been compared, firstly with a standard feature matching and secondly with a localization method based on an inverse depth parametrization. The results confirm the validity of the approach in terms of feature matching, localization accuracy, and time consumption.
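The successive fusion of a feature's existence probability across poses can be illustrated with a plain binary Bayes update. The paper's actual fusion is done under a Gaussian-process framework over 3D SURF data, so this is only a schematic stand-in:

```python
def fuse_existence(prior, detection_likelihood):
    """One binary Bayes update of a feature's existence probability after a
    new observation with the given detection likelihood. Repeated detections
    push the probability toward 1; repeated misses push it toward 0."""
    num = prior * detection_likelihood
    den = num + (1.0 - prior) * (1.0 - detection_likelihood)
    return num / den
```

Thresholding the accumulated probability, projected into frame t+1, yields the "relevant areas" in which the adaptive matching search is run.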
(This article belongs to the Special Issue Visual Sensors)

25 pages, 1168 KB  
Article
A Featured-Based Strategy for Stereovision Matching in Sensors with Fish-Eye Lenses for Forest Environments
by Pedro Javier Herrera, Gonzalo Pajares, Maria Guijarro, José J. Ruz, Jesús M. Cruz and Fernando Montes
Sensors 2009, 9(12), 9468-9492; https://doi.org/10.3390/s91209468 - 26 Nov 2009
Cited by 15 | Viewed by 14445
Abstract
This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. In the first stage, a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes or properties useful for matching. In the second step, the features are matched by applying the following four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion.
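Of the four matching constraints, uniqueness is the easiest to make concrete: a standard way to enforce one-to-one matches is to keep only mutual best scores. A sketch on a plain similarity matrix (the scores themselves are hypothetical, and the paper combines this with the epipolar, similarity, and ordering constraints):

```python
def mutual_best_matches(scores):
    """Uniqueness constraint: given scores[i][j] = similarity between
    left-image feature i and right-image feature j, keep only the pairs
    that are each other's best match."""
    matches = []
    for i, row in enumerate(scores):
        j = max(range(len(row)), key=row.__getitem__)           # best right for i
        best_i = max(range(len(scores)), key=lambda k: scores[k][j])  # best left for j
        if best_i == i:
            matches.append((i, j))
    return matches
```

Ambiguous trunks, whose best score is not reciprocated, are simply left unmatched rather than forced into a wrong correspondence.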
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain)
