Search Results (16)

Search Parameters:
Keywords = disparity map acquisition

17 pages, 1028 KiB  
Review
Vancomycin-Resistant E. faecium: Addressing Global and Clinical Challenges
by Daniel E. Radford-Smith and Daniel C. Anthony
Antibiotics 2025, 14(5), 522; https://doi.org/10.3390/antibiotics14050522 - 19 May 2025
Cited by 1 | Viewed by 1547
Abstract
Antimicrobial resistance (AMR) poses a profound threat to modern healthcare, with vancomycin-resistant Enterococcus faecium (VREfm) emerging as a particularly resilient and clinically significant pathogen. This mini-review examines the biological mechanisms underpinning VREfm resistance, including biofilm formation, stress tolerance, and the acquisition of resistance genes such as vanA and vanB. It also explores the behavioural, social, and healthcare system factors that facilitate VREfm transmission, highlighting disparities in burden across vulnerable populations and low-resource settings. Prevention strategies are mapped across the disease pathway, spanning primary, secondary, and tertiary levels, with a particular focus on the role and evolving challenges of antimicrobial stewardship programmes (ASP). We highlight emerging threats, such as rifaximin-induced cross-resistance to daptomycin, which challenge conventional stewardship paradigms. Finally, we propose future directions to enhance global surveillance, promote equitable stewardship interventions, and accelerate the development of innovative therapies. Addressing VREfm requires a coordinated, multidisciplinary effort to safeguard the efficacy of existing antimicrobials and protect at-risk patient populations. Full article
(This article belongs to the Section Antibiotics Use and Antimicrobial Stewardship)

24 pages, 6629 KiB  
Article
UnDER: Unsupervised Dense Point Cloud Extraction Routine for UAV Imagery Using Deep Learning
by John Ray Bergado and Francesco Nex
Remote Sens. 2025, 17(1), 24; https://doi.org/10.3390/rs17010024 - 25 Dec 2024
Viewed by 933
Abstract
Extraction of dense 3D geographic information from ultra-high-resolution unmanned aerial vehicle (UAV) imagery unlocks a great number of mapping and monitoring applications. This is facilitated by a step called dense image matching, which tries to find pixels corresponding to the same object within overlapping images captured by the UAV from different locations. Recent developments in deep learning utilize deep convolutional networks to perform this dense pixel correspondence task. A common theme in these developments is to train the network in a supervised setting using available dense 3D reference datasets. However, in this work we propose a novel unsupervised dense point cloud extraction routine for UAV imagery, called UnDER. We propose a novel disparity-shifting procedure to enable the use of a stereo matching network pretrained on an entirely different typology of image data in the disparity-estimation step of UnDER. Unlike previously proposed disparity-shifting techniques for forming cost volumes, the goal of our procedure was to address the domain shift between the images that the network was pretrained on and the UAV images, by using prior information from the UAV image acquisition. We also developed a procedure for occlusion masking based on disparity consistency checking that uses the disparity image space rather than the object space proposed in a standard 3D reconstruction routine for UAV data. Our benchmarking results demonstrated significant improvements in quantitative performance, reducing the mean cloud-to-cloud distance by approximately 1.8 times the ground sampling distance (GSD) compared to other methods. Full article
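The occlusion-masking idea described above, checking that the left and right disparity maps agree, can be illustrated with a minimal sketch; the function name and threshold are illustrative choices, not taken from the paper:

```python
import numpy as np

def occlusion_mask(disp_left, disp_right, threshold=1.0):
    """Mask pixels whose left and right disparities disagree.

    disp_left[y, x] maps pixel x in the left image to x - d in the
    right image; a consistent match should carry the same disparity
    in both maps. Pixels failing the check are treated as occluded.
    """
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    # Location each left pixel lands on in the right image
    x_right = np.clip((xs - disp_left).round().astype(int), 0, w - 1)
    # Disparity the right map assigns to that location
    d_right = disp_right[ys, x_right]
    # Keep pixels where the two disparities agree within the threshold
    return np.abs(disp_left - d_right) <= threshold
```

Performing this check in disparity image space, as the paper proposes, avoids the object-space consistency test used in standard UAV reconstruction routines.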

22 pages, 13810 KiB  
Article
An Underwater Stereo Matching Method: Exploiting Segment-Based Method Traits without Specific Segment Operations
by Xinlin Xu, Huiping Xu, Lianjiang Ma, Kelin Sun and Jingchuan Yang
J. Mar. Sci. Eng. 2024, 12(9), 1599; https://doi.org/10.3390/jmse12091599 - 10 Sep 2024
Viewed by 1409
Abstract
Stereo matching technology, enabling the acquisition of three-dimensional data, holds profound implications for marine engineering. In underwater images, irregular object surfaces and the absence of texture information make it difficult for stereo matching algorithms that rely on discrete disparity values to accurately capture the 3D details of underwater targets. This paper proposes a stereo method based on an energy function of a Markov random field (MRF) with 3D labels to fit the inclined planes of underwater objects. By integrating a cross-based patch alignment approach with two label optimization stages, the proposed method exhibits traits of segment-based stereo matching methods, enabling it to handle images with sparse textures effectively. Experiments on both the simulated UW-Middlebury datasets and real degraded underwater images show that our method outperforms classical and state-of-the-art methods, as reflected in the acquired disparity maps and the three-dimensional reconstructions of the underwater targets. Full article
(This article belongs to the Special Issue Underwater Observation Technology in Marine Environment)

19 pages, 43187 KiB  
Article
Large-Scale Land Cover Mapping Framework Based on Prior Product Label Generation: A Case Study of Cambodia
by Hongbo Zhu, Tao Yu, Xiaofei Mi, Jian Yang, Chuanzhao Tian, Peizhuo Liu, Jian Yan, Yuke Meng, Zhenzhao Jiang and Zhigao Ma
Remote Sens. 2024, 16(13), 2443; https://doi.org/10.3390/rs16132443 - 3 Jul 2024
Cited by 2 | Viewed by 1999
Abstract
Large-scale land cover mapping (LLCM) based on deep learning models necessitates a substantial number of high-precision sample datasets. However, the limited availability of such datasets poses challenges in regularly updating land cover products. A commonly referenced method involves utilizing prior products (PPs) as labels to achieve up-to-date land cover mapping. Nonetheless, the accuracy of PPs at the regional level remains uncertain, and the remote sensing image (RSI) corresponding to the product is not publicly accessible. Consequently, a sample dataset constructed through geographic location matching may lack precision. Errors in such datasets are due not only to inherent product discrepancies but can also arise from temporal and scale disparities between the RSI and PPs. To solve the above problems, this paper proposes an LLCM framework for generating labels from PPs. The framework consists of three main parts. First, initial label generation: the collected PPs are integrated based on D-S evidence theory, and initial labels are obtained using the generated trust map. Second, dynamic label correction: a two-stage training method based on the initial labels is adopted. The correction model is pretrained in the first stage; the confidence probability (CP) correction module with a dynamic threshold and the NDVI correction module are then introduced in the second stage. The initial labels are iteratively corrected while the model is trained using the joint correction loss, with the corrected labels obtained after training. Finally, the classification model is trained using the corrected labels. Using the proposed framework, this study used PPs to produce a 10 m spatial resolution land cover map of Cambodia in 2020. The overall accuracy of the land cover map was 91.68% and the Kappa value was 0.8808. Based on these results, the proposed mapping framework can effectively use PPs to update medium-resolution large-scale land cover datasets, and it provides a powerful solution for label acquisition in LLCM projects. Full article
(This article belongs to the Special Issue Deep Learning Techniques Applied in Remote Sensing)
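The confidence-probability correction step can be illustrated with a minimal sketch: relabel a pixel when the model disagrees with the initial label and is confident enough. The correction rule, names, and threshold handling here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def correct_labels(labels, probs, threshold):
    """One label-correction pass.

    labels:    (H, W) int initial labels derived from prior products
    probs:     (H, W, C) per-class model probabilities
    threshold: scalar confidence cutoff (dynamic across training steps)
    """
    pred = probs.argmax(axis=-1)   # model's predicted class per pixel
    conf = probs.max(axis=-1)      # confidence of that prediction
    corrected = labels.copy()
    # Swap only where the model disagrees AND is confident
    swap = (pred != labels) & (conf > threshold)
    corrected[swap] = pred[swap]
    return corrected
```

In the paper's two-stage scheme this would run iteratively during the second training stage, with the threshold adjusted dynamically as the model improves.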

17 pages, 2261 KiB  
Article
A Broadscale Assessment of Sentinel-2 Imagery and the Google Earth Engine for the Nationwide Mapping of Chlorophyll a
by Richard A. Johansen, Molly K. Reif, Christina L. Saltus and Kaytee L. Pokrzywinski
Sustainability 2024, 16(5), 2090; https://doi.org/10.3390/su16052090 - 2 Mar 2024
Cited by 4 | Viewed by 2478
Abstract
Harmful algal blooms are a global phenomenon that degrade water quality and can result in adverse health impacts to both humans and wildlife. Monitoring algal blooms at scale is extremely difficult due to the lack of coincident data across space and time. Additionally, traditional field collection methods tend to be labor- and cost-prohibitive, resulting in disparate data collection that cannot capture the physical and biological variations within waterbodies or regions. This research helps alleviate this issue by leveraging large public water quality databases coupled with open-access Google Earth Engine-derived Sentinel-2 imagery to evaluate the practical usability of four common chlorophyll a algorithms as a proxy for detecting and mapping algal blooms nationwide. Chlorophyll a data were aggregated from spatially diverse sites across the continental United States between 2019 and 2022. Data were aggregated by field method and matched to coincident Sentinel-2 imagery, using k-fold cross-validation to evaluate the performance of the band ratio algorithms at the nationwide scale. Additionally, the dataset was partitioned to evaluate the influence of temporal windows and annual consistency on algorithm performance. The 2BDA and NDCI algorithms were the most viable for broadscale mapping of chlorophyll a, performing moderately well (R2 > 0.5) across the entire continental United States, which encompasses highly diverse spatial, temporal, and physical conditions. Algorithm performance was consistent across field methods, temporal windows, and years. The most compatible field data acquisition method was the trichromatic, uncorrected chlorophyll a water method, with R2 values of 0.63, 0.62, and 0.41 and RMSE values of 15.89, 16.2, and 23.30 for 2BDA, NDCI, and MCI, respectively. These results indicate the feasibility of utilizing band ratio algorithms for broadscale detection and mapping of chlorophyll a as a proxy for HABs, which is especially valuable when coincident data are unavailable or limited. Full article
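The two best-performing band ratio algorithms are simple functions of Sentinel-2's red (B4) and red-edge (B5) bands. A minimal sketch, using the standard published formulas applied to surface-reflectance arrays (the exact preprocessing used in the study is not shown here), might look like:

```python
import numpy as np

def ndci(b4_red, b5_red_edge):
    """Normalized Difference Chlorophyll Index: (B5 - B4) / (B5 + B4)."""
    return (b5_red_edge - b4_red) / (b5_red_edge + b4_red)

def two_band_algorithm(b4_red, b5_red_edge):
    """2BDA: simple red-edge / red band ratio, B5 / B4."""
    return b5_red_edge / b4_red
```

Both indices exploit the chlorophyll a absorption feature near the red band and the reflectance peak in the red edge, which is why they transfer reasonably well across diverse waterbodies.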

19 pages, 8760 KiB  
Article
Three-Dimensional Point Cloud Reconstruction Method of Cardiac Soft Tissue Based on Binocular Endoscopic Images
by Jiawei Tian, Botao Ma, Siyu Lu, Bo Yang, Shan Liu and Zhengtong Yin
Electronics 2023, 12(18), 3799; https://doi.org/10.3390/electronics12183799 - 8 Sep 2023
Cited by 4 | Viewed by 1984
Abstract
Three-dimensional reconstruction technology based on binocular stereo vision is a key research area with potential clinical applications. Mainstream research has focused on sparse point reconstruction within the soft tissue domain, limiting the comprehensive 3D data acquisition required for effective surgical robot navigation. This study introduces a new paradigm to address existing challenges. An innovative stereoscopic endoscopic image correction algorithm is proposed, exploiting intrinsic insights into stereoscopic calibration parameters. The synergy between the stereoscopic endoscope parameters and the disparity map derived from the cardiac soft tissue images ultimately leads to the acquisition of precise 3D points. Guided by deliberate filtering and optimization methods, the triangulation process subsequently facilitates the reconstruction of the complex surface of the cardiac soft tissue. The experimental results strongly emphasize the accuracy of the calibration algorithm, confirming its utility in stereoscopic endoscopy. Furthermore, the image rectification algorithm exhibits a significant reduction in vertical parallax, which effectively enhances the stereo matching process. The resulting 3D reconstruction technique enables the targeted surface reconstruction of different regions of interest in the cardiac soft tissue landscape. This study demonstrates the potential of binocular stereo vision-based 3D reconstruction techniques for integration into clinical settings. The combination of joint calibration algorithms, image correction innovations, and precise tissue reconstruction enhances the promise of improved surgical precision and outcomes in the field of cardiac interventions. Full article
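Obtaining 3D points from a disparity map on rectified stereo images follows the standard pinhole triangulation relations (Z = fB/d, then back-projection through the principal point). A minimal sketch, with illustrative parameter names rather than the paper's calibration pipeline:

```python
import numpy as np

def disparity_to_points(disp, f, baseline, cx, cy):
    """Back-project a disparity map to 3D points (rectified stereo).

    Standard relations: Z = f * B / d, X = (x - cx) * Z / f,
    Y = (y - cy) * Z / f. Zero disparity marks unmatched pixels.
    """
    h, w = disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    valid = disp > 0
    # Guard the division for invalid pixels, then zero them out
    z = np.where(valid, f * baseline / np.where(valid, disp, 1.0), 0.0)
    x = (xs - cx) * z / f
    y = (ys - cy) * z / f
    return np.stack([x, y, z], axis=-1), valid
```

The filtering and optimization steps the abstract mentions would then operate on these points before triangulating the tissue surface.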

21 pages, 32440 KiB  
Article
Reference-Based Super-Resolution Method for Remote Sensing Images with Feature Compression Module
by Jiayang Zhang, Wanxu Zhang, Bo Jiang, Xiaodan Tong, Keya Chai, Yanchao Yin, Lin Wang, Junhao Jia and Xiaoxuan Chen
Remote Sens. 2023, 15(4), 1103; https://doi.org/10.3390/rs15041103 - 17 Feb 2023
Cited by 8 | Viewed by 3034
Abstract
High-quality remote sensing images play important roles in the development of ecological indicator mapping, urban-rural management, urban planning, and other fields. Compared with natural images, remote sensing images have more abundant land cover along with lower spatial resolutions. Given the embedded longitude and latitude information of remote sensing images, reference (Ref) images with similar scenes can be readily accessed. However, existing super-resolution (SR) approaches typically depend on increases in network depth to improve performance, which limits the acquisition and application of high-quality remote sensing images. In this paper, we propose a novel reference-image-based super-resolution method with a feature compression module (FCSR) for remote sensing images to alleviate this issue while effectively utilizing high-resolution (HR) information from Ref images. Specifically, we exploit a feature compression branch (FCB) to extract relevant features in feature detail matching with large measurements. This branch employs a feature compression module (FCM) to extract features from the low-resolution (LR) and Ref images, enabling texture transfer from different perspectives. To decrease the impact of factors such as resolution, brightness, and ambiguity disparities between the LR and Ref images, we design a feature extraction encoder (FEE) to ensure accurate feature extraction in the feature acquisition branch. The experimental results demonstrate that the proposed FCSR achieves significant performance and visual quality compared with state-of-the-art SR methods. Explicitly, compared with the best method, the average peak signal-to-noise ratio (PSNR) on three test sets is improved by 1.0877%, 0.8161%, and 1.0296%, respectively, and the structural similarity (SSIM) on four test sets is improved by 1.4764%, 1.4467%, 0.0882%, and 1.8371%, respectively. Simultaneously, FCSR obtains satisfactory visual details under qualitative evaluation. Full article
(This article belongs to the Special Issue Remote Sensing in Natural Resource and Water Environment)

13 pages, 2470 KiB  
Article
Improvement of AD-Census Algorithm Based on Stereo Vision
by Yina Wang, Mengjiao Gu, Yufeng Zhu, Gang Chen, Zhaodong Xu and Yingqing Guo
Sensors 2022, 22(18), 6933; https://doi.org/10.3390/s22186933 - 13 Sep 2022
Cited by 19 | Viewed by 3661
Abstract
Problems such as low light, similar background colors, and image noise often occur when collecting images of lunar surface obstacles. Given these problems, this study focuses on the AD-Census algorithm. In the original Census algorithm, the bit string is computed by comparing each neighboring pixel with the central pixel, so any noise affecting the central pixel corrupts the entire bit string and causes errors and mismatching. We introduce an improved algorithm that compares against the average pixel value of the window instead, removing the dependence on the central pixel value and improving the accuracy of the algorithm. Experiments show that object contours in the grayscale disparity map obtained by the improved algorithm are more apparent and the edges of the image are significantly improved, which is more consistent with the real scene. In addition, because the traditional Census algorithm matches with a fixed rectangular window, it is difficult to obtain a suitable window across image regions of different textures, which affects the efficiency of the algorithm. We therefore propose a region-growing adaptive-window matching improvement and apply the improved Census algorithm within the AD-Census algorithm. The results show that the improved AD-Census algorithm reduces the average run time by 5.3% and achieves better matching than the traditional AD-Census algorithm on all tested image sets. Finally, the improved algorithm is applied in a simulation environment, and the experimental results show that obstacles in the image can be effectively detected. The improved algorithm has practical application value and is important for improving the feasibility and reliability of obstacle detection in lunar exploration projects. Full article
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision: 2nd Edition)
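The core modification, replacing the central pixel with the window mean as the Census reference, can be sketched for a single pixel as follows; this is a simplified illustration (fixed square window, no border handling), not the paper's full implementation:

```python
import numpy as np

def census_bits(img, y, x, radius=1, use_mean=True):
    """Census bit string for the pixel at (y, x).

    The classic transform compares each neighbor to the central pixel;
    the improved variant compares to the window mean instead, which is
    less sensitive to noise on the central pixel.
    """
    win = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
    ref = win.mean() if use_mean else img[y, x]
    return (win.ravel() < ref).astype(np.uint8)

def hamming(a, b):
    """Matching cost between two Census bit strings."""
    return int(np.count_nonzero(a != b))
```

On noise-free data the two references often produce the same bit string; the benefit appears when the central pixel alone is corrupted, since the window mean changes only slightly.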

27 pages, 24113 KiB  
Article
Synthesizing Disparate LiDAR and Satellite Datasets through Deep Learning to Generate Wall-to-Wall Regional Inventories for the Complex, Mixed-Species Forests of the Eastern United States
by Elias Ayrey, Daniel J. Hayes, John B. Kilbride, Shawn Fraver, John A. Kershaw, Bruce D. Cook and Aaron R. Weiskittel
Remote Sens. 2021, 13(24), 5113; https://doi.org/10.3390/rs13245113 - 16 Dec 2021
Cited by 13 | Viewed by 5308
Abstract
Light detection and ranging (LiDAR) has become a commonly-used tool for generating remotely-sensed forest inventories. However, LiDAR-derived forest inventories have remained uncommon at a regional scale due to varying parameters among LiDAR data acquisitions and the availability of sufficient calibration data. Here, we present a model using a 3-D convolutional neural network (CNN), a form of deep learning capable of scanning a LiDAR point cloud, combined with coincident satellite data (spectral, phenology, and disturbance history). We compared this approach to traditional modeling used for making forest predictions from LiDAR data (height metrics and random forest) and found that the CNN had consistently lower uncertainty. We then applied the CNN to public data over six New England states in the USA, generating maps of 14 forest attributes at a 10 m resolution over 85% of the region. Aboveground biomass estimates produced a root mean square error of 36 Mg ha−1 (44%) and were within the 97.5% confidence of independent county-level estimates for 33 of 38 or 86.8% of the counties examined. CNN predictions for stem density and percentage of conifer attributes were moderately successful, while predictions for detailed species groupings were less successful. The approach shows promise for improving the prediction of forest attributes from regional LiDAR data and for combining disparate LiDAR datasets into a common framework for large-scale estimation. Full article
(This article belongs to the Special Issue Remote Sensing of Forest Carbon)

23 pages, 2346 KiB  
Article
A Benchmark Evaluation of Adaptive Image Compression for Multi Picture Object Stereoscopic Images
by Alessandro Ortis, Marco Grisanti, Francesco Rundo and Sebastiano Battiato
J. Imaging 2021, 7(8), 160; https://doi.org/10.3390/jimaging7080160 - 23 Aug 2021
Cited by 1 | Viewed by 2604
Abstract
A stereopair consists of two pictures of the same subject taken from two different points of view. Since the two images contain a large amount of redundant information, new compression approaches and data formats are continuously proposed to reduce the space needed to store a stereoscopic image while preserving its quality. A standard for multi-picture image encoding is the MPO (Multi-Picture Object) format. Classic stereoscopic image compression approaches compute a disparity map between the two views, which is stored together with one of the views and a residual image. An alternative approach, named adaptive stereoscopic image compression, encodes the two views independently with different quality factors; the redundancy between the views is then exploited to enhance the lower-quality image. In this paper, the problem of stereoscopic image compression is presented, with a focus on the adaptive approach, which yields a standardized format for the compressed data. The paper presents a benchmark evaluation on large, standardized datasets comprising 60 stereopairs that differ in resolution and acquisition technique. The method is evaluated by varying the amount of compression as well as the matching and optimization methods, resulting in 16 different settings, and the adaptive approach is compared with other MPO-compliant methods. The paper also presents a Human Visual System (HVS)-based assessment experiment involving 116 people to verify the perceived quality of the decoded images. Full article
(This article belongs to the Special Issue New and Specialized Methods of Image Compression)

13 pages, 2809 KiB  
Article
3D Transparent Object Detection and Reconstruction Based on Passive Mode Single-Pixel Imaging
by Anumol Mathai, Ningqun Guo, Dong Liu and Xin Wang
Sensors 2020, 20(15), 4211; https://doi.org/10.3390/s20154211 - 29 Jul 2020
Cited by 11 | Viewed by 4599
Abstract
Transparent object detection and reconstruction are significant due to their practical applications. The appearance and light-transmission characteristics of these objects make reconstruction methods tailored for Lambertian surfaces fail disgracefully. In this paper, we introduce a fixed multi-viewpoint approach to ascertain the shape of transparent objects, avoiding rotation or movement of the object during imaging. In addition, a simple and cost-effective experimental setup is presented that employs two single-pixel detectors and a digital micromirror device for imaging transparent objects by projecting binary patterns. In the system setup, a dark framework is placed around the object to create shading at the object's boundaries. By triangulating the light path from the object, the surface shape is recovered without considering reflections or the number of refractions. The method can therefore handle transparent objects of relatively complex shape with an unknown refractive index. The implementation of compressive sensing further simplifies the acquisition process by reducing the number of measurements. The experimental results show that the 2D images obtained from the single-pixel detectors are of good quality at a resolution of 32×32. Additionally, the obtained disparity and error maps indicate the feasibility and accuracy of the proposed method. This work provides new insight into 3D transparent object detection and reconstruction based on single-pixel imaging at an affordable cost, using only a few detectors. Full article
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision)

22 pages, 12034 KiB  
Article
A Semi-Supervised Monocular Stereo Matching Method
by Zhimin Zhang, Jianzhong Qiao and Shukuan Lin
Symmetry 2019, 11(5), 690; https://doi.org/10.3390/sym11050690 - 18 May 2019
Cited by 2 | Viewed by 3362
Abstract
Supervised monocular depth estimation methods based on learning have shown promising results compared with traditional methods. However, these methods require a large amount of high-quality ground truth depth data as supervision labels. Due to the limitations of acquisition equipment, it is expensive and impractical to record ground truth depth for different scenes. Compared with supervised methods, self-supervised monocular depth estimation without ground truth depth is a promising research direction, but self-supervised depth estimation from a single image is geometrically ambiguous and suboptimal. In this paper, we propose a novel semi-supervised monocular stereo matching method that builds on existing approaches to improve the accuracy of depth estimation. The idea is inspired by the experimental observation that, in the same self-supervised network model, a stereo pair as input yields better depth estimation accuracy than a monocular view. We therefore decompose the monocular depth estimation problem into two sub-problems: a right-view synthesis process followed by a semi-supervised stereo matching process. To improve the accuracy of the synthesized right view, we extend the existing view synthesis method Deep3D with a left-right consistency constraint and a smoothness constraint. To reduce the error caused by the reconstructed right view, we propose a semi-supervised stereo matching model that uses disparity maps generated by a self-supervised stereo matching model as supervision cues, jointly with self-supervised cues, to optimize the stereo matching network. At test time, the two networks predict the depth map directly from a single image through a pipeline connection. Both procedures obey geometric principles and improve estimation accuracy. Test results on the KITTI dataset show that this method is superior to current mainstream monocular self-supervised depth estimation methods under the same conditions. Full article
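The first sub-problem, synthesizing a right view from the left image and a disparity map, can be sketched with simple forward warping. This is an illustration of the geometric idea only (no hole filling or interpolation), not the Deep3D-based synthesis network the paper uses:

```python
import numpy as np

def synthesize_right(left, disp):
    """Forward-warp a left image into a synthetic right view.

    For rectified stereo, a left pixel at column x with disparity d
    appears at column x - d in the right view. Unfilled pixels
    (occlusions, out-of-frame targets) remain zero.
    """
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    ys, xs = np.mgrid[0:h, 0:w]
    xr = (xs - disp).round().astype(int)
    ok = (xr >= 0) & (xr < w)          # drop pixels warped off-frame
    out[ys[ok], xr[ok]] = left[ok]
    return out
```

The holes this naive warp leaves are exactly the artifacts that motivate the paper's left-right consistency and smoothness constraints.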

22 pages, 2574 KiB  
Article
Creating Landscape-Scale Site Index Maps for the Southeastern US Is Possible with Airborne LiDAR and Landsat Imagery
by Ranjith Gopalakrishnan, Jobriath S. Kauffman, Matthew E. Fagan, John W. Coulston, Valerie A. Thomas, Randolph H. Wynne, Thomas R. Fox and Valquiria F. Quirino
Forests 2019, 10(3), 234; https://doi.org/10.3390/f10030234 - 6 Mar 2019
Cited by 21 | Viewed by 4248
Abstract
Sustainable forest management is hugely dependent on high-quality estimates of forest site productivity, but it is challenging to generate productivity maps over large areas. We present a method for generating site index (a measure of such forest productivity) maps for plantation loblolly pine (Pinus taeda L.) forests over large areas in the southeastern United States by combining airborne laser scanning (ALS) data from disparate acquisitions and Landsat-based estimates of forest age. For predicting canopy heights, a linear regression model was developed using ALS data and field measurements from the Forest Inventory and Analysis (FIA) program of the US Forest Service (n = 211 plots). The model was strong (R2 = 0.84, RMSE = 1.85 m), and applicable over a large area (~208,000 sq. km). To estimate the site index, we combined the ALS estimated heights with Landsat-derived maps of stand age and planted pine area. The estimated bias was low (−0.28 m) and the RMSE (3.8 m, relative RMSE: 19.7%, base age 25 years) was consistent with other similar approaches. Due to Landsat-related constraints, our methodology is valid only for relatively young pine plantations established after 1984. We generated 30 m resolution site index maps over a large area (~832 sq. km). The site index distribution had a median value of 19.4 m, the 5th percentile value of 13.0 m and the 95th percentile value of 23.3 m. Further, using a watershed level analysis, we ranked these regions by their estimated productivity. These results demonstrate the potential and value of remote sensing based large-area site index maps. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
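The two-stage approach the abstract describes (ALS-predicted height combined with a Landsat-derived stand age, projected to a base age of 25 years) can be sketched as follows. The anamorphic guide-curve form and the growth constant `k` are illustrative assumptions for this sketch, not the authors' fitted model:

```python
# Hypothetical site-index projection: given a stand's dominant height and
# age, estimate the height it would reach at the base age (25 years) using
# an anamorphic guide curve SI = H * g(base_age) / g(age), with
# g(t) = 1 - exp(-k * t). The constant k below is a placeholder value.
import math

BASE_AGE = 25.0   # years, as in the paper
K = 0.07          # hypothetical growth-rate constant

def site_index(height_m: float, age_yr: float,
               base_age: float = BASE_AGE, k: float = K) -> float:
    """Project stand height to the height expected at base_age."""
    g = lambda t: 1.0 - math.exp(-k * t)
    return height_m * g(base_age) / g(age_yr)

# A 15 m tall stand at age 15 projects to a larger height at age 25,
# while a stand already at the base age keeps its measured height.
si_young = site_index(15.0, 15.0)
si_base = site_index(20.0, 25.0)
```

Applying such a function cell-by-cell to a lidar height raster and a Landsat age raster would yield a wall-to-wall site index map of the kind the paper produces.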

23 pages, 24477 KiB  
Article
Prioritizing Seafloor Mapping for Washington’s Pacific Coast
by Timothy Battista, Ken Buja, John Christensen, Jennifer Hennessey and Katrina Lassiter
Sensors 2017, 17(4), 701; https://doi.org/10.3390/s17040701 - 28 Mar 2017
Cited by 3 | Viewed by 5593
Abstract
Remote sensing systems are critical tools for characterizing the geological and ecological composition of the seafloor. However, creating comprehensive and detailed maps of ocean and coastal environments has been hindered by the high cost of operating ship- and aircraft-based sensors. While a number of groups (e.g., academic researchers, government resource managers, and the private sector) are engaged in, or would benefit from, the collection of additional seafloor mapping data, disparate priorities, dauntingly large data gaps, and insufficient funding have confounded strategic planning efforts. In this study, we addressed these challenges by implementing a quantitative, spatial process to prioritize seafloor mapping needs in Washington State. The Washington State Prioritization Tool (WASP), a custom web-based mapping tool, was developed to solicit and analyze mapping priorities from each participating group. The process identified several discrete, high-priority mapping hotspots, several of which have since been mapped or are scheduled for mapping. Furthermore, information captured during the process about the intended applications of the mapping data proved essential for identifying the optimal remote sensing sensors and acquisition parameters for subsequent mapping surveys. Full article
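One simple way to aggregate priorities of the kind WASP solicits is a per-cell weighted sum across stakeholder groups. The scoring scheme below is a hypothetical illustration of that general idea, not the actual WASP algorithm:

```python
# Hypothetical priority aggregation: each participating group scores grid
# cells, per-cell scores are summed across groups, and the highest-scoring
# cells become candidate mapping "hotspots".
from collections import Counter

def mapping_hotspots(group_scores, top_n=2):
    """Sum per-cell priority scores across groups; return the top cells."""
    totals = Counter()
    for scores in group_scores:   # one {cell_id: score} dict per group
        totals.update(scores)
    return [cell for cell, _ in totals.most_common(top_n)]

# Illustrative inputs: three groups score a handful of grid cells.
academia = {"A1": 3, "B2": 1, "C3": 2}
agency   = {"A1": 2, "B2": 3}
industry = {"C3": 1, "A1": 1}
hotspots = mapping_hotspots([academia, agency, industry])
```

In practice the cells would be polygons in a spatial grid and the scores would come from the web tool's stakeholder submissions.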

25 pages, 6392 KiB  
Article
Prediction of Canopy Heights over a Large Region Using Heterogeneous Lidar Datasets: Efficacy and Challenges
by Ranjith Gopalakrishnan, Valerie A. Thomas, John W. Coulston and Randolph H. Wynne
Remote Sens. 2015, 7(9), 11036-11060; https://doi.org/10.3390/rs70911036 - 27 Aug 2015
Cited by 21 | Viewed by 6447
Abstract
Generating accurate and unbiased wall-to-wall canopy height maps from airborne lidar data for large regions is useful to forest scientists and natural resource managers. However, mapping large areas often involves using lidar data from different projects with varying acquisition parameters. In this work, we address the important question of whether one can accurately model canopy heights over large areas of the Southeastern US using a very heterogeneous dataset of small-footprint, discrete-return airborne lidar data (76 separate lidar projects). A unique aspect of this effort is the use of nationally uniform and extensive field data (~1800 forested plots) from the Forest Inventory and Analysis (FIA) program of the US Forest Service. Preliminary results are quite promising: over all lidar projects, we observe a good correlation between the 85th percentile of lidar heights and field-measured height (r = 0.85). We construct a linear regression model to predict subplot-level dominant tree heights from distributional lidar metrics (R2 = 0.74, RMSE = 3.0 m, n = 1755). We also identify and quantify several factors (heterogeneity of vegetation, point density, predominance of hardwoods or softwoods, average stand height, plot slope, and average scan angle of the lidar acquisition) that influence the efficacy of predicting canopy heights from lidar data. For example, restricting the analysis to plots with low height variability (coefficient of variation of vegetation heights < 0.2) reduces the RMSE of our model from 3.0 m to 2.4 m (a ~20% reduction). We conclude that, when all these elements are factored into consideration, combining data from disparate lidar projects does not preclude robust estimation of canopy heights. Full article
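The percentile-metric approach the abstract describes can be sketched in a few lines: compute the 85th percentile of the plot's lidar return heights, then apply a linear model. The coefficients below are hypothetical placeholders, not the fitted values from the paper:

```python
# Sketch of predicting dominant tree height from a distributional lidar
# metric: the 85th percentile of return heights, fed into a linear model
# height = a + b * p85. Coefficients A and B are illustrative only.
def percentile(values, p):
    """Nearest-rank percentile of a list of return heights (meters)."""
    s = sorted(values)
    rank = max(1, round(p / 100 * len(s)))
    return s[rank - 1]

A, B = 1.0, 0.95   # hypothetical regression coefficients

def predict_dominant_height(return_heights):
    """Apply the linear model to a plot's lidar return heights."""
    return A + B * percentile(return_heights, 85)
```

In the paper's workflow, such a model is fitted against FIA field-measured heights and then applied per grid cell to produce the wall-to-wall canopy height map.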
