Search Results (87)

Search Parameters:
Keywords = Pix4D

14 pages, 2370 KiB  
Article
DP-AMF: Depth-Prior–Guided Adaptive Multi-Modal and Global–Local Fusion for Single-View 3D Reconstruction
by Luoxi Zhang, Chun Xie and Itaru Kitahara
J. Imaging 2025, 11(7), 246; https://doi.org/10.3390/jimaging11070246 - 21 Jul 2025
Viewed by 283
Abstract
Single-view 3D reconstruction remains fundamentally ill-posed, as a single RGB image lacks scale and depth cues, often yielding ambiguous results under occlusion or in texture-poor regions. We propose DP-AMF, a novel Depth-Prior–Guided Adaptive Multi-Modal and Global–Local Fusion framework that integrates high-fidelity depth priors—generated offline by the MARIGOLD diffusion-based estimator and cached to avoid extra training cost—with hierarchical local features from ResNet-32/ResNet-18 and semantic global features from DINO-ViT. A learnable fusion module dynamically adjusts per-channel weights to balance these modalities according to local texture and occlusion, and an implicit signed-distance field decoder reconstructs the final mesh. Extensive experiments on 3D-FRONT and Pix3D demonstrate that DP-AMF reduces Chamfer Distance by 7.64%, increases F-Score by 2.81%, and boosts Normal Consistency by 5.88% compared to strong baselines, while qualitative results show sharper edges and more complete geometry in challenging scenes. DP-AMF achieves these gains without substantially increasing model size or inference time, offering a robust and effective solution for complex single-view reconstruction tasks. Full article
(This article belongs to the Section AI in Imaging)
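
As a concrete illustration of the adaptive fusion idea in the abstract above, the sketch below implements a learnable per-channel weighting of three feature streams (depth prior, local, global) in PyTorch. It is a minimal reading of the described module, not the authors' code; the shared channel width and the squeeze-style gating are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveChannelFusion(nn.Module):
    """Sketch of learnable per-channel fusion of three feature streams.

    Hypothetical simplification of DP-AMF's fusion module: all three
    modalities are assumed already projected to the same channel width C.
    """
    def __init__(self, channels: int):
        super().__init__()
        # Predict one weight per channel and per modality from the pooled,
        # concatenated features; softmax normalizes across modalities.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(3 * channels, 3 * channels, kernel_size=1),
        )

    def forward(self, depth, local, glob):
        stacked = torch.cat([depth, local, glob], dim=1)   # (B, 3C, H, W)
        b = stacked.shape[0]
        c = depth.shape[1]
        w = self.gate(stacked).view(b, 3, c, 1, 1).softmax(dim=1)
        return w[:, 0] * depth + w[:, 1] * local + w[:, 2] * glob

feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]
fusion = AdaptiveChannelFusion(64)
print(fusion(*feats).shape)  # torch.Size([2, 64, 32, 32])
```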

12 pages, 600 KiB  
Article
Expanded Performance Comparison of the Oncuria 10-Plex Bladder Cancer Urine Assay Using Three Different Luminex xMAP Instruments
by Sunao Tanaka, Takuto Shimizu, Ian Pagano, Wayne Hogrefe, Sherry Dunbar, Charles J. Rosser and Hideki Furuya
Diagnostics 2025, 15(14), 1749; https://doi.org/10.3390/diagnostics15141749 - 10 Jul 2025
Viewed by 381
Abstract
Background/Objectives: The clinically validated multiplex Oncuria bladder cancer (BC) assay quickly and noninvasively identifies disease risk and tracks treatment success by simultaneously profiling 10 protein biomarkers in voided urine samples. Oncuria uses paramagnetic bead-based fluorescence multiplex technology (xMAP®; Luminex, Austin, TX, USA) to simultaneously measure 10 protein analytes in urine [angiogenin, apolipoprotein E, carbonic anhydrase IX (CA9), interleukin-8, matrix metalloproteinase-9 and -10, alpha-1 anti-trypsin, plasminogen activator inhibitor-1, syndecan-1, and vascular endothelial growth factor]. Methods: In a pilot study (N = 36 subjects; 18 with BC), Oncuria performed essentially identically across three different common analyzers (the laser/flow-based FlexMap 3D and 200 systems, and the LED/image-based MagPix system; Luminex). The current study compared Oncuria performance across instrumentation platforms using a larger study population (N = 181 subjects; 51 with BC). Results: All three analyzers assessed all 10 analytes in identical samples with excellent concordance. The percent coefficient of variation (%CV) in protein concentrations across systems was ≤2.3% for 9/10 analytes, with only CA9 having %CVs > 2.3%. In pairwise correlation plot comparisons between instruments for all 10 biomarkers, R2 values were 0.999 for 15/30 comparisons and R2 ≥ 0.995 for 27/30 comparisons; CA9 showed the greatest variability (R2 = 0.948–0.970). Standard curve slopes were statistically indistinguishable for all 10 biomarkers across analyzers. Conclusions: The Oncuria BC assay generates comprehensive urinary protein signatures useful for assisting BC diagnosis, predicting treatment response, and tracking disease progression and recurrence. The equivalent performance of the multiplex BC assay using three popular analyzers rationalizes test adoption by CLIA (Clinical Laboratory Improvement Amendments) clinical and research laboratories. Full article
(This article belongs to the Special Issue Diagnostic Markers of Genitourinary Tumors)
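
The headline concordance statistic here is the percent coefficient of variation of each analyte's concentration across the three analyzers. A minimal NumPy sketch with made-up concentrations shows how a per-sample %CV of the kind reported (≤2.3%) is computed:

```python
import numpy as np

# Hypothetical concentrations (pg/mL) of one analyte in the same urine
# samples, measured on three Luminex platforms (columns: FlexMap 3D,
# Luminex 200, MagPix; rows: samples).
conc = np.array([
    [101.2, 100.8, 102.0],
    [ 55.1,  54.8,  55.6],
    [210.4, 209.0, 212.3],
])

# Percent CV across instruments for each sample, then summarized.
cv_percent = conc.std(axis=1, ddof=1) / conc.mean(axis=1) * 100
print(np.round(cv_percent, 2))      # per-sample %CV
print(round(cv_percent.mean(), 2))  # mean %CV across samples
```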

17 pages, 9448 KiB  
Article
Plant Height and Soil Compaction in Coffee Crops Based on LiDAR and RGB Sensors Carried by Remotely Piloted Aircraft
by Nicole Lopes Bento, Gabriel Araújo e Silva Ferraz, Lucas Santos Santana, Rafael de Oliveira Faria, Giuseppe Rossi and Gianluca Bambi
Remote Sens. 2025, 17(8), 1445; https://doi.org/10.3390/rs17081445 - 17 Apr 2025
Viewed by 708
Abstract
The use of Remotely Piloted Aircraft (RPA) as sensor-carrying airborne platforms for indirect measurement of plant physical parameters has been discussed in the scientific community. The utilization of RGB sensors with photogrammetric data processing based on Structure-from-Motion (SfM) and of Light Detection and Ranging (LiDAR) sensors for point cloud construction is applicable in this context and can yield high-quality results. In this sense, this study aimed to compare coffee plant height data obtained from RGB/SfM and LiDAR point clouds and to estimate soil compaction through penetration resistance in a coffee plantation located in Minas Gerais, Brazil. A Matrice 300 RTK RPA equipped with a Zenmuse L1 sensor was used, with RGB data processed in PIX4D software (version 4.5.6) and LiDAR data in DJI Terra software (version V4.4.6). Canopy Height Model (CHM) analysis and cross-sectional profile, together with correlation and statistical difference studies between the height data from the two sensors, were conducted to evaluate the RGB sensor’s capability to estimate coffee plant height compared to LiDAR data considered as reference. Based on the height data obtained by the two sensors, soil compaction in the coffee plantation was estimated through soil penetration resistance. The results demonstrated that both sensors provided dense point clouds from which plant height (R2 = 0.72, R = 0.85, and RMSE = 0.44) and soil penetration resistance (R2 = 0.87, R = 0.8346, and RMSE = 0.14 m) were accurately estimated, with no statistically significant differences determined between the analyzed sensor data. It is concluded, therefore, that remote sensing technologies can be employed for accurate estimation of coffee plantation heights and soil compaction, emphasizing a potential pathway for reducing laborious manual field measurements. Full article
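
The plant-height workflow rests on a Canopy Height Model, i.e. the per-pixel difference between a digital surface model and a digital terrain model, validated by RMSE against field measurements. A toy NumPy sketch, with small arrays standing in for the PIX4D/DJI Terra raster exports:

```python
import numpy as np

# Hypothetical rasters: digital surface model (canopy top) and digital
# terrain model (ground), both in meters above sea level.
dsm = np.array([[103.2, 104.1], [103.8, 102.9]])
dtm = np.array([[100.9, 101.0], [101.1, 100.8]])

chm = dsm - dtm  # Canopy Height Model: plant height in meters

# RMSE of estimated heights against field-measured heights (placeholders).
measured = np.array([2.1, 3.0, 2.5, 2.3])
estimated = chm.ravel()
rmse = np.sqrt(np.mean((estimated - measured) ** 2))
print(chm, round(rmse, 2))
```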

19 pages, 3066 KiB  
Article
WGA-SWIN: Efficient Multi-View 3D Object Reconstruction Using Window Grouping Attention in Swin Transformer
by Sheikh Sohan Mamun, Shengbing Ren, MD Youshuf Khan Rakib and Galana Fekadu Asafa
Electronics 2025, 14(8), 1619; https://doi.org/10.3390/electronics14081619 - 17 Apr 2025
Viewed by 1029
Abstract
Multi-view 3D reconstruction aims to recover 3D structure from visual information captured across multiple viewpoints. Transformer networks have shown remarkable success in various computer vision tasks, including multi-view 3D reconstruction. However, reconstructing accurate 3D shapes requires efficiently extracting and merging features across views, and existing frameworks struggle to capture the subtle relationships between views, resulting in poor reconstructions. To address this issue, we present a new framework, WGA-SWIN, for 3D reconstruction from multi-view images. Our method introduces a Window Grouping Attention (WGA) mechanism that groups tokens from different views within each window attention operation, enabling efficient inter-view and intra-view feature extraction. Diversity among the groups enriches feature learning, yielding more comprehensive and robust representations. We integrated WGA into the encoder’s Swin Transformer blocks to exploit both their hierarchical design and shifted-window attention for efficient multi-view feature extraction. In addition, we developed a progressive hierarchical decoder that combines Swin Transformer blocks with 3D convolutions over a voxel representation, yielding high-resolution reconstructions with fine structural details. Experimental results on the benchmark datasets ShapeNet and Pix3D demonstrate that our work achieves state-of-the-art (SOTA) performance, outperforming existing methods in both single-view and multi-view 3D reconstruction. We lead by 0.95% and 1.07% in IoU and F-score, respectively, demonstrating the robustness of our method. Full article
(This article belongs to the Special Issue 3D Computer Vision and 3D Reconstruction)
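
A speculative toy sketch of the grouping idea: tokens from the V views are interleaved so that each window-attention group mixes tokens from every view. The group size, shapes, and the use of a plain `nn.MultiheadAttention` are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

V, N, C, G = 4, 16, 32, 4          # views, tokens per view, channels, group size
tokens = torch.randn(V, N, C)

# Group g gathers token g, g+G, g+2G, ... from every view, so each group
# contains V*G tokens drawn from all views (inter-view mixing).
grouped = tokens.view(V, N // G, G, C).permute(1, 0, 2, 3)  # (N/G, V, G, C)
grouped = grouped.reshape(N // G, V * G, C)                 # one group per row

attn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)
out, _ = attn(grouped, grouped, grouped)  # attention within each group
print(out.shape)  # torch.Size([4, 16, 32])
```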

22 pages, 26135 KiB  
Article
New Approach for Mapping Land Cover from Archive Grayscale Satellite Imagery
by Mohamed Rabii Simou, Mohamed Maanan, Safia Loulad, Mehdi Maanan and Hassan Rhinane
Technologies 2025, 13(4), 158; https://doi.org/10.3390/technologies13040158 - 14 Apr 2025
Viewed by 645
Abstract
This paper examines the use of image-to-image translation models to colorize grayscale satellite images for improved built-up segmentation of Agadir, Morocco, in 1967 and Les Sables-d’Olonne, France, in 1975. The proposed method applies advanced colorization techniques to historical remote sensing data, enhancing the segmentation process compared to using the original grayscale images. In this study, spatial data such as Landsat 5TM satellite images and declassified satellite images were collected and prepared for analysis. The models were trained and validated using Landsat 5TM RGB images and their corresponding grayscale versions. Once trained, these models were applied to colorize the declassified grayscale satellite images. To train the segmentation models, colorized Landsat images were paired with built-up-area masks, allowing the models to learn the relationship between colorized features and built-up regions. The best-performing segmentation model was then used to segment the colorized declassified images into built-up areas. The results demonstrate that the Attention Pix2Pix model successfully learned to colorize grayscale satellite images accurately, achieving a PSNR of up to 27.72 and an SSIM of up to 0.96. Furthermore, the results of segmentation were highly satisfactory, with UNet++ identified as the best-performing model with an mIoU of 96.95% in Greater Agadir and 95.42% in Vendée. These findings indicate that the application of the developed method can achieve accurate and reliable results that can be utilized for future LULC change studies. The innovative approach of the study has significant implications for land planning and management, providing accurate LULC information to inform decisions related to zoning, environmental protection, and disaster management. Full article
(This article belongs to the Section Environmental Technology)
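
The colorization quality figures above are PSNR and SSIM scores. A short sketch of how such scores are computed with scikit-image, using placeholder tiles in place of the Landsat reference and the colorized output:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical 8-bit RGB tiles: reference Landsat RGB vs. a colorized result.
reference = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
perturbed = reference.astype(np.int16)
perturbed[::4, ::4] += 5                      # small difference for illustration
colorized = np.clip(perturbed, 0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(reference, colorized)
ssim = structural_similarity(reference, colorized, channel_axis=-1)
print(round(psnr, 2), round(ssim, 4))
```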

21 pages, 33600 KiB  
Article
Pix2Pix-Based Modelling of Urban Morphogenesis and Its Linkage to Local Climate Zones and Urban Heat Islands in Chinese Megacities
by Mo Wang, Ziheng Xiong, Jiayu Zhao, Shiqi Zhou and Qingchan Wang
Land 2025, 14(4), 755; https://doi.org/10.3390/land14040755 - 1 Apr 2025
Viewed by 750
Abstract
Accelerated urbanization in China poses significant challenges for developing urban planning strategies that are responsive to diverse climatic conditions. This demands a sophisticated understanding of the complex interactions between 3D urban forms and local climate dynamics. This study employed the Conditional Generative Adversarial Network (cGAN) of the Pix2Pix algorithm as a predictive model to simulate 3D urban morphologies aligned with Local Climate Zone (LCZ) classifications. The research framework comprises four key components: (1) acquisition of LCZ maps and urban form samples from selected Chinese megacities for training, utilizing datasets such as the World Cover database, RiverMap’s building outlines, and integrated satellite data from Landsat 8, Sentinel-1, and Sentinel-2; (2) evaluation of the Pix2Pix algorithm’s performance in simulating urban environments; (3) generation of 3D urban models to demonstrate the model’s capability for automated urban morphology construction, with specific potential for examining urban heat island effects; (4) examination of the model’s adaptability in projecting urban morphological transformations in urban planning contexts. By integrating urban morphological inputs from eight representative Chinese metropolises, the model’s efficacy was assessed both qualitatively and quantitatively, achieving an RMSE of 0.187, an R2 of 0.78, and a PSNR of 14.592. In a generalized test of urban morphology prediction through LCZ classification, exemplified by the case of Zhuhai, results indicated the model’s effectiveness in categorizing LCZ types. In conclusion, the results further confirm the model’s potential in climate-adaptive urban planning. The findings of this study underscore the potential of generative algorithms based on LCZ types in accurately forecasting urban morphological development, thereby making significant contributions to sustainable and climate-responsive urban planning. Full article
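
A hedged sketch of the conditioning step implied by this framework: a categorical LCZ map is typically expanded into one-hot channels before being fed to a Pix2Pix-style generator. The 17-class LCZ scheme is standard; the map and channel layout below are placeholders, not the paper's exact preprocessing.

```python
import numpy as np

NUM_LCZ = 17                               # standard LCZ scheme: 17 classes
lcz_map = np.random.randint(0, NUM_LCZ, (64, 64))   # placeholder class map

# One-hot encode: each LCZ class becomes its own binary channel.
one_hot = np.eye(NUM_LCZ, dtype=np.float32)[lcz_map]  # (64, 64, 17)
gen_input = one_hot.transpose(2, 0, 1)                # (17, 64, 64), CHW
print(gen_input.shape)
```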

21 pages, 6672 KiB  
Article
Influence of Ground Control Point Placement and Surrounding Environment on Unmanned Aerial Vehicle-Based Structure-from-Motion Forest Resource Estimation
by Shohei Kameyama
Drones 2025, 9(4), 258; https://doi.org/10.3390/drones9040258 - 28 Mar 2025
Viewed by 698
Abstract
Ground control points (GCPs) are used in forest surveys employing unmanned aerial vehicle (UAV)-based structure from motion (SfM). In that context, the influence of the surrounding environment on GCP placement requires further analysis. This study investigated the effects of GCP placement and the surrounding environment on the estimation of forest information by UAV-SfM. Forest resource estimation was performed using UAV (Inspire2) aerial images and SfM analysis (via Pix4Dmapper) under varying environmental conditions around GCPs within the same forest stand. The results indicated that GCP placement had no significant effect on SfM processing, tree top extraction (the number of extracted target trees was 151 or 150), or tree crown area estimation (RMSEs ranged from approximately 5 to 6.5 m2). However, when GCPs were placed in open areas, the tree height estimation accuracy improved, without significant differences between estimated and measured values (patterns A, B, D, and E had RMSEs of 1.60 to 3.09 m; patterns C and D had RMSEs of 5.69 to 7.92 m). These findings suggest that in UAV-SfM-based forest resource surveys, particularly for tree height estimation, both the number and placement of GCPs, as well as the surrounding environment, are crucial in enhancing estimation accuracy. Full article
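
The pattern-by-pattern accuracy comparison above reduces to an RMSE between measured and estimated tree heights for each GCP placement pattern. A toy sketch with invented heights for two hypothetical patterns:

```python
import numpy as np

# Hypothetical tree heights (m): field measurements vs. UAV-SfM estimates
# under two GCP placement patterns (values are placeholders).
measured = {"A": np.array([18.2, 20.1, 17.5]), "C": np.array([18.2, 20.1, 17.5])}
estimated = {"A": np.array([17.4, 21.0, 18.9]), "C": np.array([12.8, 14.6, 11.9])}

for pattern in measured:
    err = estimated[pattern] - measured[pattern]
    rmse = np.sqrt(np.mean(err ** 2))
    print(f"pattern {pattern}: RMSE = {rmse:.2f} m")
```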

28 pages, 41613 KiB  
Article
Acquisition and Modeling of Material Appearance Using a Portable, Low Cost, Device
by Davide Marelli, Simone Bianco and Gianluigi Ciocca
Sensors 2025, 25(4), 1143; https://doi.org/10.3390/s25041143 - 13 Feb 2025
Viewed by 939
Abstract
Material appearance acquisition allows researchers to capture the optical properties of surfaces and use them in different tasks such as material analysis, digital twins reproduction, 3D configurators, augmented and virtual reality, etc. Precise acquisition of such properties requires complex and expensive hardware. In this paper, we aim to answer the following research challenge: Can we design an accurate enough but low-cost and portable device for material appearance acquisition? We present the rationale behind the design of our device using consumer-grade hardware components. In total, our device costs EUR 80 and can acquire surface patches of size 5 × 5 cm with a 40 pix/mm resolution. Our device exploits a traditional RGB camera to capture a surface using 24 different images, each photographed under different lighting conditions. The different lighting conditions are generated by exploiting the LED rings included in our device; specifically, each of the 24 images is acquired by turning on one individual LED at a time. We also illustrate the custom processing pipelines developed to support capturing and generating the material data in terms of albedo, normal, and roughness maps. The accuracy of the acquisition process is comprehensively evaluated both quantitatively and qualitatively. Results show that our low-cost device can faithfully acquire different materials. The usefulness of our device is further demonstrated by a textile virtual catalog application that we designed for rendering virtual fabrics on a mobile device. Full article
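
Capturing 24 images, each lit by a single known LED, is the classic photometric-stereo setup, so a standard least-squares recovery of albedo and normals gives a feel for the processing, though the authors' custom pipeline surely differs in detail. Light directions and images below are random placeholders.

```python
import numpy as np

# Classic photometric stereo (Woodham): given K images of the same patch,
# each lit from one known direction, solve I = L @ g per pixel; the albedo
# is |g| and the normal is g / |g|. Generic sketch, not the paper's code.
K, H, W = 24, 8, 8
rng = np.random.default_rng(0)
L = rng.normal(size=(K, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)   # unit light directions
I = rng.random((K, H, W))                       # stacked grayscale images

pixels = I.reshape(K, -1)                       # (K, H*W)
g, *_ = np.linalg.lstsq(L, pixels, rcond=None)  # (3, H*W) per-pixel solve
albedo = np.linalg.norm(g, axis=0).reshape(H, W)
normals = (g / np.maximum(albedo.reshape(1, -1), 1e-8)).reshape(3, H, W)
print(albedo.shape, normals.shape)
```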

21 pages, 16950 KiB  
Article
Retrieval of Three-Dimensional Wave Surfaces from X-Band Marine Radar Images Utilizing Enhanced Pix2Pix Model
by Lingyi Hou, Xiao Wang, Bo Yang, Zhiyuan Wei, Yuwen Sun and Yuxiang Ma
J. Mar. Sci. Eng. 2024, 12(12), 2229; https://doi.org/10.3390/jmse12122229 - 5 Dec 2024
Cited by 1 | Viewed by 829
Abstract
In this study, we propose a novel method for retrieving the three-dimensional (3D) wave surface from sea clutter using both simulated and measured data. First, the linear wave superposition model and modulation principle are employed to generate simulated datasets comprising 3D wave surfaces and corresponding sea clutter. Subsequently, we develop a Pix2Pix model enhanced with a self-attention mechanism and a multiscale discriminator to effectively capture the nonlinear relationship between the simulated 3D wave surfaces and sea clutter. The model’s performance is evaluated through error analysis, comparisons of wave number spectra, and differences in wave surface reconstructions using a dedicated test set. Finally, the trained model is applied to reconstruct wave surfaces from sea clutter data collected aboard a ship, with results benchmarked against those derived from the Schrödinger equation. The findings demonstrate that the proposed model excels in preserving high-frequency image details while ensuring precise alignment between reconstructed images. Furthermore, it achieves superior retrieval accuracy compared to traditional approaches, highlighting its potential for advancing wave surface retrieval techniques. Full article
(This article belongs to the Section Physical Oceanography)
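
The abstract's "Pix2Pix model enhanced with a self-attention mechanism" can be illustrated with a SAGAN-style self-attention block, a common way to graft global attention onto a convolutional generator; the exact block used in the paper may differ.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over feature maps (illustrative)."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as identity

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.k(x).flatten(2)                   # (B, C//8, HW)
        v = self.v(x).flatten(2)                   # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)        # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                # residual connection

x = torch.randn(1, 64, 16, 16)
print(SelfAttention2d(64)(x).shape)  # torch.Size([1, 64, 16, 16])
```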

55 pages, 7917 KiB  
Systematic Review
Application of Building Information Modelling in Construction and Demolition Waste Management: Systematic Review and Future Trends Supported by a Conceptual Framework
by Eduardo José Melo Lins, Rachel Perez Palha, Maria do Carmo Martins Sobral, Adolpho Guido de Araújo and Érika Alves Tavares Marques
Sustainability 2024, 16(21), 9425; https://doi.org/10.3390/su16219425 - 30 Oct 2024
Cited by 2 | Viewed by 4034
Abstract
The architecture, engineering, construction, and operations industry faces an urgent need to enhance construction and demolition waste management in urban areas, driven by increasing demolition and construction activities and a desire to align with sustainable practices and the circular economy principles. To address this need, a systematic literature review on the building information modelling methodology was conducted, employing a structured protocol and specific tools for the analysis of academic studies, based on PRISMA guidelines and StArt software (version 3.4 BETA). Ninety relevant studies published between 1998 and 2024 were selected from the Web of Science, Scopus, and Engineering Village databases and analysed. Findings indicate that China leads in publications with 34%, followed by Brazil (8%) and the United Kingdom (7%). The analysis emphasises the use of drones and LiDAR scanners for precise spatial data, processed by 3D reconstruction tools like Pix4D and FARO As-Built. Revit excels in 3D modelling, providing a robust platform for visualisation and analysis. Visual programming tools such as Dynamo automate processes and optimise material reuse. The study presents a conceptual framework that integrates these technologies with the principles of the circular economy, clarifying the interactions and practical applications that promote the sustainable management of demolition waste from urban buildings and process efficiency. Although the approach promotes material reuse and sustainability, it still faces barriers such as the need for waste segregation at the source, the adaptation of innovative technologies, like the iPhone 15 Pro LiDAR and thermal cameras, as well as associated costs. These factors may limit its adoption in larger-scale projects, particularly due to the increased complexity of buildings. Full article
(This article belongs to the Section Sustainable Management)

12 pages, 2392 KiB  
Communication
Multi-Head Attention Refiner for Multi-View 3D Reconstruction
by Kyunghee Lee, Ihjoon Cho, Boseung Yang and Unsang Park
J. Imaging 2024, 10(11), 268; https://doi.org/10.3390/jimaging10110268 - 24 Oct 2024
Cited by 1 | Viewed by 11848
Abstract
Traditional 3D reconstruction models have consistently faced the challenge of achieving high recall of object edges while maintaining high precision. In this paper, we introduce a post-processing method, the Multi-Head Attention Refiner (MA-R), designed to address this issue by integrating a multi-head attention mechanism into the U-Net style refiner module. Our method demonstrates improved capability in capturing intricate image details, leading to significant enhancements in boundary predictions and recall rates. In our experiments, the proposed approach notably improves the reconstruction performance of Pix2Vox++ when multiple images are used as the input. Specifically, with 20-view images, our method achieves an IoU score of 0.730, a 1.1% improvement over the 0.719 of Pix2Vox++, and a 2.1% improvement in F-Score, achieving 0.483 compared to 0.462 of Pix2Vox++. These results underscore the robustness of our approach in enhancing both precision and recall in 3D reconstruction tasks involving multiple views. Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))
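
The reported IoU and F-score compare predicted and ground-truth occupancy. A minimal sketch of both metrics on voxel grids (random placeholders here); note that some 3D-reconstruction papers instead compute the F-score on sampled surface points.

```python
import numpy as np

# Placeholder occupancy grids standing in for predicted and ground-truth
# voxelizations of an object.
pred = np.random.rand(32, 32, 32) > 0.5
gt   = np.random.rand(32, 32, 32) > 0.5

inter = np.logical_and(pred, gt).sum()
union = np.logical_or(pred, gt).sum()
iou = inter / union                      # intersection over union

precision = inter / pred.sum()           # fraction of predicted voxels correct
recall = inter / gt.sum()                # fraction of true voxels recovered
f_score = 2 * precision * recall / (precision + recall)
print(round(iou, 3), round(f_score, 3))
```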

21 pages, 5961 KiB  
Article
Influence of Structure from Motion Algorithm Parameters on Metrics for Individual Tree Detection Accuracy and Precision
by Wade T. Tinkham and George A. Woolsey
Remote Sens. 2024, 16(20), 3844; https://doi.org/10.3390/rs16203844 - 16 Oct 2024
Viewed by 1238
Abstract
Uncrewed aerial system (UAS) structure from motion (SfM) monitoring strategies for individual trees have rapidly expanded in the early 21st century. It has become common for studies to report accuracies for individual tree heights and DBH, along with stand density metrics. This study evaluates individual tree detection and stand basal area accuracy and precision in five ponderosa pine sites against the range of SfM parameters in the Agisoft Metashape, Pix4DMapper, and OpenDroneMap algorithms. The study is designed to frame UAS-SfM individual tree monitoring accuracy in the context of data processing and storage demands as a function of SfM algorithm parameter levels. Results show that when SfM algorithms are properly tuned, differences between software types are negligible, with Metashape providing a median F-score improvement of 0.02 over OpenDroneMap and 0.06 over PIX4DMapper. However, tree extraction performance varied greatly across algorithm parameters, with the greatest extraction rates typically coming from parameters causing increased density in dense point clouds and minimal point cloud filtering. Transferring UAS-SfM forest monitoring into management will require tradeoffs between accuracy and efficiency. Our analysis shows that a one-step reduction in dense point cloud quality saves 77–86% in point cloud processing time without decreasing tree extraction (F-score) or basal area precision using Metashape and PIX4DMapper, but the same parameter change for OpenDroneMap caused a ~5% loss in precision. Providing reproducible processing strategies is a vital step in successfully transferring these technologies into usage as management tools. Full article
(This article belongs to the Topic Individual Tree Detection (ITD) and Its Applications)
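
The F-scores discussed above come from matching detected tree tops to a field stem map. A hedged sketch: a detection counts as a true positive when it falls within a matching radius of an unmatched reference stem. The 2 m radius and coordinates are illustrative, and the greedy matching is a simplification of what detection studies typically do.

```python
import numpy as np
from scipy.spatial import cKDTree

field = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 2.0]])       # stem map (m)
detected = np.array([[0.6, -0.4], [5.2, 4.7], [20.0, 20.0]])  # UAS-SfM tops

tree = cKDTree(field)
dist, idx = tree.query(detected)   # nearest reference stem per detection
matched, tp = set(), 0
for d, i in zip(dist, idx):
    if d <= 2.0 and i not in matched:   # within radius, stem not yet claimed
        tp += 1
        matched.add(i)

precision, recall = tp / len(detected), tp / len(field)
print(2 * precision * recall / (precision + recall))  # F-score
```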

13 pages, 5322 KiB  
Article
Improvement in the Number of Velocity Vector Acquisitions Using an In-Picture Tracking Method for 3D3C Rainbow Particle Tracking Velocimetry
by Mao Takeyama, Kota Fujiwara and Yasuo Hattori
Fluids 2024, 9(10), 226; https://doi.org/10.3390/fluids9100226 - 30 Sep 2024
Cited by 1 | Viewed by 1082
Abstract
Particle image velocimetry and particle tracking velocimetry (PTV) have developed from two-dimensional two-component (2D2C) velocity vector measurements to 3D3C measurements. Rainbow particle tracking velocimetry is a low-cost 3D3C measurement technique adopting a single color camera. However, its vector acquisition rate remains relatively low. To increase the number of acquired vectors, this paper proposes a tracking method that achieves a high tracking probability over long durations. First, particles are tracked in the raw picture instead of in three-dimensional space. The tracking is aided by the color information. Second, a particle that temporarily cannot be tracked due to particle overlap is compensated for using the positional information at times before and after. The proposed method is demonstrated for flow under a rotating disk with different particle densities and velocities. The use of the proposed method improves the tracking rate, number of continuous tracking steps, and number of acquired velocity vectors. The method can be applied under the difficult conditions of high particle density (0.004 particles per pixel) and large particle movement (maximum of 60 pix). Full article
(This article belongs to the Special Issue Flow Visualization: Experiments and Techniques)
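
A toy sketch of color-aided in-picture linking: each particle in one frame is linked to the nearest candidate in the next frame whose hue is similar (in rainbow PTV, hue encodes depth). Thresholds and coordinates are invented; the final line checks the quoted 0.004 particles-per-pixel density against a hypothetical 1024 × 1024 frame.

```python
import numpy as np

# Columns: x (pix), y (pix), hue in [0, 1]. Two frames of a toy sequence.
prev = np.array([[10.0, 12.0, 0.30], [40.0, 8.0, 0.62]])
curr = np.array([[41.5, 9.1, 0.60], [11.2, 13.4, 0.31]])

MAX_DISP, MAX_DHUE = 5.0, 0.05     # displacement and hue gates (assumed)
for px, py, ph in prev:
    d = np.hypot(curr[:, 0] - px, curr[:, 1] - py)
    ok = (d <= MAX_DISP) & (np.abs(curr[:, 2] - ph) <= MAX_DHUE)
    j = np.argmin(np.where(ok, d, np.inf))  # nearest valid candidate
    if ok[j]:
        print(f"({px}, {py}) -> ({curr[j, 0]}, {curr[j, 1]})")

# Seeding density: ~0.004 particles per pixel for a 1024x1024 image.
print(4200 / (1024 * 1024))
```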

14 pages, 6582 KiB  
Article
Multi-Temporal Snow-Covered Remote Sensing Image Matching via Image Transformation and Multi-Level Feature Extraction
by Zhitao Fu, Jian Zhang and Bo-Hui Tang
Optics 2024, 5(4), 392-405; https://doi.org/10.3390/opt5040029 - 29 Sep 2024
Cited by 1 | Viewed by 1546
Abstract
To address the challenge of image matching posed by significant modal differences in remote sensing images influenced by snow cover, this paper proposes an innovative image transformation-based matching method. Initially, the Pix2Pix-GAN conversion network is employed to transform remote sensing images with snow cover into images without snow cover, reducing the feature disparity between the images. This conversion facilitates the extraction of more discernible features for matching by transforming the problem from snow-covered to snow-free images. Subsequently, a multi-level feature extraction network is utilized to extract multi-level feature descriptors from the transformed images. Keypoints are derived from these descriptors, enabling effective feature matching. Finally, the matching results are mapped back onto the original snow-covered remote sensing images. The proposed method was compared to well-established techniques such as SIFT, RIFT2, R2D2, and ReDFeat and demonstrated outstanding performance. In terms of NCM, MP, Rep, Recall, and F1-measure, our method outperformed the state of the art by 177, 0.29, 0.22, 0.21, and 0.25, respectively. In addition, the algorithm shows robustness over a range of image rotation angles from −40° to 40°. This innovative approach offers a new perspective on the task of matching multi-temporal snow-covered remote sensing images. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
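
A sketch of the matching stage after the snow-to-snow-free translation: standard feature detection and ratio-test matching on the translated pair, with keypoint coordinates carried back to the originals unchanged, since the translation preserves the pixel grid. File names are placeholders, and off-the-shelf SIFT stands in for the paper's multi-level learned descriptors.

```python
import cv2
import numpy as np

# Snow-free images produced by the (already trained) Pix2Pix-GAN stage.
img1 = cv2.imread("translated_ref.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("translated_query.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

# Same pixel grid before and after translation, so these coordinates map
# one-to-one onto the original snow-covered images.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
print(len(good), "putative correspondences")
```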

16 pages, 6907 KiB  
Article
Unoccupied-Aerial-Systems-Based Biophysical Analysis of Montmorency Cherry Orchards: A Comparative Study
by Grayson R. Morgan and Lane Stevenson
Drones 2024, 8(9), 494; https://doi.org/10.3390/drones8090494 - 18 Sep 2024
Cited by 1 | Viewed by 1473
Abstract
With the global population on the rise and arable land diminishing, the need for sustainable and precision agriculture has become increasingly important. This study explores the application of unoccupied aerial systems (UAS) in precision agriculture, specifically focusing on Montmorency cherry orchards in Payson, Utah. Despite the widespread use of UAS for various crops, there is a notable gap in research concerning cherry orchards, which present unique challenges due to their physical structure. UAS data were gathered using an RTK-enabled DJI Mavic 3M, equipped with both RGB and multispectral cameras, to capture high-resolution imagery. This research investigates two primary applications of UAS in cherry orchards: tree height mapping and crop health assessment. We also evaluate the accuracy of tree height measurements derived from three UAS data processing software packages: Pix4D, Drone2Map, and DroneDeploy. Our results indicated that DroneDeploy provided the closest relationship to ground truth data with an R2 of 0.61 and an RMSE of 31.83 cm, while Pix4D showed the lowest accuracy. Furthermore, we examined the efficacy of RGB-based vegetation indices in predicting leaf area index (LAI), a key indicator of crop health, in the absence of more expensive multispectral sensors. Twelve RGB-based indices were tested for their correlation with LAI, with the IKAW index showing the strongest correlation (R = 0.36). However, the overall explanatory power of these indices was limited, with an R2 of 0.135 in the best-fitting model. Despite the promising results for tree height estimation, the correlation between RGB-based indices and LAI was underwhelming, suggesting the need for further research. Full article
(This article belongs to the Special Issue Recent Advances in Crop Protection Using UAV and UGV)
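
The best-correlating index reported, IKAW (the Kawashima index), is computed from the red and blue bands as (R − B)/(R + B). A toy sketch of the index and its Pearson correlation with LAI, with placeholder per-crown band means:

```python
import numpy as np

R = np.random.rand(100) * 255    # mean red per tree crown (placeholder)
B = np.random.rand(100) * 255    # mean blue per tree crown (placeholder)
lai = np.random.rand(100) * 4    # field-measured leaf area index (placeholder)

ikaw = (R - B) / (R + B + 1e-9)  # IKAW = (R - B) / (R + B)
r = np.corrcoef(ikaw, lai)[0, 1]  # Pearson R, cf. the reported R = 0.36
print(round(r, 3), round(r**2, 3))
```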