Search Results (8)

Search Parameters:
Keywords = voxel-based cropping

21 pages, 2346 KB  
Article
Explainable Liver Segmentation and Volume Assessment Using Parallel Cropping
by Nitin Satpute, Nikhil B. Gaikwad, Smith K. Khare, Juan Gómez-Luna and Joaquín Olivares
Appl. Sci. 2025, 15(14), 7807; https://doi.org/10.3390/app15147807 - 11 Jul 2025
Viewed by 736
Abstract
Accurate liver segmentation and volume estimation from CT images are critical for diagnosis, surgical planning, and treatment monitoring. This paper proposes a GPU-accelerated voxel-level cropping method that localizes the liver region in a single pass, significantly reducing unnecessary computation and memory transfers. We integrate this pre-processing step into two segmentation pipelines: a traditional Chan-Vese model and a deep learning U-Net trained on the LiTS dataset. After segmentation, a seeded region growing algorithm is used for 3D liver volume assessment. Our method reduces unnecessary image data by an average of 90%, speeds up segmentation by 1.39× for Chan-Vese, and improves Dice scores from 0.938 to 0.960. When integrated into the U-Net pipeline, the post-processed Dice score rises sharply from 0.521 to 0.956. Additionally, the voxel-based cropping approach achieves a 2.29× acceleration over state-of-the-art slice-based methods in 3D volume assessment. Our results demonstrate high segmentation accuracy and precise volume estimates, with errors below 2.5%. The proposal offers a scalable, interpretable, and efficient solution for liver segmentation and volume assessment. It eliminates unwanted artifacts and facilitates real-time deployment in clinical environments where transparency and resource constraints are critical. The method is also tested on other anatomical structures, such as skin, lungs, and vessels, enabling broader applicability in medical imaging.
(This article belongs to the Special Issue Image Processing and Computer Vision Applications)
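
The paper's GPU implementation is not reproduced here, but the essence of voxel-level cropping, discarding everything outside the tight bounding box of candidate liver voxels before segmentation, can be sketched in a few lines of NumPy. This is a minimal single-threaded sketch; the intensity window, array sizes, and the function name voxel_crop are illustrative assumptions rather than values from the paper:

    import numpy as np

    def voxel_crop(volume, lo, hi):
        # Single pass: flag voxels inside the intensity window, then crop the
        # volume to their tight bounding box so later stages touch less data.
        mask = (volume >= lo) & (volume <= hi)
        if not mask.any():
            return volume, (0, 0, 0)
        zs, ys, xs = np.nonzero(mask)
        z0, z1 = zs.min(), zs.max() + 1
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        return volume[z0:z1, y0:y1, x0:x1], (z0, y0, x0)

    # Toy volume with a bright blob standing in for liver-range intensities.
    vol = np.zeros((64, 64, 64), dtype=np.float32)
    vol[10:20, 12:22, 14:24] = 120.0
    cropped, offset = voxel_crop(vol, 50.0, 200.0)
    print(cropped.shape, offset)  # (10, 10, 10) (10, 12, 14)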

21 pages, 5405 KB  
Article
Optimization of the Canopy Three-Dimensional Reconstruction Method for Intercropped Soybeans and Early Yield Prediction
by Xiuni Li, Menggen Chen, Shuyuan He, Xiangyao Xu, Panxia Shao, Yahan Su, Lingxiao He, Jia Qiao, Mei Xu, Yao Zhao, Wenyu Yang, Wouter H. Maes and Weiguo Liu
Agriculture 2025, 15(7), 729; https://doi.org/10.3390/agriculture15070729 - 28 Mar 2025
Cited by 1 | Viewed by 625
Abstract
Intercropping is a key cultivation strategy for safeguarding national food and oil security. Accurate early-stage yield prediction of intercropped soybeans is essential for the rapid screening and breeding of high-yield soybean varieties. Three-dimensional reconstruction is a widely used technique for crop yield estimation, and the accuracy of the reconstruction model directly affects the reliability of yield predictions. This study focuses on optimizing the 3D reconstruction process for intercropped soybeans to efficiently extract canopy structural parameters throughout the entire growth cycle, thereby enhancing the accuracy of early yield prediction. To achieve this, we optimized image acquisition protocols by testing four imaging angles (15°, 30°, 45°, and 60°), four plant rotation speeds (0.8 rpm, 1.0 rpm, 1.2 rpm, and 1.4 rpm), and four image acquisition counts (24, 36, 48, and 72 images). Point cloud preprocessing was refined through the application of secondary transformation matrices, color thresholding, statistical filtering, and scaling. Key algorithms, including the convex hull algorithm, the voxel method, and the 3D α-shape algorithm, were optimized using MATLAB, enabling the extraction of multi-dimensional canopy parameters. Subsequently, a stepwise regression model was developed to achieve precise early-stage yield prediction for soybeans. The study identified optimal image acquisition settings: a 30° imaging angle, a plant rotation speed of 1.2 rpm, and the collection of 36 images during the vegetative stage and 48 images during the reproductive stage. With these improvements, a high-precision 3D canopy point-cloud model of soybeans covering the entire growth period was constructed. The optimized pipeline enabled batch extraction of 23 canopy structural parameters, achieving high accuracy, with linear fitting R² values of 0.990 for plant height and 0.950 for plant width. Furthermore, the voxel volume-based prediction approach yielded a maximum yield prediction accuracy of R² = 0.788. This study presents an integrated 3D reconstruction framework, spanning image acquisition, point cloud generation, and structural parameter extraction, enabling early and precise yield prediction for intercropped soybeans. The proposed method offers an efficient and reliable technical reference for acquiring 3D structural information of soybeans in strip intercropping systems and contributes to the accurate identification of soybean germplasm resources, providing substantial theoretical and practical value.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
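
As a rough illustration of the voxel method used here for the yield-predicting canopy volume, the sketch below counts occupied voxels in a point cloud and multiplies by the volume of one voxel. The 0.01 m voxel size and the random stand-in cloud are assumptions, not the authors' settings:

    import numpy as np

    def voxel_volume(points, voxel=0.01):
        # Snap each point to a voxel index, count distinct occupied voxels,
        # and multiply by the volume of a single voxel.
        idx = np.floor(points / voxel).astype(np.int64)
        occupied = len(np.unique(idx, axis=0))
        return occupied * voxel ** 3

    rng = np.random.default_rng(0)
    cloud = rng.uniform(0.0, 0.5, size=(5000, 3))  # stand-in canopy point cloud
    print(f"approx. canopy volume: {voxel_volume(cloud):.4f} m^3")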

19 pages, 5808 KB  
Article
A Corn Point Cloud Stem-Leaf Segmentation Method Based on Octree Voxelization and Region Growing
by Qinzhe Zhu and Ming Yu
Agronomy 2025, 15(3), 740; https://doi.org/10.3390/agronomy15030740 - 19 Mar 2025
Cited by 1 | Viewed by 1398
Abstract
Plant phenotyping is crucial for advancing precision agriculture and modern breeding, and 3D point cloud segmentation of plant organs is essential for phenotypic parameter extraction. Although existing approaches maintain segmentation precision, they struggle to efficiently process complex geometric configurations and large-scale point cloud datasets, significantly increasing computational costs. Furthermore, their heavy reliance on high-quality annotated data restricts their use in high-throughput settings. To address these limitations, we propose a novel multi-stage region-growing algorithm based on an octree structure for efficient stem-leaf segmentation in maize point cloud data. The method first extracts key geometric features through octree voxelization, significantly improving segmentation efficiency. In the region-growing phase, a preliminary structural segmentation strategy using fitted cylinder parameters is applied, followed by a refinement strategy that improves segmentation accuracy in complex regions. Finally, stem segmentation consistency is enhanced through central axis fitting and distance-based filtering. We evaluate the method on the Pheno4D dataset, which comprises three-dimensional point clouds of maize plants at different growth stages collected in greenhouse environments. Experimental results show that the proposed algorithm achieves an average precision of 98.15% and an IoU of 84.81% on Pheno4D, demonstrating strong robustness across growth stages. Segmentation time per instance is reduced to 4.8 s, more than a fourfold improvement over PointNet, while maintaining high accuracy. Additionally, validation experiments on tomato point cloud data confirm the method's strong generalization capability. By improving efficiency and accuracy while reducing dependency on high-quality annotated data, the algorithm addresses the shortcomings of traditional methods in complex agricultural environments and lays a solid technical foundation for high-throughput crop management and precision breeding.
(This article belongs to the Section Precision and Digital Agriculture)
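
A minimal sketch of the octree voxelization idea, recursive subdivision of a cubic node into eight children with empty nodes pruned, is shown below; the paper's subsequent cylinder-fitting and region-growing stages are omitted, and the fixed depth of 3 is an illustrative assumption:

    import numpy as np

    def octree_leaves(points, origin, size, depth):
        # Recursively split a cubic node into eight octants, pruning empty
        # ones; occupied leaves at the target depth act as the voxels.
        if len(points) == 0:
            return []
        if depth == 0:
            return [(origin, size)]
        half = size / 2.0
        leaves = []
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    corner = origin + half * np.array([dx, dy, dz])
                    inside = np.all((points >= corner) & (points < corner + half), axis=1)
                    leaves += octree_leaves(points[inside], corner, half, depth - 1)
        return leaves

    rng = np.random.default_rng(1)
    pts = rng.uniform(0.0, 1.0, size=(2000, 3))  # stand-in plant point cloud
    print(len(octree_leaves(pts, np.zeros(3), 1.0, 3)), "occupied leaf voxels")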

19 pages, 3196 KB  
Article
Autonomous Tumor Signature Extraction Applied to Spatially Registered Bi-Parametric MRI to Predict Prostate Tumor Aggressiveness: A Pilot Study
by Rulon Mayer, Baris Turkbey and Charles B. Simone
Cancers 2024, 16(10), 1822; https://doi.org/10.3390/cancers16101822 - 10 May 2024
Cited by 1 | Viewed by 1437
Abstract
Background: Accurate, reliable, non-invasive assessment of patients diagnosed with prostate cancer is essential for proper disease management. Quantitative assessment of multi-parametric MRI, such as through artificial intelligence or spectral/statistical approaches, can provide a non-invasive, objective determination of prostate tumor aggressiveness without side effects, the potentially poor sampling of needle biopsy, or the overdiagnosis associated with prostate-specific antigen measurements. To simplify and expedite prostate tumor evaluation, this study examined the efficacy of autonomously extracting tumor spectral signatures for spectral/statistical algorithms applied to spatially registered bi-parametric MRI. Methods: Spatially registered hypercubes were digitally constructed by resizing, translating, and cropping the image sequences (Apparent Diffusion Coefficient (ADC), High B-value, T2) from 42 consecutive patients in the bi-parametric MRI PI-CAI dataset. Candidate prostate cancer blobs were identified by applying a threshold to a normalized image that maximizes the High B-value signal while minimizing the ADC and T2 signals, so that tumors appear "green" in the color composite. Clinically significant blobs were selected based on size, average normalized green value, sliding-window statistics within a blob, and position within the hypercube. The center of mass and maximized sliding-window statistics within the blobs identified voxels associated with tumor signatures. We used correlation coefficients (R) and p-values to evaluate linear regression fits of the z-score and signal-to-clutter ratio (SCR, with a processed covariance matrix) to tumor aggressiveness, as well as the Area Under the Curve (AUC) of Receiver Operating Characteristic (ROC) curves from logistic probability fits to clinically significant prostate cancer. Results: The highest R (R > 0.45), highest AUC (>0.90), and lowest p-values (<0.01) were achieved using the z-score and modified registration applied to the covariance matrix, with tumor signatures selected from the "greenest" parts of the selected blob. Conclusions: The first autonomous tumor signature extraction applied to spatially registered bi-parametric MRI shows promise for determining prostate tumor aggressiveness.
(This article belongs to the Section Methods and Technologies Development)
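
As a hedged illustration of spectral/statistical scoring on a registered hypercube, the sketch below computes a covariance-whitened matched score per voxel, one common z-score-style formulation; the three-band toy data and the signature vector are assumptions, not the paper's processed covariance matrix or its autonomously extracted signature:

    import numpy as np

    def matched_score_map(hypercube, signature):
        # Whiten the target direction by the scene covariance and project
        # every voxel spectrum onto it (a z-score-style detector).
        h, w, b = hypercube.shape
        X = hypercube.reshape(-1, b)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)  # regularised covariance
        w_vec = np.linalg.solve(cov, signature - mu)
        return ((X - mu) @ w_vec).reshape(h, w)

    rng = np.random.default_rng(2)
    cube = rng.normal(size=(64, 64, 3))        # stand-in (ADC, High B, T2) slice
    sig = np.array([-1.0, 2.0, -1.0])          # "green": High B up, ADC/T2 down
    print(matched_score_map(cube, sig).shape)  # (64, 64)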

21 pages, 5214 KB  
Article
PointNet++ and Three Layers of Features Fusion for Occlusion Three-Dimensional Ear Recognition Based on One Sample per Person
by Qinping Zhu and Zhichun Mu
Symmetry 2020, 12(1), 78; https://doi.org/10.3390/sym12010078 - 2 Jan 2020
Cited by 12 | Viewed by 4006
Abstract
The ear’s relatively stable structure makes it suitable for recognition. In common identification applications, only one sample per person (OSPP) is registered in a gallery; consequently, effectively training deep-learning-based ear recognition approaches is difficult. State-of-the-art (SOA) 3D ear recognition under OSPP conditions breaks down when large occluding objects are close to the ear. Hence, we propose a system that combines PointNet++ and three layers of features capable of extracting rich identification information from a 3D ear. Our goal is to correctly recognize a 3D ear affected by a large nearby occlusion using one sample per person registered in a gallery. The system comprises four primary components: (1) segmentation; (2) local and local joint structural (LJS) feature extraction; (3) holistic feature extraction; and (4) fusion. We use PointNet++ for ear segmentation. For local and LJS feature extraction, we propose an LJS feature descriptor, the pairwise surface patch cropped using a symmetrical hemisphere cut-structured histogram with an indexed shape (PSPHIS) descriptor. Furthermore, we propose a local and LJS matching engine based on the proposed LJS feature descriptor and the SOA surface patch histogram of indexed shape (SPHIS) local feature descriptor. For holistic feature extraction, we use a voxelization method for global matching. For the fusion component, we use a weighted fusion method to recognize the 3D ear. The experimental results demonstrate that the proposed system outperforms SOA normalization-free 3D ear recognition methods using OSPP when the ear surface is affected by a large nearby occlusion.
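
The weighted fusion step can be illustrated with a short sketch: per-matcher similarity scores (local, LJS, holistic) are combined into a single gallery score and the best match is selected. The weights and scores below are invented for illustration; the paper's actual weights are not given here:

    def weighted_fusion(scores, weights):
        # Combine per-matcher similarity scores (scaled to [0, 1]) into one.
        total = sum(weights)
        return sum(s * w for s, w in zip(scores, weights)) / total

    # Toy gallery: (local, LJS, holistic) similarities of a probe to three IDs.
    weights = [0.4, 0.3, 0.3]
    gallery = {
        "id_a": weighted_fusion([0.91, 0.80, 0.70], weights),
        "id_b": weighted_fusion([0.35, 0.42, 0.55], weights),
        "id_c": weighted_fusion([0.60, 0.58, 0.62], weights),
    }
    print(max(gallery, key=gallery.get))  # id_a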

15 pages, 13567 KB  
Article
Effect of Leaf Occlusion on Leaf Area Index Inversion of Maize Using UAV–LiDAR Data
by Lei Lei, Chunxia Qiu, Zhenhai Li, Dong Han, Liang Han, Yaohui Zhu, Jintao Wu, Bo Xu, Haikuan Feng, Hao Yang and Guijun Yang
Remote Sens. 2019, 11(9), 1067; https://doi.org/10.3390/rs11091067 - 6 May 2019
Cited by 45 | Viewed by 7080
Abstract
The leaf area index (LAI) is a key parameter for describing crop canopy structure and is of great importance for early nutrition diagnosis and breeding research. Light detection and ranging (LiDAR) is an active remote sensing technology that can detect the vertical distribution of a crop canopy. To quantitatively analyze the occlusion effect, three flights of multi-route, high-density LiDAR data were acquired at two time points, using an Unmanned Aerial Vehicle (UAV)-mounted RIEGL VUX-1 laser scanner at an altitude of 15 m, to evaluate the validity of LAI estimation in different layers under different planting densities. The results revealed that the normalized root-mean-square errors (NRMSE) for the upper, middle, and lower layers were 10.8%, 12.4%, and 42.8%, respectively, at a density of 27,495 plants/ha. Comparing route direction against ridge direction, we found that flying perpendicular to the maize planting ridges yielded better results than flying parallel to them. The voxel-based method was used to invert the LAI, and we concluded that the optimal voxel sizes were concentrated between 0.040 m and 0.055 m, approximately 1.7 to 2.3 times the average ground point distance. The analysis of the occlusion effect in different layers under different planting densities, the relationship between route and ridge directions, and the optimal voxel size provide guidelines for UAV–LiDAR application in crop canopy structure analysis.
(This article belongs to the Special Issue Trends in UAV Remote Sensing Applications)
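
A simplified sketch of the voxel-based layering is given below: LiDAR returns are voxelized, and the share of occupied voxels per height layer serves as a crude per-layer leaf-area proxy. Real LAI inversion relies on gap-fraction or contact-frequency models; the 0.05 m voxel, the three layers, and the uniform toy cloud are assumptions:

    import numpy as np

    def layer_occupancy(points, voxel=0.05, layers=3):
        # Voxelize the returns, then report the share of occupied voxels in
        # each height layer (bottom to top) as a crude leaf-area proxy.
        idx = np.floor(points / voxel).astype(np.int64)
        occ = np.unique(idx, axis=0)
        z = occ[:, 2]
        edges = np.linspace(z.min(), z.max() + 1, layers + 1)
        counts, _ = np.histogram(z, bins=edges)
        return counts / counts.sum()

    rng = np.random.default_rng(3)
    canopy = rng.uniform([0, 0, 0], [1, 1, 2], size=(20000, 3))  # stand-in returns
    print(layer_occupancy(canopy))  # lower / middle / upper shares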

17 pages, 9352 KB  
Article
Estimating Changes in Leaf Area, Leaf Area Density, and Vertical Leaf Area Profile for Mango, Avocado, and Macadamia Tree Crowns Using Terrestrial Laser Scanning
by Dan Wu, Stuart Phinn, Kasper Johansen, Andrew Robson, Jasmine Muir and Christopher Searle
Remote Sens. 2018, 10(11), 1750; https://doi.org/10.3390/rs10111750 - 6 Nov 2018
Cited by 37 | Viewed by 8134
Abstract
Vegetation metrics, such as leaf area (LA), leaf area density (LAD), and the vertical leaf area profile, are essential measures of tree-scale biophysical processes associated with photosynthetic capacity and canopy geometry. However, there are limited published investigations of their use for horticultural tree crops. This study evaluated the ability of light detection and ranging (LiDAR) to measure LA, LAD, and the vertical leaf area profile for two mango, two macadamia, and two avocado trees, using discrete return data from a RIEGL VZ-400 Terrestrial Laser Scanning (TLS) system. These data were collected multiple times for individual trees to align with key growth stages and essential management practices, and following a severe storm. The first return of each laser pulse was extracted for each individual tree and classified as foliage or wood based on TLS point cloud geometry. LAD within voxels of 25 cm side length, LA at the canopy level, and the vertical leaf area profile were calculated to analyse tree crown changes. These changes included: (1) pre-pruning vs. post-pruning for mango trees; (2) pre-pruning vs. post-pruning for macadamia trees; (3) pre-storm vs. post-storm for macadamia trees; and (4) leaf growth over a year for two young avocado trees. Pruning decreased the LA of the two mango tree crowns by 34.13 m² and 8.34 m², with pruning of the high-vigour mango tree mostly identified between 1.25 m and 3 m. Pruning decreased the LA of a healthy and an unhealthy macadamia tree by 38.03 m² and 16.91 m², respectively. After flowering and spring flush of the same macadamia trees, storm effects caused a 9.65 m² decrease in LA for the unhealthy tree, while the healthy tree gained 34.19 m². The height of the unhealthy macadamia tree increased from 11.13 m to 11.66 m, and its leaf loss was mainly observed between 1.5 m and 4.5 m. Annual increases in LA of 82.59 m² and 59.97 m² were observed for two three-year-old avocado trees. Our results show that TLS is a useful tool for quantifying changes in the LA, LAD, and vertical leaf area profiles of horticultural trees over time, which can serve as a general indicator of tree health and assist growers with improved pruning, irrigation, and fertilisation application decisions.
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
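
As a hedged sketch of a vertical leaf-area profile from foliage-classified TLS returns, the code below bins points into 25 cm height slices and converts counts to a crude LAD value. The per-point leaf area and the unit crown area are placeholder assumptions, not calibrated values from the study:

    import numpy as np

    def lad_profile(foliage_points, voxel=0.25, leaf_area_per_point=1e-4, crown_area=1.0):
        # Bin foliage returns into 25 cm height slices, convert point counts
        # to leaf area, and divide by slice volume for a crude LAD profile.
        z = foliage_points[:, 2]
        edges = np.arange(z.min(), z.max() + voxel, voxel)
        counts, _ = np.histogram(z, bins=edges)
        leaf_area = counts * leaf_area_per_point  # m^2 of leaf per slice
        return edges[:-1], leaf_area / (voxel * crown_area)

    rng = np.random.default_rng(4)
    pts = rng.uniform([0, 0, 1.0], [2, 2, 5.0], size=(50000, 3))  # stand-in crown
    heights, lad = lad_profile(pts)
    print(heights[:3], lad[:3])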

19 pages, 13071 KB  
Article
Designing and Testing a UAV Mapping System for Agricultural Field Surveying
by Martin Peter Christiansen, Morten Stigaard Laursen, Rasmus Nyholm Jørgensen, Søren Skovsen and René Gislum
Sensors 2017, 17(12), 2703; https://doi.org/10.3390/s17122703 - 23 Nov 2017
Cited by 154 | Viewed by 19760
Abstract
A Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV) can map the overflown environment as point clouds. Mapped canopy heights allow for the estimation of crop biomass in agriculture. The work presented in this paper contributes to sensory UAV setup design for the mapping and textural analysis of agricultural fields. LiDAR data are combined with data from Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU) sensors to map the environment as point clouds. The proposed method is demonstrated with LiDAR recordings from an experimental winter wheat field. Crop height estimates ranging from 0.35 m to 0.58 m correlate with the applied nitrogen treatments of 0–300 kg N ha⁻¹. The LiDAR point clouds are recorded, mapped, and analysed using the functionalities of the Robot Operating System (ROS) and the Point Cloud Library (PCL). Crop volume estimation is based on a voxel grid with a spatial resolution of 0.04 × 0.04 × 0.001 m. Two different flight patterns are evaluated at an altitude of 6 m to determine the impact of the mapped LiDAR measurements on crop volume estimation.
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications)
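
A simplified sketch of voxel-grid crop volume estimation follows: returns are rasterised into 0.04 m ground cells, the top return per cell gives a column height, and the column volumes are summed. This collapses the 0.001 m vertical voxels into a height integral and ignores the ROS/PCL tooling; the toy field and the zero ground level are assumptions:

    import numpy as np

    def crop_volume(points, ground_z=0.0, cell=0.04):
        # Rasterise returns into 0.04 m ground cells, keep the top return
        # per cell, and sum the resulting column volumes.
        keys = map(tuple, np.floor(points[:, :2] / cell).astype(np.int64))
        heights = {}
        for key, z in zip(keys, points[:, 2]):
            heights[key] = max(heights.get(key, ground_z), z)
        return sum((z - ground_z) * cell * cell for z in heights.values())

    rng = np.random.default_rng(5)
    field = rng.uniform([0, 0, 0.35], [1, 1, 0.58], size=(10000, 3))  # stand-in crop
    print(f"crop volume: {crop_volume(field):.3f} m^3")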
