Search Results (34)

Search Parameters:
Keywords = farthest point sampling

20 pages, 1652 KB  
Article
Classification of Point Cloud Data in Road Scenes Based on PointNet++
by Jingfeng Xue, Bin Zhao, Chunhong Zhao, Yueru Li and Yihao Cao
Sensors 2026, 26(1), 153; https://doi.org/10.3390/s26010153 - 25 Dec 2025
Viewed by 440
Abstract
Point cloud data, with its rich information and high-precision geometric detail, holds significant value for urban road infrastructure surveying and management. To overcome the limitations of manual classification, this study employs deep learning for automated point cloud feature extraction and classification, achieving high-precision object recognition in road scenes. By integrating the Princeton ModelNet40, ShapeNet, and Sydney Urban Objects datasets, we extracted 3D spatial coordinates from the Sydney Urban Objects Dataset and organized labeled point cloud files to build a comprehensive dataset reflecting real-world road scenarios. To address noise and occlusion-induced data gaps, three augmentation strategies were implemented: (1) farthest point sampling (FPS), which preserves critical features while mitigating overfitting; (2) random Z-axis rotation, translation, and scaling, which enhance model generalization; and (3) Gaussian noise injection, which improves the realism of training samples. The PointNet++ framework was enhanced by integrating a point-filling method into the preprocessing module, and model training and prediction were conducted with its Multi-Scale Grouping (MSG) and Single-Scale Grouping (SSG) schemes. The model achieved an average training accuracy of 86.26% (peak single-instance accuracy: 98.54%; best category accuracy: 93.15%) and a test set accuracy of 97.41% (category accuracy: 84.50%). This study demonstrates successful road-scene point cloud classification, providing a useful reference for point cloud data processing and related research.
(This article belongs to the Section Sensing and Imaging)
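Farthest point sampling (FPS) recurs throughout these results. For reference, here is a minimal NumPy sketch of the standard greedy algorithm; it is a generic illustration, not this paper's implementation.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Greedy FPS: repeatedly pick the point farthest from the chosen set.

    points: (N, 3) array of XYZ coordinates.
    Returns the indices of the n_samples selected points.
    """
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    selected[0] = 0                    # arbitrary seed; a random start also works
    min_dist = np.full(n, np.inf)      # squared distance to nearest selected point
    for i in range(1, n_samples):
        diff = points - points[selected[i - 1]]
        min_dist = np.minimum(min_dist, (diff * diff).sum(axis=1))
        selected[i] = int(np.argmax(min_dist))
    return selected

# Usage: keep 1024 well-spread points from a raw scan.
cloud = np.random.rand(10000, 3)
subset = cloud[farthest_point_sampling(cloud, 1024)]
```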

21 pages, 5525 KB  
Article
DUFA-Net: A Deep Learning-Based Method for Organ-Level Segmentation and Phenotype Extraction of Maize 3D Point Clouds
by Biqiang Ding, Yan Teng, Zhengwei Huang, Lei Wen, Chun Li and Ling Jiang
Agriculture 2025, 15(23), 2457; https://doi.org/10.3390/agriculture15232457 - 27 Nov 2025
Viewed by 451
Abstract
Accurate plant phenotyping is crucial for gaining a deeper understanding of plant growth patterns and improving yield. However, the segmentation and measurement of 3D phenotypic data in maize remain challenging due to factors such as complex canopy structure, occlusion, and uneven point distribution. To address this, we propose a deep learning network, DUFA-Net, based on dual uncertainty-driven feature aggregation. The method employs a dual uncertainty-driven farthest point sampling (DU-FPS) strategy to mitigate errors caused by uneven point cloud density. For local feature encoding, we designed a Dynamic Feature Aggregation (DFA) module that models neighborhood structures and captures fine-grained geometric features, thereby effectively handling complex canopy structures. Experiments on a self-constructed maize dataset demonstrate that DUFA-Net achieves 95.82% segmentation accuracy and a mean IoU of 92.52%. Based on the segmentation results, six key phenotypic features were accurately extracted, with high R2 values ranging from 0.92 to 0.99. Further evaluation on the Syau Single Maize dataset confirms the generalization capability of the proposed method, which achieves 92.52% accuracy and 91.23% mIoU, outperforming five state-of-the-art baselines, including PointNet++, PointMLP, and CurveNet. These results highlight the effectiveness and robustness of DUFA-Net for high-precision organ segmentation and phenotypic trait extraction in complex plant architectures.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
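The abstract does not spell out the DU-FPS formulation. Purely to illustrate the idea of biasing FPS against uneven density, a hypothetical weighted variant might scale the farthest-point score by a per-point weight derived from local sparsity (our assumption, not the paper's method):

```python
import numpy as np
from scipy.spatial import cKDTree

def weighted_fps(points, n_samples, k=16):
    """Hypothetical density-weighted FPS sketch (not the paper's DU-FPS).

    Points in sparse regions get larger weights, so uneven density
    distorts the greedy farthest-point selection less.
    """
    knn_dist, _ = cKDTree(points).query(points, k=k + 1)
    weight = knn_dist[:, -1]           # k-th neighbor distance: large where sparse
    n = points.shape[0]
    selected = [0]
    min_dist = np.full(n, np.inf)
    for _ in range(n_samples - 1):
        diff = points - points[selected[-1]]
        min_dist = np.minimum(min_dist, (diff * diff).sum(axis=1))
        selected.append(int(np.argmax(min_dist * weight)))
    return np.asarray(selected)
```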

15 pages, 4146 KB  
Article
A Coarse-to-Fine Framework with Curvature Feature Learning for Robust Point Cloud Registration in Spinal Surgical Navigation
by Lijing Zhang, Wei Wang, Tianbao Liu, Jiahui Guo, Bo Wu and Nan Zhang
Bioengineering 2025, 12(10), 1096; https://doi.org/10.3390/bioengineering12101096 - 12 Oct 2025
Viewed by 787
Abstract
In surgical navigation-assisted pedicle screw fixation, registration of cross-source pre- and intra-operative point clouds faces challenges such as large initial pose differences and low overlap ratios. Classical algorithms based on feature descriptors have high computational complexity and are not robust to noise, degrading accuracy and navigation performance. To address these problems, this paper proposes a coarse-to-fine registration framework. In the coarse registration stage, a point matching algorithm based on curvature feature learning (CFL-PM) is proposed; through CFL-PM and farthest point sampling (FPS), coarse registration of the overlapping regions between the two point clouds is achieved. In the fine registration stage, the Iterative Closest Point (ICP) algorithm is used for further optimization. The proposed method effectively addresses the challenges of noise, initial pose, and low overlap ratio. In noise-free point cloud registration experiments, the average rotation and translation errors reached 0.34° and 0.27 mm. Under noisy conditions, the average rotation error of the coarse registration is 7.28° and the average translation error is 9.08 mm. Experiments on pre- and intra-operative point cloud datasets demonstrate that the proposed algorithm outperforms the compared algorithms in registration accuracy, speed, and robustness, and can therefore achieve the precise alignment required for surgical navigation-assisted pedicle screw fixation.
(This article belongs to the Section Biosignal Processing)
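The fine stage relies on standard point-to-point ICP. A compact sketch of the common formulation (nearest-neighbor correspondences plus an SVD/Kabsch rigid fit), assuming the clouds already roughly overlap after coarse registration:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, tol=1e-6):
    """Vanilla point-to-point ICP; returns src aligned to dst."""
    tree = cKDTree(dst)
    cur, prev_err = src.copy(), np.inf
    for _ in range(iters):
        dist, idx = tree.query(cur)    # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        if abs(prev_err - dist.mean()) < tol:
            break
        prev_err = dist.mean()
    return cur
```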

24 pages, 7725 KB  
Article
Effects of Scale Parameters and Counting Origins on Box-Counting Fractal Dimension and Engineering Application in Concrete Beam Crack Analysis
by Junfeng Wang, Gan Yang, Yangguang Yuan, Jianpeng Sun and Guangning Pu
Fractal Fract. 2025, 9(8), 549; https://doi.org/10.3390/fractalfract9080549 - 21 Aug 2025
Cited by 2 | Viewed by 1081
Abstract
Fractal theory provides a powerful tool for quantifying complex geometric patterns such as concrete cracks. The box-counting method is widely employed for fractal dimension (FD) calculation due to its intuitive principles and compatibility with image data. However, two critical limitations persist in existing studies: (1) the selection of scale parameters (including the minimum measurement scale and cutoff scale) lacks systematization and exhibits significant arbitrariness; (2) insufficient attention to the sensitivity of counting origins compromises the stability and comparability of FDs, severely limiting reliable engineering application. To address these limitations, this study first employs classical fractal images and crack samples to systematically analyze the impact of four minimum measurement scales (2, 2, 3, 3) and three cutoff scale coefficients (cutoff-to-minimum image side ratios: 1, 1/2, 1/3) on computational accuracy. Subsequently, the farthest point sampling (FPS) method is adopted to select counting origins, comparing two optimization strategies: Count-FD-Mean (the mean of fits from multiple origins) and Count-Min-FD (a fit using the minimal box counts across scales). Finally, the optimized approach is validated through static loading tests on concrete beams. Key findings demonstrate that the optimal scale combination (minimum scale: 2; cutoff coefficient: 1) yields a mere 0.5% average error from theoretical FDs; the Count-Min-FD strategy delivers the highest stability and closest alignment with theoretical values; FDs of beam cracks increase continuously with loading, exhibiting an exponential correlation with midspan deflection that effectively captures crack evolution; uncalibrated scale parameters and counting strategies may induce errors of over 40% in inferred mechanical parameters; and results stabilize with 40–45 counting origins across the three tested fractal patterns. This work advances standardization in fractal analysis, enhances reliability in concrete crack assessment, and supports the practical application of fractal theory in structural health monitoring and damage evaluation.
(This article belongs to the Special Issue Fractal and Fractional in Construction Materials)
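For reference, box counting itself is short to implement. The sketch below uses a single fixed counting origin and a naive scale set, which are exactly the two choices whose sensitivity this paper quantifies:

```python
import numpy as np

def box_counting_fd(img: np.ndarray, scales=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the fractal dimension of a binary image by box counting.

    img: 2D boolean array, True on crack pixels. Uses one counting
    origin (top-left); varying the origin is the paper's subject.
    """
    counts = []
    for s in scales:
        h, w = img.shape
        t = img[: h - h % s, : w - w % s]            # trim to a multiple of s
        boxes = t.reshape(t.shape[0] // s, s, t.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())  # occupied s-by-s boxes
    # FD is the slope of log N(s) versus log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# A diagonal line has FD close to 1.
print(box_counting_fd(np.eye(256, dtype=bool)))
```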

20 pages, 2788 KB  
Article
Powerful Sample Reduction Techniques for Constructing Effective Point Cloud Object Classification Models
by Chih-Lung Lin, Hai-Wei Yang and Chi-Hung Chuang
Electronics 2025, 14(12), 2439; https://doi.org/10.3390/electronics14122439 - 16 Jun 2025
Cited by 1 | Viewed by 2596
Abstract
Due to the large volume of raw data in 3D point clouds, downsampling techniques are crucial for reducing computational load and memory usage when training 3D point cloud models. This study is conducted on the ModelNet40 dataset. Our proposed method is based on the PointNext architecture, an improved version of PointNet++ that significantly enhances performance through optimized training strategies and adjusted receptive fields. During model training, we employ farthest point sampling for downsampling together with an improved attention-based point cloud edge sampling (APES) method, in which we compute the density of each point and set the neighborhood size K so that feature points are effectively retained during downsampling. The improved method captures edge points more effectively than the original APES. With the adjusted architecture, our method combined with farthest point sampling not only reduced the average training time by nearly 15% compared to PointNext-s, but also improved accuracy from 93.11% to 93.57%.
(This article belongs to the Section Computer Science & Engineering)
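The abstract's "compute the density of each point and set the size of the neighbor K" step is not detailed. One plausible reading, shown only as an illustration, derives a density proxy from the k-th-neighbor distance and maps it to a per-point neighborhood size (the mapping direction is our assumption):

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_neighbor_k(points, k_ref=16, k_min=8, k_max=32):
    """Illustrative sketch: per-point neighborhood size from local density.

    Sparse regions get a larger K so neighborhoods stay populated;
    the paper's exact rule is not given in the abstract.
    """
    dist, _ = cKDTree(points).query(points, k=k_ref + 1)
    density = 1.0 / (dist[:, -1] ** 3 + 1e-12)                # ~ points per unit volume
    rank = density.argsort().argsort() / (len(points) - 1)    # 0 = sparsest point
    return np.round(k_max - (k_max - k_min) * rank).astype(int)
```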

35 pages, 24325 KB  
Article
Enhancing Digital Twin Fidelity Through Low-Discrepancy Sequence and Hilbert Curve-Driven Point Cloud Down-Sampling
by Yuening Ma, Liang Guo and Min Li
Sensors 2025, 25(12), 3656; https://doi.org/10.3390/s25123656 - 11 Jun 2025
Cited by 1 | Viewed by 1405
Abstract
This paper addresses the critical challenge of point cloud down-sampling for digital twin creation, where reducing data volume while preserving geometric fidelity remains an open research problem. We propose a novel down-sampling approach that combines Low-Discrepancy Sequences (LDS) with Hilbert curve ordering, yielding a method that preserves both global distribution characteristics and local geometric features. Unlike traditional methods that impose uniform density or rely on computationally intensive feature detection, our LDS-Hilbert approach leverages the complementary mathematical properties of low-discrepancy sequences and space-filling curves to achieve balanced sampling that respects the original density distribution while ensuring comprehensive coverage. Through four comprehensive experiments covering parametric surface fitting, mesh reconstruction from basic closed geometries, complex CAD models, and real-world laser scans, we demonstrate that LDS-Hilbert consistently outperforms established methods, including Simple Random Sampling (SRS), Farthest Point Sampling (FPS), and Voxel Grid Filtering (Voxel). Results show parameter recovery improvements often exceeding 50% over the FPS and Voxel methods for parametric models, nearly 50% better shape preservation than FPS as measured by Point-to-Mesh Distance, and up to 160% improvement over SRS as measured by Viewpoint Feature Histogram Distance on complex real-world scans. The method achieves these improvements without requiring feature-specific calculations, extensive pre-processing, or task-specific training data, making it a practical advance for enhancing digital twin fidelity across diverse application domains.
(This article belongs to the Section Sensing and Imaging)
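As a self-contained illustration of the pipeline's two ingredients, the sketch below orders points along a Morton (Z-order) curve, a stand-in for the Hilbert curve, and then indexes that ordering with a van der Corput low-discrepancy sequence; both substitutions are ours, not the paper's exact recipe:

```python
import numpy as np

def morton_code(points, bits=10):
    """Z-order index per point (a simpler cousin of Hilbert ordering)."""
    span = np.ptp(points, axis=0).max()
    q = ((points - points.min(axis=0)) / span * (2**bits - 1)).astype(np.int64)
    code = np.zeros(len(points), dtype=np.int64)
    for b in range(bits):                  # interleave the x, y, z bits
        for d in range(3):
            code |= ((q[:, d] >> b) & 1) << (3 * b + d)
    return code

def van_der_corput(n, base=2):
    """First n terms of the van der Corput low-discrepancy sequence."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def lds_curve_downsample(points, n_samples):
    """Order points along the curve, then sample low-discrepancy positions."""
    order = np.argsort(morton_code(points))
    pos = (van_der_corput(n_samples) * len(points)).astype(int)
    return points[order[pos]]
```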

26 pages, 27617 KB  
Article
MFCPopulus: A Point Cloud Completion Network Based on Multi-Feature Fusion for the 3D Reconstruction of Individual Populus Tomentosa in Planted Forests
by Hao Liu, Meng Yang, Benye Xi, Xin Wang, Qingqing Huang, Cong Xu and Weiliang Meng
Forests 2025, 16(4), 635; https://doi.org/10.3390/f16040635 - 5 Apr 2025
Viewed by 843
Abstract
The accurate point cloud completion of individual tree crowns is critical for quantifying crown complexity and advancing precision forestry, yet it remains challenging in dense plantations due to canopy occlusion and LiDAR limitations. In this study, we extended the scope of conventional point cloud completion techniques to planted forests with a novel approach called Multi-feature Fusion Completion of Populus (MFCPopulus). Designed for Populus tomentosa plantations with uniform spacing, the method uses a dataset of 1050 manually segmented trees with expert-validated trunk-canopy separation. Key innovations include the following: (1) a hierarchical adversarial framework that integrates multi-scale feature extraction (via farthest point sampling at varying rates) and biologically informed normalization to address trunk-canopy density disparities; (2) a structural characteristics split-collocation (SCS-SCC) strategy that prioritizes crown reconstruction through adaptive sampling ratios, achieving 94.5% canopy coverage in outputs; (3) cross-layer feature integration enabling the simultaneous recovery of global contours and fine-grained branch topology. Compared to state-of-the-art methods, MFCPopulus reduced the Chamfer distance variance by 23% and structural complexity discrepancies (ΔDb) by 33% (mean, 0.12) while preserving species-specific morphological patterns. Octree analysis demonstrated 89–94% spatial alignment with ground truth across height ratios (HR = 1.25–5.0). Although initially developed for planted forests, the framework generalizes well to diverse species, accurately reconstructing 3D crown structures for both broadleaf (Fagus sylvatica, Acer campestre) and coniferous (Pinus sylvestris) species across public datasets, providing a precise and generalizable solution for cross-species phenotypic studies of trees.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
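Chamfer distance, the headline metric here, is straightforward to compute; a sketch of the common symmetric squared-distance form (conventions vary between papers):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two point sets.

    Average squared nearest-neighbor distance from A to B, plus the
    same term with the roles swapped.
    """
    d_ab, _ = cKDTree(b).query(a)
    d_ba, _ = cKDTree(a).query(b)
    return float((d_ab ** 2).mean() + (d_ba ** 2).mean())

# Usage: score a completed crown against ground truth.
gt = np.random.rand(2048, 3)
pred = gt + 0.01 * np.random.randn(2048, 3)
print(chamfer_distance(pred, gt))
```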

20 pages, 3968 KB  
Article
Research on Multi-Scale Point Cloud Completion Method Based on Local Neighborhood Dynamic Fusion
by Yalun Liu, Jiantao Sun and Ling Zhao
Appl. Sci. 2025, 15(6), 3006; https://doi.org/10.3390/app15063006 - 10 Mar 2025
Viewed by 1945
Abstract
Point cloud completion reconstructs incomplete, sparse inputs into complete 3D shapes. In current 3D completion tasks, however, it is difficult to extract the local details of an incomplete point cloud effectively, resulting in poor restoration of local detail and low accuracy of the completed point clouds. To address this problem, this paper proposes a multi-scale point cloud completion method based on local neighborhood dynamic fusion (LNDF), which adaptively aggregates multi-scale local features through dynamic adjustment of ranges and weights. Firstly, farthest point sampling (FPS) is applied to the original incomplete and defective point clouds to obtain point clouds at three different scales. When extracting features from point clouds of different scales, the local neighborhood aggregation of key points is dynamically adjusted, and a Transformer architecture is integrated to further strengthen the correlation of local feature information. Secondly, by generating point clouds layer by layer in a pyramid-like manner, the local details of the point clouds are gradually enriched from coarse to fine to achieve completion. Finally, inspired by generative adversarial networks (GANs), the decoder adds an attention discriminator, built as a feature extraction layer in series with an attention layer, to further optimize completion performance. Experimental results show that LNDM-Net reduces the average Chamfer Distance (CD) by 5.78% on PCN and 4.54% on ShapeNet compared to the state of the art. Visualizations of the completion results demonstrate the superior performance of the method in both completion accuracy and local detail preservation, and on diverse samples and incomplete point clouds from real-world 3D scenes in the KITTI dataset the approach exhibits strong generalization capability and completion fidelity.
(This article belongs to the Special Issue Advanced Pattern Recognition & Computer Vision)
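The first step, FPS at three scales, amounts to running greedy FPS with shrinking budgets on the same cloud; a self-contained sketch (the sizes are illustrative, not the paper's):

```python
import numpy as np

def fps(points, n):
    """Greedy farthest point sampling; returns the selected row indices."""
    sel = [0]
    d = ((points - points[0]) ** 2).sum(axis=1)
    for _ in range(n - 1):
        sel.append(int(np.argmax(d)))
        d = np.minimum(d, ((points - points[sel[-1]]) ** 2).sum(axis=1))
    return np.asarray(sel)

# Three nested resolutions of one defective cloud (sizes are assumptions).
cloud = np.random.rand(8192, 3)
fine, mid, coarse = (cloud[fps(cloud, n)] for n in (2048, 512, 128))
```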

13 pages, 2957 KB  
Article
Analysis of Kinship and Population Genetic Structure of 53 Apricot Resources Based on Whole Genome Resequencing
by Qirui Xin, Jun Qing and Yanhong He
Curr. Issues Mol. Biol. 2024, 46(12), 14106-14118; https://doi.org/10.3390/cimb46120844 - 13 Dec 2024
Cited by 3 | Viewed by 1451
Abstract
Based on single nucleotide polymorphism (SNP) markers developed by whole genome resequencing (WGRS), the kinship and population genetic structure of 53 common apricot (P. armeniaca) varieties were analyzed to provide a theoretical basis for revealing the phylogenetic relationships and classification of the common apricot. WGRS was performed on the 53 varieties, and high-quality SNP sites were obtained after alignment against the "Yinxiangbai" apricot reference genome. Phylogenetic analysis, G-matrix analysis, principal component analysis, and population structure analysis were performed using Genome-wide Complex Trait Analysis (GCTA), FastTree, Admixture, and other software. The average alignment rate between the sequencing results and the reference genome was 97.66%, and after strict screening, 88,332,238 high-quality SNP sites were obtained. From the SNP variation types, LNLJX had the largest number of variations (3,951,322) and the lowest transition/transversion ratio (ts/tv = 1.77), indicating that its gene exchange events occurred less frequently. Kinship estimated from the SNPs ranged from 0.01 to 1.41, with PLDJX and BK1 the most closely related (1.41) and YZH and LGWSX the most distantly related (0.01). Genetic distances between varieties ranged from 0.00367 to 0.264344, with the smallest distance between HMX and JM and the largest between WYX and YX. Phylogenetic tree, PCA, and genetic structure analyses all divided the 53 common apricot varieties into four consistent groups. The SNP markers mined using WGRS are useful not only for analyzing variation in common apricots but also for effectively identifying their kinship and genetic structure, which plays a critical role in the classification and utilization of common apricot germplasm resources.
(This article belongs to the Section Molecular Plant Sciences)
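The ts/tv ratio quoted above is a routine SNP quality statistic; a small sketch of how it falls out of REF/ALT allele pairs (toy data, not the study's):

```python
# Transitions swap within purines (A<->G) or pyrimidines (C<->T);
# all other single-base substitutions are transversions.
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def ts_tv_ratio(snps):
    """snps: list of (ref, alt) single-base pairs."""
    ts = sum((r, a) in TRANSITIONS for r, a in snps)
    tv = len(snps) - ts
    return ts / tv if tv else float("inf")

# Toy example: two transitions, two transversions -> ratio 1.0.
print(ts_tv_ratio([("A", "G"), ("C", "T"), ("A", "C"), ("G", "T")]))
```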

6 pages, 1354 KB  
Proceeding Paper
The Point Cloud Reduction Algorithm Based on the Feature Extraction of a Neighborhood Normal Vector and Fuzzy-c Means Clustering
by Hongxiao Xu, Donglai Jiao and Wenmei Li
Proceedings 2024, 110(1), 13; https://doi.org/10.3390/proceedings2024110013 - 3 Dec 2024
Viewed by 1647
Abstract
The three-dimensional model of geographic elements serves as the primary medium for digital visualization. However, the original point cloud model is often vast and includes considerable redundant data, resulting in inefficiencies during three-dimensional modeling. To address this issue, this paper proposes a point cloud reduction algorithm that leverages neighborhood normal vectors and fuzzy c-means (FCM) clustering for feature extraction. The algorithm first extracts the edge points of the model and then uses neighborhood normal vectors to extract the model's overall feature points. Next, using point cloud curvature, coordinate information, and geometric attributes, it applies FCM clustering to isolate local feature points. Non-feature points are then sampled with an enhanced farthest point sampling technique. Finally, the algorithm merges edge points, feature points, and non-feature points to generate the simplified point cloud data. The proposed algorithm is compared with traditional methods, including the uniform grid, random sampling, and curvature sampling methods, and the simplified point clouds are evaluated in terms of reduction level and reconstruction time. The approach effectively preserves critical feature information from the majority of point cloud data, thereby addressing the complexities inherent in original point cloud models.
(This article belongs to the Proceedings of The 31st International Conference on Geoinformatics)
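Fuzzy c-means, used above to isolate local feature points, has a compact reference form; a minimal NumPy sketch of the standard alternating updates (the fuzzifier m = 2 is our choice):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard FCM: returns (centers, U) where U[k, i] is the degree
    to which point k belongs to cluster i (rows of U sum to 1).

    X: (N, D) features, e.g. curvature plus coordinates as in the paper.
    """
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))                 # standard membership update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```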

14 pages, 3334 KB  
Article
Pollution of Beach Sands of the Ob River (Western Siberia) with Microplastics and Persistent Organic Pollutants
by Yulia A. Frank, Yulia S. Sotnikova, Vasiliy Yu. Tsygankov, Aleksey R. Rednikin, Maksim M. Donets, Elena V. Karpova, Maksim A. Belanov, Svetlana Rakhmatullina, Aleksandra D. Borovkova, Dmitriy N. Polovyanenko and Danil S. Vorobiev
J. Xenobiot. 2024, 14(3), 989-1002; https://doi.org/10.3390/jox14030055 - 25 Jul 2024
Cited by 3 | Viewed by 3194
Abstract
Microplastics (MPs) in aquatic environments can be associated with various substances, including persistent organic pollutants, which adds to the problem of plastic ecotoxicity. The abundance of 1–5 mm microplastics and the concentrations of particle-adsorbed organochlorine pesticides (OCPs) and polychlorinated biphenyls (PCBs) in sandy sediments from three beaches in recreational areas along the upper Ob River in Western Siberia were assessed. MP pollution levels in the Ob River beach sands ranged from 24 ± 20.7 to 104 ± 46.2 items m⁻² or, in terms of mass concentration, from 0.26 ± 0.21 to 1.22 ± 0.39 mg m⁻². The average abundance of MP particles reached 0.67 ± 0.58 items kg⁻¹, or 8.22 ± 6.13 μg kg⁻¹, in the studied sediments. MP concentrations were significantly higher in number (p < 0.05) and mass (p < 0.01) at the riverbank site downstream of the Novosibirsk wastewater treatment plant (WWTP) outfall than at the upstream and more distant beaches. Most MPs (70–100%) were irregularly shaped fragments. The polymer composition of MPs varied between sites, with a general predominance of polyethylene (PE). The study revealed associations of MPs with PCBs and OCPs not previously detected in the riverbed and beach sediments, suggesting that these substances are circulating in the Ob River basin. Although MP concentrations were higher downstream of the WWTP, the maximum levels of particle-associated OCPs were observed in the beach sands of the site farthest from the urban agglomeration. The pesticides γ-HCH, 4,4-DDT, and 4,4-DDE were detected on MPs at relatively low concentrations. PCBs were more abundant in the studied samples, including the dioxin-like congener 118. The results indicate that the Ob River is susceptible to contamination by plastics and persistent organic pollutants (POPs) and serve as a starting point for further studies and practical solutions to the problem.
(This article belongs to the Section Emerging Chemicals)

15 pages, 3070 KB  
Technical Note
Fourier Domain Adaptation for the Identification of Grape Leaf Diseases
by Jing Wang, Qiufeng Wu, Tianci Liu, Yuqi Wang, Pengxian Li, Tianhao Yuan and Ziyang Ji
Appl. Sci. 2024, 14(9), 3727; https://doi.org/10.3390/app14093727 - 27 Apr 2024
Cited by 6 | Viewed by 2483
Abstract
With the application of computer vision to agricultural disease recognition, convolutional neural networks are widely used for grape leaf disease recognition and have achieved remarkable results. However, most grape leaf disease recognition models suffer from weak generalization ability. To overcome this challenge, this paper proposes an image identification method for grape leaf diseases across domains based on Fourier domain adaptation. Firstly, Fourier domain adaptation is performed between the labeled source domain data and the unlabeled target domain data: to decrease the gap between the two distributions, the low-frequency spectra of the source and target domain data are swapped. Then, three convolutional neural networks (AlexNet, VGG13, and ResNet101) were trained on the style-transferred images and used to classify the unlabeled target domain images. The highest accuracies of the three networks reach 94.6%, 96.7%, and 91.8%, respectively, higher than those of models trained without Fourier-transformed images. To reduce the impact of randomness when selecting images to transform, we propose using farthest point sampling to select images with low feature correlation for the Fourier transform; the resulting identification accuracy is again higher than that of the network trained without the transformation. Experimental results show that Fourier domain adaptation can improve the generalization ability of the model and yield a more accurate grape leaf disease recognition model.
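The low-frequency swap at the heart of Fourier domain adaptation fits in a few lines; a sketch in the usual amplitude/phase form, with the band size beta as an assumed small constant (the paper's setting is not given in the abstract):

```python
import numpy as np

def fda_lowfreq_swap(src: np.ndarray, tgt: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Give the source image the target's low-frequency amplitude.

    src, tgt: float images of identical shape (H, W, C) in [0, 1].
    The phase of src is kept, so its content survives; only the
    low-frequency "style" band of the amplitude is replaced.
    """
    out = np.empty_like(src)
    H, W = src.shape[:2]
    b = int(min(H, W) * beta)
    cy, cx = H // 2, W // 2
    for ch in range(src.shape[2]):
        fs = np.fft.fftshift(np.fft.fft2(src[:, :, ch]))
        ft = np.fft.fftshift(np.fft.fft2(tgt[:, :, ch]))
        amp, pha = np.abs(fs), np.angle(fs)
        amp[cy - b:cy + b, cx - b:cx + b] = np.abs(ft)[cy - b:cy + b, cx - b:cx + b]
        out[:, :, ch] = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * pha))).real
    return np.clip(out, 0.0, 1.0)
```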

13 pages, 1571 KB  
Article
R-PointNet: Robust 3D Object Recognition Network for Real-World Point Clouds Corruption
by Zhongyuan Zhang, Lichen Lin and Xiaoli Zhi
Appl. Sci. 2024, 14(9), 3649; https://doi.org/10.3390/app14093649 - 25 Apr 2024
Cited by 5 | Viewed by 3644
Abstract
Point clouds obtained with 3D scanners in realistic scenes inevitably contain corruption, including noise and outliers. Traditional algorithms for cleaning point cloud corruption require appropriate parameters to be selected based on the characteristics of the scene, the data, and the algorithm, so their performance depends heavily on operator experience and on how well the algorithm suits the application. Three-dimensional object recognition networks for real-world recognition tasks can take the raw point cloud as input and output recognition results directly. Current 3D object recognition networks generally acquire uniform sampling points by farthest point sampling (FPS) to extract features. However, defective points sampled by FPS lower recognition accuracy by degrading the aggregated global feature. To deal with this issue, we design a compensation module named offset-adjustment (OA). It adaptively adjusts the coordinates of sampled defective points based on their neighbors and improves local feature extraction, enhancing network robustness. Furthermore, we employ the OA module to build an end-to-end network for robust point cloud recognition based on the PointNet++ framework, named R-PointNet. Experiments show that R-PointNet reaches state-of-the-art performance with 92.5% recognition accuracy on ModelNet40 and significantly outperforms previous networks by 3–7.7% on the corruption benchmark dataset ModelNet40-C.
(This article belongs to the Special Issue Advanced 2D/3D Computer Vision Technology and Applications)
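The OA module itself is learned and its formulation is not given in the abstract. Purely as an illustration of "adjusting sampled points based on neighbors," a non-learned stand-in could nudge each sampled point toward the centroid of its nearest raw-cloud neighbors:

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbor_offset_adjust(raw, sampled, k=8, alpha=0.5):
    """Illustrative stand-in only (not the paper's learned OA module).

    Moves each sampled point a fraction alpha toward the centroid of
    its k nearest neighbors in the raw cloud, pulling noisy or
    defective samples back toward the local surface.
    """
    _, idx = cKDTree(raw).query(sampled, k=k)
    centroids = raw[idx].mean(axis=1)      # (M, 3) neighborhood centroids
    return sampled + alpha * (centroids - sampled)
```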

16 pages, 4854 KB  
Article
Point-Sim: A Lightweight Network for 3D Point Cloud Classification
by Jiachen Guo and Wenjie Luo
Algorithms 2024, 17(4), 158; https://doi.org/10.3390/a17040158 - 15 Apr 2024
Viewed by 3240
Abstract
Analyzing point clouds with neural networks is a current research hotspot. To analyze the 3D geometric features of point clouds, most neural networks improve performance by adding local geometric operators and trainable parameters. However, deep learning usually requires a large amount of computational resources for training and inference, which poses challenges for hardware and energy consumption. Some research has therefore begun to explore nonparametric approaches to feature extraction. Point-NN combines nonparametric modules, including trigonometric embedding, farthest point sampling (FPS), k-nearest neighbors (k-NN), and pooling, to build a nonparametric network for 3D point cloud analysis. However, Point-NN's trigonometric feature embedding is somewhat blind during feature extraction. To reduce this blindness, we utilize a nonparametric, energy-function-based attention mechanism (ResSimAM): an energy function scores each embedded feature, and ResSimAM uses these energies to reweight the embedded features, enhancing them without adding any parameters to the original network. Point-NN must also compute the similarity between features at the naive feature-matching stage, where differences in feature magnitude arising during feature extraction may affect the final matching result. We therefore apply the Squash operation, a nonlinear map that compresses features to a bounded range without changing their direction in the vector space, eliminating the effect of feature magnitude and improving naive feature matching. Inserting these modules into the network, we build a nonparametric network, Point-Sim, which performs well on 3D classification tasks. On this basis, we extend it to the lightweight neural network Point-SimP by adding a small number of trainable parameters; it requires only 0.8 M parameters for high-performance analysis. Experimental results demonstrate the effectiveness of the proposed algorithm on point cloud shape classification: on ModelNet40 and ScanObjectNN, accuracy is 83.9% and 66.3% with 0 M parameters (without any training) and 93.3% and 86.6% with 0.8 M parameters, and Point-SimP reaches a test speed of 962 samples per second on ModelNet40.
(This article belongs to the Special Issue Machine Learning for Pattern Recognition)
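The Squash operation mentioned above is the capsule-network nonlinearity: it maps a vector's magnitude into [0, 1) while leaving its direction untouched, which is the property exploited in the feature-matching stage.

```python
import numpy as np

def squash(v: np.ndarray, axis: int = -1, eps: float = 1e-9) -> np.ndarray:
    """squash(v) = (|v|^2 / (1 + |v|^2)) * v / |v|."""
    sq = (v ** 2).sum(axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

# Long and short vectors end up with comparable, bounded norms:
print(np.linalg.norm(squash(np.array([10.0, 0.0]))))  # ~0.990
print(np.linalg.norm(squash(np.array([0.5, 0.0]))))   # 0.2
```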

17 pages, 541 KB  
Article
Utilizing Nearest-Neighbor Clustering for Addressing Imbalanced Datasets in Bioengineering
by Chih-Ming Huang, Chun-Hung Lin, Chuan-Sheng Hung, Wun-Hui Zeng, You-Cheng Zheng and Chih-Min Tsai
Bioengineering 2024, 11(4), 345; https://doi.org/10.3390/bioengineering11040345 - 31 Mar 2024
Cited by 1 | Viewed by 1792
Abstract
Imbalanced classification is common in scenarios such as fault diagnosis, intrusion detection, and medical diagnosis, where abnormal data are difficult to obtain. This article addresses the one-class problem by implementing and refining the One-Class Nearest-Neighbor (OCNN) algorithm: the original inter-quartile-range mechanism is replaced with the K-means with outlier removal (KMOR) algorithm for efficient outlier identification in the target class, and parameters are optimized by treating these outliers as non-target-class samples. A new algorithm, the Location-Based Nearest-Neighbor (LBNN) algorithm, clusters the one-class training data using KMOR and, for each test data point, computes the farthest distance and a percentile to decide whether it belongs to the target class. Experiments cover parameter studies, validation on eight standard imbalanced datasets from KEEL, and three applications to real imbalanced medical datasets. Results show superior precision, recall, and G-means compared to traditional classification models, making the method effective for handling imbalanced data.
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)
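The abstract describes LBNN only at a high level. One plausible reading (our assumption, with plain k-means standing in for KMOR): cluster the target-class training data, record a per-cluster distance percentile, and accept a test point whose distance to its nearest center stays under that threshold.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_one_class(X_target, n_clusters=5, pct=95):
    """Illustrative one-class rule: per-cluster threshold at the pct-th
    percentile of training distances (k-means used in place of KMOR)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_target)
    d = np.linalg.norm(X_target - km.cluster_centers_[km.labels_], axis=1)
    thr = np.array([np.percentile(d[km.labels_ == i], pct) for i in range(n_clusters)])
    return km, thr

def predict_one_class(km, thr, X):
    """Returns 1 for target-class points, 0 for anomalies."""
    d = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return (d[np.arange(len(X)), nearest] <= thr[nearest]).astype(int)
```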
