Search Results (6)

Search Parameters:
Authors = Ziang Niu

11 pages, 1428 KiB  
Article
High-Precision Time Delay Estimation Algorithm Based on Generalized Quadratic Cross-Correlation
by Menghao Sun, Ziang Niu, Xuzhen Zhu and Zijia Huang
Mathematics 2025, 13(15), 2397; https://doi.org/10.3390/math13152397 - 25 Jul 2025
Viewed by 210
Abstract
In UAV target localization, the accuracy of time delay estimation is key to high-precision positioning. However, under a low signal-to-noise ratio (SNR), time delay estimation suffers from serious secondary-peak interference and low accuracy, which degrades positioning performance. This paper proposes an improved time delay estimation algorithm based on generalized quadratic cross-correlation. By introducing exponential operations and a Hilbert difference operation, the algorithm suppresses noise interference, sharpens the peaks of the signal correlation function, and thereby improves estimation accuracy. Simulation experiments comparing the improved algorithm with the generalized cross-correlation and quadratic correlation algorithms show that it enhances the peak of the cross-correlation function, improves estimation accuracy, and exhibits better anti-noise performance in low-SNR environments, providing a new approach for high-precision time delay estimation in complex signal environments.
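The baseline this algorithm improves on is plain cross-correlation delay estimation, which can be sketched as follows. This is a minimal illustration, not the paper's implementation: the proposed method additionally applies a second correlation stage (quadratic correlation), exponential weighting, and a Hilbert difference step to sharpen the main peak at low SNR; the function name is ours.

```python
import numpy as np

def estimate_delay(x, y):
    """Estimate how many samples y lags x from the cross-correlation peak.

    Only the classic baseline: the paper's generalized quadratic
    cross-correlation adds further stages to suppress the secondary
    peaks that dominate this estimator at low SNR.
    """
    cc = np.correlate(y, x, mode="full")          # lags -(len(x)-1) .. len(y)-1
    return int(np.argmax(np.abs(cc))) - (len(x) - 1)

# A random source signal and a copy delayed by 7 samples with additive noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = np.roll(x, 7) + 0.1 * rng.standard_normal(256)
print(estimate_delay(x, y))  # 7
```

The FFT-based variant (multiplying spectra and inverse-transforming) gives the same peak location and is what generalized cross-correlation methods weight in the frequency domain.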

17 pages, 6959 KiB  
Article
A Skeleton-Based Method of Root System 3D Reconstruction and Phenotypic Parameter Measurement from Multi-View Image Sequence
by Chengjia Xu, Ting Huang, Ziang Niu, Xinyue Sun, Yong He and Zhengjun Qiu
Agriculture 2025, 15(3), 343; https://doi.org/10.3390/agriculture15030343 - 5 Feb 2025
Cited by 1 | Viewed by 1135
Abstract
The phenotypic parameters of root systems are vital in reflecting the influence of genes and the environment on plants, and three-dimensional (3D) reconstruction is an important method for obtaining them. Given that root systems are featureless, thin structures, this study proposed a skeleton-based 3D reconstruction and phenotypic parameter measurement method for root systems using multi-view images. An image acquisition system was designed to collect multi-view images of root systems. The input images were binarized by the proposed OTSU-based adaptive threshold segmentation method. Vid2Curve was adopted to realize the 3D reconstruction of root systems and calibration objects in four steps: skeleton curve extraction, initialization, skeleton curve estimation, and surface reconstruction. Then, to extract phenotypic parameters, a skeleton-based scale alignment method was realized using DBSCAN and RANSAC. Furthermore, a small-sized root system point completion algorithm was proposed to achieve more complete root system 3D models. Using these methods, a total of 30 root samples of three species were tested. The results showed that the proposed method achieved a skeleton projection error of 0.570 pixels and a surface projection error of 0.468 pixels. Root number measurement achieved a precision of 0.97 and a recall of 0.96, and root length measurement achieved an MAE of 1.06 cm, a MAPE of 2.37%, an RMSE of 1.35 cm, and an R² of 0.99. The whole reconstruction process in the experiment was very fast, taking at most 4.07 min. With high accuracy and high speed, the proposed methods make it possible to obtain root phenotypic parameters quickly and accurately and promote the study of root phenotyping.
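The binarization step builds on Otsu's method. A minimal sketch of the standard Otsu threshold follows; the paper's adaptive variant for thin root structures extends this, and the function name and synthetic image are ours, not the authors' code.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 0-255 level maximizing between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # probability of the dark class
    mu = np.cumsum(p * np.arange(256))        # cumulative mean of the dark class
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b2)))

# A synthetic bimodal image: dark background plus brighter root pixels.
rng = np.random.default_rng(1)
img = np.concatenate([rng.integers(30, 60, 900),
                      rng.integers(180, 220, 100)]).astype(np.uint8)
t = otsu_threshold(img.reshape(20, 50))
mask = img > t                                # binarized "root" mask
```

On a cleanly bimodal histogram the threshold lands between the two modes, so the mask recovers exactly the bright-class pixels.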
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

20 pages, 22008 KiB  
Article
A Novel Approach to Optimize Key Limitations of Azure Kinect DK for Efficient and Precise Leaf Area Measurement
by Ziang Niu, Ting Huang, Chengjia Xu, Xinyue Sun, Mohamed Farag Taha, Yong He and Zhengjun Qiu
Agriculture 2025, 15(2), 173; https://doi.org/10.3390/agriculture15020173 - 14 Jan 2025
Cited by 6 | Viewed by 1215
Abstract
Maize leaf area offers valuable insights into physiological processes, playing a critical role in breeding and guiding agricultural practices. The Azure Kinect DK possesses the real-time capability to capture and analyze the spatial structural features of crops. However, its further application in maize leaf area measurement is constrained by RGB–depth misalignment and limited sensitivity to detailed organ-level features. This study proposed a novel approach to address these limitations of the Azure Kinect DK through the multimodal coupling of RGB-D data for enhanced organ-level crop phenotyping. To correct RGB–depth misalignment, a unified recalibration method was developed to ensure accurate alignment between RGB and depth data. Furthermore, a semantic information-guided depth inpainting method was proposed to repair the void and flying pixels commonly observed in Azure Kinect DK outputs. The semantic information was extracted using a joint YOLOv11-SAM2 model, which combines supervised object recognition prompts with large vision models to achieve precise RGB image semantic parsing with minimal manual input. An efficient pixel filter-based depth inpainting algorithm was then designed to repair void and flying pixels and restore consistent, high-confidence depth values within semantic regions. Validation through leaf area measurements in practical maize field applications (challenged by a limited workspace, constrained viewpoints, and environmental variability) demonstrated near-laboratory precision, achieving a MAPE of 6.549%, an RMSE of 4.114 cm², an MAE of 2.980 cm², and an R² of 0.976 across 60 maize leaf samples. By focusing processing efforts on the image level rather than directly on 3D point clouds, this approach markedly enhanced both efficiency and accuracy while making full use of the Azure Kinect DK, making it a promising solution for high-throughput 3D crop phenotyping.
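The core idea of filling void pixels within a semantic region can be sketched as below. This is a simplified stand-in, under our own assumptions, for the paper's pixel filter-based inpainting: it fills zero-depth pixels with the median of valid neighbors, whereas the real method also handles flying pixels and confidence checks.

```python
import numpy as np

def fill_voids(depth, region, k=1, max_iter=10):
    """Fill zero-valued (void) depth pixels inside a semantic region with
    the median of valid neighbors in a (2k+1) x (2k+1) window, iterating
    until the region contains no voids (or max_iter is reached)."""
    depth = depth.astype(float).copy()
    for _ in range(max_iter):
        voids = np.argwhere((depth == 0) & region)
        if voids.size == 0:
            break
        out = depth.copy()
        for r, c in voids:
            win = depth[max(r - k, 0):r + k + 1, max(c - k, 0):c + k + 1]
            valid = win[win > 0]              # ignore void neighbors
            if valid.size:
                out[r, c] = np.median(valid)
        depth = out                           # iterate so holes shrink inward
    return depth

# A flat 600 mm surface with a 2x2 void patch inside the leaf region.
d = np.full((6, 6), 600.0)
d[2:4, 2:4] = 0.0
leaf = np.ones_like(d, dtype=bool)
repaired = fill_voids(d, leaf)
```

Restricting the fill to the semantic region is what keeps background depth from bleeding into the leaf, which is the point of guiding the inpainting with the segmentation.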
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

16 pages, 6488 KiB  
Article
A Novel Method for Wheat Spike Phenotyping Based on Instance Segmentation and Classification
by Ziang Niu, Ning Liang, Yiyin He, Chengjia Xu, Sashuang Sun, Zhenjiang Zhou and Zhengjun Qiu
Appl. Sci. 2024, 14(14), 6031; https://doi.org/10.3390/app14146031 - 10 Jul 2024
Cited by 3 | Viewed by 1855
Abstract
The phenotypic analysis of wheat spikes plays an important role in wheat growth management, plant breeding, and yield estimation. However, the dense and tight arrangement of spikelets and grains on the spikes makes phenotyping challenging. This study proposed a rapid and accurate image-based method for in-field wheat spike phenotyping consisting of three steps: wheat spikelet segmentation, grain number classification, and total grain number counting. Wheat samples ranging from the early filling period to the mature period were involved in the study, covering three varieties: Zhengmai 618, Yannong 19, and Sumai 8. In the first step, the in-field images of wheat spikes were optimized by perspective transformation, augmentation, and size reduction, and the YOLOv8-seg instance segmentation model was used to segment spikelets from the spike images. In the second step, the number of grains in each spikelet was classified by a machine learning model, the Support Vector Machine (SVM), using 52 image features extracted for each spikelet (shape, color, and texture features) as the input. Finally, the total number of grains on each wheat spike was counted by summing the grain numbers of its spikelets. The results showed that the YOLOv8-seg model achieved excellent segmentation performance, with an average precision (AP@[0.50:0.95]) of 0.858 and an accuracy of 100%. Meanwhile, the SVM model classified the number of grains per spikelet well, with an accuracy, precision, recall, and F1 score of 0.855, 0.860, 0.865, and 0.863, respectively. The mean absolute error (MAE) and mean absolute percentage error (MAPE) were as low as 1.04 and 5% when counting the total number of grains in frontal-view wheat spike images. The proposed method meets the practical requirements of obtaining trait parameters of wheat spikes and contributes to intelligent, non-destructive spike phenotyping.
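The final counting step and the reported error metrics are simple enough to sketch directly; the per-spikelet counts below are invented for illustration, and the function names are ours.

```python
import numpy as np

def spike_grain_total(spikelet_grains):
    """Step 3 of the pipeline: the grain total for a spike is the sum of
    the per-spikelet grain counts predicted by the classifier in step 2."""
    return int(np.sum(spikelet_grains))

def mae_mape(pred, true):
    """Mean absolute error and mean absolute percentage error, the two
    counting metrics reported in the abstract (1.04 and 5%)."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    err = np.abs(pred - true)
    return err.mean(), 100.0 * (err / true).mean()

# Two spikes: per-spikelet classifier outputs vs. manually counted totals.
totals = [spike_grain_total([2, 3, 3, 2, 1]), spike_grain_total([3, 3, 2, 2])]
mae, mape = mae_mape(totals, [12, 10])
```

Because the total is a sum of small class labels, a single misclassified spikelet shifts the spike total by at most a couple of grains, which is why the per-spikelet formulation keeps the counting MAE low.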
(This article belongs to the Special Issue Applications of Machine Learning in Agriculture)

22 pages, 54841 KiB  
Article
High-Throughput Analysis of Leaf Chlorophyll Content in Aquaponically Grown Lettuce Using Hyperspectral Reflectance and RGB Images
by Mohamed Farag Taha, Hanping Mao, Yafei Wang, Ahmed Islam ElManawy, Gamal Elmasry, Letian Wu, Muhammad Sohail Memon, Ziang Niu, Ting Huang and Zhengjun Qiu
Plants 2024, 13(3), 392; https://doi.org/10.3390/plants13030392 - 29 Jan 2024
Cited by 15 | Viewed by 3673
Abstract
Chlorophyll content reflects plants’ photosynthetic capacity, growth stage, and nitrogen status and is, therefore, of significant importance in precision agriculture. This study aims to develop a model based on spectral and color vegetation indices to estimate the chlorophyll content of aquaponically grown lettuce. A completely open-source automated machine learning (AutoML) framework (EvalML) was employed to develop the prediction models. The performance of AutoML was compared with that of four standard machine learning models: back-propagation neural network (BPNN), partial least squares regression (PLSR), random forest (RF), and support vector machine (SVM). The spectral vegetation indices (SVIs) and color vegetation indices (CVIs) most sensitive to chlorophyll content were extracted and evaluated as estimators. Using an ASD FieldSpec 4 Hi-Res spectroradiometer and a portable red, green, and blue (RGB) camera, 3600 hyperspectral reflectance measurements and 800 RGB images were acquired from lettuce grown across a gradient of nutrient levels. Ground measurements of leaf chlorophyll were acquired using a SPAD-502 meter calibrated via laboratory chemical analyses. The results revealed a strong relationship between chlorophyll content and SPAD-502 readings, with an R² of 0.95 and a correlation coefficient (r) of 0.975. The developed AutoML models outperformed all traditional models, yielding the highest coefficients of determination in prediction (Rp²) for all vegetation indices (VIs). The combination of SVIs and CVIs achieved the best prediction accuracy, with Rp² values ranging from 0.89 to 0.98. This study demonstrated the feasibility of spectral and color vegetation indices as estimators of chlorophyll content. Furthermore, the developed AutoML models can be integrated into embedded devices to control nutrient cycles in aquaponic systems.
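As an example of a color vegetation index of the kind used here, the widely used Excess Green index can be computed from an RGB image as follows. ExG is only a representative CVI; the specific indices selected by the study are detailed in the paper itself.

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index, ExG = 2g - r - b, computed on channel-normalized
    RGB (each channel divided by r + g + b). High values indicate green
    vegetation; neutral gray pixels score zero."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                      # avoid dividing by zero on black pixels
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2.0 * g - r - b

# A strongly green pixel vs. a neutral gray pixel.
exg = excess_green([[50, 150, 50], [100, 100, 100]])
```

Indices like this reduce each pixel (or leaf region) to a single scalar, which is what the AutoML and regression models then map to chlorophyll content.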
(This article belongs to the Special Issue Research Trends in Plant Phenotyping)

23 pages, 6259 KiB  
Article
Using Deep Convolutional Neural Network for Image-Based Diagnosis of Nutrient Deficiencies in Plants Grown in Aquaponics
by Mohamed Farag Taha, Alwaseela Abdalla, Gamal ElMasry, Mostafa Gouda, Lei Zhou, Nan Zhao, Ning Liang, Ziang Niu, Amro Hassanein, Salim Al-Rejaie, Yong He and Zhengjun Qiu
Chemosensors 2022, 10(2), 45; https://doi.org/10.3390/chemosensors10020045 - 25 Jan 2022
Cited by 53 | Viewed by 10257
Abstract
In aquaponic systems, the plant nutrients bioavailable from fish excreta are not sufficient for optimal plant growth. Accurate and timely monitoring of the nutrient status of plants grown in aquaponics is therefore a challenge for maintaining the balance and sustainability of the system. This study aimed to integrate color imaging and deep convolutional neural networks (DCNNs) to diagnose the nutrient status of lettuce grown in aquaponics. The approach consists of multi-stage procedures, including plant object detection and classification of nutrient deficiency. The robustness and diagnostic capability of the proposed approaches were evaluated using a total of 3000 lettuce images classified into four nutritional classes: full nutrition (FN), nitrogen deficiency (N), phosphorus deficiency (P), and potassium deficiency (K). The performance of the DCNNs was compared with that of traditional machine learning (ML) algorithms (simple thresholding, K-means, support vector machine (SVM), k-nearest neighbor (KNN), and decision tree (DT)). The results demonstrated that the proposed deep segmentation model obtained an accuracy of 99.1%, and the proposed deep classification model achieved the highest accuracy of 96.5%. These results indicate that deep learning models, combined with color imaging, provide a promising approach to monitoring the nutrient status of plants grown in aquaponics in a timely manner, allowing preventive measures to be taken and mitigating economic and production losses. These approaches can be integrated into embedded devices to control nutrient cycles in aquaponics.
(This article belongs to the Special Issue Practical Applications of Spectral Sensing in Food and Agriculture)
