Search Results (38)

Search Parameters:
Authors = Xiangjun Zou ORCID = 0000-0001-5146-599X

18 pages, 18466 KiB  
Article
An Innovative Method of Monitoring Cotton Aphid Infestation Based on Data Fusion and Multi-Source Remote Sensing Using Unmanned Aerial Vehicles
by Chenning Ren, Bo Liu, Zhi Liang, Zhonglong Lin, Wei Wang, Xinzheng Wei, Xiaojuan Li and Xiangjun Zou
Drones 2025, 9(4), 229; https://doi.org/10.3390/drones9040229 - 21 Mar 2025
Cited by 2 | Viewed by 781
Abstract
Cotton aphids are the primary pests affecting cotton growth; they also transmit a variety of viral diseases, seriously threatening cotton yield and quality. Although traditional remote sensing with a single data source improves monitoring efficiency to a certain extent, it is limited in reflecting the complex spatial distribution of aphid infestation and in identifying it accurately, so there is a pressing need for efficient, high-precision UAV remote sensing for identification and localization. To address these problems, this study first fused panchromatic and multispectral images using the Gram–Schmidt image fusion technique, extracted multiple vegetation indices, and analyzed their correlation with aphid damage indices. After fusion, the correlation between vegetation indices and the degree of aphid infestation improved significantly, more accurately reflecting the spatial distribution of infestation. Several machine learning techniques were then applied to model and evaluate the performance of the multispectral and fused image data. Validation revealed that the GBDT (Gradient-Boosting Decision Tree) model built on the GLI, RVI, DVI, and SAVI vegetation indices from the fused data performed best, with an R² of 0.88 and an RMSE of 0.0918, clearly better than the other five models; the fused panchromatic–multispectral imagery combined with the GBDT model was noticeably more accurate and efficient than single multispectral imaging. In conclusion, this study demonstrated the effectiveness of image fusion combined with GBDT modeling for cotton aphid monitoring.
(This article belongs to the Section Drones in Agriculture and Forestry)
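
As a rough sketch of the modeling step described above (not the authors' code), the GBDT regression could look like the following; the CSV file name, column names, split, and hyperparameters are assumptions:

```python
# Minimal sketch: GBDT regression of an aphid damage index from the four
# vegetation indices named above. File, columns, and settings are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("fused_image_indices.csv")  # hypothetical per-plot index table
X = df[["GLI", "RVI", "DVI", "SAVI"]].values
y = df["aphid_damage_index"].values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
gbdt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
gbdt.fit(X_tr, y_tr)

pred = gbdt.predict(X_te)
print("R2:", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```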

22 pages, 10652 KiB  
Article
An Enhanced Cycle Generative Adversarial Network Approach for Nighttime Pineapple Detection of Automated Harvesting Robots
by Fengyun Wu, Rong Zhu, Fan Meng, Jiajun Qiu, Xiaopei Yang, Jinhui Li and Xiangjun Zou
Agronomy 2024, 14(12), 3002; https://doi.org/10.3390/agronomy14123002 - 17 Dec 2024
Cited by 11 | Viewed by 1163
Abstract
Nighttime pineapple detection for automated harvesting robots is a significant challenge in intelligent agriculture. As a crucial component of robotic vision systems, accurate fruit detection is essential for round-the-clock operation. This study compared advanced end-to-end style-transfer models, including U-GAT-IT, SCTNet, and CycleGAN, and found that CycleGAN produced relatively good-quality images but suffered from inadequate restoration of nighttime detail, color distortion, and artifacts. The study therefore proposes an enhanced CycleGAN approach that addresses limited nighttime datasets and poor visibility by combining style transfer with small-sample object detection. The improved model features a novel generator structure with ResNeXtBlocks, an optimized upsampling module, and a hyperparameter optimization strategy, achieving a 29.7% reduction in FID score compared with the original CycleGAN. When applied to YOLOv7-based detection, the method significantly outperforms existing approaches, improving precision, recall, average precision, and F1 score by 13.34%, 45.11%, 56.52%, and 30.52%, respectively. These results demonstrate the effectiveness of the enhanced CycleGAN in expanding limited nighttime datasets and supporting efficient automated harvesting in low-light conditions, contributing to more versatile agricultural robots capable of continuous operation.
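
For readers unfamiliar with the generator change, a minimal sketch of a ResNeXt-style residual block (a grouped 3x3 convolution) that could stand in for a CycleGAN generator's plain ResNet blocks is shown below; the channel count and cardinality are assumptions, not the paper's configuration:

```python
# Sketch of a ResNeXt-style residual block of the kind that could replace the
# plain residual blocks in a CycleGAN generator. Sizes are assumptions.
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    def __init__(self, channels: int = 256, cardinality: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, groups=cardinality, bias=False),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # residual path preserves image content

x = torch.randn(1, 256, 64, 64)
print(ResNeXtBlock()(x).shape)  # torch.Size([1, 256, 64, 64])
```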

13 pages, 6283 KiB  
Article
New Plum Detection in Complex Environments Based on Improved YOLOv8n
by Xiaokang Chen, Genggeng Dong, Xiangpeng Fan, Yan Xu, Xiangjun Zou, Jianping Zhou and Hong Jiang
Agronomy 2024, 14(12), 2931; https://doi.org/10.3390/agronomy14122931 - 9 Dec 2024
Viewed by 802
Abstract
To address the challenge of accurately detecting new plums amidst trunk and leaf occlusion and fruit overlap, this study presents a novel target detection model, YOLOv8n-CRS. A specialized dataset for new plums was created under real orchard conditions, with the advanced YOLOv8n model serving as the base network. First, the CA attention mechanism was introduced into the backbone network to improve the model's ability to extract crucial features of new plums. Next, the RFB module was incorporated into the neck layer to leverage multiscale information, mitigating inaccuracies caused by fruit overlap and thereby enhancing detection performance. Finally, the original CIoU loss function was replaced with the SIoU loss function to further improve detection accuracy. Test results show that the YOLOv8n-CRS model achieved a recall rate of 88.9%, with mAP@0.5 and mAP@0.5:0.95 of 96.1% and 87.1%, respectively; its F1 score reached 90.0%, and it delivered a real-time detection speed of 88.5 frames per second. Compared with YOLOv8n, YOLOv8n-CRS improved the recall rate by 2.2 percentage points, mAP@0.5 by 0.7 percentage points, and mAP@0.5:0.95 by 1.2 percentage points. Compared with the Faster R-CNN, YOLOv4, YOLOv5s, and YOLOv7 models, YOLOv8n-CRS has the smallest size, at 6.9 MB. This streamlined design meets the demands of real-time identification of new plums in intricate orchard settings and provides strong technical backing for the visual perception systems of plum-picking robots.
(This article belongs to the Section Precision and Digital Agriculture)
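
As an illustration of how such a modified YOLOv8n model is typically trained and evaluated with the Ultralytics API (a sketch, not the authors' pipeline; the dataset and model YAML file names are hypothetical):

```python
# Sketch: training a YOLOv8n-based detector on a custom plum dataset. The
# paper's CA/RFB/SIoU changes would live in a custom model YAML and loss
# configuration; "yolov8n-crs.yaml" and "plum.yaml" are hypothetical names.
from ultralytics import YOLO

model = YOLO("yolov8n-crs.yaml")           # hypothetical modified architecture
model.train(data="plum.yaml", epochs=200, imgsz=640, batch=16)

metrics = model.val()                      # precision/recall/mAP on the val set
print(metrics.box.map50, metrics.box.map)  # mAP@0.5 and mAP@0.5:0.95
```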

20 pages, 33767 KiB  
Article
Multi-Source Data-Driven Extraction of Urban Residential Space: A Case Study of the Guangdong–Hong Kong–Macao Greater Bay Area Urban Agglomeration
by Xiaodie Yuan, Xiangjun Dai, Zeduo Zou, Xiong He, Yucong Sun and Chunshan Zhou
Remote Sens. 2024, 16(19), 3631; https://doi.org/10.3390/rs16193631 - 29 Sep 2024
Cited by 4 | Viewed by 1942
Abstract
The accurate extraction of urban residential space (URS) is of great significance for recognizing the spatial structure of urban functions, understanding the complex urban operating system, and scientifically allocating and managing urban resources. URS has traditionally been identified through statistical analysis or manual field surveys. Superpixel segmentation and wavelet transform (WT) methods also exist for extracting urban spatial information, but they fall short in extraction efficiency and accuracy. The superpixel wavelet fusion (SWF) method proposed in this paper extracts URS conveniently by integrating multi-source data such as Point of Interest (POI), Nighttime Light (NTL), LandScan (LDS), and High-resolution Image (HRI) data. The method fully considers the distribution of image information in HRI and injects the spatial information of URS into the WT, yielding URS recognition results based on multi-source data fusion under the perception of spatial structure. The study proceeds as follows: first, the SLIC algorithm is used to segment HRI of the Guangdong–Hong Kong–Macao Greater Bay Area (GBA) urban agglomeration; then, the discrete cosine wavelet transform (DCWT) is applied to the POI–NTL, POI–LDS, and POI–NTL–LDS data sets, and SWF is carried out at different superpixel scales; finally, the Otsu adaptive threshold algorithm is used to extract URS. The results show extraction accuracies of 81.52% for the NTL–POI data set, 77.70% for the LDS–POI data set, and 90.40% for the NTL–LDS–POI data set. The proposed method not only improves the accuracy of URS extraction but also has practical value for the optimal layout of residential space and the regional planning of urban agglomerations.
(This article belongs to the Special Issue Nighttime Light Remote Sensing Products for Urban Applications)
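
A heavily simplified sketch of the SWF idea (superpixels, a one-level wavelet fusion of two auxiliary rasters, and Otsu thresholding) is given below; the inputs are stand-in arrays and the fusion rule is an assumption, since the paper's DCWT-based, multi-scale procedure is richer:

```python
# Simplified SWF-style pipeline: SLIC superpixels, wavelet fusion, Otsu mask.
import numpy as np
import pywt
from skimage.filters import threshold_otsu
from skimage.segmentation import slic

hri = np.random.rand(512, 512, 3)   # stand-in for the high-resolution image
poi = np.random.rand(512, 512)      # stand-in for rasterized POI density
ntl = np.random.rand(512, 512)      # stand-in for nighttime-light intensity

segments = slic(hri, n_segments=2000, compactness=10)  # superpixel partition

# Fuse: average the approximation bands, keep the stronger detail coefficients.
cA1, d1 = pywt.dwt2(poi, "haar")
cA2, d2 = pywt.dwt2(ntl, "haar")
details = tuple(np.where(np.abs(a) > np.abs(b), a, b) for a, b in zip(d1, d2))
fused = pywt.idwt2(((cA1 + cA2) / 2, details), "haar")

# Average the fused score within each superpixel, then binarize with Otsu.
scores = np.zeros_like(fused)
for label in np.unique(segments):
    mask = segments == label
    scores[mask] = fused[mask].mean()
urs_mask = scores > threshold_otsu(scores)
```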

18 pages, 6855 KiB  
Article
YOLOv8n-CSE: A Model for Detecting Litchi in Nighttime Environments
by Hao Cao, Gengming Zhang, Anbang Zhao, Quanchao Wang, Xiangjun Zou and Hongjun Wang
Agronomy 2024, 14(9), 1924; https://doi.org/10.3390/agronomy14091924 - 27 Aug 2024
Cited by 1 | Viewed by 1334
Abstract
Accurate detection of litchi fruit clusters is a key technology for litchi-picking robots. In natural daytime environments, unstable light intensity, uncertain light angles, background clutter, and other factors greatly reduce the accuracy of identifying and localizing litchi fruit clusters. We therefore propose detecting litchi fruit clusters at night, where an artificial light source at a fixed angle can effectively improve identification and positioning accuracy. To cope with the weak illumination and degraded image features of nighttime scenes, we propose the YOLOv8n-CSE model, which improves the recognition of litchi clusters at night. Specifically, we use YOLOv8n as the initial model and introduce the CPA-Enhancer module, with its chain-of-thought prompting mechanism, into the neck of the model so that the network can alleviate image feature degradation in nighttime environments. In addition, the VoVGSCSP design pattern from Slim-neck is adopted in the neck, making the model more lightweight, and the multi-scale linear attention mechanism of the EfficientViT module further improves the detection accuracy and speed of YOLOv8n-CSE. Experimental results show that the proposed model not only recognizes litchi clusters in night scenes but also improves significantly on previous models, achieving 98.86% mAP@0.5 and a 95.54% F1 score. Compared with the original YOLOv8n, RT-DETR-l, and YOLOv10n, mAP@0.5 increases by 4.03%, 3.46%, and 3.96%, respectively, and the F1 score increases by 5.47%, 2.96%, and 6.24%, respectively, with only 4.93 M parameters. YOLOv8n-CSE achieves an inference time of 36.5 ms. In sum, the model satisfies the accuracy requirements of litchi cluster detection systems for nighttime environments.
(This article belongs to the Section Precision and Digital Agriculture)

42 pages, 18854 KiB  
Review
A Review of Perception Technologies for Berry Fruit-Picking Robots: Advantages, Disadvantages, Challenges, and Prospects
by Chenglin Wang, Weiyu Pan, Tianlong Zou, Chunjiang Li, Qiyu Han, Haoming Wang, Jing Yang and Xiangjun Zou
Agriculture 2024, 14(8), 1346; https://doi.org/10.3390/agriculture14081346 - 12 Aug 2024
Cited by 8 | Viewed by 4856
Abstract
Berries are nutritious and valuable, but their thin skin, soft flesh, and fragility make harvesting and picking challenging. Manual and traditional mechanical harvesting methods are commonly used, but they are labor-intensive and can damage the fruit, so alternative harvesting methods are worth exploring. Berry fruit-picking robots equipped with perception technology are a viable option for improving the efficiency of berry harvesting. This review presents an overview of the mechanisms of berry fruit-picking robots, encompassing their underlying principles, the mechanics of picking and grasping, and their structural design, and highlights the importance of perception technology during the picking process. Several perception techniques commonly used by berry fruit-picking robots are then described, including visual perception, tactile perception, distance measurement, and switching sensors; the methods behind these four techniques are described and their advantages and disadvantages analyzed. In addition, the technical characteristics of perception technologies in practical applications are analyzed and summarized, and several advanced applications of berry fruit-picking robots are presented. Finally, the challenges that perception technologies must overcome, and the prospects for overcoming them, are discussed.

20 pages, 6246 KiB  
Article
YOLOv8n-DDA-SAM: Accurate Cutting-Point Estimation for Robotic Cherry-Tomato Harvesting
by Gengming Zhang, Hao Cao, Yangwen Jin, Yi Zhong, Anbang Zhao, Xiangjun Zou and Hongjun Wang
Agriculture 2024, 14(7), 1011; https://doi.org/10.3390/agriculture14071011 - 26 Jun 2024
Cited by 5 | Viewed by 2799
Abstract
Accurately identifying cherry-tomato picking points and obtaining their coordinates is critical to the success of cherry-tomato picking robots. However, previous methods using semantic segmentation alone, or object detection combined with traditional image processing, have struggled to determine the picking point accurately because of challenges such as occluding leaves and very small targets. In this study, we propose YOLOv8n-DDA-SAM, a model that adds a semantic segmentation branch to target detection to achieve the desired detection and compute the picking point. Specifically, YOLOv8n is used as the initial model, and a dynamic snake convolutional layer (DySnakeConv), better suited to detecting cherry-tomato stems, is used in the neck. In addition, a dynamic large-kernel convolutional attention mechanism in the backbone and ADown convolutions yield a better fusion of stem features with neck features and reduce the number of model parameters without loss of accuracy. Combined with the semantic branch SAM, the mask of the picking point is obtained effectively, and the precise picking point is then computed by a simple shape-centering calculation. The experimental results suggest that the proposed YOLOv8n-DDA-SAM model improves significantly on previous models in both detecting stems and obtaining stem masks, achieving 85.90% mAP@0.5 and an 86.13% F1-score. Compared with the original YOLOv8n, YOLOv7, RT-DETR-l, and YOLOv9c, mAP@0.5 improves by 24.7%, 21.85%, 19.76%, and 15.99%, respectively, and the F1-score increases by 16.34%, 12.11%, 10.09%, and 8.07%, respectively, with only 6.37 M parameters. The semantic segmentation branch not only requires no purpose-built datasets but also improves mIoU by 11.43%, 6.94%, 5.53%, and 4.22% and mAP@0.5 by 12.33%, 7.49%, 6.4%, and 5.99% compared with Deeplabv3+, Mask2former, DDRNet, and SAN, respectively. In summary, the model satisfies the requirements of high-precision detection and provides a strategy for cherry-tomato detection systems.
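
The "simple shape-centering calculation" could, for example, take the centroid of the largest connected region of the stem mask; a minimal sketch under that assumption:

```python
# Minimal sketch assuming "shape-centering" means the centroid of the largest
# connected stem region; the mask file name is hypothetical.
import cv2

mask = cv2.imread("stem_mask.png", cv2.IMREAD_GRAYSCALE)  # segmentation output
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
stem = max(contours, key=cv2.contourArea)  # keep the largest stem blob

m = cv2.moments(stem)
cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # centroid (px)
print("picking point (px):", cx, cy)
```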

17 pages, 14672 KiB  
Article
Strawberry Detection and Ripeness Classification Using YOLOv8+ Model and Image Processing Method
by Chenglin Wang, Haoming Wang, Qiyu Han, Zhaoguo Zhang, Dandan Kong and Xiangjun Zou
Agriculture 2024, 14(5), 751; https://doi.org/10.3390/agriculture14050751 - 11 May 2024
Cited by 19 | Viewed by 5921
Abstract
As strawberries are a widely grown cash crop, the development of strawberry fruit-picking robots for intelligent harvesting systems should match the rapid development of strawberry cultivation technology. Ripeness identification is a key step toward selective harvesting by strawberry fruit-picking robots. This study therefore proposes combining deep learning and image processing for the detection and ripeness classification of strawberries. First, a YOLOv8+ model is proposed for identifying ripe and unripe strawberries and extracting ripe strawberry targets in images. The ECA attention mechanism is added to the backbone network of YOLOv8+ to improve model performance, and Focal-EIOU loss is used in the loss function to address the imbalance between easy- and hard-to-classify samples. Second, the centerline of each ripe strawberry is extracted, and the red pixels along it are counted using the H-channel of the hue, saturation, value (HSV) representation. The percentage of red pixels along the centerline is taken as a new ripeness parameter, and ripe strawberries are classified as either fully ripe or not fully ripe. The results show that the improved YOLOv8+ model accurately and comprehensively identifies whether strawberries are ripe, with the mAP50 curve rising steadily and converging to a high value: an accuracy of 97.81%, a recall of 96.36%, and an F1 score of 97.07%. The image processing method classified ripe strawberries with an accuracy of 91.91%, a false positive rate of 5.03%, and a false negative rate of 14.28%. This study demonstrates the ability to quickly and accurately identify strawberries at different ripeness stages in a facility environment, which can guide selective picking by fruit-picking robots.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
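
A minimal sketch of the centerline red-pixel measure follows; the hue window and the fully-ripe cutoff are illustrative assumptions, not the paper's values:

```python
# Sketch of the centerline red-ratio measure used to quantify ripeness.
import cv2
import numpy as np

def ripeness_ratio(bgr_crop: np.ndarray, centerline: np.ndarray) -> float:
    """centerline: (N, 2) array of (row, col) pixel coordinates."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    h = hsv[centerline[:, 0], centerline[:, 1], 0]
    red = (h <= 10) | (h >= 170)  # red hue wraps around 0 in OpenCV's 0-179 scale
    return float(red.mean())

# classification rule (the 0.9 threshold is an assumption):
# fully_ripe = ripeness_ratio(crop, line) >= 0.9
```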

16 pages, 2959 KiB  
Article
A Novel Method for the Object Detection and Weight Prediction of Chinese Softshell Turtles Based on Computer Vision and Deep Learning
by Yangwen Jin, Xulin Xiao, Yaoqiang Pan, Xinzhao Zhou, Kewei Hu, Hongjun Wang and Xiangjun Zou
Animals 2024, 14(9), 1368; https://doi.org/10.3390/ani14091368 - 1 May 2024
Cited by 2 | Viewed by 1755
Abstract
With the rapid development of the turtle breeding industry in China, the demand for automated turtle sorting is increasing. The automatic sorting of Chinese softshell turtles consists of three main parts: visual recognition, weight prediction, and individual sorting. This paper focuses on the first two and proposes a novel method for the object detection and weight prediction of Chinese softshell turtles. In the sorting process, computer vision is used to estimate the weight of each turtle and classify it by weight. For visual recognition of the turtles' body parts, a color space model is proposed to separate the turtles from the background effectively. Multiple linear regression analysis is applied to model the relationship between the turtles' weight and their morphological parameters, allowing the weight to be estimated accurately. An improved deep learning object detection network is used to extract the features of the plastron and carapace, achieving excellent detection results: the improved network reached an mAP of 96.23%, which meets the requirements for accurately identifying the body parts of Chinese softshell turtles.
(This article belongs to the Section Aquatic Animals)
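
A minimal sketch of the weight-prediction step, assuming carapace length, carapace width, and body height as the morphological parameters (the data values are illustrative, not measured):

```python
# Sketch: multiple linear regression from assumed morphological parameters
# to weight. Numbers below are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[18.2, 14.1, 4.3],   # carapace length, width, body height (cm)
              [21.5, 16.8, 5.1],
              [15.9, 12.4, 3.8],
              [24.0, 18.9, 5.9]])
y = np.array([420.0, 655.0, 300.0, 880.0])  # weight (g)

reg = LinearRegression().fit(X, y)
print("coefficients:", reg.coef_, "intercept:", reg.intercept_)
print("predicted weight (g):", reg.predict([[20.0, 15.5, 4.8]])[0])
```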

22 pages, 11632 KiB  
Article
Assisting the Planning of Harvesting Plans for Large Strawberry Fields through Image-Processing Method Based on Deep Learning
by Chenglin Wang, Qiyu Han, Chunjiang Li, Jianian Li, Dandan Kong, Faan Wang and Xiangjun Zou
Agriculture 2024, 14(4), 560; https://doi.org/10.3390/agriculture14040560 - 1 Apr 2024
Cited by 12 | Viewed by 2557
Abstract
Reasonably formulating the strawberry harvesting sequence can improve the quality of harvested strawberries and reduce decay. Growth information derived from drone image processing can assist strawberry harvesting; however, developing a reliable method for object identification in drone images remains a challenge. This study proposes a deep learning method, comprising an improved YOLOv8 model and a new image-processing framework, that can accurately and comprehensively identify mature strawberries, immature strawberries, and strawberry flowers in drone images. The improved YOLOv8 model uses the shuffle attention block and the VoV–GSCSP block to enhance identification accuracy and detection speed. Environmental-stability-based region segmentation is used to extract the strawberry plant area (fruits, stems, and leaves), and edge extraction with peak detection is used to estimate the number of strawberry plants. From the number of plants and the distribution of mature strawberries, we draw a growth chart of strawberries reflecting the urgency of picking in different regions. Experiments showed that the improved YOLOv8 model achieved an average accuracy of 82.50% for immature strawberries, 87.40% for mature ones, and 82.90% for strawberry flowers in drone images, with an average detection speed of 6.2 ms and a model size of 20.1 MB. The proposed image-processing technique estimated the number of strawberry plants in 100 images: for images captured at a height of 2 m, the bias of the error was 1.1200 and the RMSE was 1.3565; at 3 m, the bias was 2.8400 and the RMSE was 3.0199. The assessment of picking priorities for the various regions of the strawberry field yielded an average accuracy of 80.53% against the judgments of 10 experts. By capturing images throughout the growth cycle, the harvest index can be calculated for different regions, so farmers can not only obtain overall ripeness information by region but also adjust agricultural strategies based on the harvest index to improve both the quantity and quality of fruit set on strawberry plants, and plan the harvesting sequence for high-quality yields.
(This article belongs to the Special Issue Application of Machine Learning and Data Analysis in Agriculture)
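
A minimal sketch of the plant-counting idea (projecting a binary plant mask and detecting peaks); the mask source and the smoothing/peak parameters are assumptions:

```python
# Sketch: count plants as peaks in a column-wise projection of a plant mask.
import numpy as np
from scipy.signal import find_peaks

plant_mask = np.load("plant_mask.npy")  # hypothetical 0/1 segmentation mask
profile = plant_mask.sum(axis=0)        # column-wise count of plant pixels

smoothed = np.convolve(profile, np.ones(15) / 15, mode="same")  # moving average
peaks, _ = find_peaks(smoothed, distance=40, prominence=5)      # one per plant
print("estimated plant count:", len(peaks))
```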

20 pages, 15162 KiB  
Article
ODN-Pro: An Improved Model Based on YOLOv8 for Enhanced Instance Detection in Orchard Point Clouds
by Yaoqiang Pan, Xvlin Xiao, Kewei Hu, Hanwen Kang, Yangwen Jin, Yan Chen and Xiangjun Zou
Agronomy 2024, 14(4), 697; https://doi.org/10.3390/agronomy14040697 - 28 Mar 2024
Cited by 6 | Viewed by 2144
Abstract
In an unmanned orchard, tasks such as seeding, irrigation, health monitoring, and harvesting are carried out by unmanned vehicles, which need to distinguish fruit trees from other objects without relying on human guidance. To address this need, this study proposes an efficient and robust method for fruit tree detection in orchard point cloud maps. Feature extraction is performed on the 3D point cloud to form a two-dimensional feature vector containing the point cloud's three-dimensional information, and tree targets are detected by a customized deep learning network. The study compares the impact of feature extraction methods such as average height, density, PCA, VFH, and CVFH on detection accuracy and determines the most effective method for detecting tree point cloud objects. The ECA attention module and the EVC feature pyramid structure are introduced into the YOLOv8 network; experiments show that the resulting network improves precision, recall, and mean average precision by 1.5%, 0.9%, and 1.2%, respectively. The proposed framework was deployed in unmanned orchards for field testing, where it accurately identified tree targets in orchard point cloud maps, meeting the requirements for constructing semantic orchard maps.
(This article belongs to the Section Precision and Digital Agriculture)
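
One plausible reading of the feature-extraction step, rasterizing the point cloud into a 2D grid whose channels carry average height and point density, is sketched below; the grid extent and resolution are assumptions:

```python
# Sketch: rasterize a 3D point cloud into a 2D height/density feature image
# suitable for a 2D detector. Extent and resolution are assumptions.
import numpy as np

def bev_features(points: np.ndarray, extent: float = 50.0, res: float = 0.25):
    """points: (N, 3) array of x, y, z; returns an (H, W, 2) feature image."""
    size = int(2 * extent / res)
    ix = np.clip(((points[:, 0] + extent) / res).astype(int), 0, size - 1)
    iy = np.clip(((points[:, 1] + extent) / res).astype(int), 0, size - 1)

    height_sum = np.zeros((size, size))
    density = np.zeros((size, size))
    np.add.at(height_sum, (iy, ix), points[:, 2])  # accumulate z per cell
    np.add.at(density, (iy, ix), 1)                # count points per cell

    mean_height = np.divide(height_sum, density,
                            out=np.zeros_like(density), where=density > 0)
    return np.stack([mean_height, density], axis=-1)

cloud = np.random.uniform(-50, 50, (100_000, 3))
print(bev_features(cloud).shape)  # (400, 400, 2)
```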

22 pages, 9068 KiB  
Article
YOLO-BLBE: A Novel Model for Identifying Blueberry Fruits with Different Maturities Using the I-MSRCR Method
by Chenglin Wang, Qiyu Han, Jianian Li, Chunjiang Li and Xiangjun Zou
Agronomy 2024, 14(4), 658; https://doi.org/10.3390/agronomy14040658 - 24 Mar 2024
Cited by 19 | Viewed by 2699
Abstract
Blueberry is among the fruits with high economic gains for orchard farmers. Identifying blueberry fruits at different maturities helps orchard farmers plan pesticide application, estimate yield, and conduct harvest operations efficiently, and vision systems for automated orchard yield estimation have received growing attention for identifying fruit at different maturity stages. However, interfering factors such as varying outdoor illumination, colors similar to the surrounding canopy, imaging distance, and occlusion in natural environments make it a serious challenge to develop reliable visual methods for identifying blueberry fruits at different maturities. This study constructed a YOLO-BLBE (Blueberry) model combined with an innovative I-MSRCR (Improved Multi-Scale Retinex with Color Restoration) method to accurately identify blueberry fruits of different maturities. The color features of blueberry fruit in the original image were enhanced by the I-MSRCR algorithm, which improves on the traditional MSRCR algorithm by adjusting the proportions of the color restoration factors. A GhostNet model embedded with the CA (coordinate attention) module replaced the original backbone network of the YOLOv5s model to form the backbone of YOLO-BLBE, the BIFPN (Bidirectional Feature Pyramid Network) structure was applied in the neck network, and Alpha-EIOU was used as the loss function to determine and filter candidate boxes. The main contributions of this study are as follows: (1) the proposed I-MSRCR algorithm effectively amplifies the color differences between blueberry fruits of different maturities; (2) adding synthesized blueberry images processed by the I-MSRCR algorithm to the training set improves the model's recognition accuracy for blueberries at different maturity levels; (3) the YOLO-BLBE model achieved average identification accuracies of 99.58% for mature, 96.77% for semi-mature, and 98.07% for immature blueberry fruits; and (4) the YOLO-BLBE model has a size of 12.75 MB and an average detection speed of 0.009 s.
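
For context, a minimal sketch of classical MSRCR, the base algorithm that I-MSRCR modifies by re-weighting the color restoration factor; the scales and alpha/beta constants below are conventional retinex values, not the paper's tuned ones:

```python
# Sketch of classical MSRCR (multi-scale retinex with color restoration).
import cv2
import numpy as np

def msrcr(bgr, scales=(15, 80, 250), alpha=125.0, beta=46.0, restore_weight=1.0):
    img = bgr.astype(np.float64) + 1.0
    # multi-scale retinex: average log-ratio against Gaussian surrounds
    msr = np.zeros_like(img)
    for s in scales:
        blur = cv2.GaussianBlur(img, (0, 0), s)
        msr += (np.log(img) - np.log(blur)) / len(scales)
    # color restoration factor: emphasize each channel against total intensity
    crf = beta * (np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True)))
    out = restore_weight * crf * msr
    out = (out - out.min()) / (out.max() - out.min())  # stretch to [0, 1]
    return (out * 255).astype(np.uint8)
```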

18 pages, 6141 KiB  
Article
A Forest Fire Recognition Method Based on Modified Deep CNN Model
by Shaoxiong Zheng, Xiangjun Zou, Peng Gao, Qin Zhang, Fei Hu, Yufei Zhou, Zepeng Wu, Weixing Wang and Shihong Chen
Forests 2024, 15(1), 111; https://doi.org/10.3390/f15010111 - 5 Jan 2024
Cited by 18 | Viewed by 3331
Abstract
Controlling and extinguishing spreading forest fires is a challenging task that often leads to irreversible losses, and large-scale forest fires generate smoke and dust, polluting the environment and posing potential threats to human life. In this study, we introduce a modified deep convolutional neural network model (MDCNN) designed to recognize and localize fire in video imagery using a deep learning-based recognition approach. We apply transfer learning to refine the model and adapt it to the specific task of fire image recognition. To combat the imprecise detection of flame characteristics, which are prone to misidentification, we integrate a deep CNN with an original feature fusion algorithm. We compile a diverse set of fire and non-fire scenarios to construct a training dataset of flame images, which is then used to calibrate the model for enhanced flame detection accuracy. The proposed MDCNN model demonstrates a false alarm rate of 0.563%, a false positive rate of 12.7%, a false negative rate of 5.3%, a recall of 95.4%, and an overall accuracy of 95.8%. The experimental results show that this method significantly improves the accuracy of flame recognition, and the achieved results indicate the model's strong generalization ability.
(This article belongs to the Section Natural Hazards and Risk Management)
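
A minimal sketch of the transfer-learning pattern described above (freeze pretrained features, retrain a binary fire/non-fire head); the ResNet-18 backbone is an assumption, as the paper uses its own MDCNN architecture and fusion algorithm:

```python
# Sketch of the fine-tuning pattern only, not the paper's MDCNN.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)    # new fire / non-fire head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# a standard training loop over a labeled flame-image DataLoader would follow
```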

25 pages, 4882 KiB  
Article
Assimilation and Evaluation of the COSMIC–2 and Sounding Data in Tropospheric Atmospheric Refractivity Forecasting across the Yellow Sea through an Ocean–Atmosphere–Wave Coupled Model
by Sheng Wu, Jiayu Song, Jing Zou, Xiangjun Tian, Zhijin Qiu, Bo Wang, Tong Hu, Zhiqian Li and Zhiyang Zhang
Atmosphere 2023, 14(12), 1776; https://doi.org/10.3390/atmos14121776 - 30 Nov 2023
Viewed by 1423
Abstract
In this study, a forecasting model was developed based on COAWST and an atmospheric 3D EnVar module to investigate how assimilating sounding and COSMIC–2 data affects forecasts of the revised atmospheric refraction. Three groups of 72 h forecasting tests, assimilating different data over a one-month period, were constructed over the Yellow Sea. The results revealed that the bias of the revised atmospheric refraction was lowest when both the sounding and COSMIC–2 data were assimilated: assimilating the hybrid data reduced the mean bias by 6.09–6.28% below an altitude of 10 km, with the greatest reduction below 3000 m. In contrast, the test that assimilated only the sounding data increased the bias at several levels; this increased bias was corrected after the introduction of the COSMIC–2 data, with a mean correction of 1.6 M in the middle and lower troposphere. During the typhoon period, the improvements from assimilation were more significant than usual. The improved forecasts of the revised atmospheric refraction were mainly due to moisture changes in the middle and lower troposphere, while changes in the upper troposphere were influenced by multiple factors.
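
For reference, the standard Smith-Weintraub refractivity and its modified (M-unit) form, which is one common reading of the "revised atmospheric refraction" used above; the paper's exact definition may differ:

```python
# Standard refractivity and its modified (M-unit) form; an assumption about
# what "revised atmospheric refraction" denotes, not the paper's definition.
def modified_refractivity(P_hPa: float, T_K: float, e_hPa: float, h_m: float) -> float:
    N = 77.6 * P_hPa / T_K + 3.73e5 * e_hPa / T_K**2  # refractivity (N-units)
    return N + 0.157 * h_m                            # modified refractivity (M-units)

print(modified_refractivity(1013.0, 288.0, 10.0, 100.0))
```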

21 pages, 18000 KiB  
Article
A Performance Analysis of a Litchi Picking Robot System for Actively Removing Obstructions, Using an Artificial Intelligence Algorithm
by Chenglin Wang, Chunjiang Li, Qiyu Han, Fengyun Wu and Xiangjun Zou
Agronomy 2023, 13(11), 2795; https://doi.org/10.3390/agronomy13112795 - 11 Nov 2023
Cited by 49 | Viewed by 3776
Abstract
Litchi is a highly favored fruit with high economic value, and mechanical automation of litchi picking is a key link in improving the quality and efficiency of litchi harvesting. Our research team has been conducting experiments to develop a vision-based litchi picking robot. In early physical prototype experiments, however, we found that although picking points were successfully located, picking failed because of random obstructions at the picking points. In this study, the physical prototype previously developed by our team was upgraded by integrating a vision system for actively removing obstructions. We propose an artificial intelligence framework for the robot vision system that locates picking points and identifies the obstruction situation at each one, together with an intelligent control algorithm that drives the obstruction removal device according to that situation. Based on the spatial redundancy between a picking point and its obstruction, the feeding posture of the robot was determined. Experiments showed a precision of 88.1% for segmenting litchi fruits and branches, a success rate of 88% for picking-point recognition, an average picking-point localization error of 2.8511 mm, and an overall end-effector feeding success rate of 81.3%. These results show that the upgraded litchi picking robot can effectively remove obstructions.
