Search Results (3)

Search Parameters:
Authors = Jitong Cai

14 pages, 2193 KiB  
Article
Distance Measurement and Data Analysis for Civil Aviation at 1000 Frames per Second Using Single-Photon Detection Technology
by Yiming Shan, Xinyu Pang, Huan Wang, Jitong Zhao, Shuai Yang, Yunlong Li, Guicheng Xu, Lihua Cai, Zhenyu Liu, Xiaoming Wang and Yi Yu
Sensors 2025, 25(13), 3918; https://doi.org/10.3390/s25133918 - 24 Jun 2025
Abstract
During high-speed maneuvers, aircraft experience rapid distance changes, necessitating high-frame-rate ranging for accurate characterization. However, existing optical ranging technologies often lack simplicity, affordability, and sufficient frame rates. While dual-station triangulation enables high-frame-rate distance calculation via geometry, it suffers from complex and costly deployment. Conventional laser rangefinders are limited by low repetition rates. Single-photon ranging, using high-frequency low-energy pulses and detecting single reflected photons, offers a promising alternative. This study presents a kilohertz-level single-photon ranging system validated through civil aviation field tests. At 1000 Hz, relative distance, velocity, and acceleration were successfully captured. Simulating lower frame rates (100 Hz, 50 Hz, 10 Hz) via misalignment merging revealed standard deviations of 0.1661 m, 0.2361 m, and 0.2683 m, respectively, indicating that higher frame rates enhance distance measurement reproducibility. Error analysis against the 1000 Hz baseline further confirms that high-frame-rate ranging improves precision when monitoring high-speed maneuvers.
(This article belongs to the Section Optical Sensors)
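The frame-rate simulation described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `downsample_ranges` helper, the synthetic noise model, and all numeric parameters are assumptions.

```python
import random
import statistics

def downsample_ranges(samples_1khz, factor):
    """Merge consecutive 1000 Hz range samples into lower-rate frames
    by averaging each full block of `factor` samples
    (e.g. factor=10 -> 100 Hz, factor=100 -> 10 Hz)."""
    return [statistics.mean(samples_1khz[i:i + factor])
            for i in range(0, len(samples_1khz) - factor + 1, factor)]

# Synthetic example: a target receding at 200 m/s sampled at 1 kHz for 1 s,
# with small Gaussian measurement noise (values are illustrative only).
random.seed(0)
ranges = [5000.0 + 0.2 * t + random.gauss(0.0, 0.05) for t in range(1000)]

for factor, rate_hz in [(10, 100), (20, 50), (100, 10)]:
    merged = downsample_ranges(ranges, factor)
    print(f"{rate_hz} Hz -> {len(merged)} merged frames")
```

Comparing the spread of each merged series against a reference trajectory would then mimic the paper's reproducibility comparison across frame rates.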

22 pages, 5880 KiB  
Article
DFCANet: A Novel Lightweight Convolutional Neural Network Model for Corn Disease Identification
by Yang Chen, Xiaoyulong Chen, Jianwu Lin, Renyong Pan, Tengbao Cao, Jitong Cai, Dianzhi Yu, Tomislav Cernava and Xin Zhang
Agriculture 2022, 12(12), 2047; https://doi.org/10.3390/agriculture12122047 - 29 Nov 2022
Cited by 35
Abstract
The identification of corn leaf diseases in a real field environment faces several difficulties, such as complex background disturbances, variations and irregularities in the lesion areas, and large intra-class and small inter-class disparities. Traditional Convolutional Neural Network (CNN) models have a low recognition accuracy and a large number of parameters. In this study, a lightweight corn disease identification model called DFCANet (Double Fusion block with Coordinate Attention Network) is proposed. DFCANet consists mainly of two components: the dual feature fusion with coordinate attention (DFCA) and Down-Sampling (DS) modules. The DFCA block contains dual feature fusion and Coordinate Attention (CA) modules. To completely fuse the shallow and deep features, these features are fused twice. The CA module suppresses background noise and focuses on the diseased area. In addition, the DS module is used for down-sampling; it reduces information loss by expanding the feature channel dimension and using depthwise convolution. The results show that DFCANet achieves an average recognition accuracy of 98.47%. It is more efficient at identifying corn leaf diseases in real scene images than VGG16 (96.63%), ResNet50 (93.27%), EfficientNet-B0 (97.24%), ConvNeXt-B (94.18%), DenseNet121 (95.71%), MobileNet-V2 (95.41%), MobileNetV3-Large (96.33%), and ShuffleNetV2-1.0× (94.80%). Moreover, the model's parameters and FLOPs are 1.91 M and 309.1 M, respectively, which are lower than those of heavyweight network models and most lightweight network models. In general, this study provides a novel, lightweight, and efficient convolutional neural network model for corn disease identification.
(This article belongs to the Special Issue Digital Innovations in Agriculture)
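The abstract credits part of the DS module's efficiency to depthwise convolution. A back-of-the-envelope parameter count shows why depthwise (separable) convolution is so much cheaper than a standard convolution; this is a generic sketch, not taken from DFCANet, and the channel sizes are arbitrary.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (no bias):
    one k x k x c_in filter per output channel."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one k x k filter per input channel)
    followed by a 1 x 1 pointwise convolution to mix channels."""
    return c_in * k * k + c_in * c_out

# Example: 64 -> 128 channels with a 3 x 3 kernel (illustrative sizes).
std = conv_params(64, 128, 3)
dws = depthwise_separable_params(64, 128, 3)
print(std, dws, round(std / dws, 1))  # standard conv is ~8x larger here
```

The same accounting at each layer is why lightweight models like DFCANet can stay under 2 M parameters while heavyweight CNNs run into tens of millions.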

17 pages, 7466 KiB  
Article
GrapeNet: A Lightweight Convolutional Neural Network Model for Identification of Grape Leaf Diseases
by Jianwu Lin, Xiaoyulong Chen, Renyong Pan, Tengbao Cao, Jitong Cai, Yang Chen, Xishun Peng, Tomislav Cernava and Xin Zhang
Agriculture 2022, 12(6), 887; https://doi.org/10.3390/agriculture12060887 - 20 Jun 2022
Cited by 63
Abstract
Most convolutional neural network (CNN) models have various difficulties in identifying crop diseases owing to morphological and physiological changes in crop tissues and cells. Furthermore, a single crop disease can show different symptoms; the differences between early and late disease stages typically include the affected area and its color, which poses additional difficulties for CNN models. Here, we propose a lightweight CNN model called GrapeNet for the identification of different symptom stages of specific grape diseases. The main components of GrapeNet are residual blocks, residual feature fusion blocks (RFFBs), and convolutional block attention modules. The residual blocks are used to deepen the network and extract rich features. To alleviate the CNN performance degradation associated with a large number of hidden layers, we designed an RFFB module based on the residual block. It fuses the average-pooled feature map before the residual block input and the high-dimensional feature maps after the residual block output by a concatenation operation, thereby achieving feature fusion at different depths. In addition, a convolutional block attention module (CBAM) is introduced after each RFFB to extract valid disease information. The identification accuracy was 82.99%, 84.01%, 82.74%, 84.77%, 80.96%, 82.74%, 80.96%, 83.76%, and 86.29% for GoogLeNet, Vgg16, ResNet34, DenseNet121, MobileNetV2, MobileNetV3_large, ShuffleNetV2_×1.0, EfficientNetV2_s, and GrapeNet, respectively. GrapeNet achieved the best classification performance among the compared models with only 2.15 million parameters. Compared with DenseNet121, which has the highest accuracy among the classical models, GrapeNet has 4.81 million fewer parameters and trains about twice as fast. Moreover, the Grad-CAM visualization results indicate that introducing CBAM emphasizes disease information and suppresses irrelevant information. Overall, GrapeNet is useful for the automatic identification of grape leaf diseases.
(This article belongs to the Special Issue Internet and Computers for Agriculture)
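The RFFB fusion described in the abstract (average-pool the block input, then concatenate it with the residual block output along the channel axis) can be followed with simple shape bookkeeping. This sketch is a hypothetical illustration: the stride-2 pooling assumption and all channel/spatial sizes are made up for the example, not taken from GrapeNet.

```python
def rffb_output_shape(in_shape, res_out_channels):
    """Shape bookkeeping for the RFFB idea: the block input is
    average-pooled to the residual output's spatial size (assumed here
    to be a stride-2 stage), then concatenated with the residual output
    along the channel axis. Shapes are (channels, height, width)."""
    c_in, h, w = in_shape
    pooled = (c_in, h // 2, w // 2)            # avg-pooled shallow features
    res_out = (res_out_channels, h // 2, w // 2)  # deep residual features
    # Concatenation adds channel counts; spatial dims must already match.
    return (pooled[0] + res_out[0], h // 2, w // 2)

# Example: a 64-channel 56x56 input entering a residual stage that
# outputs 128 channels (illustrative sizes).
print(rffb_output_shape((64, 56, 56), 128))  # (192, 28, 28)
```

Because the shallow features enter by concatenation rather than addition, they need no channel-matching projection, which keeps the fusion cheap in parameters.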
