Search Results (4)

Search Parameters:
Keywords = Inner-EIoU

20 pages, 4920 KiB  
Article
Martian Skylight Identification Based on the Deep Learning Model
by Lihong Li, Lingli Mu, Wei Zhang, Weihua Dong and Yuqing He
Remote Sens. 2025, 17(15), 2571; https://doi.org/10.3390/rs17152571 - 24 Jul 2025
Viewed by 300
Abstract
As a distinctive type of pit on Mars, skylights are entrances to subsurface lava caves. They are important for studying volcanic activity and potentially preserved water ice, and are also considered candidate sites for future human extraterrestrial bases. Most skylights are identified manually, which is inefficient and highly subjective. Although deep learning methods have recently been used to identify skylights, they face the challenges of few effective samples and low identification accuracy. In this article, 151 positive samples and 920 negative samples drawn from MRO-HiRISE image data were used to create an initial skylight dataset, which contained few positive samples. To augment the initial dataset, StyleGAN2-ADA was used to synthesize additional positive samples, yielding an augmented dataset of 896 samples. On the basis of the augmented skylight dataset, we propose YOLOv9-Skylight for skylight identification, incorporating the Inner-EIoU loss and DySample to enhance localization accuracy and feature extraction ability. Compared with YOLOv9, the precision, recall, and F1 score of YOLOv9-Skylight improved by about 9.1%, 2.8%, and 5.6%, respectively. Compared with other mainstream models such as YOLOv5, YOLOv10, Faster R-CNN, Mask R-CNN, and DETR, YOLOv9-Skylight achieved the highest accuracy (F1 = 92.5%), demonstrating strong performance in skylight identification.
(This article belongs to the Special Issue Remote Sensing and Photogrammetry Applied to Deep Space Exploration)
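
For readers unfamiliar with the Inner-EIoU loss mentioned in this abstract, the sketch below illustrates the general idea in PyTorch: the IoU term is computed on auxiliary boxes scaled by a ratio around each box centre, and the EIoU penalties (centre distance plus width/height gaps, normalised by the smallest enclosing box) are added on top. This is a minimal illustration of the published Inner-IoU/EIoU formulation, not the authors' code; the `ratio` value and the (cx, cy, w, h) box format are assumptions.

```python
import torch

def inner_eiou_loss(pred, target, ratio=0.7, eps=1e-7):
    """Minimal Inner-EIoU sketch for boxes given as (cx, cy, w, h) in the last dim."""
    (px, py, pw, ph), (tx, ty, tw, th) = pred.unbind(-1), target.unbind(-1)

    # Corners of the original boxes.
    px1, px2, py1, py2 = px - pw / 2, px + pw / 2, py - ph / 2, py + ph / 2
    tx1, tx2, ty1, ty2 = tx - tw / 2, tx + tw / 2, ty - th / 2, ty + th / 2

    # Auxiliary "inner" boxes scaled by `ratio` around the same centres
    # (ratio < 1 shrinks the boxes, ratio > 1 enlarges them).
    ipx1, ipx2 = px - pw * ratio / 2, px + pw * ratio / 2
    ipy1, ipy2 = py - ph * ratio / 2, py + ph * ratio / 2
    itx1, itx2 = tx - tw * ratio / 2, tx + tw * ratio / 2
    ity1, ity2 = ty - th * ratio / 2, ty + th * ratio / 2

    # IoU computed on the auxiliary boxes.
    iw = (torch.min(ipx2, itx2) - torch.max(ipx1, itx1)).clamp(0)
    ih = (torch.min(ipy2, ity2) - torch.max(ipy1, ity1)).clamp(0)
    inter = iw * ih
    union = (pw * ph + tw * th) * ratio ** 2 - inter + eps
    inner_iou = inter / union

    # EIoU penalties, normalised by the smallest enclosing box.
    cw = torch.max(px2, tx2) - torch.min(px1, tx1) + eps
    ch = torch.max(py2, ty2) - torch.min(py1, ty1) + eps
    centre_pen = ((px - tx) ** 2 + (py - ty) ** 2) / (cw ** 2 + ch ** 2)
    wh_pen = (pw - tw) ** 2 / cw ** 2 + (ph - th) ** 2 / ch ** 2

    return 1 - inner_iou + centre_pen + wh_pen
```

In the Inner-IoU formulation, a ratio below 1 sharpens the gradient for high-overlap box pairs, while a ratio above 1 helps low-overlap pairs; the value used here is only illustrative.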

25 pages, 16964 KiB  
Article
AAB-YOLO: An Improved YOLOv11 Network for Apple Detection in Natural Environments
by Liusong Yang, Tian Zhang, Shihan Zhou and Jingtan Guo
Agriculture 2025, 15(8), 836; https://doi.org/10.3390/agriculture15080836 - 12 Apr 2025
Cited by 3 | Viewed by 748
Abstract
Apple detection in natural environments is crucial for advancing agricultural automation. However, orchards often employ bagging techniques to protect apples from pests and improve fruit quality, which introduces significant detection challenges due to the varied appearance and occlusion of apples caused by the bags. Additionally, complex and variable natural backgrounds further complicate detection. To address these multifaceted challenges, this study introduces AAB-YOLO, a lightweight apple detection model based on an improved YOLOv11 framework. AAB-YOLO incorporates ADown modules to reduce model complexity, the C3k2_ContextGuided module for enhanced understanding of complex scenes, and the Detect_SEAM module for improved handling of occluded apples. Furthermore, the Inner_EIoU loss function is employed to boost detection accuracy and efficiency. The experimental results demonstrate significant improvements: mAP@50 increases from 0.917 to 0.921, precision rises from 0.948 to 0.951, and recall improves by 1.04%, while the model's parameter count and computational complexity are reduced by 37.7% and 38.1%, respectively. By achieving lightweight performance while maintaining high accuracy, AAB-YOLO effectively meets real-time apple detection needs in natural environments, overcoming the challenges posed by orchard bagging and complex backgrounds.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
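
The ADown module mentioned above is the lightweight downsampling block from the YOLOv9 code base. Below is a simplified, self-contained sketch of that kind of block, assuming standard conv-BN-SiLU layers: the input is lightly average-pooled, the channels are split in half, one half passes through a strided 3x3 convolution and the other through a max-pool followed by a 1x1 convolution, and the results are concatenated. It illustrates the design only and is not the AAB-YOLO implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_silu(c_in, c_out, k, s=1, p=0):
    """Standard conv + batch-norm + SiLU block used throughout YOLO-style models."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, s, p, bias=False),
        nn.BatchNorm2d(c_out),
        nn.SiLU(inplace=True),
    )

class ADown(nn.Module):
    """ADown-style downsampling: split channels, pool each half differently, concatenate.

    Cheaper than running a single strided convolution over all channels.
    """
    def __init__(self, c_in, c_out):
        super().__init__()
        c_half = c_out // 2
        self.cv1 = conv_bn_silu(c_in // 2, c_half, k=3, s=2, p=1)  # strided conv branch
        self.cv2 = conv_bn_silu(c_in // 2, c_half, k=1)            # pointwise branch

    def forward(self, x):
        x = F.avg_pool2d(x, kernel_size=2, stride=1, padding=0)    # light smoothing
        x1, x2 = x.chunk(2, dim=1)                                  # split channels in half
        x1 = self.cv1(x1)                                           # 3x3 stride-2 conv
        x2 = self.cv2(F.max_pool2d(x2, 3, stride=2, padding=1))     # max-pool + 1x1 conv
        return torch.cat((x1, x2), dim=1)
```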

20 pages, 7477 KiB  
Article
A Ship’s Maritime Critical Target Identification Method Based on Lightweight and Triple Attention Mechanisms
by Pu Wang, Shenhua Yang, Guoquan Chen, Weijun Wang, Zeyang Huang and Yuanliang Jiang
J. Mar. Sci. Eng. 2024, 12(10), 1839; https://doi.org/10.3390/jmse12101839 - 14 Oct 2024
Cited by 3 | Viewed by 1525
Abstract
The ability to classify and recognize maritime targets from visual images plays an important role in advancing ship intelligence and digitalization. Current recognition algorithms for common maritime targets, such as buoys, reefs, other ships, and bridges of different colors, face challenges such as incomplete classification, low recognition accuracy, and large numbers of model parameters. To address these issues, this paper proposes a novel maritime target recognition method called DTI-YOLO (DualConv Triple Attention InnerEIOU-You Only Look Once). The method is built around a triple attention mechanism designed to enhance the model's ability to classify and recognize buoys of different colors in the channel while also making the feature extraction network more lightweight. First, a lightweight dual convolution kernel feature extraction layer is constructed using group convolution to replace the Conv structure of YOLOv9 (You Only Look Once Version 9), effectively reducing the number of parameters in the original model. Second, an improved three-branch structure is designed to capture cross-dimensional interactions of input image features. This structure forms a triple attention mechanism that accounts for the mutual dependencies between input channels and spatial positions, allowing attention weights to be calculated for targets such as bridges, buoys, and other ships. Finally, InnerEIoU replaces CIoU in the loss function, optimizing loss regression for targets with large scale differences. To verify the effectiveness of these improvements, the DTI-YOLO algorithm was tested on a self-built dataset of 2300 ship navigation images. The experimental results show that the average accuracy of this method in identifying seven types of targets (buoys, bridges, islands and reefs, container ships, bulk carriers, passenger ships, and other ships) reached 92.1%, with a 12% reduction in the number of parameters. This enhancement improves the model's ability to recognize and distinguish different targets and buoy colors.
(This article belongs to the Section Ocean Engineering)
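
The three-branch attention described above follows the general pattern of triplet attention: each branch rotates the tensor so a different pair of dimensions interacts, pools the gating input down to two channels (max and mean), and re-weights the features with a small convolution. The sketch below shows that generic pattern, not DTI-YOLO's exact module; the 7x7 gate kernel and equal branch weighting are assumptions.

```python
import torch
import torch.nn as nn

class ZPool(nn.Module):
    """Concatenate max- and mean-pooled maps along the channel axis (2 output channels)."""
    def forward(self, x):
        return torch.cat([x.max(1, keepdim=True)[0], x.mean(1, keepdim=True)], dim=1)

class AttentionGate(nn.Module):
    """Pool to 2 channels, convolve, and gate the input with a sigmoid map."""
    def __init__(self, k=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 1, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(1),
        )

    def forward(self, x):
        return x * torch.sigmoid(self.conv(self.pool(x)))

class TripletAttention(nn.Module):
    """Three branches attend over (C, W), (C, H), and (H, W) interactions."""
    def __init__(self):
        super().__init__()
        self.cw, self.ch, self.hw = AttentionGate(), AttentionGate(), AttentionGate()

    def forward(self, x):
        # Rotate so the channel dim interacts with each spatial dim in turn.
        x_cw = self.cw(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)  # swap C and H
        x_ch = self.ch(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)  # swap C and W
        x_hw = self.hw(x)                                          # plain spatial branch
        return (x_cw + x_ch + x_hw) / 3.0
```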

22 pages, 16578 KiB  
Article
YOLOv8-RMDA: Lightweight YOLOv8 Network for Early Detection of Small Target Diseases in Tea
by Rong Ye, Guoqi Shao, Yun He, Quan Gao and Tong Li
Sensors 2024, 24(9), 2896; https://doi.org/10.3390/s24092896 - 1 May 2024
Cited by 27 | Viewed by 3539
Abstract
In order to efficiently identify early tea diseases, an improved YOLOv8 lesion detection method is proposed to address the challenges posed by complex backgrounds in tea disease images, the difficulty of detecting small lesions, and the low recognition rate of similar phenotypic symptoms. The method targets tea leaf blight, tea white spot, tea sooty leaf disease, and tea ring spot. This paper enhances the YOLOv8 network framework by introducing the Receptive Field Concentration-Based Attention Module (RFCBAM) into the backbone network to replace C2f, thereby improving feature extraction capabilities. Additionally, a mixed pooling module (Mixed Pooling SPPF, MixSPPF) is proposed to enhance information blending between features at different levels. In the neck network, the RepGFPN module replaces the C2f module to further enhance feature extraction. The Dynamic Head module is embedded in the detection head, applying multiple attention mechanisms to improve multi-scale spatial location and multi-task perception capabilities. The inner-IoU loss function replaces the original CIoU, improving learning on small lesion samples. Furthermore, the AKConv block replaces the traditional Conv block, allowing arbitrary sampling of targets of various sizes, reducing model parameters, and enhancing disease detection. Experimental results on a self-built dataset demonstrate that the enhanced YOLOv8-RMDA exhibits superior detection capabilities on small target disease areas, achieving an average accuracy of 93.04% in identifying early tea lesions. Compared with Faster R-CNN, MobileNetV2, SSD, YOLOv5, YOLOv7, and YOLOv8, the average precision shows improvements of 20.41%, 17.92%, 12.18%, 12.18%, 10.85%, 7.32%, and 5.97%, respectively. Additionally, the recall rate (R) increased by 15.25% over the lowest-performing Faster R-CNN model and by 8.15% over the top-performing YOLOv8 model. With an FPS of 132, YOLOv8-RMDA meets the requirements for real-time detection, enabling the swift and accurate identification of early tea diseases. This advancement offers a valuable approach for supporting the healthy development of the ecological tea industry in Yunnan.
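
The MixSPPF block named above couples max and average pooling inside an SPPF-style module. Below is one plausible sketch of such a design, assuming a simple weighted blend of the two pooling operators at each SPPF stage; the blend weight `alpha`, kernel size, and channel widths are illustrative guesses rather than the paper's values.

```python
import torch
import torch.nn as nn

class MixSPPF(nn.Module):
    """SPPF-style block that blends max and average pooling at each stage.

    Standard SPPF chains three same-size max-pools and concatenates the
    intermediate maps; here each stage is a weighted mix of max and average
    pooling, one reading of "Mixed Pooling SPPF".
    """
    def __init__(self, c_in, c_out, k=5, alpha=0.5):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = nn.Sequential(nn.Conv2d(c_in, c_mid, 1, bias=False),
                                 nn.BatchNorm2d(c_mid), nn.SiLU())
        self.cv2 = nn.Sequential(nn.Conv2d(c_mid * 4, c_out, 1, bias=False),
                                 nn.BatchNorm2d(c_out), nn.SiLU())
        self.maxp = nn.MaxPool2d(k, stride=1, padding=k // 2)
        self.avgp = nn.AvgPool2d(k, stride=1, padding=k // 2)
        self.alpha = alpha  # weight on the max-pool branch

    def forward(self, x):
        x = self.cv1(x)
        ys = [x]
        for _ in range(3):  # three pooling stages, as in SPPF
            y = self.alpha * self.maxp(ys[-1]) + (1 - self.alpha) * self.avgp(ys[-1])
            ys.append(y)
        return self.cv2(torch.cat(ys, dim=1))
```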
