Search Results (7)

Search Parameters:
Keywords = VarifocalNet

16 pages, 8310 KB  
Article
An Economically Viable Minimalistic Solution for 3D Display Discomfort in Virtual Reality Headsets Using Vibrating Varifocal Fluidic Lenses
by Tridib Ghosh, Mohit Karkhanis and Carlos H. Mastrangelo
Virtual Worlds 2025, 4(3), 38; https://doi.org/10.3390/virtualworlds4030038 - 26 Aug 2025
Viewed by 1019
Abstract
Herein, we report a USB-powered VR-HMD prototype integrated with our 33 mm aperture varifocal liquid lenses and electronic drive components, all assembled in a conventional VR-HMD form factor. In this volumetric-display-based VR system, a sequence of virtual images is rapidly flash-projected at different plane depths in front of the observer, synchronized with the correct accommodation provided by the varifocal lenses for depth-matched focusing at a chosen sweep frequency. This projection mechanism helps resolve the vergence-accommodation conflict (VAC) present in conventional fixed-depth VR. Additionally, the system can correct refractive errors such as myopia and hyperopia for prescription users and does not require any eye-tracking system. We experimentally demonstrate that these lenses can vibrate at frequencies approaching 100 Hz, and we report the frequency response of the varifocal lenses and their real-time focal characteristics as a function of drive frequency. When integrated with the prototype's 120 fps VR display system, the lenses produce a net diopter change of 2.3 D at a sweep frequency of 45 Hz while operating at ~70% of their maximum actuation voltage. The components add a total weight of around 50 g to the off-the-shelf VR set, making this a cost-effective, lightweight, minimal solution.
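
A quick sanity check on the reported numbers: dioptric power is the reciprocal of distance in meters, so a 2.3 D sweep spans the depth range sketched below. The far endpoint at optical infinity is an assumption for illustration; the abstract does not state the sweep endpoints.

```latex
% Depth range implied by a 2.3 D accommodation sweep, assuming
% (not stated in the abstract) a far endpoint at optical infinity:
\Delta D = 2.3\,\mathrm{D} \;\Longrightarrow\;
d_{\mathrm{near}} = \frac{1}{\Delta D} \approx 0.43\,\mathrm{m}, \qquad
d_{\mathrm{far}} = \infty
```

That is, the flash-projected planes can cover virtual content from roughly arm's length out to infinity.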

16 pages, 6553 KB  
Article
Cucumber Leaf Segmentation Based on Bilayer Convolutional Network
by Tingting Qian, Yangxin Liu, Shenglian Lu, Linyi Li, Xiuguo Zheng, Qingqing Ju, Yiyang Li, Chun Xie and Guo Li
Agronomy 2024, 14(11), 2664; https://doi.org/10.3390/agronomy14112664 - 12 Nov 2024
Cited by 1 | Viewed by 1798
Abstract
When monitoring crop growth using top-down images of plant canopies, leaves in agricultural fields appear very dense and significantly overlap each other. Moreover, the images can be affected by external conditions such as the background environment and light intensity, impacting the effectiveness of image segmentation. To address the challenge of segmenting dense and overlapping plant leaves under natural lighting conditions, this study employed a Bilayer Convolutional Network (BCNet) for accurate leaf segmentation across various lighting environments. The major contributions of this study are as follows: (1) Utilized Fully Convolutional One-Stage Object Detection (FCOS) for plant leaf detection, incorporating ResNet-50 with the Convolutional Block Attention Module (CBAM) and a Feature Pyramid Network (FPN) to enhance Region of Interest (RoI) feature extraction from canopy top-view images. (2) Extracted the sub-region of the RoI based on the position of the detection box, using this region as the input to BCNet to ensure precise segmentation. (3) Performed instance segmentation of canopy top-view images using BCNet, improving segmentation accuracy. (4) Applied the Varifocal Loss function to improve the classification loss in FCOS, leading to better performance metrics. Experimental results on cucumber canopy top-view images captured in glass and plastic greenhouse environments show that our method is highly effective. For cucumber leaves at different growth stages and under various lighting conditions, the Precision, Recall, and Average Precision (AP) for object recognition are 97%, 94%, and 96.57%, respectively; for instance segmentation, they are 87%, 83%, and 84.71%, respectively. Our algorithm outperforms commonly used deep learning algorithms such as Faster R-CNN, Mask R-CNN, YOLOv4, and PANet, showcasing its superior capability in complex agricultural settings. These results demonstrate the potential of our method for accurate recognition and segmentation of highly overlapping leaves in diverse agricultural environments, contributing significantly to the application of deep learning in smart agriculture.
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture—2nd Edition)
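
Several results in this listing, this one included, reuse the Varifocal Loss from the VarifocalNet paper the search keys on. For reference, here is a minimal NumPy sketch of the published formulation (Zhang et al., CVPR 2021); the individual papers above and below may tune alpha and gamma differently:

```python
import numpy as np

def varifocal_loss(p, q, alpha=0.75, gamma=2.0, eps=1e-12):
    """Varifocal Loss: p is the predicted IoU-aware classification score
    (IACS); q is the target -- the IoU with the ground truth for foreground
    points, 0 for background points."""
    p = np.clip(p, eps, 1.0 - eps)
    # Foreground: binary cross-entropy weighted by the target q, so
    # high-quality positives are up-weighted rather than down-weighted.
    loss_fg = -q * (q * np.log(p) + (1.0 - q) * np.log(1.0 - p))
    # Background: focally down-weighted by p**gamma and scaled by alpha.
    loss_bg = -alpha * p**gamma * np.log(1.0 - p)
    return np.where(q > 0, loss_fg, loss_bg)

# Example: one high-IoU positive (q = 0.8) and one background point (q = 0).
print(varifocal_loss(np.array([0.9, 0.1]), np.array([0.8, 0.0])))
```

The asymmetric treatment of positives and negatives is what distinguishes it from Focal Loss, which down-weights both.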

17 pages, 7240 KB  
Article
YOLO-BFRV: An Efficient Model for Detecting Printed Circuit Board Defects
by Jiaxin Liu, Bingyu Kang, Chao Liu, Xunhui Peng and Yan Bai
Sensors 2024, 24(18), 6055; https://doi.org/10.3390/s24186055 - 19 Sep 2024
Cited by 5 | Viewed by 3508
Abstract
The small area of a printed circuit board (PCB) results in densely distributed defects, leading to lower detection accuracy, which in turn impacts the safety and stability of the circuit board. This paper proposes a new YOLO-BFRV network model, based on an improved YOLOv8 framework, to identify PCB defects more efficiently and accurately. First, a bidirectional feature pyramid network (BiFPN) is introduced to expand the receptive field of each feature level and enrich the semantic information, improving feature extraction. Second, the YOLOv8 backbone is replaced with the lightweight FasterNet, reducing the computational load while improving the detection accuracy for minor defects. Subsequently, the high-speed re-parameterized detection head (RepHead) reduces inference complexity and boosts detection speed without compromising accuracy. Finally, the Varifocal Loss is employed to enhance detection accuracy for densely distributed PCB defects. Experimental results demonstrate that the improved model increases mAP by 4.12% compared to the baseline YOLOv8s model, boosts detection speed by 45.89%, and reduces GFLOPs by 82.53%, confirming the superiority of the presented algorithm.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
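
The RepHead mentioned above builds on structural re-parameterization in the RepVGG style: train with parallel convolution branches, then fold them algebraically into a single kernel for inference. The paper's exact head design is not described in the abstract, so this NumPy sketch shows only the generic fusion step, assuming both branches share stride and padding:

```python
import numpy as np

def fuse_3x3_and_1x1(k3, b3, k1, b1):
    """Fold a parallel 1x1 convolution branch into a 3x3 branch so that
    inference runs a single convolution.
    k3: (C_out, C_in, 3, 3), b3: (C_out,); k1: (C_out, C_in, 1, 1), b1: (C_out,)."""
    # Zero-pad the 1x1 kernel to 3x3 so both kernels share one shape.
    k1_padded = np.pad(k1, ((0, 0), (0, 0), (1, 1), (1, 1)))
    # Convolution is linear, so parallel branches sum kernel-wise.
    return k3 + k1_padded, b3 + b1
```

Because the fused kernel reproduces the two-branch output exactly, accuracy is unchanged while the inference graph gets cheaper, which is the kind of trade a reported speed gain like 45.89% relies on.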

15 pages, 5989 KB  
Article
Instance Segmentation of Lentinus edodes Images Based on YOLOv5seg-BotNet
by Xingmei Xu, Xiangyu Su, Lei Zhou, Helong Yu and Jian Zhang
Agronomy 2024, 14(8), 1808; https://doi.org/10.3390/agronomy14081808 - 16 Aug 2024
Cited by 1 | Viewed by 1531
Abstract
The shape and quantity of Lentinus edodes (shiitake) fruiting bodies significantly affect their quality and yield, so accurate and rapid segmentation of these fruiting bodies is crucial for quality grading and yield prediction. This study proposed YOLOv5seg-BotNet, an instance segmentation model for Lentinus edodes, to explore its application in the mushroom industry. First, the backbone network was replaced with BoTNet, substituting global self-attention modules for the spatial convolutions in the backbone to enhance feature extraction. Subsequently, PANet was adopted to integrate features at various scales, effectively handling Lentinus edodes images with complex backgrounds. Finally, the Varifocal Loss function was employed to adjust the weights of different samples, addressing missed segmentation and mis-segmentation. The enhanced model improved Precision, Recall, Mask_AP, F1-Score, and FPS to 97.58%, 95.74%, 95.90%, 96.65%, and 32.86 frames per second, respectively, representing increases of 2.37%, 4.55%, 4.56%, 3.50%, and 2.61% over the original model. The model thus achieved dual improvements in segmentation accuracy and speed, exhibiting excellent detection and segmentation performance on Lentinus edodes fruiting bodies. This study provides a technical foundation for applying image-based detection and decision-making to mushroom production, including quality grading and intelligent harvesting.
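
The BoTNet substitution described above swaps a spatial convolution for global multi-head self-attention over every feature-map position. A minimal PyTorch sketch of that substitution follows (hypothetical class name; the real BoTNet block also adds relative position encodings, omitted here):

```python
import torch
import torch.nn as nn

class GlobalSelfAttention2d(nn.Module):
    """Stand-in for a spatial convolution: each output position attends to
    all H*W positions. channels must be divisible by heads."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C): pixels as tokens
        out, _ = self.attn(tokens, tokens, tokens)  # global self-attention
        return out.transpose(1, 2).reshape(b, c, h, w)

# Example: attend over an 8x8 feature map with 64 channels.
y = GlobalSelfAttention2d(64)(torch.randn(2, 64, 8, 8))
```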

21 pages, 19045 KB  
Article
Research on Remote-Sensing Identification Method of Typical Disaster-Bearing Body Based on Deep Learning and Spatial Constraint Strategy
by Lei Wang, Yingjun Xu, Qiang Chen, Jidong Wu, Jianhui Luo, Xiaoxuan Li, Ruyi Peng and Jiaxin Li
Remote Sens. 2024, 16(7), 1161; https://doi.org/10.3390/rs16071161 - 27 Mar 2024
Cited by 7 | Viewed by 1890
Abstract
The census and management of hazard-bearing entities, along with the integrity of data quality, form crucial foundations for disaster risk assessment and zoning. To address the feature confusion prevalent in single remotely sensed image recognition methods, this paper introduces a novel method, Spatially Constrained Deep Learning (SCDL), that combines deep learning with spatial constraint strategies for the extraction of disaster-bearing bodies, focusing on dams as a typical example. The methodology involves the creation of a dam dataset from a database of dams, followed by the training of YOLOv5, VarifocalNet, Faster R-CNN, and Cascade R-CNN models. These models are trained separately, and high-confidence dam locations are extracted through parameter thresholding. Furthermore, three spatial constraint strategies are employed to mitigate the impact of other factors, particularly confusing features, in the background region. To assess the method's applicability and efficiency, Qinghai Province serves as the experimental area, with dam images from the Google Earth Pro database used as validation samples. The experimental results demonstrate that the recognition accuracy of SCDL reaches 94.73%, effectively addressing interference from background factors. Notably, the proposed method identifies six dams not recorded in the GOODD database, while also detecting six dams in the database that had previously gone unverified. Additionally, four dams mislocated in the database are corrected, contributing to the enhancement and supplementation of the global dam geo-reference database and providing robust support for disaster risk assessment. In conclusion, leveraging open geographic data products, the comprehensive framework presented in this paper, encompassing deep learning object detection and spatial constraint strategies, enables more efficient and accurate intelligent retrieval of disaster-bearing bodies, specifically dams. The findings offer valuable insights for future advancements in related fields.
(This article belongs to the Special Issue Deep Learning for Remote Sensing and Geodata)
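
The abstract names neither the three spatial constraint strategies nor the thresholds, so the sketch below is a hypothetical illustration of the general idea only: keep high-confidence detections, then reject any candidate dam that is not near a mapped watercourse. Every name and value here is an assumption.

```python
def filter_dam_detections(detections, water_points,
                          conf_thresh=0.9, max_dist_deg=0.01):
    """Hypothetical SCDL-style post-filter (NOT the paper's exact method):
    (1) parameter thresholding keeps high-confidence detections;
    (2) a spatial constraint keeps only detections near known water,
        since dams must sit on watercourses.
    detections:   list of (lon, lat, confidence)
    water_points: list of (lon, lat) samples along mapped watercourses"""
    kept = []
    for lon, lat, conf in detections:
        if conf < conf_thresh:
            continue  # parameter thresholding
        near_water = any(
            (lon - wx) ** 2 + (lat - wy) ** 2 <= max_dist_deg ** 2
            for wx, wy in water_points
        )
        if near_water:  # spatial constraint
            kept.append((lon, lat, conf))
    return kept
```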

20 pages, 85581 KB  
Article
Multi-Scale Object Detection Model for Autonomous Ship Navigation in Maritime Environment
by Zeyuan Shao, Hongguang Lyu, Yong Yin, Tao Cheng, Xiaowei Gao, Wenjun Zhang, Qianfeng Jing, Yanjie Zhao and Lunping Zhang
J. Mar. Sci. Eng. 2022, 10(11), 1783; https://doi.org/10.3390/jmse10111783 - 19 Nov 2022
Cited by 29 | Viewed by 6224
Abstract
Accurate detection of sea-surface objects is vital for the safe navigation of autonomous ships. With the continuous development of artificial intelligence, electro-optical (EO) sensors such as video cameras are used to supplement marine radar, improving the detection of objects that are small or produce weak radar signatures. In this study, we propose an enhanced convolutional neural network (CNN) named VarifocalNet* that improves object detection in harsh maritime environments. Specifically, the feature representation and learning ability of the VarifocalNet model are improved by using a deformable convolution module, redesigning the loss function, introducing a soft non-maximum suppression algorithm, and incorporating multi-scale prediction. These strategies improve the accuracy and reliability of our CNN-based detection results under complex sea conditions such as turbulent waves, sea fog, and water reflection. Experimental results under different maritime conditions show that our method significantly outperforms similar methods (SSD, YOLOv3, RetinaNet, Faster R-CNN, Cascade R-CNN) in detection accuracy and robustness for small objects. The maritime obstacle detection results were obtained under harsh imaging conditions to demonstrate the performance of our network model.
(This article belongs to the Special Issue Application of Advanced Technologies in Maritime Safety)
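
The soft non-maximum suppression referred to above is presumably the standard Gaussian-decay soft-NMS of Bodla et al. (2017), sketched below; the paper's sigma and score threshold are not given, so these values are illustrative:

```python
import numpy as np

def box_iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-12)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: rather than deleting boxes that overlap a
    higher-scoring box, decay their scores by exp(-iou**2 / sigma).
    Returns indices of kept boxes, best first."""
    scores = scores.astype(float).copy()
    idxs, keep = list(range(len(scores))), []
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < score_thresh:
            break
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            scores[i] *= np.exp(-box_iou(boxes[best], boxes[i]) ** 2 / sigma)
    return keep
```

Small, partially occluded objects that plain NMS would suppress outright survive with reduced scores, which is why soft-NMS suits crowded maritime scenes.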

13 pages, 2265 KB  
Article
Detection of Weeds Growing in Alfalfa Using Convolutional Neural Networks
by Jie Yang, Yundi Wang, Yong Chen and Jialin Yu
Agronomy 2022, 12(6), 1459; https://doi.org/10.3390/agronomy12061459 - 17 Jun 2022
Cited by 29 | Viewed by 3440
Abstract
Alfalfa (Medicago sativa L.) is used as a high-nutrient feed for animals. Weeds are a significant challenge affecting alfalfa production, and although weeds are unevenly distributed, herbicides are broadcast-applied in alfalfa fields. In this research, object detection convolutional neural networks, including Faster R-CNN, VarifocalNet (VFNet), and You Only Look Once Version 3 (YOLOv3), were used both to detect all weed species indiscriminately (1-class) and to discriminate between broadleaves and grasses (2-class). YOLOv3 outperformed the other object detection networks in detecting grass weeds. The performance of image classification networks (GoogLeNet and VGGNet) and object detection networks (Faster R-CNN and YOLOv3) for detecting broadleaves and grasses was then compared: GoogLeNet and VGGNet (F1 scores ≥ 0.98) outperformed Faster R-CNN and YOLOv3 (F1 scores ≤ 0.92). Training the networks to classify individual broadleaf and grass weed species did not improve weed detection performance. VGGNet was the most effective neural network tested (F1 scores ≥ 0.99) for detecting broadleaf and grass weeds growing in alfalfa. Future research will integrate VGGNet into the machine vision subsystem of smart sprayers for site-specific herbicide applications.
