Search Results (5)

Search Parameters:
Keywords = binocular competition

18 pages, 1514 KB  
Article
LBT Italia: Current Achievements and Future Directions
by Silvia Tosi, Ester Marini, Felice Cusano, Andrea Rossi, Roberto Speziali and Roberta Carini
Instruments 2025, 9(4), 24; https://doi.org/10.3390/instruments9040024 - 21 Oct 2025
Viewed by 926
Abstract
The Large Binocular Telescope (LBT) is a world-leading astronomical observatory, where the Italian partnership has played an important role in increasing the telescope’s productivity, both through an optimized observing strategy and through peer-reviewed publications that are well recognized by the international astronomical community. This manuscript provides an updated overview of the active and past instruments at LBT, together with key usage statistics. In particular, we analyze the operational performance recorded in the LBT Italia night logs during INAF’s observing time and assess the scientific impact of each instrument. Between 2014 and 2025, LBT Italia produced an average of 14 refereed publications per year, based on an annual average of 311 h of on-sky time. This corresponds to approximately 2.2 nights of telescope time per publication. The results of this analysis are placed in an international context to evaluate the competitiveness of LBT, and we outline future perspectives for scientific exploitation.
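As a quick sanity check on the productivity figure quoted above, the minimal sketch below re-derives the roughly 2.2 nights per publication from the two stated averages; the 10-hour night length is an assumption made here for illustration, not a value given in the abstract.

```python
# Re-derivation of the ~2.2 nights-per-publication figure quoted in the abstract.
AVG_ON_SKY_HOURS_PER_YEAR = 311    # stated in the abstract
AVG_REFEREED_PUBS_PER_YEAR = 14    # stated in the abstract
HOURS_PER_NIGHT = 10.0             # assumption for illustration only

nights_per_year = AVG_ON_SKY_HOURS_PER_YEAR / HOURS_PER_NIGHT
nights_per_publication = nights_per_year / AVG_REFEREED_PUBS_PER_YEAR
print(f"~{nights_per_publication:.1f} nights of telescope time per publication")  # ~2.2
```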

22 pages, 16339 KB  
Article
MFSM-Net: Multimodal Feature Fusion for the Semantic Segmentation of Urban-Scale Textured 3D Meshes
by Xinjie Hao, Jiahui Wang, Wei Leng, Rongting Zhang and Guangyun Zhang
Remote Sens. 2025, 17(9), 1573; https://doi.org/10.3390/rs17091573 - 28 Apr 2025
Viewed by 1441
Abstract
The semantic segmentation of textured 3D meshes is a critical step in constructing city-scale realistic 3D models. Compared to colored point clouds, textured 3D meshes have the advantage of high-resolution texture image patches embedded on each mesh face. However, existing studies predominantly focus on their geometric structures, with limited utilization of these high-resolution textures. Inspired by the binocular perception of humans, this paper proposes a multimodal feature fusion network based on 3D geometric structures and 2D high-resolution texture images for the semantic segmentation of textured 3D meshes. Methodologically, the 3D feature extraction branch computes the centroid coordinates and face normals of mesh faces as initial 3D features, followed by a multi-scale Transformer network to extract high-level 3D features. The 2D feature extraction branch employs orthographic views of city scenes captured from a top-down perspective and uses a U-Net to extract high-level 2D features. To align features across 2D and 3D modalities, a Bridge view-based alignment algorithm is proposed, which visualizes the 3D mesh indices to establish pixel-level associations with orthographic views, achieving the precise alignment of multimodal features. Experimental results demonstrate that the proposed method achieves competitive performance in city-scale textured 3D mesh semantic segmentation, validating the effectiveness and potential of the cross-modal fusion strategy.
(This article belongs to the Special Issue Urban Planning Supported by Remote Sensing Technology II)
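The initial 3D features described above (per-face centroid coordinates and face normals) are straightforward to compute; the sketch below is a minimal NumPy illustration of that step alone, with hypothetical array layouts, and is not the authors' implementation of the full network or of the Bridge view-based alignment.

```python
import numpy as np

def initial_face_features(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Return per-face centroids and unit normals stacked as an (F, 6) array.

    vertices: (V, 3) float array of vertex positions.
    faces:    (F, 3) int array of vertex indices for triangular faces.
    """
    tri = vertices[faces]                       # (F, 3, 3) triangle corner coordinates
    centroids = tri.mean(axis=1)                # (F, 3) face centroids
    normals = np.cross(tri[:, 1] - tri[:, 0],   # (F, 3) unnormalized face normals
                       tri[:, 2] - tri[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    return np.concatenate([centroids, normals], axis=1)

# Example: a single triangle in the z = 0 plane has normal (0, 0, 1)
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
print(initial_face_features(verts, faces))
```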

24 pages, 3316 KB  
Article
Exploring Binocular Visual Attention by Presenting Rapid Dichoptic and Dioptic Series
by Manuel Moreno-Sánchez, Elton H. Matsushima and Jose Antonio Aznar-Casanova
Brain Sci. 2024, 14(5), 518; https://doi.org/10.3390/brainsci14050518 - 20 May 2024
Cited by 1 | Viewed by 1927
Abstract
This study addresses an issue in attentional distribution in a binocular visual system using RSVP tasks under Attentional Blink (AB) experimental protocols. In Experiment 1, we employed dichoptic RSVP to verify whether, under interocular competition, attention may be captured by a monocular channel. Experiment 2 was a control experiment in which a monoptic RSVP, viewed by both eyes or only one, determined whether the results of the monocular condition in Experiment 1 were due to an allocation of attention to one eye. Experiment 3 was also a control experiment, designed to determine whether the results of Experiment 1 were due to the effect of interocular competition or to diminished visual contrast. Results from Experiment 1 revealed that dichoptic presentations caused a delay in the type stage of Wyble’s eSTST model, postponing the subsequent tokenization process. The delay in monocular conditions may be further explained by visual attenuation due to fusion of the target with an empty frame. Experiment 2 provided evidence of attentional allocation to monocular channels when forced by eye occlusion. Experiment 3 showed that monocular performance in Experiment 1 differed significantly from conditions with interocular competition. While both experiments revealed similar performance in monocular conditions, rivalry conditions exhibited lower detection rates, suggesting that competing stimuli were not responsible for the Experiment 1 results. These findings highlight the differences between dichoptic and monoptic presentations of stimuli, particularly in the AB effect, which appears attenuated or absent in dichoptic settings. Furthermore, the results suggest that monoptic presentation and binocular fusion were necessary conditions for attentional allocation.
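To make the trial structure concrete, the toy sketch below generates one RSVP stream with two targets separated by a given lag and routes items either to alternating eyes (dichoptic) or to both eyes (dioptic); every parameter name and value here is hypothetical and only illustrates the kind of design the abstract describes, not the authors' stimulus code.

```python
import random
import string

def build_rsvp_trial(stream_len=18, t1_pos=5, lag=3, dichoptic=True, seed=0):
    """Toy RSVP trial: letter distractors with targets T1 and T2 `lag` items apart."""
    rng = random.Random(seed)
    letters = rng.sample(string.ascii_uppercase, stream_len)
    t2_pos = t1_pos + lag
    trial = []
    for i, item in enumerate(letters):
        role = "T1" if i == t1_pos else "T2" if i == t2_pos else "distractor"
        # Dichoptic: successive items go to alternating eyes; dioptic: both eyes.
        eye = ("left" if i % 2 == 0 else "right") if dichoptic else "both"
        trial.append({"pos": i, "item": item, "role": role, "eye": eye})
    return trial

for frame in build_rsvp_trial(lag=3):
    print(frame)
```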

15 pages, 4597 KB  
Article
An Apple Detection and Localization Method for Automated Harvesting under Adverse Light Conditions
by Guoyu Zhang, Ye Tian, Wenhan Yin and Change Zheng
Agriculture 2024, 14(3), 485; https://doi.org/10.3390/agriculture14030485 - 16 Mar 2024
Cited by 9 | Viewed by 2746
Abstract
The use of automation technology in agriculture has become particularly important as global agriculture is challenged by labor shortages and the need for greater efficiency. The automated process for harvesting apples, an important agricultural product, relies on efficient and accurate detection and localization technology to ensure the quality and quantity of production. Adverse lighting conditions can significantly reduce the accuracy of fruit detection and localization in automated apple harvesting. Based on deep-learning techniques, this study aims to develop an accurate fruit detection and localization method under adverse light conditions. This paper explores the LE-YOLO model for accurate and robust apple detection and localization. First, the traditional YOLOv5 network was enhanced by adding an image enhancement module and an attention mechanism, and the loss function was improved to enhance detection performance. Second, the enhanced network was integrated with a binocular camera to achieve precise apple localization even under adverse lighting conditions; this was accomplished by calculating the 3D coordinates of feature points using the binocular localization principle. Finally, detection and localization experiments were conducted on the established dataset of apples under adverse lighting conditions. The experimental results indicate that LE-YOLO achieves higher accuracy in detection and localization compared to other target detection models, demonstrating that LE-YOLO is more competitive in apple detection and localization under adverse light conditions. Compared to traditional manual and general automated harvesting, our method enables automated work under various adverse light conditions, significantly improving harvesting efficiency, reducing labor costs, and providing a feasible solution for automation in the field of apple harvesting.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
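The localization step rests on the standard rectified-stereo disparity relations; the sketch below is a generic illustration of that principle with made-up camera parameters (and assuming equal focal lengths in x and y), not the calibration or feature-matching pipeline used in the paper.

```python
def stereo_point_3d(u_left, v_left, u_right, fx, cx, cy, baseline_m):
    """3D camera-frame coordinates of a matched point in a rectified stereo pair.

    Uses Z = fx * B / d, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fx,
    where d = u_left - u_right is the horizontal disparity in pixels.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    z = fx * baseline_m / disparity
    x = (u_left - cx) * z / fx
    y = (v_left - cy) * z / fx
    return x, y, z

# Hypothetical numbers: 700 px focal length, 0.12 m baseline, 35 px disparity
print(stereo_point_3d(u_left=820, v_left=510, u_right=785,
                      fx=700.0, cx=640.0, cy=480.0, baseline_m=0.12))
```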

23 pages, 7367 KB  
Article
Trajectory Tracking and Load Monitoring for Moving Vehicles on Bridge Based on Axle Position and Dual Camera Vision
by Dongdong Zhao, Wei He, Lu Deng, Yuhan Wu, Hong Xie and Jianjun Dai
Remote Sens. 2021, 13(23), 4868; https://doi.org/10.3390/rs13234868 - 30 Nov 2021
Cited by 16 | Viewed by 5331
Abstract
Monitoring traffic loads is vital for ensuring bridge safety and controlling overloading. Bridge weigh-in-motion (BWIM) technology, which uses an instrumented bridge as a scale platform, has been proven to be an efficient and durable vehicle weight identification method. However, traditional BWIM methods still face challenges in solving the inverse problem under certain circumstances, such as vehicles running at a non-constant speed or the presence of multiple vehicles. In conventional BWIM systems, the velocity of a moving vehicle is usually assumed to be constant. Thus, the positions of the loads, which are vital in the identification process, are predicted from the acquired speed and axle spacing by utilizing dedicated axle detectors (installed on the bridge surface or under the bridge soffit). In reality, vehicles may change speed. It is therefore difficult or even impossible for axle detectors to accurately monitor the true position of a moving vehicle. If this happens, the axle loads and bridge response cannot be properly matched, and considerable errors can be introduced into the influence line calibration process and the axle weight identification results. To overcome this problem, a new BWIM method was proposed in this study. This approach estimated the bridge influence line and axle weights by associating the bridge response and axle loads with their accurate positions. Binocular vision technology was used to continuously track the spatial position of the vehicle while it traveled over the bridge. Based on the obtained time–spatial information of the vehicle axles, the ordinates of the influence line, the axle loads, and the bridge response were correctly matched in the objective function of the BWIM algorithm. The influence line of the bridge and the axle and gross weights of the vehicle could then be reliably determined. Laboratory experiments were conducted to evaluate the performance of the proposed method. The negative effect of non-constant velocity on the identification results of traditional BWIM methods, and the reason for it, were also studied. Results showed that the proposed method predicted the bridge influence line and vehicle weight with much better accuracy than conventional methods under the considered adverse situations, and the stability of the BWIM technique was also effectively improved. The proposed method provides a competitive alternative for future traffic load monitoring.
(This article belongs to the Special Issue Bridge Monitoring Using Remote Sensors)
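The core matching idea in the abstract, pairing the measured response with influence-line ordinates evaluated at the vision-tracked axle positions, can be posed as a small least-squares problem. The sketch below is a toy illustration under simplified assumptions (known influence line, noise-free response, hypothetical numbers) and is not the authors' calibration algorithm.

```python
import numpy as np

def estimate_axle_weights(response, axle_positions, influence_line, span_m):
    """Least-squares axle weights from measured response and tracked axle positions.

    response:        (T,) measured bridge response samples.
    axle_positions:  (T, A) position of each axle along the span at each sample [m].
    influence_line:  (K,) influence-line ordinates sampled uniformly over the span.
    span_m:          bridge span length in metres.
    """
    x_grid = np.linspace(0.0, span_m, influence_line.size)
    # Influence-line ordinate seen by each axle at each time step (zero off the span).
    il_at_axles = np.interp(axle_positions, x_grid, influence_line, left=0.0, right=0.0)
    weights, *_ = np.linalg.lstsq(il_at_axles, response, rcond=None)
    return weights

# Toy example: triangular influence line, two axles 4 m apart, non-constant entry.
span = 30.0
il = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0, 50)])
t = np.linspace(0, 5, 200)
front = 2.0 + 6.0 * t
positions = np.stack([front, front - 4.0], axis=1)
true_w = np.array([60.0, 90.0])  # hypothetical axle weights [kN]
resp = np.interp(positions, np.linspace(0, span, il.size), il, left=0, right=0) @ true_w
print(estimate_axle_weights(resp, positions, il, span))  # recovers ~[60, 90]
```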
