Search Results (127)

Search Parameters:
Keywords = omnidirectional image

24 pages, 7556 KB  
Article
OA-YOLOv8: A Multiscale Feature Optimization Network for Remote Sensing Object Detection
by Jiahao Shi, Jian Liu, Jianqiang Zhang, Lei Zhang and Sihang Sun
Appl. Sci. 2026, 16(3), 1467; https://doi.org/10.3390/app16031467 - 31 Jan 2026
Viewed by 354
Abstract
Object recognition in remote sensing images is essential for applications such as land resource monitoring, maritime vessel detection, and emergency disaster assessment. However, detection accuracy is often limited by complex backgrounds, densely distributed targets, and multiscale variations. To address these challenges, this study aims to improve the detection of small-scale and densely distributed objects in complex remote sensing scenes. An improved object detection network is proposed, called omnidirectional and adaptive YOLOv8 (OA-YOLOv8), based on the YOLOv8 architecture. Two targeted enhancements are introduced. First, an omnidirectional perception refinement (OPR) network is embedded into the backbone to strengthen multiscale feature representation through the incorporation of receptive-field convolution with a triplet attention mechanism. Second, an adaptive channel dynamic upsampling (ACDU) module is designed by combining DySample, the Haar wavelet transform, and a self-supervised equivariant attention mechanism (SEAM) to dynamically optimize channel information and preserve fine-grained features during upsampling. Experiments on the satellite imagery multi-vehicle dataset (SIMD) demonstrate that OA-YOLOv8 outperforms the original YOLOv8 by 4.6%, 6.7%, and 4.1% in terms of mAP@0.5, precision, and recall, respectively. Visualization results further confirm its superior performance in detecting small and dense targets, indicating strong potential for practical remote sensing applications.
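
The OPR module pairs receptive-field convolution with a triplet attention mechanism. For reference, the following is a minimal PyTorch sketch of the spatial branch of a triplet-attention-style gate (after Misra et al.): channel statistics are pooled into two maps, convolved, and used to re-weight the input. This illustrates the attention primitive only; it is not the authors' OPR implementation.

```python
# Minimal triplet-attention-style spatial gate (one branch shown).
# Illustrative sketch, not the OA-YOLOv8 OPR module.
import torch
import torch.nn as nn

class ZPool(nn.Module):
    """Concatenate channel-wise max and mean maps (the 'Z-pool' step)."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True).values,
                          x.mean(dim=1, keepdim=True)], dim=1)

class SpatialGate(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.compress = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool to 2 maps, convolve to an attention map, re-weight the input.
        return x * torch.sigmoid(self.conv(self.compress(x)))

x = torch.randn(1, 64, 32, 32)   # (batch, channels, H, W)
print(SpatialGate()(x).shape)    # torch.Size([1, 64, 32, 32])
```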

22 pages, 2078 KB  
Article
A Multi-Modal Fusion Algorithm for Space Target Recognition Based on Spatial Attention and Multi-Scale Temporal Network
by Xiaoyu Cong, Yubing Han, Cheng Chen and Shichen Shan
Aerospace 2025, 12(12), 1081; https://doi.org/10.3390/aerospace12121081 - 4 Dec 2025
Viewed by 603
Abstract
Existing approaches to fusing inverse synthetic aperture radar (ISAR) images and high-resolution range profiles (HRRPs) do not adequately account for the significant heterogeneity between the two feature spaces, resulting in low space target recognition accuracy. This paper proposes a multi-modal fusion algorithm based on spatial attention and a multi-scale temporal network. We carefully consider the data characteristics of HRRP and ISAR and design a dedicated feature extraction network for each. For HRRP, local invariant features are extracted using dynamic convolution (DyConv), which reduces the required convolution depth, and an improved multi-scale temporal convolution network, designed around the temporal characteristics of HRRP, extracts temporal features for target recognition. For ISAR images, an omnidirectional attention feature extraction module extracts the deep semantic features of the images, and a noise reduction module with a spatial attention mechanism is applied before feature extraction to suppress background noise in the fused image. The superiority of the designed ISAR and HRRP recognition networks for space targets was verified through comparative and ablation experiments. The proposed algorithm achieves a target recognition rate of 98.41%.
(This article belongs to the Section Astronautics & Space Science)
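
For orientation, the sketch below shows the simplest form of the two-branch idea described above: separate encoders for the 1-D HRRP signal and the 2-D ISAR image, fused by concatenation before classification. The layer sizes, the fusion-by-concatenation step, and the class count are illustrative assumptions, not the paper's architecture.

```python
# Toy two-branch HRRP/ISAR fusion classifier; all sizes are assumptions.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.hrrp_enc = nn.Sequential(          # 1-D range-profile branch
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.isar_enc = nn.Sequential(          # 2-D image branch
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16 + 16, n_classes)

    def forward(self, hrrp, isar):
        # Fuse modality features by concatenation, then classify.
        return self.head(torch.cat([self.hrrp_enc(hrrp),
                                    self.isar_enc(isar)], dim=1))

model = LateFusionClassifier()
logits = model(torch.randn(2, 1, 256), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```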

17 pages, 7634 KB  
Article
CLSM-Guided Imaging to Visualize the Depth of Effective Disinfection in Endodontics
by Rebecca Mattern, Sarah Böcher, Gerhard Müller-Newen, Georg Conrads, Johannes-Simon Wenzler and Andreas Braun
Antibiotics 2025, 14(12), 1201; https://doi.org/10.3390/antibiotics14121201 - 1 Dec 2025
Viewed by 604
Abstract
Background/Objectives: Important goals of endodontic treatment procedures are to effectively eliminate microorganisms from the root canal system and prevent reinfection. Despite advances in techniques, these goals continue to be difficult to achieve due to the complex anatomy of the root canal system and bacterial invasion into the dentinal tubules of the surrounding root dentin. This pilot study aimed to refine a confocal laser scanning microscopy (CLSM) model with LIVE/DEAD staining to quantitatively assess the depth of effective disinfection by endodontic disinfection measures. Methods: Thirty caries-free human teeth underwent standardized chemo-mechanical root canal preparation and were inoculated with Enterococcus faecalis. Following treatment, CLSM-guided imaging with LIVE/DEAD staining allowed for differentiation between vital and dead bacteria and quantification of the depth of effective disinfection. Results: An average depth of bacterial eradication of 450 µm for conventional and 520 µm for sonically activated irrigation (EDDY) could be observed with significant differences (p < 0.05) in the coronal and medial positions. Conclusions: The results indicated that sonically activated irrigation (EDDY) provided a more homogeneous (omnidirectional) irrigation pattern compared to conventional irrigation. The study highlights the importance of effective disinfection strategies in endodontics, emphasizing the need for further research on the depth of effective disinfection of endodontic disinfection measures and the optimization of disinfection protocols.

17 pages, 490 KB  
Article
Knowledge-Guided Symbolic Regression for Interpretable Camera Calibration
by Rui Pimentel de Figueiredo
J. Imaging 2025, 11(11), 389; https://doi.org/10.3390/jimaging11110389 - 2 Nov 2025
Viewed by 916
Abstract
Calibrating cameras accurately requires the identification of projection and distortion models that effectively account for lens-specific deviations. Conventional formulations, like the pinhole model or radial–tangential corrections, often struggle to represent the asymmetric and nonlinear distortions encountered in complex environments such as autonomous navigation, robotics, and immersive imaging. Although neural methods offer greater adaptability, they demand extensive training data, are computationally intensive, and often lack transparency. This work introduces a symbolic model discovery framework guided by physical knowledge, where symbolic regression and genetic programming (GP) are used in tandem to identify calibration models tailored to specific optical behaviors. The approach incorporates a broad class of known distortion models, including Brown–Conrady, Mei–Rives, Kannala–Brandt, and double-sphere, as modular components, while remaining extensible to any predefined or domain-specific formulation. Embedding these models directly into the symbolic search process constrains the solution space, enabling efficient parameter fitting and robust model selection without overfitting. Through empirical evaluation across a variety of lens types, including fisheye, omnidirectional, catadioptric, and traditional cameras, we show that our method produces results on par with or surpassing those of established calibration techniques. The outcome is a flexible, interpretable, and resource-efficient alternative suitable for deployment scenarios where calibration data are scarce or computational resources are constrained.
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
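
Of the distortion models named above, Brown–Conrady is the most widely used; as a concrete example of the kind of candidate formulation such a symbolic search can fit, here is its standard radial–tangential form applied to normalized image coordinates.

```python
# Standard Brown-Conrady radial-tangential distortion (OpenCV convention);
# coefficient values below are arbitrary examples.
def brown_conrady(x, y, k1, k2, k3, p1, p2):
    """Distort normalized coordinates (x, y): k1..k3 radial, p1/p2 tangential."""
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return x_d, y_d

print(brown_conrady(0.3, -0.2, k1=-0.28, k2=0.07, k3=0.0, p1=1e-4, p2=-2e-4))
```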

22 pages, 59687 KB  
Article
Multi-View Omnidirectional Vision and Structured Light for High-Precision Mapping and Reconstruction
by Qihui Guo, Maksim A. Grigorev, Zihan Zhang, Ivan Kholodilin and Bing Li
Sensors 2025, 25(20), 6485; https://doi.org/10.3390/s25206485 - 20 Oct 2025
Cited by 1 | Viewed by 1575
Abstract
Omnidirectional vision systems enable panoramic perception for autonomous navigation and large-scale mapping, but physical testbeds are costly, resource-intensive, and carry operational risks. We develop a virtual simulation platform for multi-view omnidirectional vision that supports flexible camera configuration and cross-platform data streaming for efficient processing. Building on this platform, we propose and validate a reconstruction and ranging method that fuses multi-view omnidirectional images with structured-light projection. The method achieves high-precision obstacle contour reconstruction and distance estimation without extensive physical calibration or rigid hardware setups. Experiments in simulation and the real world demonstrate distance errors within 8 mm and robust performance across diverse camera configurations, highlighting the practicality of the platform for omnidirectional vision research.
(This article belongs to the Section Navigation and Positioning)
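
The ranging principle behind structured-light setups is plain triangulation: with a known projector–camera baseline and focal length, depth follows from the observed pixel offset. A one-function sketch under that textbook model (the paper's multi-view omnidirectional fusion is considerably more involved):

```python
# Textbook triangulation: depth from similar triangles, Z = f * b / d.
# Parameter values are illustrative.
def triangulate_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters from focal length (px), baseline (m), disparity (px)."""
    return f_px * baseline_m / disparity_px

print(triangulate_depth(f_px=800.0, baseline_m=0.12, disparity_px=48.0))  # 2.0
```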

22 pages, 3608 KB  
Article
A Multi-Scale Feature Fusion Dual-Branch Mamba-CNN Network for Landslide Extraction
by Zhiheng Yang, Hua Zhang and Nanshan Zheng
Appl. Sci. 2025, 15(18), 10063; https://doi.org/10.3390/app151810063 - 15 Sep 2025
Viewed by 1528
Abstract
Automatically extracting landslide regions from remote sensing images plays a vital role in landslide inventory compilation. However, this task remains challenging due to the considerable diversity of landslides in terms of morphology, triggering mechanisms, and internal structure. Thanks to its efficient long-sequence modeling, Mamba has emerged as a promising candidate for semantic segmentation tasks. This study adopts Mamba for landslide extraction to improve the recognition of complex geomorphic features. While Mamba demonstrates strong performance, it still faces challenges in capturing spatial dependencies and preserving fine-grained local information. To address these challenges, we propose a multi-scale spatial context-guided network (MSCG-Net). MSCG-Net features a dual-branch architecture, comprising a convolutional neural network (CNN) branch that captures detailed spatial features and an omnidirectional multi-scale Mamba (OMM) branch that models long-range contextual dependencies. We introduce an adaptive feature enhancement module (AFEM) to further enhance feature representation by effectively integrating global context with local details, which improves both multiscale feature richness and boundary clarity. Additionally, we develop an omnidirectional multiscale scanning (OMSS) mechanism to improve contextual modeling and preserve computational efficiency by integrating omnidirectional attention with multi-scale feature extraction. Comprehensive evaluations on two benchmark datasets demonstrate that MSCG-Net outperforms existing approaches, achieving IoU scores of 78.04% on the Bijie dataset and 81.13% on the GVLM dataset. Furthermore, it exceeds the second-best methods by 2.28% and 4.25% in Boundary IoU, respectively.
(This article belongs to the Section Environmental Sciences)
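
The IoU figures quoted above follow the standard mask-overlap definition, which is independent of the network itself; a minimal version for binary landslide masks:

```python
# Standard intersection-over-union for boolean masks; toy masks for demo.
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU of two boolean masks (1.0 when both are empty)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
gt   = np.zeros((64, 64), bool); gt[20:50, 20:50] = True
print(f"IoU = {mask_iou(pred, gt):.3f}")  # 0.286
```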

22 pages, 6827 KB  
Article
Metaheuristics-Assisted Placement of Omnidirectional Image Sensors for Visually Obstructed Environments
by Fernando Fausto, Gemma Corona, Adrian Gonzalez and Marco Pérez-Cisneros
Biomimetics 2025, 10(9), 579; https://doi.org/10.3390/biomimetics10090579 - 2 Sep 2025
Viewed by 729
Abstract
Optimal camera placement (OCP) is a crucial task for ensuring adequate surveillance of both indoor and outdoor environments. While several solutions to this problem have been documented in the literature, there are still research gaps related to the maximization of surveillance coverage, particularly in terms of optimal placement of omnidirectional camera (OC) sensors in indoor and partially occluded environments via metaheuristic optimization algorithms (MOAs). In this paper, we present a study centered on several popular MOAs and their application to OCP for OC sensors in indoor environments. For our experiments, we considered two experimental layouts, each consisting of a deployment area and visual obstructions, as well as two different omnidirectional camera models. The tested MOAs include popular algorithms such as PSO, GWO, SSO, GSA, SMS, SA, DE, GA, and CMA-ES. Experimental results suggest that success in MOA-based OCP is strongly tied to the specific search strategy applied by the metaheuristic method, thus making certain approaches preferable to others for this kind of problem.
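
To make the objective concrete, the sketch below shows the kind of coverage fitness such an OCP search maximizes: the fraction of sampled floor points that lie within range of at least one omnidirectional camera and are not occluded by a wall segment. The layout, range model, and single obstacle are assumptions for illustration; a metaheuristic would perturb the cams array to maximize the returned score.

```python
# Toy coverage objective for omnidirectional-camera placement with one
# occluding wall; geometry and parameters are illustrative assumptions.
import numpy as np

def segments_cross(p, q, a, b):
    """True if segment p-q properly crosses segment a-b (orientation test)."""
    d = lambda u, v, w: (v[0]-u[0])*(w[1]-u[1]) - (v[1]-u[1])*(w[0]-u[0])
    return d(p, q, a) * d(p, q, b) < 0 and d(a, b, p) * d(a, b, q) < 0

def coverage(cams, points, radius, walls):
    """Fraction of points seen by >= 1 camera within range and line of sight."""
    covered = 0
    for pt in points:
        for c in cams:
            if (np.hypot(*(pt - c)) <= radius and
                    not any(segments_cross(c, pt, a, b) for a, b in walls)):
                covered += 1
                break
    return covered / len(points)

grid = np.array([[x, y] for x in np.linspace(0, 10, 21)
                        for y in np.linspace(0, 10, 21)])
walls = [((4.0, 2.0), (4.0, 8.0))]            # one occluding wall
cams = np.array([[2.0, 5.0], [8.0, 5.0]])     # a candidate placement
print(f"coverage = {coverage(cams, grid, radius=6.0, walls=walls):.2f}")
```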

24 pages, 4671 KB  
Article
OSSMDNet: An Omni-Selective Scanning Mechanism for a Remote Sensing Image Denoising Network Based on the State-Space Model
by Na Deng, Jie Han, Haiyong Ding, Dongsheng Liu, Zhichao Zhang, Wenping Song and Xudong Tong
Remote Sens. 2025, 17(16), 2759; https://doi.org/10.3390/rs17162759 - 8 Aug 2025
Cited by 1 | Viewed by 1127
Abstract
Remote sensing images often degrade during acquisition due to various environmental factors, leading to noise contamination and loss of texture details. Existing methods based on convolutional neural networks (CNNs) are limited by their local receptive fields, making it difficult to effectively model long-range dependencies. Although Transformers possess global modeling capabilities, they face high computational costs and poor scalability on high-resolution remote sensing images. To address these challenges, this paper proposes an efficient remote sensing image denoising network—OSSMDNet—based on the Mamba network and incorporating an omni-directional selective scanning mechanism (OSSM). Its advantages include (1) introducing a multi-directional state-space modeling mechanism to enhance spatial structure perception capabilities and mitigate the limitations of traditional unidirectional modeling; (2) designing OSSMDNet on the Mamba architecture, achieving efficient fusion of global context and local details while maintaining linear computational complexity. On multiple remote sensing and natural image denoising datasets such as CBSD68 and DOTA, OSSMDNet significantly outperforms existing CNN-, Transformer-, and Mamba-based methods in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), with PSNR and SSIM values 0.14 dB and 0.0033 higher, respectively, than the representative Mamba baseline method. This demonstrates that the proposed OSSMDNet achieves an excellent balance between accuracy and efficiency.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
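
The PSNR metric behind the 0.14 dB comparison is the standard definition for 8-bit images, independent of OSSMDNet itself:

```python
# Standard PSNR for 8-bit images, demonstrated on synthetic Gaussian noise.
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (128, 128)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255)
print(f"PSNR = {psnr(clean, noisy):.2f} dB")   # ~28 dB for sigma = 10
```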

21 pages, 2267 KB  
Article
Dual-Branch Network for Blind Quality Assessment of Stereoscopic Omnidirectional Images: A Spherical and Perceptual Feature Integration Approach
by Zhe Wang, Yi Liu and Yang Song
Electronics 2025, 14(15), 3035; https://doi.org/10.3390/electronics14153035 - 30 Jul 2025
Viewed by 676
Abstract
Stereoscopic omnidirectional images (SOIs) have gained significant attention for the immersive viewing experience they provide by pairing binocular depth with panoramic scenes. However, evaluating their visual quality remains challenging due to their unique spherical geometry, binocular disparity, and viewing conditions. To address these challenges, this paper proposes a dual-branch deep learning framework that integrates spherical structural features and perceptual binocular cues to assess the quality of SOIs without reference. Specifically, the global branch leverages spherical convolutions to capture wide-range spatial distortions, while the local branch utilizes a binocular difference module based on the discrete wavelet transform to extract depth-aware perceptual information. A feature complementarity module is introduced to fuse global and local representations for final quality prediction. Experimental evaluations on two public SOIQA datasets—NBU-SOID and SOLID—demonstrate that the proposed method achieves state-of-the-art performance, with PLCC/SROCC values of 0.926/0.918 and 0.918/0.891, respectively. These results validate the effectiveness and robustness of our approach in stereoscopic omnidirectional image quality assessment tasks.
(This article belongs to the Special Issue AI in Signal and Image Processing)
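
PLCC and SROCC, the two agreement measures quoted above, are plain Pearson and Spearman correlations between predicted and subjective scores; with scipy, on toy data:

```python
# Pearson (PLCC) and Spearman (SROCC) correlation on made-up scores.
from scipy.stats import pearsonr, spearmanr

mos  = [30.2, 45.1, 52.8, 61.0, 72.5, 80.3]   # subjective quality scores
pred = [28.9, 47.0, 50.1, 63.2, 70.8, 82.0]   # model predictions

plcc, _ = pearsonr(pred, mos)
srocc, _ = spearmanr(pred, mos)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```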

23 pages, 3907 KB  
Article
Woodot: An AI-Driven Mobile Robotic System for Sustainable Defect Repair in Custom Glulam Beams
by Pierpaolo Ruttico, Federico Bordoni and Matteo Deval
Sustainability 2025, 17(12), 5574; https://doi.org/10.3390/su17125574 - 17 Jun 2025
Viewed by 1784
Abstract
Defect repair on custom-curved glulam beams is still performed manually because knots are irregular, numerous, and located on elements that cannot pass through linear production lines, limiting the scalability of timber-based architecture. This study presents Woodot, an autonomous mobile robotic platform that combines an omnidirectional rover, a six-dof collaborative arm, and a fine-tuned Segment Anything computer vision pipeline to identify, mill, and plug surface knots on geometrically variable beams. The perception model was trained on a purpose-built micro-dataset and reached an F1 score of 0.69 on independent test images, while the integrated system located defects with a 4.3 mm mean positional error. Full repair cycles averaged 74 s per knot, reducing processing time by more than 60% compared with skilled manual operations, and achieved flush plug placement in 87% of trials. These outcomes demonstrate that a lightweight AI model coupled with mobile manipulation can deliver reliable, shop-floor automation for low-volume, high-variation timber production. By shortening cycle times and lowering worker exposure to repetitive tasks, Woodot offers a viable pathway to enhance the environmental, economic, and social sustainability of digital timber construction. Nevertheless, some limitations remain, such as dependency on stable lighting conditions for optimal vision performance and the need for tool calibration checks.

16 pages, 6530 KB  
Article
Reduction of Aerial Image Misalignment in Face-to-Face 3D Aerial Display
by Atsutoshi Kurihara and Yue Bao
J. Imaging 2025, 11(5), 150; https://doi.org/10.3390/jimaging11050150 - 9 May 2025
Viewed by 1213
Abstract
A Micromirror Array Plate (MMAP) has been proposed as a type of aerial display that allows users to directly touch the floating image. However, the aerial images generated by this optical element have a limited viewing angle, making them difficult to use in face-to-face interactions. Conventional methods enable face-to-face usability by displaying multiple aerial images corresponding to different viewpoints. However, because these images are two-dimensional, they cannot be displayed at the same position due to the inherent characteristics of MMAP. An omnidirectional 3D autostereoscopic aerial display has been developed to address this issue, but it requires multiple expensive and specially shaped MMAPs to generate aerial images. To overcome this limitation, this study proposes a method that combines a single MMAP with integral photography (IP) to produce 3D aerial images with depth while reducing image misalignment. The experimental results demonstrate that the proposed method successfully displays a 3D aerial image using a single MMAP and reduces image misalignment to 1.1 mm.

22 pages, 4976 KB  
Article
MambaOSR: Leveraging Spatial-Frequency Mamba for Distortion-Guided Omnidirectional Image Super-Resolution
by Weilei Wen, Qianqian Zhao and Xiuli Shao
Entropy 2025, 27(4), 446; https://doi.org/10.3390/e27040446 - 20 Apr 2025
Cited by 1 | Viewed by 2649
Abstract
Omnidirectional image super-resolution (ODISR) is critical for VR/AR applications, as high-quality 360° visual content significantly enhances immersive experiences. However, existing ODISR methods suffer from limited receptive fields and high computational complexity, which restricts their ability to model long-range dependencies and extract global structural features. Consequently, these limitations hinder the effective reconstruction of high-frequency details. To address these issues, we propose a novel Mamba-based ODISR network, termed MambaOSR, which consists of three key modules working collaboratively for accurate reconstruction. Specifically, we first introduce a spatial-frequency visual state space model (SF-VSSM) to capture global contextual information via dual-domain representation learning, thereby enhancing the preservation of high-frequency details. Subsequently, we design a distortion-guided module (DGM) that leverages distortion map priors to adaptively model geometric distortions, effectively suppressing artifacts resulting from equirectangular projections. Finally, we develop a multi-scale feature fusion module (MFFM) that integrates complementary features across multiple scales, further improving reconstruction quality. Extensive experiments conducted on the SUN360 dataset demonstrate that our proposed MambaOSR achieves a 0.16 dB improvement in WS-PSNR and increases the mutual information by 1.99% compared with state-of-the-art methods, significantly enhancing both visual quality and the information richness of omnidirectional images.
(This article belongs to the Section Information Theory, Probability and Statistics)
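
WS-PSNR, the metric behind the 0.16 dB figure, weights each equirectangular row by the cosine of its latitude so that over-sampled polar pixels count less. A standard single-channel implementation (not specific to MambaOSR):

```python
# WS-PSNR for a single-channel equirectangular image.
import numpy as np

def ws_psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    h, w = ref.shape
    lat = (np.arange(h) + 0.5 - h / 2) * np.pi / h       # row latitudes
    wgt = np.tile(np.cos(lat)[:, None], (1, w))          # cos-latitude weights
    mse = np.sum(wgt * (ref - test) ** 2) / np.sum(wgt)  # weighted MSE
    return 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (256, 512)).astype(float)
b = np.clip(a + rng.normal(0, 5, a.shape), 0, 255)
print(f"WS-PSNR = {ws_psnr(a, b):.2f} dB")
```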

30 pages, 14034 KB  
Article
A Novel 3D Point Cloud Reconstruction Method for Single-Pass Circular SAR Based on Inverse Mapping with Target Contour Constraints
by Qiming Zhang, Jinping Sun, Fei Teng, Yun Lin and Wen Hong
Remote Sens. 2025, 17(7), 1275; https://doi.org/10.3390/rs17071275 - 3 Apr 2025
Viewed by 1135
Abstract
Circular synthetic aperture radar (CSAR) is an advanced imaging mechanism with three-dimensional (3D) imaging capability, enabling the acquisition of omnidirectional scattering information for observation regions. The existing 3D point cloud reconstruction method for single-pass CSAR is capable of obtaining the 3D scattering points for targets by inversely mapping the projection points in multi-aspect sub-aperture images and subsequently voting on the scattering candidates. However, due to the influence of non-target background points in multi-aspect sub-aperture images, there are several false points in the 3D reconstruction result, which affect the quality of the produced 3D point cloud. In this paper, we propose a novel 3D point cloud reconstruction method for single-pass CSAR based on inverse mapping with target contour constraints. The proposed method can constrain the range and height of inverse mapping by extracting the contour information of targets from multi-aspect sub-aperture CSAR images, which contributes to improving the reconstruction quality of 3D point clouds for targets. The performance of the proposed method was verified based on X-band CSAR measured data sets.
(This article belongs to the Special Issue 3D Scene Reconstruction, Modeling and Analysis Using Remote Sensing)
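
The vote-then-threshold idea described above can be sketched as voxel accumulation: candidate 3D points inversely mapped from many sub-aperture views are binned into a voxel grid, and only well-supported voxels are kept. The grid resolution and vote threshold below are illustrative assumptions, not the paper's settings.

```python
# Toy voxel voting: keep only voxels supported by many candidate points.
import numpy as np

def vote_points(candidates: np.ndarray, voxel: float, min_votes: int) -> np.ndarray:
    """Return centers of voxels holding at least min_votes candidates."""
    keys = np.floor(candidates / voxel).astype(np.int64)
    uniq, counts = np.unique(keys, axis=0, return_counts=True)
    return (uniq[counts >= min_votes] + 0.5) * voxel

rng = np.random.default_rng(2)
true_pt = np.array([1.1, 2.1, 0.5])
hits = true_pt + rng.normal(0, 0.02, (40, 3))   # consistent votes on a target
noise = rng.uniform(-5, 5, (40, 3))             # scattered false candidates
print(vote_points(np.vstack([hits, noise]), voxel=0.2, min_votes=10))
```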

22 pages, 5968 KB  
Article
The Optimization of PID Controller and Color Filter Parameters with a Genetic Algorithm for Pineapple Tracking Using an ROS2 and MicroROS-Based Robotic Head
by Carolina Maldonado-Mendez, Sergio Fabian Ruiz-Paz, Isaac Machorro-Cano, Antonio Marin-Hernandez and Sergio Hernandez-Mendez
Computation 2025, 13(3), 69; https://doi.org/10.3390/computation13030069 - 7 Mar 2025
Cited by 1 | Viewed by 2136
Abstract
This work proposes a vision system mounted on the head of an omnidirectional robot to track pineapples and maintain them at the center of its field of view. The robot head is equipped with a pan–tilt unit that facilitates dynamic adjustments. The system architecture, implemented in Robot Operating System 2 (ROS2), performs the following tasks: it captures images from a webcam embedded in the robot head, segments the object of interest based on color, and computes its centroid. If the centroid deviates from the center of the image plane, a proportional–integral–derivative (PID) controller adjusts the pan–tilt unit to reposition the object at the center, enabling continuous tracking. A multivariate Gaussian function is employed to segment objects with complex color patterns, such as the body of a pineapple. The parameters of both the PID controller and the multivariate Gaussian filter are optimized using a genetic algorithm. The PID controller receives as input the (x, y) positions of the pan–tilt unit, obtained via an embedded board and MicroROS, and generates control signals for the servomotors that drive the pan–tilt mechanism. The experimental results demonstrate that the robot successfully tracks a moving pineapple. Additionally, the color segmentation filter can be further optimized to detect other textured fruits, such as soursop and melon. This research contributes to the advancement of smart agriculture, particularly for fruit crops with rough textures and complex color patterns.
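
The control loop described above reduces to a discrete PID step: the pixel error between the segmented centroid and the image center drives the pan command (tilt is symmetric). The gains below are placeholders; in the paper they are tuned by the genetic algorithm.

```python
# Discrete PID step driving a pan axis from a pixel error; gains are
# placeholders, not the GA-optimized values from the paper.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error: float, dt: float) -> float:
        self.integral += error * dt                 # accumulate I term
        derivative = (error - self.prev_err) / dt   # finite-difference D term
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pan_pid = PID(kp=0.05, ki=0.001, kd=0.01)
centroid_x, center_x = 410.0, 320.0                 # target right of center
delta_pan = pan_pid.step(center_x - centroid_x, dt=1 / 30)
print(f"pan command: {delta_pan:+.2f}")
```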

22 pages, 11585 KB  
Article
Marine Radar Target Ship Echo Generation Algorithm and Simulation Based on Radar Cross-Section
by Chang Li, Xiao Yang, Hongxiang Ren, Shihao Li and Xiaoyu Feng
J. Mar. Sci. Eng. 2025, 13(2), 348; https://doi.org/10.3390/jmse13020348 - 14 Feb 2025
Cited by 2 | Viewed by 2530
Abstract
In this study, a simplified radar echo signal model suitable for radar simulators and a Radar Cross-Section (RCS) calculation model based on the Physical Optics (PO) method were developed. A comprehensive radar target ship echo generation algorithm was designed, and the omnidirectional radar RCS values of three typical ships were calculated. The simulation generates radar target ship echo images under varying incident angles (0–360°), detection distances (0–24 nautical miles), and three common target material properties. The simulation results, compared with those from existing radar simulators and real radar systems, show that the method proposed in this study, based on RCS values, generates highly realistic radar target ship echoes. It accurately simulates radar echoes under different target ship headings, distances, and material influences, fully meeting the technical requirements of the STCW international convention for radar simulators.
(This article belongs to the Section Ocean Engineering)
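
The physical link between an RCS value and echo strength is the monostatic radar range equation; a one-function sketch with illustrative X-band parameters (not the paper's simplified signal model):

```python
# Monostatic radar range equation: P_r = P_t G^2 lambda^2 sigma / ((4 pi)^3 R^4).
# All parameter values are illustrative.
import math

def echo_power(p_t: float, gain: float, wavelength: float,
               sigma: float, r: float) -> float:
    """Received echo power in watts for a target of RCS sigma at range r."""
    return p_t * gain**2 * wavelength**2 * sigma / ((4 * math.pi)**3 * r**4)

p_r = echo_power(p_t=25e3, gain=10**3.0,   # 25 kW peak power, 30 dBi antenna
                 wavelength=0.032,         # X-band (~9.4 GHz)
                 sigma=500.0,              # ship RCS in m^2
                 r=5 * 1852.0)             # 5 nautical miles in meters
print(f"received power = {p_r:.3e} W")
```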