Search Results (59)

Search Parameters:
Keywords = depth of field (DOF)

29 pages, 20381 KiB  
Article
A Study on the Force/Position Hybrid Control Strategy for Eight-Axis Robotic Friction Stir Welding
by Wenjun Yan and Yue Yu
Metals 2025, 15(4), 442; https://doi.org/10.3390/met15040442 - 16 Apr 2025
Viewed by 762
Abstract
In aerospace and new-energy vehicle manufacturing, there is an increasing demand for high-quality joining of large, curved aluminum alloy structures. This study presents a robotic friction stir welding (RFSW) system employing a force/position hybrid control strategy. An eight-axis linkage platform integrates an electric spindle, multidimensional force sensors, and a laser displacement sensor, ensuring trajectory coordination between the robot and the positioner. By combining long-range constant displacement with small-range constant pressure, supplemented by an adaptive transition algorithm, the system regulates the axial stirring depth and downward force. The experimental results confirm that this approach effectively compensates for robotic flexibility, keeping weld depth and pressure deviations within 5% and significantly improving seam quality. Further welding verification on typical curved panels for aerospace applications demonstrated strong adaptability under high-load, multi-DOF conditions, without crack formation. This research could advance the field toward more robust, automated, and adaptive RFSW solutions for aerospace, automotive, and other high-end manufacturing applications. Full article
(This article belongs to the Section Welding and Joining)
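The force/position hybrid idea, blending a position command with a force command through a smooth transition weight, can be pictured in a few lines. The sketch below is an illustrative blend under assumed proportional gains, not the paper's actual control law or adaptive transition algorithm:

```python
def hybrid_axial_command(z_err, f_err, progress, kp_z=2.0, kp_f=0.05):
    """Blend a position term and a force term for the tool's axial motion.

    z_err: plunge-depth error (mm); f_err: downward-force error (N);
    progress: 0 -> pure position (constant-displacement) control,
              1 -> pure force (constant-pressure) control.
    The gains kp_z and kp_f are placeholders, not values from the paper.
    """
    # Smoothstep keeps the handover continuous (an assumed transition shape).
    p = min(max(progress, 0.0), 1.0)
    alpha = p * p * (3.0 - 2.0 * p)
    return (1.0 - alpha) * kp_z * z_err + alpha * kp_f * f_err
```

At `progress = 0` the command is purely position-driven, at `progress = 1` purely force-driven, and the smoothstep weight avoids a command jump at the handover.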

11 pages, 2570 KiB  
Article
Three-Dimensional Scanning Virtual Aperture Imaging with Metasurface
by Zhan Ou, Yuan Liang, Hua Cai and Guangjian Wang
Sensors 2025, 25(1), 280; https://doi.org/10.3390/s25010280 - 6 Jan 2025
Viewed by 1016
Abstract
Metasurface-based imaging is attractive for its low hardware cost and low system complexity. However, most current metasurface-based imaging systems require stochastic wavefront modulation and complex computational post-processing, and are restricted to 2D imaging. To overcome these limitations, we propose a scanning virtual aperture imaging system. The system first uses a focused beam to achieve near-field focal-plane scanning, forming a virtual aperture in the process. Second, an adapted range migration algorithm (RMA) with a pre-processing step is applied to the virtual aperture to achieve high-resolution 3D reconstruction. The pre-processing step exploits the fact that near-field beamforming adds only a time delay to the received signal, so it introduces negligible additional computation time. We built a compact prototype system operating from 38 to 40 GHz. Both simulations and experiments demonstrate that the proposed system achieves high-quality imaging without complex implementations. Our method can be widely applied to single-transceiver coherent systems to significantly improve the imaging depth of field (DOF). Full article
(This article belongs to the Section Sensing and Imaging)

15 pages, 6968 KiB  
Article
Biomimetic Active Stereo Camera System with Variable FOV
by Yanmiao Zhou and Xin Wang
Biomimetics 2024, 9(12), 740; https://doi.org/10.3390/biomimetics9120740 - 4 Dec 2024
Viewed by 1272
Abstract
Inspired by the eye movements of fish such as pipefish and sandlances, this paper presents a novel dynamic calibration method for active stereo vision systems that addresses the challenges of active cameras with varying fields of view (FOVs). By combining static calibration based on camera rotation angles with dynamic updates of the extrinsic parameters, the method leverages relative pose adjustments between the rotation axis and the cameras to update the extrinsic parameters continuously in real time. It facilitates epipolar rectification as the FOV changes and enables precise disparity computation and accurate depth acquisition. Based on the dynamic calibration method, we developed a two-DOF bionic active camera system in which two motor-driven cameras mimic the movement of biological eyes; this compact system covers a large visual range. Experimental results show that the calibration method is effective and achieves high accuracy in extrinsic parameter calculation during FOV adjustments. Full article
(This article belongs to the Special Issue Design and Control of a Bio-Inspired Robot: 3rd Edition)
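The continuous extrinsic update can be pictured as composing the statically calibrated stereo pose with each camera's commanded rotation. The sketch below assumes pan-only rotation about each camera's own y-axis through its optical center; it is one plausible composition, not the paper's update rule, which also accounts for the offsets between the rotation axes and the cameras:

```python
import numpy as np

def rot_y(theta):
    # Rotation about the camera's y (pan) axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def update_extrinsics(R0, t0, pan_left, pan_right):
    """Hypothetical dynamic update of the left-to-right extrinsics.

    R0, t0: statically calibrated transform at zero pan, mapping a
    left-camera point x_l to the right camera as x_r = R0 @ x_l + t0.
    """
    Rl, Rr = rot_y(pan_left), rot_y(pan_right)
    R = Rr.T @ R0 @ Rl      # compose the commanded pans with the static pose
    t = Rr.T @ t0           # re-express the baseline in the panned right frame
    return R, t
```

With both pan angles at zero the update reduces to the static calibration, and the result stays a valid rigid transform for any angles.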

21 pages, 12097 KiB  
Article
Infrared Camera Array System and Self-Calibration Method for Enhanced Dim Target Perception
by Yaning Zhang, Tianhao Wu, Jungang Yang and Wei An
Remote Sens. 2024, 16(16), 3075; https://doi.org/10.3390/rs16163075 - 21 Aug 2024
Viewed by 1685
Abstract
Camera arrays can enhance the signal-to-noise ratio (SNR) between dim targets and backgrounds through multi-view synthesis, which is crucial for detecting dim targets. To this end, we design and develop an infrared camera array system with a large baseline. The multi-view synthesis of camera arrays relies heavily on the calibration accuracy of the sub-cameras' relative poses. However, the sub-cameras within a camera array lack strict geometric constraints, so most current calibration methods still treat the camera array as multiple pinhole cameras. Moreover, when detecting distant targets, the camera array usually needs to adjust the focal length to maintain a larger depth of field (DoF), so that distant targets lie on the cameras' focal plane. This means the calibration scene should be selected within this DoF range to obtain clear images. Nevertheless, the small parallax between the distant sub-aperture views limits the calibration. To address these issues, we propose a calibration model for camera arrays in distant scenes. In this model, we first extend the parallax by employing dual-array frames (i.e., recording a scene at two spatial locations). Second, we investigate the linear constraints between the dual-array frames to maintain the minimum degrees of freedom of the model. We develop a real-world light field dataset called NUDT-Dual-Array using an infrared camera array to evaluate our method. Experimental results on this self-developed dataset demonstrate the effectiveness of our method. Using the calibrated model, we improve the SNR of distant dim targets, which ultimately enhances their detection and perception. Full article

14 pages, 17218 KiB  
Article
Fast Three-Dimensional Profilometry with Large Depth of Field
by Wei Zhang, Jiongguang Zhu, Yu Han, Manru Zhang and Jiangbo Li
Sensors 2024, 24(13), 4037; https://doi.org/10.3390/s24134037 - 21 Jun 2024
Viewed by 1255
Abstract
By applying a high projection rate, the binary defocusing technique can dramatically increase 3D imaging speed. However, existing methods are sensitive to varying degrees of defocus and have a limited depth of field (DoF). To this end, a time-domain Gaussian fitting method is proposed in this paper. The concept of a time-domain Gaussian curve is first put forward, and the procedure for determining projector coordinates from a time-domain Gaussian curve is illustrated in detail. A neural network is applied to rapidly compute the peak positions of time-domain Gaussian curves; relying on its computing power, the proposed method greatly reduces computing time. The binary defocusing technique can be combined with the neural network, achieving fast 3D profilometry with a large depth of field. Moreover, because the time-domain Gaussian curve is extracted from individual image pixels, it is not deformed by a complex surface, so the proposed method is also suitable for measuring complex surfaces. The experimental results demonstrate that the proposed method extends the system DoF fivefold, and both the data acquisition time and the computing time can be reduced to less than 35 ms. Full article
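The core of the time-domain Gaussian idea, locating the peak of a per-pixel intensity sequence to sub-frame precision, can be sketched without the neural network the paper uses for speed. A three-point parabolic fit on log-intensity is exact for an ideal Gaussian; this closed form stands in for the learned estimator and is an assumption, not the paper's implementation:

```python
import numpy as np

def gaussian_peak_subframe(intensity):
    """Sub-frame peak location of a per-pixel time-domain Gaussian curve.

    intensity: samples of one pixel over successive projected patterns.
    """
    k = int(np.argmax(intensity))
    if k == 0 or k == len(intensity) - 1:
        return float(k)  # peak at the boundary: no interpolation possible
    # The log of a Gaussian is a parabola, so a 3-point parabolic fit
    # around the maximum recovers the continuous peak position exactly.
    la, lb, lc = np.log(intensity[k - 1:k + 2])
    return k + 0.5 * (la - lc) / (la - 2.0 * lb + lc)
```

The recovered peak position then plays the role of the projector coordinate for that pixel.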

27 pages, 14228 KiB  
Article
High-Magnification Object Tracking with Ultra-Fast View Adjustment and Continuous Autofocus Based on Dynamic-Range Focal Sweep
by Tianyi Zhang, Kohei Shimasaki, Idaku Ishii and Akio Namiki
Sensors 2024, 24(12), 4019; https://doi.org/10.3390/s24124019 - 20 Jun 2024
Cited by 5 | Viewed by 1977
Abstract
Active vision systems (AVSs) have been widely used to obtain high-resolution images of objects of interest. However, tracking small objects in high-magnification scenes is challenging due to the shallow depth of field (DoF) and narrow field of view (FoV). To address this, we introduce a novel high-speed AVS with a continuous autofocus (C-AF) approach based on a dynamic-range focal sweep and a high-frame-rate (HFR) frame-by-frame tracking pipeline. Our AVS leverages an ultra-fast pan-tilt mechanism based on a Galvano mirror, enabling high-frequency view direction adjustment. Specifically, the proposed C-AF approach uses a 500 fps high-speed camera and a focus-tunable liquid lens driven by a sine wave, providing a 50 Hz focal sweep around the object's optimal focus. During each focal sweep, 10 images with varying focus are captured, and the one with the highest focus value is selected, resulting in a stable output of well-focused images at 50 fps. Simultaneously, the object's depth is measured using the depth-from-focus (DFF) technique, allowing dynamic adjustment of the focal sweep range. Importantly, because the remaining images are only slightly less focused, all 500 fps images can be utilized for object tracking. The proposed tracking pipeline combines deep-learning-based object detection, K-means color clustering, and HFR tracking based on color filtering, achieving 500 fps frame-by-frame tracking. Experimental results demonstrate the effectiveness of the proposed C-AF approach and the advanced capabilities of the high-speed AVS for magnified object tracking. Full article
(This article belongs to the Special Issue Advanced Optical and Optomechanical Sensors)
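Selecting the best-focused frame from each sweep only needs a focus measure and an argmax. The sketch below uses the variance of a discrete Laplacian as the focus value; the abstract does not specify the measure, so treat this choice as an illustrative stand-in:

```python
import numpy as np

def focus_value(img):
    # Variance of a 4-neighbour Laplacian: a common sharpness proxy
    # (sharper frames have more high-frequency energy).
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def select_sharpest(sweep):
    """Pick the best-focused frame from one focal sweep (e.g. 10 frames).

    Run once per 50 Hz sweep to emit a stable 50 fps well-focused stream.
    """
    scores = [focus_value(f) for f in sweep]
    best = int(np.argmax(scores))
    return best, sweep[best]
```

The per-frame focus scores are also the raw material for the depth-from-focus step: the sweep position of the maximum indicates where the object's optimal focus lies.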

25 pages, 26769 KiB  
Article
SIDGAN: Efficient Multi-Module Architecture for Single Image Defocus Deblurring
by Shenggui Ling, Hongmin Zhan and Lijia Cao
Electronics 2024, 13(12), 2265; https://doi.org/10.3390/electronics13122265 - 9 Jun 2024
Cited by 1 | Viewed by 1744
Abstract
In recent years, with the rapid development of deep learning and graphics processing units, learning-based defocus deblurring has made favorable achievements. However, current methods are not effective at processing blurred images with a large depth of field: the greater the depth of field, the blurrier the image, that is, the image contains large, severely blurred regions. The fundamental reason for the unsatisfactory results is that it is difficult to extract effective features from images with large blurry regions. For this reason, a new Fuzzy Feature Extraction Module (FFEM) is proposed to enhance the encoder's ability to extract features from such images. Using the FFEM during encoding improves PSNR (Peak Signal-to-Noise Ratio) by 1.33% on the DPDD (Dual-Pixel Defocus Deblurring) dataset. Moreover, images with large blurry regions often cause current algorithms to generate artifacts in their results. Therefore, a new Artifact Removal Module (ARM) is proposed in this work and employed during decoding; it improves PSNR by 2.49% on the DPDD. Using the FFEM and the ARM together improves PSNR by 3.29% on the DPDD compared to the latest algorithms. Following previous research in this field, qualitative and quantitative experiments were conducted on the DPDD and the RealDOF (Real Depth of Field) datasets, and the results indicate that our method surpasses the state-of-the-art algorithms in three objective metrics. Full article
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)
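The gains quoted above are relative improvements in the standard PSNR metric, which for reference is 10·log10(peak²/MSE). A minimal implementation (the peak value is an assumption, e.g. 1.0 for normalized images):

```python
import numpy as np

def psnr(reference, image, peak=1.0):
    # Peak Signal-to-Noise Ratio in dB between a reference and a restored image.
    ref = np.asarray(reference, dtype=np.float64)
    img = np.asarray(image, dtype=np.float64)
    mse = np.mean((ref - img) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```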

15 pages, 4411 KiB  
Article
Compound Acoustic Radiation Force Impulse Imaging of Bovine Eye by Using Phase-Inverted Ultrasound Transducer
by Gil Su Kim, Hak Hyun Moon, Hee Su Lee and Jong Seob Jeong
Sensors 2024, 24(9), 2700; https://doi.org/10.3390/s24092700 - 24 Apr 2024
Viewed by 1357
Abstract
In general, it is difficult to visualize the internal ocular structure and detect lesions such as cataracts or glaucoma using current ultrasound brightness-mode (B-mode) imaging. This is because the internal structure of the eye is rich in moisture, resulting in a lack of contrast between tissues in the B-mode image, and the penetration depth is low due to attenuation of the ultrasound wave. In this study, the entire internal ocular structure of a bovine eye was visualized in an ex vivo environment using the compound acoustic radiation force impulse (CARFI) imaging scheme based on a phase-inverted ultrasound transducer (PIUT). In the proposed method, the aperture of the PIUT is divided into four sections, and the PIUT is driven by an out-of-phase input signal capable of generating split focusing simultaneously. Subsequently, the compound imaging technique was employed to increase the signal-to-noise ratio (SNR) and to reduce displacement error. The experimental results demonstrated that the proposed technique provides an acoustic radiation force impulse (ARFI) image of the bovine eye with a broader depth of field (DOF) and about 80% higher SNR compared to the conventional ARFI image obtained using an in-phase input signal. The proposed technique is therefore a promising way to image the entire ocular structure for diagnosing various eye diseases. Full article
(This article belongs to the Section Biomedical Sensors)

18 pages, 14531 KiB  
Article
Safety Analysis of Initial Separation Phase for AUV Deployment of Mission Payloads
by Zhengwei Wang, Haitao Gu, Jichao Lang and Lin Xing
J. Mar. Sci. Eng. 2024, 12(4), 608; https://doi.org/10.3390/jmse12040608 - 31 Mar 2024
Cited by 4 | Viewed by 1800
Abstract
This study investigates the effects of deployment parameters on the safe separation of Autonomous Underwater Vehicles (AUVs) and mission payloads. The initial separation phase is meticulously modeled with computational fluid dynamics (CFD) simulations employing the cubic constitutive Shear Stress Transport (SST) k-ω turbulence model and overset grid technology, within a 6-degree-of-freedom (6-DOF) framework incorporating Dynamic Fluid-Body Interaction (DFBI) and supported by empirical validation. The SST k-ω turbulence model performs well for flows with adverse pressure gradients and separation, and DFBI computationally models fluid–solid interactions during motion or deformation. Overset grids offer several advantages: enhanced computational efficiency by concentrating resources on regions of interest, simplified handling of intricate geometries and moving bodies, and adaptability in adjusting grids to changing simulation conditions. This research analyzes the trajectories and attitude adjustments of mission payloads after release from AUVs under various cruising speeds and initial release dynamics, such as descent and angular velocities, and evaluates the effect of ocean currents at different depths on separation safety. Results indicate that the interaction between AUVs and mission payloads during separation increases at higher navigational speeds, reducing the separation speed and degrading stability. As the initial drop velocity increases, fast transition through the AUV's immediate flow field promotes separation. Management of the initial pitch angle upon deployment is central to this process: optimizing the initial pitching angular velocity prolongs the time for mission payloads to reach their maximum pitch angle, decreasing horizontal displacement and improving separation safety. Deploying AUVs at greater depths alleviates the influence of ocean currents, thereby reducing disturbances during payload separation. Full article

19 pages, 6901 KiB  
Article
Learning-Based Proof of the State-of-the-Art Geometric Hypothesis on Depth-of-Field Scaling and Shifting Influence on Image Sharpness
by Siamak Khatibi, Wei Wen and Sayyed Mohammad Emam
Appl. Sci. 2024, 14(7), 2748; https://doi.org/10.3390/app14072748 - 25 Mar 2024
Viewed by 1107
Abstract
Today, we capture and store images in a way that has never been possible before. However, huge numbers of degraded and blurred images are captured unintentionally or by mistake. In this paper, we propose a geometrical hypothesis stating that blurring occurs by shifting or scaling the depth of field (DOF). The validity of the hypothesis is proved by an independent method based on depth estimation from a single image. The image depth is modeled from its edges to extract amplitude comparison ratios between the generated blurred images and the sharp/blurred images. Blurred images are generated by stepwise variation of the standard deviation of the Gaussian filter estimated in the improved model. This process acts as a virtual image recording that mimics the capture of several image instances. A historical documentation database is used to validate the hypothesis and to classify sharp images from blurred ones and between different blur types. The experimental results show that distinguishing unintentionally blurred images from non-blurred ones by comparing their depth of field is feasible. Full article

21 pages, 3610 KiB  
Article
Comparing and Contrasting Near-Field, Object Space, and a Novel Hybrid Interaction Technique for Distant Object Manipulation in VR
by Wei-An Hsieh, Hsin-Yi Chien, David Brickler, Sabarish V. Babu and Jung-Hong Chuang
Virtual Worlds 2024, 3(1), 94-114; https://doi.org/10.3390/virtualworlds3010005 - 21 Feb 2024
Cited by 1 | Viewed by 1720
Abstract
In this contribution, we propose a hybrid interaction technique that integrates near-field and object-space interaction techniques for manipulating objects at a distance in virtual reality (VR). The objective of the hybrid technique is to seamlessly leverage the strengths of both near-field and object-space manipulation. We employed the bimanual near-field metaphor with scaled replica (BMSR) as our near-field interaction technique, which enables multilevel degrees-of-freedom (DoF) separation transformations, such as 1~3DoF translation, 1~3DoF uniform and anchored scaling, 1DoF and 3DoF rotation, and 6DoF simultaneous translation and rotation, with the enhanced depth perception and fine motor control afforded by near-field manipulation. The object-space interaction technique we utilized was the classic Scaled HOMER, which is known to be effective and appropriate for coarse transformations in distant object manipulation. In a repeated-measures within-subjects evaluation, we empirically compared the three interaction techniques for accuracy, efficiency, and economy of movement in pick-and-place, docking, and tunneling tasks in VR. Our findings revealed that the near-field BMSR technique outperformed the object-space Scaled HOMER technique in accuracy and economy of movement, but participants performed more slowly overall with BMSR. Additionally, participants preferred the hybrid interaction technique, as it allowed them to switch and transition seamlessly between the constituent BMSR and Scaled HOMER techniques, depending on the level of accuracy, precision, and efficiency required. Full article

17 pages, 6037 KiB  
Article
Depth–Depth of Focus Moiré Fringe Alignment via Broad-Spectrum Modulation
by Dajie Yu, Junbo Liu, Ji Zhou, Haifeng Sun, Chuan Jin and Jian Wang
Photonics 2024, 11(2), 138; https://doi.org/10.3390/photonics11020138 - 31 Jan 2024
Cited by 2 | Viewed by 1813
Abstract
Alignment precision is a crucial factor that directly impacts overlay accuracy, one of the three fundamental indicators of lithography. Alignment based on Moiré fringes has the advantages of a simple measurement optical path and high measurement accuracy. However, it requires strict control of the distance between the mask and the wafer to ensure imaging quality, which restricts its application scenarios. A depth-DOF (depth of focus) Moiré fringe alignment method using broad-spectrum modulation is presented to extend the range of the alignment signals. The method establishes a broad-spectrum Moiré fringe model based on the Talbot effect, which effectively covers the width of dark field (WDF) between the imaging ranges of different wavelengths, thereby extending the DOF range of the alignment process, and employs a hybrid genetic algorithm and particle swarm optimization (GA-PSO) algorithm to combine the spectral components of a white spectrum. By calculating the optimal ratio of each wavelength and using white-light incoherent illumination with this ratio, it achieves the optimal DOF range of the broad-spectrum Moiré fringe imaging model. The simulation results demonstrate that the available DOF range of the alignment system is expanded from 400 μm to 800 μm. The alignment precision of the system was also analyzed: under the same conditions, the noise resistance, translation, and tilt accuracy of the Moiré fringe and the broad-spectrum Moiré fringe were compared. Relative to a single wavelength, the alignment precision of the broad-spectrum Moiré fringe decreased by an average of 0.0495 nm, a 1.5% reduction in the original alignment precision, when using a 4 μm mask and a 4.4 μm wafer. However, the alignment precision still reaches 3.795 nm, effectively enhancing the available depth of focus range while keeping the loss of alignment precision small. Full article
(This article belongs to the Section Data-Science Based Techniques in Photonics)

15 pages, 8180 KiB  
Article
Thin and Large Depth-Of-Field Compound-Eye Imaging for Close-Up Photography
by Dewen Cheng, Da Wang, Cheng Yao, Yue Liu, Xilong Dai and Yongtian Wang
Photonics 2024, 11(2), 107; https://doi.org/10.3390/photonics11020107 - 25 Jan 2024
Cited by 1 | Viewed by 2228
Abstract
Large depth of field (DOF) and stereo photography are challenging yet rewarding areas of research in close-up photography. In this study, a compound-eye imaging system based on a discrete microlens array (MLA) was implemented for close-range thin imaging. A compact imaging system with a total length of 3.5 mm and a DOF of 7 mm was realized using two planar aspherical MLAs in a hexagonal arrangement. A new three-layer structure and a discrete arrangement of sublenses were proposed to suppress stray light, address crosstalk between adjacent channels, and enable a spatial refocusing method that restores image information at different object depths. The system was successfully fabricated, and its performance carefully investigated. It offers a large depth of field, high resolution, and portability, making it ideal for close-up photography applications requiring a short conjugate distance and a small device volume. Full article

12 pages, 2258 KiB  
Article
Embedded Processing for Extended Depth of Field Imaging Systems: From Infinite Impulse Response Wiener Filter to Learned Deconvolution
by Alice Fontbonne, Pauline Trouvé-Peloux, Frédéric Champagnat, Gabriel Jobert and Guillaume Druart
Sensors 2023, 23(23), 9462; https://doi.org/10.3390/s23239462 - 28 Nov 2023
Cited by 1 | Viewed by 1542
Abstract
Many state-of-the-art works seek to increase the camera depth of field (DoF) via joint optimization of an optical component (typically a phase mask) and a digital processing step with an infinite deconvolution support or a neural network. This can be used either to see sharp objects from a greater distance or to reduce manufacturing costs thanks to tolerance of the sensor position. Here, we study the case of embedded processing with only one convolution with a finite kernel size. The finite impulse response (FIR) filter coefficients are learned or computed based on a Wiener filter paradigm. This involves an optical model typical of codesigned systems for DoF extension and a scene power spectral density, which is either learned or modeled. We compare different FIR filters and present a method for dimensioning their sizes prior to joint optimization. We also show that, among the filters compared, the learning approach adapts easily to a database, but the other approaches are equally robust. Full article
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
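The Wiener-filter paradigm for computing FIR coefficients can be sketched in 1-D: build the frequency-domain Wiener filter from the optical transfer function, the scene power spectral density, and the noise level, then truncate its impulse response to a finite kernel. This is a minimal sketch under a simple circular-convolution model, not the paper's codesigned optical model or dimensioning method:

```python
import numpy as np

def wiener_fir(psf, scene_psd, noise_psd, taps):
    """Finite-impulse-response approximation of the Wiener deconvolution filter.

    psf: sampled point spread function; scene_psd, noise_psd: power spectral
    densities on an n-point frequency grid; taps: desired FIR kernel length.
    """
    n = len(scene_psd)
    H = np.fft.fft(psf, n)  # optical transfer function (zero-padded FFT)
    # Classic Wiener filter: W = H* S / (|H|^2 S + N)
    W = np.conj(H) * scene_psd / (np.abs(H) ** 2 * scene_psd + noise_psd)
    w = np.real(np.fft.ifft(W))
    # Center the circular impulse response, then keep `taps` coefficients.
    w = np.roll(w, taps // 2)
    return w[:taps]
```

As a sanity check, a delta PSF with a flat scene PSD and zero noise yields an identity FIR kernel; in practice the truncation length trades restoration quality against embedded compute cost, which is what the paper's dimensioning method addresses.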

26 pages, 1534 KiB  
Article
Performance Comparison of Meta-Heuristics Applied to Optimal Signal Design for Parameter Identification
by Accacio Ferreira dos Santos Neto, Murillo Ferreira dos Santos, Mathaus Ferreira da Silva, Leonardo de Mello Honório, Edimar José de Oliveira and Edvaldo Soares Araújo Neto
Sensors 2023, 23(22), 9085; https://doi.org/10.3390/s23229085 - 10 Nov 2023
Cited by 2 | Viewed by 1291
Abstract
This paper presents a comparative study of the performance of various meta-heuristics for Optimal Signal Design, focusing on parameter estimation in nonlinear systems. The study introduces the Robust Sub-Optimal Excitation Signal Generation and Optimal Parameter Estimation (rSOESGOPE) methodology, which is derived from the well-known Particle Swarm Optimization (PSO) algorithm. Through a real-life case study involving an Autonomous Surface Vessel (ASV) equipped with three Degrees of Freedom (DoFs) and an aerial holonomic propulsion system, the effectiveness of the different meta-heuristics is thoroughly evaluated. An in-depth analysis and comparison of the results from the diverse meta-heuristics offers valuable insights for selecting the most suitable optimization technique for parameter estimation in nonlinear systems, helping researchers and practitioners in the field make informed decisions about the optimal approach. Full article
(This article belongs to the Section Sensors and Robotics)
