Search Results (59)

Search Parameters:
Keywords = intrinsic and extrinsic calibration

37 pages, 4647 KB  
Review
Multi-Camera Simultaneous Localization and Mapping for Unmanned Systems: A Survey
by Guoyan Wang, Likun Wang, Jun He, Yanwen Jiang, Qiming Qi and Yueshang Zhou
Electronics 2026, 15(3), 602; https://doi.org/10.3390/electronics15030602 - 29 Jan 2026
Abstract
Autonomous navigation in unmanned systems increasingly relies on robust perception and mapping capabilities under large-scale, dynamic, and unstructured environments. Multi-camera simultaneous localization and mapping (MCSLAM) has emerged as a promising solution due to its improved field-of-view coverage, redundancy, and robustness compared to single-camera systems. However, the deployment of MCSLAM introduces several technical challenges that remain insufficiently addressed in existing literature. These challenges include the high-dimensional nature of multi-view visual data, the computational cost associated with multi-view geometry and large-scale bundle adjustment, and the strict requirements on camera calibration, temporal synchronization, and geometric consistency across heterogeneous viewpoints. This survey provides a comprehensive review of recent advances in MCSLAM for unmanned systems, categorizing existing approaches based on system configuration, field-of-view overlap, calibration strategies, and optimization frameworks. We further analyze common failure modes, evaluate representative algorithms, and identify emerging research trends toward scalable, real-time, and uncertainty-aware MCSLAM in complex operational environments. Full article
51 pages, 4796 KB  
Review
Review of Optical Fiber Sensors: Principles, Classifications and Applications in Emerging Technologies
by Denzel A. Rodriguez-Ramirez, Jose R. Martinez-Angulo, Jose D. Filoteo-Razo, Juan C. Elizondo-Leal, Alan Diaz-Manriquez, Daniel Jauregui-Vazquez, Jesus P. Lauterio-Cruz and Vicente P. Saldivar-Alonso
Photonics 2026, 13(1), 40; https://doi.org/10.3390/photonics13010040 - 31 Dec 2025
Viewed by 953
Abstract
Optical fiber sensors (OFSs) have emerged as essential tools in the monitoring of physical, chemical, and bio-medical parameters in harsh situations due to their high sensitivity, electromagnetic interference (EMI) immunity, and long-term stability. However, the current literature contains scattered information in most reviews regarding individual sensing technologies or domains. This study provides a structured exploratory review in a novel inter-family analysis of both intrinsic and extrinsic configurations by analyzing more than 23,000 publications between 2019 and 2025 in five key domains: industry, medicine and biomedicine, environmental chemistry, civil/structural engineering, and aerospace. The analysis aims to critically discuss how functional principles/parameters and methods of interrogation affect the applicability of different OFS categories. The results reveal leading trends in the use of techniques like the use of fiber Bragg gratings (FBG) and distributed sensing in high-accuracy conditions or the rising role of extrinsic sensors in selective chemical situations and point out new approaches in areas like Artificial Intelligence (AI)- or Internet of Things (IoT)-integrated sensors. Further, this synthesis not only connects pieces of knowledge but also defines the technological barriers in terms of calibration cost and standardization: this provides strategic insight regarding future research and the scalability of industry deployment. Full article
(This article belongs to the Special Issue Advancements in Mode-Locked Lasers)

21 pages, 4909 KB  
Article
Rapid 3D Camera Calibration for Large-Scale Structural Monitoring
by Fabio Bottalico, Nicholas A. Valente, Christopher Niezrecki, Kshitij Jerath, Yan Luo and Alessandro Sabato
Remote Sens. 2025, 17(15), 2720; https://doi.org/10.3390/rs17152720 - 6 Aug 2025
Cited by 1 | Viewed by 1912
Abstract
Computer vision techniques such as three-dimensional digital image correlation (3D-DIC) and three-dimensional point tracking (3D-PT) have demonstrated broad applicability for monitoring the conditions of large-scale engineering systems by reconstructing and tracking dynamic point clouds corresponding to the surface of a structure. Accurate stereophotogrammetry measurements require the stereo cameras to be calibrated to determine their intrinsic and extrinsic parameters by capturing multiple images of a calibration object. This image-based approach becomes cumbersome and time-consuming as the size of the tested object increases. To streamline the calibration and make it scale-insensitive, a multi-sensor system embedding inertial measurement units and a laser sensor is developed to compute the extrinsic parameters of the stereo cameras. In this research, the accuracy of the proposed sensor-based calibration method in performing stereophotogrammetry is validated experimentally and compared with traditional approaches. Tests conducted at various scales reveal that the proposed sensor-based calibration enables reconstructing both static and dynamic point clouds, measuring displacements with an accuracy higher than 95% compared to image-based traditional calibration, while being up to an order of magnitude faster and easier to deploy. The novel approach has broad applications for making static, dynamic, and deformation measurements to transform how large-scale structural health monitoring can be performed. Full article
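The sensor-based idea in this abstract — deriving stereo extrinsics from inertial attitudes and a laser-measured baseline instead of images of a calibration target — can be sketched as follows. This is a simplified illustration, not the authors' implementation: the Z-Y-X Euler convention, the shared world frame, and the `baseline_dir_world` parameter are assumptions.

```python
import numpy as np

def rot_from_rpy(roll, pitch, yaw):
    """World-from-camera rotation from roll/pitch/yaw (radians), Z-Y-X convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def stereo_extrinsics(rpy1, rpy2, baseline, baseline_dir_world):
    """Relative pose of camera 2 w.r.t. camera 1 from IMU attitudes and a
    laser-measured baseline length along a known world direction."""
    R1, R2 = rot_from_rpy(*rpy1), rot_from_rpy(*rpy2)
    R_rel = R2.T @ R1                                 # camera-1 coords -> camera-2 coords
    t_world = baseline * np.asarray(baseline_dir_world, float)
    t_rel = R2.T @ t_world                            # baseline expressed in camera-2 frame
    return R_rel, t_rel
```

With identical attitudes the relative rotation reduces to the identity and the translation is just the measured baseline, which is the sanity check such a rig would run before imaging.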
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Third Edition))

20 pages, 4468 KB  
Article
A Matrix Effect Calibration Method of Laser-Induced Breakdown Spectroscopy Based on Laser Ablation Morphology
by Hongliang Pei, Qingwen Fan, Yixiang Duan and Mingtao Zhang
Appl. Sci. 2025, 15(15), 8640; https://doi.org/10.3390/app15158640 - 4 Aug 2025
Cited by 2 | Viewed by 1629
Abstract
To improve the accuracy of three-dimensional (3D) reconstruction under microscopic conditions for laser-induced breakdown spectroscopy (LIBS), this study developed a novel visual platform by integrating an industrial CCD camera with a microscope. A customized microscale calibration target was designed to calibrate intrinsic and extrinsic camera parameters accurately. Based on the pinhole imaging model, disparity maps were obtained via pixel matching to reconstruct high-precision 3D ablation morphology. A mathematical model was established to analyze how key imaging parameters—baseline distance, focal length, and depth of field—affect reconstruction accuracy in micro-imaging environments. Focusing on trace element detection in WC-Co alloy samples, the reconstructed ablation craters enabled the precise calculation of ablation volumes and revealed their correlations with laser parameters (energy, wavelength, pulse duration) and the physical-chemical properties of the samples. Multivariate regression analysis was employed to investigate how ablation morphology and plasma evolution jointly influence LIBS quantification. A nonlinear calibration model was proposed, significantly suppressing matrix effects, achieving R2 = 0.987, and reducing RMSE to 0.1. This approach enhances micro-scale LIBS accuracy and provides a methodological reference for high-precision spectral analysis in environmental and materials applications. Full article
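The pinhole-model reconstruction step the abstract describes rests on the standard rectified-stereo relation Z = f·B/d, with back-projection recovering the 3D point from a pixel and its depth. A minimal sketch (function names are illustrative, not from the paper):

```python
def disparity_to_depth(f_px, baseline_m, disparity_px):
    """Depth from disparity for a rectified pinhole stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

def backproject(u, v, z, f_px, cx, cy):
    """Recover the 3D camera-frame point for pixel (u, v) at depth z."""
    return ((u - cx) * z / f_px, (v - cy) * z / f_px, z)
```

Applied per matched pixel, these two steps turn a disparity map into the 3D ablation-crater morphology from which volumes can be integrated.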
(This article belongs to the Special Issue Novel Laser-Based Spectroscopic Techniques and Applications)

20 pages, 4400 KB  
Article
Fast Intrinsic–Extrinsic Calibration for Pose-Only Structure-from-Motion
by Xiaoyang Tian, Yangbing Ge, Zhen Tan, Xieyuanli Chen, Ming Li and Dewen Hu
Remote Sens. 2025, 17(13), 2247; https://doi.org/10.3390/rs17132247 - 30 Jun 2025
Viewed by 2915
Abstract
Structure-from-motion (SfM) is a foundational technology that facilitates 3D scene understanding and visual localization. However, bundle adjustment (BA)-based SfM is usually very time-consuming, especially when dealing with numerous unknown focal length cameras. To address these limitations, we proposed a novel SfM system based on pose-only adjustment (PA) for intrinsic and extrinsic joint optimization to accelerate computing. Firstly, we propose a base frame selection method based on depth uncertainty, which integrates the focal length and parallax angle under a multi-camera system to provide more stable depth estimation for subsequent optimization. We explicitly derive a global PA of joint intrinsic and extrinsic parameters to reduce the high dimensionality of the parameter space and deal with cameras with unknown focal lengths, improving the efficiency of optimization. Finally, a novel pose-only re-triangulation (PORT) mechanism is proposed for enhanced reconstruction completeness by recovering failed triangulations from incomplete point tracks. The proposed framework has been demonstrated to be both faster and comparable in accuracy to state-of-the-art SfM systems, as evidenced by public benchmarking and analysis of the visitor photo dataset. Full article
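The re-triangulation step builds on plain two-view triangulation. A minimal midpoint-method sketch — a generic building block, not the paper's PORT mechanism — finds the point halfway along the shortest segment between two viewing rays:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment joining two rays p_i = c_i + t_i * d_i,
    where c_i are camera centers and d_i ray directions in world coordinates."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = c2 - c1
    k = d1 @ d2
    denom = 1.0 - k * k
    if denom < 1e-12:
        raise ValueError("rays are nearly parallel")
    s = (b @ d1 - k * (b @ d2)) / denom   # parameter along ray 1
    t = (k * (b @ d1) - b @ d2) / denom   # parameter along ray 2
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```

When the two rays intersect exactly, the midpoint coincides with the intersection; with noisy observations it returns the symmetric compromise, which is what makes it useful for recovering failed triangulations from short point tracks.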

23 pages, 1849 KB  
Article
Calibration of Mobile Robots Using ATOM
by Bruno Silva, Diogo Vieira, Manuel Gomes, Miguel Riem Oliveira and Eurico Pedrosa
Sensors 2025, 25(8), 2501; https://doi.org/10.3390/s25082501 - 16 Apr 2025
Viewed by 1532
Abstract
The calibration of mobile manipulators requires accurate estimation of both the transformations provided by the localization system and the transformations between sensors and the motion coordinate system. Current works offer limited flexibility when dealing with mobile robotic systems with many different sensor modalities. In this work, we propose a calibration approach that simultaneously estimates these transformations, enabling precise calibration even when the localization system is imprecise. This approach is integrated into Atomic Transformations Optimization Method (ATOM), a versatile calibration framework designed for multi-sensor, multi-modal robotic systems. By formulating calibration as an extended optimization problem, ATOM estimates both sensor poses and calibration pattern positions. The proposed methodology is validated through simulations and real-world case studies, demonstrating its effectiveness in improving calibration accuracy for mobile manipulators equipped with diverse sensor modalities. Full article
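The "extended optimization problem" — jointly estimating sensor poses and calibration-pattern poses — reduces to minimizing residuals such as the reprojection term below. This is a generic sketch of such a residual, not ATOM's actual code; the 4×4-matrix pose representation and all names are assumptions.

```python
import numpy as np

def transform(T, pts):
    """Apply a 4x4 homogeneous transform to an Nx3 array of points."""
    return pts @ T[:3, :3].T + T[:3, 3]

def reprojection_residual(T_world_cam, T_world_pattern, K, corners_pattern, corners_px):
    """Project pattern corners through the current pose estimates and
    compare against the detected pixel coordinates."""
    T_cam_world = np.linalg.inv(T_world_cam)
    pc = transform(T_cam_world, transform(T_world_pattern, corners_pattern))
    proj = pc @ K.T
    proj = proj[:, :2] / proj[:, 2:3]     # perspective division
    return (proj - corners_px).ravel()
```

An optimizer then adjusts both `T_world_cam` (per sensor) and `T_world_pattern` (per pattern detection) to drive residuals like this one toward zero across all sensor modalities.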
(This article belongs to the Collection Sensors and Data Processing in Robotics)

15 pages, 11293 KB  
Article
An Assessment of the Stereo and Near-Infrared Camera Calibration Technique Using a Novel Real-Time Approach in the Context of Resource Efficiency
by Larisa Ivascu, Vlad-Florin Vinatu and Mihail Gaianu
Processes 2025, 13(4), 1198; https://doi.org/10.3390/pr13041198 - 15 Apr 2025
Viewed by 1436
Abstract
This paper provides a comparative analysis of calibration techniques applicable to stereo and near-infrared (NIR) camera systems, with a specific emphasis on the Intel RealSense SR300 alongside a standard 2-megapixel NIR camera. This study investigates the pivotal function of calibration within both stereo vision and NIR imaging applications, which are essential across various domains, including robotics, augmented reality, and low-light imaging. For stereo systems, we scrutinise the conventional method involving a 9 × 6 chessboard pattern utilised to ascertain the intrinsic and extrinsic camera parameters. The proposed methodology consists of three main steps: (1) real-time calibration error classification for stereo cameras, (2) NIR-specific calibration techniques, and (3) a comprehensive evaluation framework. This research introduces a novel real-time evaluation methodology that classifies calibration errors predicated on the pixel offsets between corresponding points in the left and right images. Conversely, NIR camera calibration techniques are modified to address the distinctive properties of near-infrared light. We deliberate on the difficulties encountered in devising NIR–visible calibration patterns and the imperative to consider the spectral response and temperature sensitivity within the calibration procedure. The paper also puts forth an innovative calibration assessment application that is relevant to both systems. Stereo cameras evaluate the corner detection accuracy in real time across multiple image pairs, whereas NIR cameras concentrate on assessing the distortion correction and intrinsic parameter accuracy under varying lighting conditions. Our experiments validate the necessity of routine calibration assessment, as environmental factors may compromise the calibration quality over time. 
We conclude by underscoring the disparities in the calibration requirements between stereo and NIR systems, thereby emphasising the need for specialised approaches tailored to each domain to guarantee an optimal performance in their respective applications. Full article
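The real-time classification of calibration errors from pixel offsets between corresponding points can be illustrated with a toy grader. In a well-rectified stereo pair, matched points share the same image row, so the mean vertical offset is a cheap health signal; the thresholds and grade names below are invented for illustration, not taken from the paper.

```python
def classify_rectification(pts_left, pts_right, good_px=0.5, fair_px=2.0):
    """Grade stereo rectification quality from vertical offsets of matched
    corner points (u, v) in the left and right images."""
    offsets = [abs(vl - vr) for (_, vl), (_, vr) in zip(pts_left, pts_right)]
    mean_off = sum(offsets) / len(offsets)
    if mean_off <= good_px:
        grade = "good"
    elif mean_off <= fair_px:
        grade = "fair"
    else:
        grade = "recalibrate"
    return mean_off, grade
```

Run on each incoming pair of corner detections, this gives the routine calibration assessment the abstract argues for: environmental drift shows up as a rising mean offset long before measurements visibly degrade.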
(This article belongs to the Special Issue Circular Economy and Efficient Use of Resources (Volume II))

16 pages, 4599 KB  
Article
Investigation of the Effect of Gate Oxide Screening with Adjustment Pulse on Commercial SiC Power MOSFETs
by Michael Jin, Monikuntala Bhattacharya, Hengyu Yu, Jiashu Qian, Shiva Houshmand, Atsushi Shimbori, Marvin H. White and Anant K. Agarwal
Electronics 2025, 14(7), 1366; https://doi.org/10.3390/electronics14071366 - 28 Mar 2025
Cited by 1 | Viewed by 2004
Abstract
This paper presents a method to recover the negative threshold voltage shift during high field gate oxide screening of 1.2 kV 4H-SiC MOSFETs with an additional adjustment gate voltage pulse. To reduce field failure rates of the MOSFETs in operation, manufacturers perform a screening treatment to remove devices with extrinsic defects in the oxide. Current gate oxide screening procedures are limited to oxide fields at or below ~9 MV/cm for short durations (<1 s), which is not enough to remove all the devices with extrinsic defects. The results show that by implementing a lower field gate pulse, the threshold voltage shift can be partially recovered, and therefore the maximum screening field and time can be increased. However, both the initial screening pulse and the adjustment pulse require careful calibration to prevent significant degradation of the device threshold voltage, on-resistance, interface state density, or intrinsic lifetime. With a well calibrated set of pulses, higher screening fields can be utilized without significantly damaging the devices. This leads to an improvement in the overall screening efficiency of the process, reducing the number of devices with extrinsic oxide defects entering the field, and improving the reliability of the SiC MOSFETs in operation. Full article

21 pages, 6484 KB  
Article
A Perspective Distortion Correction Method for Planar Imaging Based on Homography Mapping
by Chen Wang, Yabin Ding, Kai Cui, Jianhui Li, Qingpo Xu and Jiangping Mei
Sensors 2025, 25(6), 1891; https://doi.org/10.3390/s25061891 - 18 Mar 2025
Cited by 5 | Viewed by 4014
Abstract
In monocular vision measurement, a barrier to implementation is the perspective distortion caused by manufacturing errors in the imaging chip and non-parallelism between the measurement plane and its image, which seriously affects the accuracy of pixel equivalent and measurement results. This paper proposed a perspective distortion correction method for planar imaging based on homography mapping. Factors causing perspective distortion from the camera’s intrinsic and extrinsic parameters were analyzed, followed by constructing a perspective transformation model. Then, a corrected imaging plane was constructed, and the model was further calibrated by utilizing the homography between the measurement plane, the actual imaging plane, and the corrected imaging plane. The nonlinear and perspective distortions were simultaneously corrected by transforming the original image to the corrected imaging plane. The experiment measuring the radius, length, angle, and area of a designed pattern shows that the root mean square errors will be 0.016 mm, 0.052 mm, 0.16°, and 0.68 mm2, and the standard deviations will be 0.016 mm, 0.045 mm, 0.033° and 0.65 mm2, respectively. The proposed method can effectively solve the problem of high-precision planar measurement under perspective distortion. Full article
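The homography machinery the correction rests on can be sketched with a direct linear transform (DLT) estimate plus point warping. A minimal numpy version — illustrative, not the authors' calibration pipeline:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography (up to scale) from >= 4 point pairs via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)          # right singular vector of smallest singular value
    return H / H[2, 2]

def warp_points(H, pts):
    """Map Nx2 points through a homography with the projective division."""
    pts = np.asarray(pts, float)
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]
```

Estimating one homography from the measurement plane to the corrected imaging plane and warping every pixel through it is exactly the kind of transformation the method chains together to remove perspective distortion.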
(This article belongs to the Section Optical Sensors)

19 pages, 30440 KB  
Article
A Method for the Calibration of a LiDAR and Fisheye Camera System
by Álvaro Martínez, Antonio Santo, Monica Ballesta, Arturo Gil and Luis Payá
Appl. Sci. 2025, 15(4), 2044; https://doi.org/10.3390/app15042044 - 15 Feb 2025
Cited by 4 | Viewed by 3355
Abstract
LiDAR and camera systems are frequently used together to gain a more complete understanding of the environment in different fields, such as mobile robotics, autonomous driving, or intelligent surveillance. Accurately calibrating the extrinsic parameters is crucial for the accurate fusion of the data captured by both systems, which is equivalent to finding the transformation between the reference systems of both sensors. Traditional calibration methods for LiDAR and camera systems are developed for pinhole cameras and are not directly applicable to fisheye cameras. This work proposes a target-based calibration method for LiDAR and fisheye camera systems that avoids the need to transform images to a pinhole camera model, reducing the computation time. Instead, the method uses the spherical projection of the image, obtained with the intrinsic calibration parameters and the corresponding point cloud for LiDAR–fisheye calibration. Thus, unlike a pinhole-camera-based system, a wider field of view is provided, adding more information, which will lead to a better understanding of the environment itself, as well as enabling using fewer image sensors to cover a wider area. Full article
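The spherical projection at the heart of the method maps each fisheye pixel to a ray on the unit sphere. Here is a sketch under the equidistant model (image radius proportional to the angle from the optical axis, r = f·θ) — one common fisheye model; the paper's actual intrinsic model may differ:

```python
import numpy as np

def fisheye_to_sphere(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) to a unit-sphere ray, equidistant model r = f*theta."""
    mx, my = (u - cx) / fx, (v - cy) / fy
    theta = np.hypot(mx, my)              # angle from the optical axis
    if theta < 1e-12:
        return np.array([0.0, 0.0, 1.0])  # principal point looks straight ahead
    phi = np.arctan2(my, mx)              # azimuth around the axis
    s = np.sin(theta)
    return np.array([s * np.cos(phi), s * np.sin(phi), np.cos(theta)])
```

Matching these rays directly against LiDAR points is what lets the method skip the usual rectification to a pinhole model while keeping the full fisheye field of view.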

47 pages, 20555 KB  
Article
Commissioning an All-Sky Infrared Camera Array for Detection of Airborne Objects
by Laura Domine, Ankit Biswas, Richard Cloete, Alex Delacroix, Andriy Fedorenko, Lucas Jacaruso, Ezra Kelderman, Eric Keto, Sarah Little, Abraham Loeb, Eric Masson, Mike Prior, Forrest Schultz, Matthew Szenher, Wesley Andrés Watters and Abigail White
Sensors 2025, 25(3), 783; https://doi.org/10.3390/s25030783 - 28 Jan 2025
Cited by 6 | Viewed by 5882
Abstract
To date, there is little publicly available scientific data on unidentified aerial phenomena (UAP) whose properties and kinematics purportedly reside outside the performance envelope of known phenomena. To address this deficiency, the Galileo Project is designing, building, and commissioning a multi-modal, multi-spectral ground-based observatory to continuously monitor the sky and collect data for UAP studies via a rigorous long-term aerial census of all aerial phenomena, including natural and human-made. One of the key instruments is an all-sky infrared camera array using eight uncooled long-wave-infrared FLIR Boson 640 cameras. In addition to performing intrinsic and thermal calibrations, we implement a novel extrinsic calibration method using airplane positions from Automatic Dependent Surveillance–Broadcast (ADS-B) data that we collect synchronously on site. Using a You Only Look Once (YOLO) machine learning model for object detection and the Simple Online and Realtime Tracking (SORT) algorithm for trajectory reconstruction, we establish a first baseline for the performance of the system over five months of field operation. Using an automatically generated real-world dataset derived from ADS-B data, a dataset of synthetic 3D trajectories, and a hand-labeled real-world dataset, we find an acceptance rate (fraction of in-range airplanes passing through the effective field of view of at least one camera that are recorded) of 41% for ADS-B-equipped aircraft, and a mean frame-by-frame aircraft detection efficiency (fraction of recorded airplanes in individual frames which are successfully detected) of 36%. The detection efficiency is heavily dependent on weather conditions, range, and aircraft size. Approximately 500,000 trajectories of various aerial objects are reconstructed from this five-month commissioning period. These trajectories are analyzed with a toy outlier search focused on the large sinuosity of apparent 2D reconstructed object trajectories. 
About 16% of the trajectories are flagged as outliers and manually examined in the IR images. From these ∼80,000 outliers, 144 trajectories remain ambiguous; these are likely mundane objects but cannot be further elucidated at this stage of development without information about distance and kinematics or other sensor modalities. We demonstrate the application of a likelihood-based statistical test to evaluate the significance of this toy outlier analysis. Our observed count of ambiguous outliers combined with systematic uncertainties yields an upper limit of 18,271 outliers for the five-month interval at a 95% confidence level. This test is applicable to all of our future outlier searches. Full article
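The "toy outlier search focused on large sinuosity" reduces to a simple ratio: path length over endpoint separation of each reconstructed 2D track. A sketch (the 1.5 threshold is illustrative, not the study's cut):

```python
import math

def sinuosity(track):
    """Path length divided by endpoint separation of a 2D trajectory.
    A straight track scores 1; wandering tracks score higher."""
    path = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    chord = math.dist(track[0], track[-1])
    return path / chord if chord > 0 else float("inf")

def flag_outliers(tracks, threshold=1.5):
    """Indices of tracks whose sinuosity exceeds the threshold."""
    return [i for i, t in enumerate(tracks) if sinuosity(t) > threshold]
```

Ballistic aircraft tracks cluster near sinuosity 1, so screening on this single statistic is a cheap first pass before manual examination of the flagged trajectories.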
(This article belongs to the Section Sensors and Robotics)

22 pages, 2851 KB  
Article
Enhanced Three-Axis Frame and Wand-Based Multi-Camera Calibration Method Using Adaptive Iteratively Reweighted Least Squares and Comprehensive Error Integration
by Oleksandr Yuhai, Yubin Cho, Ahnryul Choi and Joung Hwan Mun
Photonics 2024, 11(9), 867; https://doi.org/10.3390/photonics11090867 - 15 Sep 2024
Cited by 1 | Viewed by 2076
Abstract
The accurate transformation of multi-camera 2D coordinates into 3D coordinates is critical for applications like animation, gaming, and medical rehabilitation. This study unveils an enhanced multi-camera calibration method that alleviates the shortcomings of existing approaches by incorporating a comprehensive cost function and Adaptive Iteratively Reweighted Least Squares (AIRLS) optimization. By integrating static error components (3D coordinate, distance, angle, and reprojection errors) with dynamic wand distance errors, the proposed comprehensive cost function facilitates precise multi-camera parameter calculations. The AIRLS optimization effectively balances the optimization of both static and dynamic error elements, enhancing the calibration’s robustness and efficiency. Comparative validation against advanced multi-camera calibration methods shows this method’s superior accuracy (average error 0.27 ± 0.22 mm) and robustness. Evaluation metrics including average distance error, standard deviation, and range (minimum and maximum) of errors, complemented by statistical analysis using ANOVA and post-hoc tests, underscore its efficacy. The method markedly enhances the accuracy of calculating intrinsic, extrinsic, and distortion parameters, proving highly effective for precise 3D reconstruction in diverse applications. This study represents substantial progression in multi-camera calibration, offering a dependable and efficient solution for intricate calibration challenges. Full article
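The iteratively-reweighted-least-squares idea — refit repeatedly while down-weighting observations with large residuals — can be shown on a robust line fit. The MAD-based Huber weighting below is a generic stand-in for the paper's adaptive scheme, not the AIRLS variant itself:

```python
import numpy as np

def irls(A, b, iters=50, eps=1e-6):
    """Iteratively reweighted least squares: residuals far beyond a robust
    scale estimate (median absolute deviation) get Huber-style down-weighting."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        scale = 1.4826 * np.median(np.abs(r)) + eps        # robust sigma from MAD
        w = np.sqrt(np.minimum(1.0, scale / (np.abs(r) + eps)))
        x = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]
    return x
```

A single gross outlier barely perturbs the recovered parameters, which is the property that makes reweighting attractive when wand detections are occasionally corrupted.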
(This article belongs to the Special Issue Recent Advances in 3D Optical Measurement)

15 pages, 6834 KB  
Article
Research on Target Ranging Method for Live-Line Working Robots
by Guoxiang Hua, Guo Chen, Qingxin Luo and Jiyuan Yan
Symmetry 2024, 16(4), 487; https://doi.org/10.3390/sym16040487 - 17 Apr 2024
Cited by 2 | Viewed by 1742
Abstract
Due to the operation of live-line working robots at elevated heights for precision tasks, a suitable visual assistance system is essential to determine the position and distance of the robotic arm or gripper relative to the target object. In this study, we propose a method for distance measurement in live-line working robots by integrating the YOLOv5 algorithm with binocular stereo vision. The camera’s intrinsic and extrinsic parameters, as well as distortion coefficients, are obtained using the Zhang Zhengyou calibration method. Subsequently, stereo rectification is performed on the images to establish a standardized binocular stereovision model. The Census and Sum of Absolute Differences (SAD) fused stereo matching algorithm is applied to compute the disparity map. We train a dataset of transmission line bolts within the YOLO framework to derive the optimal model. The identified bolts are framed, and the depth distance of the target is ultimately calculated. And through the experimental verification of the bolt positioning, the results show that the method can achieve a relative error of 1% in the proximity of positioning. This approach provides real-time and accurate environmental perception for symmetrical structural live-line working robots, enhancing the stability of these robots. Full article
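The SAD side of the fused matcher can be shown on a single rectified scanline: for each left-image pixel, pick the disparity minimizing the summed absolute difference over a small window. A toy version (window size and search range are illustrative):

```python
def sad_disparity(left_row, right_row, x, win, max_d):
    """Best disparity for pixel x on one rectified scanline by minimizing
    the sum of absolute differences (SAD) over a 1-D window."""
    half = win // 2
    patch = left_row[x - half : x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        if x - d - half < 0:          # window would fall off the right image
            break
        cand = right_row[x - d - half : x - d + half + 1]
        cost = sum(abs(a - b) for a, b in zip(patch, cand))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

The depth of the detected bolt then follows from the usual rectified-stereo relation between disparity, focal length, and baseline.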
(This article belongs to the Section Computer)

23 pages, 1847 KB  
Article
AC Magnetic Susceptibility: Mathematical Modeling and Experimental Realization on Poly-Crystalline and Single-Crystalline High-Tc Superconductors YBa2Cu3O7−δ and Bi2−xPbxSr2Ca2Cu3O10+y
by Petros Moraitis, Loukas Koutsokeras and Dimosthenis Stamopoulos
Materials 2024, 17(8), 1744; https://doi.org/10.3390/ma17081744 - 10 Apr 2024
Cited by 6 | Viewed by 2376
Abstract
The multifaceted inductive technique of AC magnetic susceptibility (ACMS) provides versatile and reliable means for the investigation of the respective properties of magnetic and superconducting materials. Here, we explore, both mathematically and experimentally, the ACMS set-up, based on four coaxial pick-up coils assembled in the second-derivative configuration, when employed in the investigation of differently shaped superconducting specimens of poly-crystalline YBa2Cu3O7−δ and Bi2−xPbxSr2Ca2Cu3O10+y and single-crystalline YBa2Cu3O7−δ. Through the mathematical modeling of both the ACMS set-up and of linearly responding superconducting specimens, we obtain a closed-form relation for the DC voltage output signal. The latter is translated directly to the so-called extrinsic ACMS of the studied specimen. By taking into account the specific characteristics of the studied high-Tc specimens (such as the shape and dimensions for the demagnetizing effect, porosity for the estimation of the superconducting volume fraction, etc.), we eventually draw the truly intrinsic ACMS of the parent material. Importantly, this is carried out without the need for any calibration specimen. The comparison of the mathematical modeling with the experimental data of the aforementioned superconducting specimens evidences fair agreement. Full article
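The extrinsic-to-intrinsic step corrects the measured susceptibility for the demagnetizing factor N of the specimen shape, via χ_ext = χ_int / (1 + N·χ_int). A sketch in SI convention — the paper's additional porosity and volume-fraction corrections are omitted here:

```python
def intrinsic_susceptibility(chi_ext, N):
    """Invert the demagnetizing relation chi_ext = chi_int / (1 + N * chi_int)
    to recover the intrinsic susceptibility from the measured one."""
    return chi_ext / (1 - N * chi_ext)
```

For a fully superconducting sphere (N = 1/3) the measured value is −3/2, and the correction recovers the intrinsic perfect-diamagnet value χ_int = −1, which is the standard consistency check for such set-ups.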

22 pages, 13045 KB  
Article
A Non-Contact Measurement of Animal Body Size Based on Structured Light
by Fangzhou Xu, Yuxuan Zhang, Zelin Zhang and Nan Geng
Appl. Sci. 2024, 14(2), 903; https://doi.org/10.3390/app14020903 - 20 Jan 2024
Cited by 3 | Viewed by 3057
Abstract
To improve the accuracy of non-contact measurements of animal body size and reduce costs, new monocular camera scanning equipment based on structured light was built with a matched point cloud generation algorithm. Firstly, using the structured light 3D measurement model, the camera intrinsic matrix and extrinsic matrix could be calculated. Secondly, the least square method and the improved segment–facet intersection method were used to implement and optimize the calibration of the light plane. Then, a new algorithm was proposed to extract gray centers, as well as a denoising and matching algorithm, both of which alleviate the astigmatism of light on animal fur and the distortion or fracture of light stripes caused by the irregular shape of an animal’s body. Thirdly, the point cloud was generated via the line–plane intersection method, from which animal body sizes could be measured. Finally, an experiment on live animals such as rabbits and animal specimens such as the fox and the goat was conducted in order to compare our equipment with a depth camera and a 3D scanner. The results show that the error of our equipment is approximately 5%, which is much smaller than the error of the other two pieces of equipment. This equipment provides a practicable option for measuring animal body size. Full article
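The line–plane intersection that generates each point of the cloud intersects the back-projected camera ray with the calibrated light plane n·x + d = 0. A minimal sketch (names are illustrative, not the paper's code):

```python
import numpy as np

def ray_plane_intersection(origin, direction, n, d):
    """Intersect the camera ray origin + t * direction with the light plane
    n . x + d = 0, returning the 3D point on the plane."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    n = np.asarray(n, float)
    denom = n @ direction
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    t = -(n @ origin + d) / denom
    if t < 0:
        raise ValueError("plane lies behind the camera")
    return origin + t * direction
```

Sweeping the laser stripe across the animal and intersecting every stripe pixel's ray with the light plane yields the point cloud from which body dimensions are measured.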
