Search Results (89)

Search Parameters:
Keywords = stereovision

33 pages, 16534 KiB  
Article
Design of 3D Scanning Technology Using a Method with No External Reference Elements and Without Repositioning of the Device Relative to the Object
by Adrián Vodilka, Marek Kočiško, Martin Pollák, Jakub Kaščak and Jozef Török
Appl. Sci. 2025, 15(8), 4533; https://doi.org/10.3390/app15084533 - 19 Apr 2025
Viewed by 661
Abstract
The use of 3D scanning technologies for surface scanning of objects is limited by environmental conditions and by requirements arising from the technologies' characteristics. One emerging field is technical diagnostics in hard-to-reach places containing objects of varying materials and surface characteristics, where commercially available 3D scanning technologies are constrained by the limited space. Moreover, in such areas it is impractical to use external reference elements or to move the equipment during digitization. This paper addresses this challenge with a novel markerless 3D scanning system that digitizes objects in confined spaces without external reference elements and without repositioning the device relative to the object, using the Active Shape from Stereo technique with vertical laser line projection. For this purpose, a testing and prototype design and a software solution employing a unique method of calculating 3D surface coordinates are proposed. Beyond hard-to-reach places, the solution can serve as a desktop 3D scanner and in other 3D digitizing applications for objects of different materials and surface characteristics. The device is also well suited to inspecting 3D printed objects, enabling quick, markerless checks of surface geometry and dimensions during the printing process to ensure printing accuracy and quality. Full article
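The core of such an active stereo approach is triangulating each illuminated laser-line pixel seen by a calibrated camera pair. The paper's own coordinate-calculation method is not reproduced here; the sketch below shows only the standard linear (DLT) triangulation it conceptually builds on, with toy camera matrices as assumptions:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coords (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

# Toy rig: identical intrinsics, second camera shifted 0.1 m along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])
X_true = np.array([0.05, -0.02, 1.5])            # a point on the laser line
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers X_true (noise-free case)
```

In a real scanner the same operation is applied to every pixel of the detected laser line, sweeping the line across the object to build the point cloud.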

16 pages, 8058 KiB  
Article
Design of a Prototype of an Innovative 3D Scanning Technology for Use in the Digitization of Hard-to-Reach Places
by Adrián Vodilka, Marek Kočiško and Jakub Kaščak
Appl. Sci. 2025, 15(5), 2817; https://doi.org/10.3390/app15052817 - 5 Mar 2025
Cited by 1 | Viewed by 924
Abstract
This research addresses the challenge of digitizing the surface of objects in hard-to-reach areas and focuses on integrating reverse engineering techniques with innovative digitization approaches. Conventional non-destructive testing techniques, such as industrial videoscope inspection, cannot capture accurate geometric and surface information without disassembly of the components. To overcome these limitations, this research proposes a 3D digitizing prototype that integrates structured light, laser scanning, and active stereo techniques. The device utilizes ESP32-CAM modules and compact mechanical components designed for portability and usability in confined spaces. Experimental validation involved scanning complex and reflective surfaces, including printer components and the engine compartment of an automobile, demonstrating the device's ability to produce detailed point clouds in challenging environments. Key innovations include a unique folding-mechanism approach to active-stereovision 3D scanning. The findings highlight the device's potential for applications in technical diagnostics, industrial inspection, and environments where traditional digitizing technologies cannot be utilized. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

11 pages, 215 KiB  
Article
Pediatric and Juvenile Strabismus Surgery Under General Anesthesia: Functional Outcomes and Safety
by Jakob Briem, Sandra Rezar-Dreindl, Lorenz Wassermann, Katharina Eibenberger, Franz Pusch, Ursula Schmidt-Erfurth and Eva Stifter
J. Clin. Med. 2025, 14(4), 1076; https://doi.org/10.3390/jcm14041076 - 8 Feb 2025
Viewed by 1368
Abstract
Background/Objectives: The aim of this paper was to evaluate the safety of surgical intervention, using anesthesia and ophthalmological parameters, in pediatric strabismus patients. The design was a retrospective case series. Methods: The setting was the Department of Ophthalmology, Medical University of Vienna, Austria. Participants: In total, 208 children aged 0–18 years who underwent strabismus surgery for exotropia or esotropia between 2013 and 2020 were included. Main outcomes and measures: The duration of surgery, intra- and postoperative complications, the postoperative angle of deviation (AoD), and functional outcomes (visual acuity, stereovision) were analyzed. Results: The mean age at the time of surgery was 6.0 ± 4.1 years (range 0.6–18.0). The mean anesthesia time among all patients was 75.9 ± 19.3 min. Mean surgery and anesthesia times did not differ between the age groups, and longer anesthesia and surgery durations had no significant effect on the occurrence of intraoperative complications (p = 0.610 and p = 0.190, respectively). Intraoperative complications were recorded in 53% of the patients (most frequently triggering of the oculocardiac reflex (OCR)), and postoperative complications in 22% (most frequently postoperative nausea, vomiting, and pain). An OCR was triggered more often in children older than 6 years than in younger children (p = 0.016). The mean angle of deviation was significantly reduced from preoperative to postoperative measurements. Preoperative stereovision tests were positive in 35% of the patients, increasing to over 80% postoperatively. Conclusions: Strabismus surgery performed under general anesthesia in children aged 0 to 18 years is safe with regard to both surgical and anesthetic complications. A significant decrease in the angle of deviation and a high rate of stereovision were achieved with a low rate of re-treatments. However, the retrospective design, absence of standardized documentation, and limited sample size may affect the consistency and comparability of this study's findings. Full article
(This article belongs to the Section Ophthalmology)
19 pages, 2560 KiB  
Article
Evaluation of Rapeseed Leave Segmentation Accuracy Using Binocular Stereo Vision 3D Point Clouds
by Lili Zhang, Shuangyue Shi, Muhammad Zain, Binqian Sun, Dongwei Han and Chengming Sun
Agronomy 2025, 15(1), 245; https://doi.org/10.3390/agronomy15010245 - 20 Jan 2025
Cited by 2 | Viewed by 1209
Abstract
Point cloud segmentation is necessary for obtaining highly precise morphological traits in plant phenotyping. Although point cloud segmentation has advanced considerably, segmenting point clouds of complex plant leaves remains challenging. Rapeseed leaves are critical in cultivation and breeding, yet traditional two-dimensional imaging suffers reduced segmentation accuracy due to occlusions between plants. The current study proposes the use of binocular stereo-vision technology to obtain three-dimensional (3D) point clouds of rapeseed leaves at the seedling and bolting stages. The point clouds were colorized based on elevation values to better process the 3D point cloud data and extract rapeseed phenotypic parameters. Denoising methods were selected based on the source and classification of the point cloud noise: ground point clouds were denoised by combining plane fitting with pass-through filtering, while statistical filtering was used to denoise outliers generated during scanning. We found that, during the seedling stage of rapeseed, a region-growing segmentation method was helpful in finding suitable parameter thresholds for leaf segmentation, and the Locally Convex Connected Patches (LCCP) clustering method was used for leaf segmentation at the bolting stage. The results confirm that combining plane fitting with pass-through filtering effectively removes ground point cloud noise, while statistical filtering successfully removes outlier noise points generated during scanning. Finally, using the region-growing algorithm during the seedling stage with a normal angle threshold of 5.0/180.0·π rad (5°) and a curvature threshold of 1.5 avoids under-segmentation and over-segmentation, achieving complete segmentation of rapeseed seedling leaves, while the LCCP clustering method fully segments rapeseed leaves at the bolting stage.
The proposed method provides insights to improve the accuracy of subsequent point cloud phenotypic parameter extraction, such as rapeseed leaf area, and is beneficial for the 3D reconstruction of rapeseed. Full article
(This article belongs to the Special Issue Unmanned Farms in Smart Agriculture)
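Statistical filtering of scan outliers, as used in this pipeline, thresholds each point's mean distance to its k nearest neighbours against the global mean plus a multiple of the standard deviation. A minimal NumPy sketch with brute-force neighbours (the k and std_ratio values are illustrative, not the authors' parameters):

```python
import numpy as np

def statistical_outlier_filter(points, k=8, std_ratio=1.5):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds (global mean + std_ratio * global std) of that statistic."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)[:, 1:k + 1]   # skip self-distance (0)
    mean_knn = d_sorted.mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

rng = np.random.default_rng(0)
cloud = rng.normal(0, 0.01, size=(200, 3))        # dense leaf patch
cloud = np.vstack([cloud, [[1.0, 1.0, 1.0]]])     # one isolated scan outlier
filtered = statistical_outlier_filter(cloud)
print(len(cloud), "->", len(filtered))  # the isolated point is removed
```

Library implementations (e.g. PCL's StatisticalOutlierRemoval or Open3D's `remove_statistical_outlier`) follow the same idea with efficient neighbour search.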

11 pages, 16191 KiB  
Proceeding Paper
Lens Distortion Measurement and Correction for Stereovision Multi-Camera System
by Grzegorz Madejski, Sebastian Zbytniewski, Mateusz Kurowski, Dawid Gradolewski, Włodzimierz Kaoka and Wlodek J. Kulesza
Eng. Proc. 2024, 82(1), 85; https://doi.org/10.3390/ecsa-11-20457 - 26 Nov 2024
Viewed by 1170
Abstract
In modern autonomous systems, measurement repeatability and precision are crucial for robust decision-making algorithms. Stereovision, which is widely used in safety applications, provides information about an object's shape, orientation, and 3D localisation. The camera's lens distortion is a common source of systematic measurement errors, which can be estimated and then eliminated, or at least reduced, using a suitable correction/calibration method. In this study, a set of cameras equipped with Basler lenses (C125-0618-5M F1.8 f6mm) and Sony IMX477R sensors is calibrated using a state-of-the-art Zhang–Duda–Frese method. The resulting distortion coefficients are used to correct the images. The calibrations are evaluated with the aid of two novel methods for lens distortion measurement. The first is based on linear regression over images of a vertical and horizontal line pattern. Based on the evaluation tests, outlying cameras are eliminated from the test set by applying the 2σ criterion. For the remaining cameras, the MSE was reduced by a factor of up to 75.4, to 1.8 px–6.9 px. The second method is designed to evaluate the impact of lens distortion on stereovision applied to bird tracking around wind farms. A bird's flight trajectory is synthetically generated to estimate changes in disparity and distance before and after calibration. The method shows that at the margins of the image, lens distortion can introduce errors of +17% to +20% into the object's distance measurement for cameras with the same distortion, and from −41% upwards for camera pairs with different lens distortions. These results highlight the importance of well-calibrated cameras in systems that require precision, such as stereovision bird tracking in bird–turbine collision risk assessment systems. Full article
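The first evaluation method rests on a simple observation: a distortion-free lens images a straight line pattern as a straight line, so the residual of a least-squares line fit measures distortion, and the per-camera MSEs can then be screened with the 2σ criterion. A hedged sketch with made-up numbers (not the paper's measurements):

```python
import numpy as np

def line_fit_mse(points):
    """MSE (px^2) of a least-squares line fit through 2D points.
    Residual straightness error is a simple lens-distortion metric."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef
    return float(np.mean(residuals ** 2))

# A straight edge, and the same edge bowed by a barrel-like curvature term.
x = np.linspace(-300, 300, 61)
straight = np.column_stack([x, 0.5 * x + 10.0])
bowed = straight.copy()
bowed[:, 1] += 1e-4 * (x ** 2 - x.var())     # hypothetical distortion
print(line_fit_mse(straight))                # ~0 for the straight edge
print(line_fit_mse(bowed))                   # clearly larger

# 2-sigma screening of per-camera MSEs (illustrative values).
mses = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 1.1, 0.8, 1.0, 1.3, 12.0])
keep = mses <= mses.mean() + 2 * mses.std()  # drops the outlying camera
print(int(keep.sum()), "cameras kept")
```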

20 pages, 11683 KiB  
Article
Responses of Vehicular Occupants During Emergency Braking and Aggressive Lane-Change Maneuvers
by Hyeonho Hwang and Taewung Kim
Sensors 2024, 24(20), 6727; https://doi.org/10.3390/s24206727 - 19 Oct 2024
Viewed by 1448
Abstract
To validate active human body models for investigating occupant safety in autonomous cars, it is crucial to comprehend the responses of vehicle occupants during evasive maneuvers. This study sought to quantify the behavior of midsize male and small female passenger seat occupants in both upright and reclined postures during three types of vehicle maneuvers. Volunteer tests were conducted using a minivan, where vehicle kinematics were measured with a DGPS sensor and occupant kinematics were captured with a stereo-vision motion capture system. Seatbelt loads, belt pull-out, and footrest reaction forces were also documented. The interior of the vehicle was 3D-scanned for modeling purposes. Results indicated that seatback angles significantly affected occupant kinematics, with small female volunteers displaying reduced head and torso movements, except during emergency braking with an upright seatback. Lane-change maneuvers revealed that maximum lateral head excursions varied depending on the maneuver's direction. The study concluded that seatback angles were crucial in determining the extent of occupant movement, with notable variations in head and torso excursions observed. The collected data assist in understanding occupant behavior during evasive maneuvers and contribute to the validation of human body models, offering essential insights for enhancing safety systems in autonomous vehicles. Full article
(This article belongs to the Special Issue Sensing Human Cognitive Factors)

17 pages, 15407 KiB  
Article
Research on Defect Detection Method of Fusion Reactor Vacuum Chamber Based on Photometric Stereo Vision
by Guodong Qin, Haoran Zhang, Yong Cheng, Youzhi Xu, Feng Wang, Shijie Liu, Xiaoyan Qin, Ruijuan Zhao, Congju Zuo and Aihong Ji
Sensors 2024, 24(19), 6227; https://doi.org/10.3390/s24196227 - 26 Sep 2024
Cited by 1 | Viewed by 1264
Abstract
This paper addresses image enhancement and 3D reconstruction techniques for dim scenes inside the vacuum chamber of a nuclear fusion reactor. First, an improved multi-scale Retinex low-light image enhancement algorithm with adaptive weights is designed. It can recover image detail information that is not visible in low-light environments, maintaining image clarity and contrast for easy observation. Second, according to the actual needs of target plate defect detection and 3D reconstruction inside the vacuum chamber, a defect reconstruction algorithm based on photometric stereo vision is proposed. To optimize the position of the light source, a light source illumination profile simulation system is designed in this paper to provide an optimized light array for crack detection inside vacuum chambers without the need for extensive experimental testing. Finally, a robotic platform mounted with a binocular stereo-vision camera is constructed and image enhancement and defect reconstruction experiments are performed separately. The results show that the above method can broaden the gray level of low-illumination images and improve the brightness value and contrast. The maximum depth error is less than 24.0% and the maximum width error is less than 15.3%, which achieves the goal of detecting and reconstructing the defects inside the vacuum chamber. Full article
(This article belongs to the Section Optical Sensors)
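Classic multi-scale Retinex (MSR) enhancement averages log(I) − log(blurred I) over several Gaussian scales; the paper's adaptive weighting scheme is not detailed in the abstract, so the sketch below uses fixed equal weights as a stand-in assumption, with purely illustrative sigmas:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflect padding (NumPy only)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    pad = np.pad(img, ((r, r), (0, 0)), mode="reflect")
    img = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, pad)
    pad = np.pad(img, ((0, 0), (r, r)), mode="reflect")
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 1, pad)

def multiscale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """MSR: weighted sum of log(I) - log(G_sigma * I) over scales.
    `weights` stands in for the paper's adaptive weights (assumption:
    fixed equal weights here), then stretch to displayable 0..255."""
    img = img.astype(np.float64) + 1.0            # avoid log(0)
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img)
    for w, s in zip(weights, sigmas):
        out += w * (np.log(img) - np.log(gaussian_blur(img, s) + 1e-6))
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return np.round(255 * out).astype(np.uint8)

# A dim synthetic gradient standing in for a low-light chamber image.
dark = (np.linspace(0, 30, 64)[None, :] * np.ones((64, 1))).astype(np.uint8)
enhanced = multiscale_retinex(dark, sigmas=(3, 9))
print(dark.max(), "->", enhanced.max())  # grey levels stretched to full range
```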

19 pages, 15208 KiB  
Article
Analysis of the Influence of Refraction-Parameter Deviation on Underwater Stereo-Vision Measurement with Flat Refraction Interface
by Guanqing Li, Shengxiang Huang, Zhi Yin, Nanshan Zheng and Kefei Zhang
Remote Sens. 2024, 16(17), 3286; https://doi.org/10.3390/rs16173286 - 4 Sep 2024
Viewed by 1244
Abstract
There has been substantial research on multi-medium visual measurement in fields such as underwater three-dimensional reconstruction and underwater structure monitoring. Addressing the issue where traditional air-based visual-measurement models fail due to refraction when light passes through different media, numerous studies have established refraction-imaging models based on the actual geometry of light refraction to compensate for the effects of refraction on cross-media imaging. However, the calibration of refraction parameters inevitably contains errors, leading to deviations in these parameters. To analyze the impact of refraction-parameter deviations on measurements in underwater structure visual navigation, this paper develops a dual-media stereo-vision measurement simulation model and conducts comprehensive simulation experiments. The results indicate that to achieve high-precision underwater-measurement outcomes, the calibration method for refraction parameters, the distribution of the targets in the field of view, and the distance of the target from the camera must all be meticulously designed. These findings provide guidance for the construction of underwater stereo-vision measurement systems, the calibration of refraction parameters, underwater experiments, and practical applications. Full article
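The refraction-imaging models discussed here trace each ray through Snell's law at the flat port. A vector-form sketch with a toy air-to-water example (this is the generic refraction step, not the paper's full calibration model):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a flat interface with unit normal n
    (pointing into the incident medium), refractive indices n1 -> n2."""
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)                    # incidence angle cosine
    r = n1 / n2
    k = 1.0 - r**2 * (1.0 - cos_i**2)
    if k < 0:
        return None                          # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n

# A ray 30 deg off the port normal, entering water (n ~ 1.333) from air.
d = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
t = refract(d, np.array([0.0, 0.0, -1.0]), 1.000, 1.333)
print(np.degrees(np.arcsin(t[0])))  # ~22.0 deg, per sin(30)/1.333
```

A deviation in the assumed indices or interface pose perturbs every such refracted ray, which is exactly the error source the paper's simulations quantify.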

20 pages, 5395 KiB  
Article
Detection and Segmentation of Mouth Region in Stereo Stream Using YOLOv6 and DeepLab v3+ Models for Computer-Aided Speech Diagnosis in Children
by Agata Sage and Pawel Badura
Appl. Sci. 2024, 14(16), 7146; https://doi.org/10.3390/app14167146 - 14 Aug 2024
Cited by 5 | Viewed by 1694
Abstract
This paper describes a multistage framework for face image analysis in computer-aided speech diagnosis and therapy. Multimodal data processing frameworks have become a significant factor in supporting the treatment of speech disorders. Synchronous and asynchronous remote speech therapy approaches can use audio and video analysis of articulation to deliver robust indicators of disordered speech. Accurate segmentation of articulators in video frames is a vital step in this agenda. We use a dedicated data acquisition system to capture the stereovision stream during speech therapy examination in children. Our goal is to detect and accurately segment four objects in the mouth area (lips, teeth, tongue, and whole mouth) during relaxed speech and speech therapy exercises. Our database contains 17,913 frames from 76 preschool children. We apply a sequence of procedures employing artificial intelligence. For detection, we train the YOLOv6 (you only look once) model to detect each object under consideration. Then, we prepare the DeepLab v3+ segmentation model in a semi-supervised training mode. Because preparing reliable expert annotations for video labeling is laborious, we first train the network using weak labels produced by an initial segmentation based on distance-regularized level set evolution over fuzzified images. Next, we fine-tune the model using a portion of manual ground-truth delineations. Each stage is thoroughly assessed using an independent test subset. The lips are detected almost perfectly (average precision and F1 score of 0.999), whereas the segmentation Dice index exceeds 0.83 for each articulator, with a top result of 0.95 for the whole mouth. Full article
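The Dice index reported per articulator is 2|A∩B| / (|A| + |B|) over binary masks, e.g.:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

gt = np.zeros((10, 10), bool); gt[2:8, 2:8] = True       # 36-px "lips" mask
pred = np.zeros((10, 10), bool); pred[3:8, 2:8] = True   # 30 px, shifted down
print(dice(pred, gt))  # 2*30 / (30+36) ~ 0.909
```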

18 pages, 4787 KiB  
Article
Estimating Bermudagrass Aboveground Biomass Using Stereovision and Vegetation Coverage
by Jasanmol Singh, Ali Bulent Koc, Matias Jose Aguerre, John P. Chastain and Shareef Shaik
Remote Sens. 2024, 16(14), 2646; https://doi.org/10.3390/rs16142646 - 19 Jul 2024
Cited by 3 | Viewed by 1137
Abstract
Accurate information about the amount of standing biomass is important in pasture management for monitoring forage growth patterns, minimizing the risk of overgrazing, and ensuring the necessary feed requirements of livestock. The morphological features of plants, like crop height and density, have been proven to be prominent predictors of crop yield. The objective of this study was to evaluate the effectiveness of stereovision-based crop height and vegetation coverage measurements in predicting the aboveground biomass yield of bermudagrass (Cynodon dactylon) in a pasture. Data were collected from 136 experimental plots within a 0.81 ha bermudagrass pasture using an RGB-depth camera mounted on a ground rover. The crop height was determined based on the disparity between images captured by two stereo cameras of the depth camera. The vegetation coverage was extracted from the RGB images using a machine learning algorithm by segmenting vegetative and non-vegetative pixels. After camera measurements, the plots were harvested and sub-sampled to measure the wet and dry biomass yields for each plot. The wet biomass yield prediction function based on crop height and vegetation coverage was generated using a linear regression analysis. The results indicated that the combination of crop height and vegetation coverage showed a promising correlation with aboveground wet biomass yield. However, the prediction function based only on the crop height showed less residuals at the extremes compared to the combined prediction function (crop height and vegetation coverage) and was thus declared the recommended approach (R2 = 0.91; SeY= 1824 kg-wet/ha). The crop height-based prediction function was used to estimate the dry biomass yield using the mean dry matter fraction. Full article
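The yield prediction function is an ordinary least-squares regression; the snippet below shows the form of such a crop-height model and the R² computation on hypothetical plot data (NOT the paper's measurements):

```python
import numpy as np

def fit_yield_model(height_cm, yield_kg_ha):
    """Least-squares fit yield = a * height + b, plus R^2."""
    A = np.column_stack([height_cm, np.ones_like(height_cm)])
    (a, b), *_ = np.linalg.lstsq(A, yield_kg_ha, rcond=None)
    pred = a * height_cm + b
    ss_res = np.sum((yield_kg_ha - pred) ** 2)
    ss_tot = np.sum((yield_kg_ha - yield_kg_ha.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical stereo-derived crop heights and harvested wet yields.
h = np.array([18.0, 24.0, 30.0, 35.0, 41.0, 47.0])
y = np.array([2100.0, 2900.0, 3800.0, 4300.0, 5200.0, 6000.0])
a, b, r2 = fit_yield_model(h, y)
print(f"slope={a:.1f} kg/ha per cm, R^2={r2:.3f}")
```

Dry biomass would then follow by multiplying the predicted wet yield by the mean dry matter fraction, as the abstract describes.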

22 pages, 913 KiB  
Review
A Comparative Literature Review of Machine Learning and Image Processing Techniques Used for Scaling and Grading of Wood Logs
by Yohann Jacob Sandvik, Cecilia Marie Futsæther, Kristian Hovde Liland and Oliver Tomic
Forests 2024, 15(7), 1243; https://doi.org/10.3390/f15071243 - 17 Jul 2024
Cited by 2 | Viewed by 2563
Abstract
This literature review assesses the efficacy of image-processing techniques and machine-learning models in computer vision for wood log grading and scaling. Four searches were conducted in four scientific databases, yielding a total of 1288 results, which were narrowed down to 33 relevant studies. The studies were categorized according to their goals, including log end grading, log side grading, individual log scaling, log pile scaling, and log segmentation. The studies were compared based on the input used, choice of model, model performance, and level of autonomy. This review found a preference for images over point cloud representations for logs and an increase in camera use over laser scanners. It identified three primary model types: classical image-processing algorithms, deep learning models, and other machine learning models. However, comparing performance across studies proved challenging due to varying goals and metrics. Deep learning models showed better performance in the log pile scaling and log segmentation goal categories. Cameras were found to have become more popular over time compared to laser scanners, possibly due to stereovision cameras taking over for laser scanners for sampling point cloud datasets. Classical image-processing algorithms were consistently used, deep learning models gained prominence in 2018, and other machine learning models were used in studies published between 2010 and 2018. Full article

20 pages, 17993 KiB  
Article
Semantic 3D Reconstruction for Volumetric Modeling of Defects in Construction Sites
by Dimitrios Katsatos, Paschalis Charalampous, Patrick Schmidt, Ioannis Kostavelis, Dimitrios Giakoumis, Lazaros Nalpantidis and Dimitrios Tzovaras
Robotics 2024, 13(7), 102; https://doi.org/10.3390/robotics13070102 - 11 Jul 2024
Cited by 1 | Viewed by 2231
Abstract
The appearance of construction defects in buildings can arise from a variety of factors, ranging from issues during the design and construction phases to problems that develop over time with the lifecycle of a building. These defects require repairs, often in the context of a significant shortage of skilled labor. In addition, such work is often physically demanding and carried out in hazardous environments. Consequently, adopting autonomous robotic systems in the construction industry becomes essential, as they can relieve labor shortages, promote safety, and enhance the quality and efficiency of repair and maintenance tasks. Hereupon, the present study introduces an end-to-end framework towards the automation of shotcreting tasks in cases where construction or repair actions are required. The proposed system can scan a construction scene using a stereo-vision camera mounted on a robotic platform, identify regions of defects, and reconstruct a 3D model of these areas. Furthermore, it automatically calculates the required 3D volumes to be constructed to treat a detected defect. To achieve all of the above-mentioned technological tools, the developed software framework employs semantic segmentation and 3D reconstruction modules based on YOLOv8m-seg, SiamMask, InfiniTAM, and RTAB-Map, respectively. In addition, the segmented 3D regions are processed by the volumetric modeling component, which determines the amount of concrete needed to fill the defects. It generates the exact 3D model that can repair the investigated defect. Finally, the precision and effectiveness of the proposed pipeline are evaluated in actual construction site scenarios, featuring reinforcement bars as defective areas. Full article
(This article belongs to the Special Issue Localization and 3D Mapping of Intelligent Robotics)

23 pages, 5776 KiB  
Article
Estimating the Workability of Concrete with a Stereovision Camera during Mixing
by Teemu Ojala and Jouni Punkki
Sensors 2024, 24(14), 4472; https://doi.org/10.3390/s24144472 - 10 Jul 2024
Cited by 2 | Viewed by 1339
Abstract
The correct workability of concrete is an essential parameter for its placement and compaction. However, the absence of automatic and transparent measurement methods for estimating the workability of concrete hinders the transition away from laborious traditional methods such as the slump test. In this paper, we developed a machine-learning framework for estimating the slump class of concrete in the mixer using a stereovision camera. Depth data from five slump classes were transformed into Haralick texture features to train several machine-learning classifiers. The best-performing classifier, based on the XGBoost algorithm, achieved a multiclass classification accuracy of 0.8179. Furthermore, statistical analysis showed that while denoising the depth data has little effect on accuracy, the feature extraction of the mixer blades and the choice of region of interest significantly increase the accuracy and efficiency of the classifiers. The proposed framework shows robust results, indicating that stereovision is a competitive solution for estimating the workability of concrete during production. Full article
(This article belongs to the Section Sensing and Imaging)
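Haralick features are statistics of a grey-level co-occurrence matrix (GLCM); the contrast feature, for instance, grows with local depth variation, which is what makes it informative about surface texture in the mixer. A small sketch with a single pixel offset and an illustrative quantization (not the paper's feature set):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one offset, normalised."""
    q = (img.astype(np.float64) * levels / (img.max() + 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_contrast(p):
    """Haralick contrast: sum_ij (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(np.sum((i - j) ** 2 * p))

smooth = np.tile(np.arange(64, dtype=np.uint8), (64, 1))          # gentle ramp
rough = (np.indices((64, 64)).sum(0) % 2 * 255).astype(np.uint8)  # checkerboard
print(haralick_contrast(glcm(smooth)), "<", haralick_contrast(glcm(rough)))
```

In the paper's setting, depth maps with different slump classes yield different texture statistics, which the classifiers then separate.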

26 pages, 11261 KiB  
Article
A Novel Simulation Method for 3D Digital-Image Correlation: Combining Virtual Stereo Vision and Image Super-Resolution Reconstruction
by Hao Chen, Hao Li, Guohua Liu and Zhenyu Wang
Sensors 2024, 24(13), 4031; https://doi.org/10.3390/s24134031 - 21 Jun 2024
Cited by 4 | Viewed by 2785
Abstract
3D digital-image correlation (3D-DIC) is a non-contact optical technique for full-field shape, displacement, and deformation measurement. Given the high experimental hardware costs associated with 3D-DIC, the development of high-fidelity 3D-DIC simulations holds significant value. However, existing research on 3D-DIC simulation was mainly carried out through the generation of random speckle images. This study innovatively proposes a complete 3D-DIC simulation method involving optical simulation and mechanical simulation and integrating 3D-DIC, virtual stereo vision, and image super-resolution reconstruction technology. Virtual stereo vision can reduce hardware costs and eliminate camera-synchronization errors. Image super-resolution reconstruction can compensate for the decrease in precision caused by image-resolution loss. An array of software tools such as ANSYS SPEOS 2024R1, ZEMAX 2024R1, MECHANICAL 2024R1, and MULTIDIC v1.1.0 are used to implement this simulation. Measurement systems based on stereo vision and virtual stereo vision were built and tested for use in 3D-DIC. The results of the simulation experiment show that when the synchronization error of the basic stereo-vision system (BSS) is within 10⁻³ time steps, the reconstruction error is within 0.005 mm, and the accuracy of the virtual stereo-vision system lies between those of the BSS at synchronization errors of 10⁻⁷ and 10⁻⁶ time steps. In addition, after image super-resolution reconstruction technology is applied, the reconstruction error is reduced to within 0.002 mm. The simulation method proposed in this study can provide a novel research path for existing researchers in the field while also offering the opportunity for researchers without access to costly hardware to participate in related research. Full article
(This article belongs to the Section Sensing and Imaging)

15 pages, 3907 KiB  
Article
Methodological Selection of Optimal Features for Object Classification Based on Stereovision System
by Rafał Tkaczyk, Grzegorz Madejski, Dawid Gradolewski, Damian Dziak and Wlodek J. Kulesza
Sensors 2024, 24(12), 3941; https://doi.org/10.3390/s24123941 - 18 Jun 2024
Viewed by 1631
Abstract
With the expansion of green energy, more and more data show that wind turbines can pose a significant threat to some endangered bird species. The birds of prey are more frequently exposed to collision risk with the wind turbine blades due to their [...] Read more.
With the expansion of green energy, more and more data show that wind turbines can pose a significant threat to some endangered bird species. The birds of prey are more frequently exposed to collision risk with the wind turbine blades due to their unique flight path patterns. This paper shows how data from a stereovision system can be used for an efficient classification of detected objects. A method for distinguishing endangered birds from common birds and other flying objects has been developed and tested. The research focused on the selection of a suitable feature extraction methodology. Both motion and visual features are extracted from the Bioseco BPS system and retested using a correlation-based and a wrapper-type approach with genetic algorithms (GAs). With optimal features and fine-tuned classifiers, birds can be distinguished from aeroplanes with a 98.6% recall and 97% accuracy, whereas endangered birds are delimited from common ones with 93.5% recall and 77.2% accuracy. Full article
(This article belongs to the Section Physical Sensors)
