Search Results (77)

Search Parameters:
Keywords = virtual stereo

17 pages, 610 KiB  
Review
Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review
by Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen and Radoslav Miltchev
Lights 2025, 1(1), 1; https://doi.org/10.3390/lights1010001 - 14 Jul 2025
Viewed by 347
Abstract
Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise reconstruction quality and how different approaches attempt to mitigate these effects. Furthermore, we identify fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to provide a deeper understanding of the interplay between lighting and reconstruction and outlines future research directions that emphasize the need for adaptive, lighting-robust solutions in 3D vision systems.
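
As a concrete instance of one family the review surveys, the minimal numpy sketch below implements classic Lambertian photometric stereo: given k >= 3 images of a static scene under known, distant light directions, per-pixel normals and albedo follow from a least-squares solve. This is an illustration, not code from the review; shadows and specular highlights break the Lambertian assumption, which is exactly the lighting sensitivity the review highlights.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, h, w) grayscale stack; light_dirs: (k, 3) unit vectors."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                   # per-pixel intensity columns
    L = np.asarray(light_dirs, dtype=float)     # (k, 3) lighting matrix
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # solve L @ G = I; G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)      # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```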

21 pages, 13198 KiB  
Article
Infrared Bionic Compound-Eye Camera: Long-Distance Measurement Simulation and Verification
by Xiaoyu Wang, Linhan Li, Jie Liu, Zhen Huang, Yuhan Li, Huicong Wang, Yimin Zhang, Yang Yu, Xiupeng Yuan, Liya Qiu and Sili Gao
Electronics 2025, 14(7), 1473; https://doi.org/10.3390/electronics14071473 - 6 Apr 2025
Cited by 1 | Viewed by 558
Abstract
To achieve rapid distance estimation and tracking of moving targets in a large field of view, this paper proposes an innovative simulation method. With this low-cost approach, the imaging and distance-measurement performance of the designed cooled mid-wave infrared compound-eye camera (CM-CECam) is experimentally evaluated. The compound-eye camera consists of a small-lens array on a spherical shell, a relay optical system, and a cooled mid-wave infrared detector. Based on the spatial arrangement of the small-lens array, a precise simulation imaging model for the compound-eye camera is developed, constructing a virtual imaging space. Distance estimation and error analysis for virtual targets are performed using the principle of stereo disparity. This universal simulation method provides a foundation for the spatial design and image-plane adjustment of compound-eye cameras with specialized structures. A scene-specific piecewise linear mapping method is applied to the raw images captured by the compound-eye camera; it significantly reduces the brightness-contrast differences between sub-images during wide-field observations, enhancing image details. For the fast detection of moving targets, ommatidia clusters are defined as the minimal spatial constraint units, and local information at the centers of these constraint units is prioritized for processing. This approach replaces traditional global detection methods, improving the efficiency of subsequent processing. Finally, the simulated distance-measurement results are validated using real-world scene data.
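
A minimal sketch (assumed parameters, not the CM-CECam design) of the stereo-disparity principle the paper uses for distance estimation between ommatidia pairs, Z = f * B / d under a pinhole model:

```python
import numpy as np

def distance_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / np.maximum(d, 1e-9)

# Hypothetical numbers: at long range a 1 px disparity error shifts the
# estimate from 40 m to 50 m, which is why long-distance error analysis matters.
print(distance_from_disparity(5.0, focal_px=2000.0, baseline_m=0.10))  # 40.0 m
print(distance_from_disparity(4.0, focal_px=2000.0, baseline_m=0.10))  # 50.0 m
```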

20 pages, 13379 KiB  
Article
From Simulation to Field Validation: A Digital Twin-Driven Sim2real Transfer Approach for Strawberry Fruit Detection and Sizing
by Omeed Mirbod, Daeun Choi and John K. Schueller
AgriEngineering 2025, 7(3), 81; https://doi.org/10.3390/agriengineering7030081 - 17 Mar 2025
Cited by 1 | Viewed by 1828
Abstract
Typically, developing new digital agriculture technologies requires substantial on-site resources and data. However, the crop's growth cycle provides only limited time windows for experiments and equipment validation. This study presents a photorealistic digital twin of a commercial-scale strawberry farm, coupled with a simulated ground vehicle, to address these constraints by generating high-fidelity synthetic RGB and LiDAR data. These data enable the rapid development and evaluation of a deep learning-based machine vision pipeline for fruit detection and sizing without continuously relying on real-field access. Traditional simulators often lack visual realism, leading many studies to mix in real images or adopt domain adaptation methods to address the reality gap. In contrast, this work relies solely on photorealistic simulation outputs for training, eliminating the need for real images or specialized adaptation approaches. After training exclusively on images captured in the virtual environment, the model was tested on a commercial-scale strawberry farm using a physical ground vehicle. Two separate trials with field images resulted in F1-scores of 0.92 and 0.81 for detection and a sizing error of 1.4 mm (R² = 0.92) when comparing image-derived diameters against caliper measurements. These findings indicate that digital twin-driven sim2real transfer can offer substantial time and cost savings by refining crucial tasks such as stereo sensor calibration and machine learning model development before extensive real-field deployments. In addition, the study examined geometric accuracy and visual fidelity through systematic comparisons of LiDAR and RGB sensor outputs from the virtual and real farms. Results demonstrated close alignment in both topography and textural details, validating the digital twin's ability to replicate intricate field characteristics, including raised-bed geometry and strawberry plant distribution. The techniques developed and validated in this strawberry project have broad applicability across agricultural commodities, particularly for fruit and vegetable production systems. This study demonstrates that integrating digital twins with simulation tools can significantly reduce the need for resource-intensive field data collection while accelerating the development and refinement of agricultural robotics algorithms and hardware.
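
The two field-trial metrics quoted above can be reproduced with a short sketch; the arrays below are made up for illustration and are not the paper's data.

```python
import numpy as np

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

caliper_mm = np.array([22.1, 25.4, 30.2, 27.8, 24.6])   # hypothetical ground truth
image_mm   = np.array([23.0, 24.8, 31.1, 26.9, 25.3])   # hypothetical image-derived
print(f1_score(tp=92, fp=9, fn=7))                      # detection F1
print(r_squared(caliper_mm, image_mm))                  # sizing fit (R^2)
```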

29 pages, 4682 KiB  
Article
LSAF-LSTM-Based Self-Adaptive Multi-Sensor Fusion for Robust UAV State Estimation in Challenging Environments
by Mahammad Irfan, Sagar Dalai, Petar Trslic, James Riordan and Gerard Dooly
Machines 2025, 13(2), 130; https://doi.org/10.3390/machines13020130 - 9 Feb 2025
Cited by 2 | Viewed by 1935
Abstract
Unmanned aerial vehicle (UAV) state estimation is fundamental across applications like robot navigation, autonomous driving, virtual reality (VR), and augmented reality (AR). This research highlights the critical role of robust state estimation in ensuring safe and efficient autonomous UAV navigation, particularly in challenging environments. We propose a deep learning-based adaptive sensor fusion framework for UAV state estimation, integrating multi-sensor data from stereo cameras, an IMU, two 3D LiDARs, and GPS. The framework dynamically adjusts fusion weights in real time using a long short-term memory (LSTM) model, enhancing robustness under diverse conditions such as illumination changes, structureless environments, degraded GPS signals, or complete signal loss, where traditional single-sensor SLAM methods often fail. Validated on an in-house integrated UAV platform and evaluated against high-precision RTK ground truth, the algorithm incorporates deep learning-predicted fusion weights into an optimization-based odometry pipeline. The system delivers robust, consistent, and accurate state estimation, outperforming state-of-the-art techniques. Experimental results demonstrate its adaptability and effectiveness across challenging scenarios, showcasing significant advancements in UAV autonomy and reliability through the synergistic integration of deep learning and sensor fusion.
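
A minimal sketch of the adaptive-fusion idea, with the architecture assumed rather than taken from the paper: an LSTM maps a window of per-sensor features to softmax weights that blend per-sensor odometry increments.

```python
import torch
import torch.nn as nn

class FusionWeightLSTM(nn.Module):
    """Predicts one fusion weight per sensor from a feature window (assumed design)."""
    def __init__(self, n_sensors=4, feat_dim=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors * feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sensors)

    def forward(self, x):                       # x: (batch, time, n_sensors * feat_dim)
        out, _ = self.lstm(x)
        return torch.softmax(self.head(out[:, -1]), dim=-1)   # weights sum to 1

model = FusionWeightLSTM()
feats = torch.randn(1, 10, 32)                  # 10 time steps of sensor-health features
weights = model(feats)                          # (1, 4) adaptive fusion weights
increments = torch.randn(1, 4, 6)               # per-sensor 6-DoF odometry increments
fused = (weights.unsqueeze(-1) * increments).sum(dim=1)  # weighted state update
```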

19 pages, 7424 KiB  
Article
Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation
by Wei-Jong Yang, Chih-Chen Wu and Jar-Ferr Yang
Sensors 2025, 25(1), 80; https://doi.org/10.3390/s25010080 - 26 Dec 2024
Cited by 1 | Viewed by 1371
Abstract
Precision depth estimation plays a key role in many applications, including 3D scene reconstruction, virtual reality, autonomous driving, and human–computer interaction. Through recent advancements in deep learning technologies, monocular depth estimation, with its simplicity, has surpassed traditional stereo camera systems, bringing new possibilities in 3D sensing. In this paper, using a single camera, we propose an end-to-end supervised monocular depth estimation autoencoder, which contains an encoder that mixes a convolutional neural network with vision transformers and an effective adaptive fusion decoder, to obtain high-precision depth maps. In the encoder, we construct a multi-scale feature extractor by mixing residual configurations of vision transformers to enhance both local and global information. In the adaptive fusion decoder, we introduce adaptive fusion modules to effectively merge the features of the encoder and the decoder. Lastly, the model is trained using a loss function that aligns with human perception, enabling it to focus on the depth values of foreground objects. The experimental results demonstrate effective prediction of the depth map from a single-view color image by the proposed autoencoder, which increases the first accuracy rate by about 28% and reduces the root-mean-square error by about 27% compared to an existing method on the NYU dataset.
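
The reported gains correspond to standard monocular-depth metrics; the sketch below shows how the first (δ < 1.25) threshold accuracy and RMSE are typically computed, under the assumption that these match the paper's definitions.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics: delta_1 accuracy and RMSE."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = float(np.mean(ratio < 1.25))       # "first" threshold accuracy
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    return delta1, rmse
```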

30 pages, 6897 KiB  
Article
Research on UAV Autonomous Recognition and Approach Method for Linear Target Splicing Sleeves Based on Deep Learning and Active Stereo Vision
by Guocai Zhang, Guixiong Liu and Fei Zhong
Electronics 2024, 13(24), 4872; https://doi.org/10.3390/electronics13244872 - 10 Dec 2024
Cited by 1 | Viewed by 1112
Abstract
This study proposes an autonomous recognition and approach method for unmanned aerial vehicles (UAVs) targeting linear splicing sleeves. By integrating deep learning and active stereo vision, this method addresses the navigation challenges faced by UAVs during the identification, localization, and docking of splicing sleeves on overhead power transmission lines. First, a two-stage localization strategy, LC (Local Clustering)-RB (Reparameterization Block)-YOLO (You Only Look Once)v8n (OBB (Oriented Bounding Box)), is developed for linear target splicing sleeves. This strategy ensures rapid, accurate, and reliable recognition and localization while generating precise waypoints for UAV docking with splicing sleeves. Next, virtual reality technology is utilized to expand the splicing sleeve dataset, creating the DSS dataset tailored to diverse scenarios. This enhancement improves the robustness and generalization capability of the recognition model. Finally, a UAV approach splicing sleeve (UAV-ASS) visual navigation simulation platform is developed using the Robot Operating System (ROS), the PX4 open-source flight control system, and the GAZEBO 3D robotics simulator. This platform simulates the UAV's final approach to the splicing sleeves. Experimental results demonstrate that, on the DSS dataset, the RB-YOLOv8n(OBB) model achieves a mean average precision (mAP@0.5) of 96.4%, with an image inference speed of 86.41 frames per second. By incorporating the LC-based fine localization method, the five rotated bounding box parameters (x, y, w, h, and angle) of the splicing sleeve achieve a mean relative error (MRE) ranging from 3.39% to 4.21%, and the correlation coefficients (ρ) with manually annotated positions improve to 0.99, 0.99, 0.98, 0.95, and 0.98, respectively. These improvements significantly enhance the accuracy and stability of splicing sleeve localization. Moreover, the developed UAV-ASS visual navigation simulation platform effectively validates high-risk algorithms for UAV autonomous recognition and docking with splicing sleeves on power transmission lines, reducing testing costs and associated safety risks.
(This article belongs to the Section Computer Science & Engineering)
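
A minimal sketch (synthetic values, not the DSS dataset) of the two localization metrics reported, mean relative error (MRE) and Pearson correlation for the five rotated-box parameters against manual annotations:

```python
import numpy as np

def mre(pred, gt):
    return float(np.mean(np.abs(pred - gt) / np.abs(gt)))

def pearson(pred, gt):
    return float(np.corrcoef(pred, gt)[0, 1])

rng = np.random.default_rng(0)
# hypothetical boxes: columns are (x, y, w, h, angle)
gt = rng.uniform([50, 50, 40, 10, 10], [200, 200, 80, 20, 170], size=(50, 5))
pred = gt * (1.0 + 0.03 * rng.standard_normal(gt.shape))   # ~3% relative noise
for i, name in enumerate(["x", "y", "w", "h", "angle"]):
    print(f"{name}: MRE={mre(pred[:, i], gt[:, i]):.2%}, rho={pearson(pred[:, i], gt[:, i]):.3f}")
```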

21 pages, 3646 KiB  
Article
D3L-SLAM: A Comprehensive Hybrid Simultaneous Location and Mapping System with Deep Keypoint, Deep Depth, Deep Pose, and Line Detection
by Hao Qu, Congrui Wang, Yangfan Xu, Lilian Zhang, Xiaoping Hu and Changhao Chen
Appl. Sci. 2024, 14(21), 9748; https://doi.org/10.3390/app14219748 - 25 Oct 2024
Viewed by 1883
Abstract
Robust localization and mapping are crucial for autonomous systems, but traditional handcrafted feature-based visual SLAM often struggles in challenging, textureless environments. Additionally, monocular SLAM lacks scale-aware depth perception, making accurate scene scale estimation difficult. To address these issues, we propose D3L-SLAM, a novel monocular SLAM system that integrates deep keypoints, deep depth estimates, deep pose priors, and a line detector. By leveraging deep keypoints, which are more resilient to lighting variations, our system improves the robustness of visual SLAM. We further enhance perception in low-texture areas by incorporating line features in the front-end and mitigate scale degradation with learned depth estimates. Additionally, point-line feature constraints optimize pose estimation and mapping through a tightly coupled point-line bundle adjustment (BA). The learned pose estimates refine the feature matching process during tracking, leading to more accurate localization and mapping. Experimental results on public and self-collected datasets show that D3L-SLAM significantly outperforms both traditional and learning-based visual SLAM methods in localization accuracy.
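
A generic sketch (not D3L-SLAM's implementation) of the point reprojection residual that a bundle adjustment minimizes; the paper's tightly coupled point-line BA stacks analogous line residuals into the same least-squares problem. Intrinsics and poses here are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                 # assumed intrinsics

def reproj_residuals(pose, pts3d, obs_uv):
    """pose = [rx, ry, rz, tx, ty, tz]; small-angle rotation approximation."""
    r, t = pose[:3], pose[3:]
    R = np.eye(3) + np.array([[0.0, -r[2], r[1]],
                              [r[2], 0.0, -r[0]],
                              [-r[1], r[0], 0.0]])   # first-order rotation
    cam = pts3d @ R.T + t                       # points in camera frame
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective division
    return (uv - obs_uv).ravel()

rng = np.random.default_rng(1)
pts3d = rng.uniform(-1.0, 1.0, (30, 3)) + np.array([0.0, 0.0, 5.0])
true_pose = np.array([0.02, -0.01, 0.03, 0.10, -0.05, 0.20])
obs = reproj_residuals(true_pose, pts3d, 0.0).reshape(-1, 2)  # synthetic observations
fit = least_squares(reproj_residuals, np.zeros(6), args=(pts3d, obs))
# fit.x recovers true_pose up to the small-angle approximation
```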

23 pages, 4290 KiB  
Article
A Method for Recognition and Coordinate Reference of Autonomous Underwater Vehicles to Inspected Objects of Industrial Subsea Structures Using Stereo Images
by Valery Bobkov and Alexey Kudryashov
J. Mar. Sci. Eng. 2024, 12(9), 1514; https://doi.org/10.3390/jmse12091514 - 2 Sep 2024
Viewed by 1047
Abstract
To date, the development of unmanned technologies using autonomous underwater vehicles (AUVs) has become an urgent demand for solving the problem of inspecting industrial subsea structures. A key issue here is the precise localization of AUVs relative to underwater objects. However, GPS is unavailable underwater, and the various interferences associated with the dynamics of the underwater environment prevent high-precision navigation based solely on a standard suite of AUV navigation tools (sonars, etc.). An alternative technology involves the processing of optical images, which, at short distances, can provide higher accuracy of AUV navigation compared to the processing of acoustic measurements. Although there have been results in this direction, further development of methods for extracting spatial information about objects from camera images is needed to calculate the exact relative position of the AUV and the object. In this study, in the context of the problem of subsea production system inspection, we propose a technology to recognize underwater objects and provide coordinate references to the AUV based on stereo-image processing. Its distinctive features are the use of a non-standard technique to generate a geometric model of an object from views (foreshortenings) taken from positions along a pre-made overview trajectory, the use of various characteristic geometric elements when recognizing objects, and original algorithms for comparing visual data from the inspection trajectory with an a priori model of the object. The results of experiments on virtual scenes and with real data showed the effectiveness of the proposed technology.
(This article belongs to the Special Issue Autonomous Marine Vehicle Operations—2nd Edition)
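
The geometric core of the coordinate-referencing step is stereo triangulation; below is a minimal OpenCV sketch with assumed calibration, not the AUV system's parameters.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 0.2 m baseline

uv_left = np.array([[650.0], [370.0]])    # matched feature, left image
uv_right = np.array([[610.0], [370.0]])   # matched feature, right image
X_h = cv2.triangulatePoints(P1, P2, uv_left, uv_right)
X = (X_h[:3] / X_h[3]).ravel()            # 3D point in the left-camera frame (metres)
print(X)                                  # ~[0.05, 0.05, 4.0] for this disparity
```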

35 pages, 24997 KiB  
Article
EchoSee: An Assistive Mobile Application for Real-Time 3D Environment Reconstruction and Sonification Supporting Enhanced Navigation for People with Vision Impairments
by Broderick S. Schwartz, Seth King and Tyler Bell
Bioengineering 2024, 11(8), 831; https://doi.org/10.3390/bioengineering11080831 - 14 Aug 2024
Cited by 1 | Viewed by 3669
Abstract
Improving the quality of life for people with vision impairments has been an important goal in the research and design of assistive devices for several decades. This paper seeks to further that goal by introducing a novel assistive technology platform that leverages real-time 3D spatial audio to promote safe and efficient navigation for people who are blind or visually impaired (PVI). The presented platform, EchoSee, uses modern 3D scanning technology on a mobile device to construct a live, digital 3D map of a user's environment as they move about their surroundings. Spatialized, virtual audio sources (i.e., virtual speakers) are dynamically placed within the digital 3D scan of the world, providing the navigator with a real-time 3D stereo audio "soundscape." The digital 3D map, and its resultant soundscape, are continuously updated as the user moves about their environment. The generated soundscape is played back through headphones connected to the navigator's device. This paper details (1) the underlying technical components and how they were integrated to produce the mobile application that generates a dynamic soundscape on a consumer mobile device, (2) a methodology for analyzing navigation performance with the application, (3) the design and execution of a user study investigating the effectiveness of the presented system, and (4) a discussion of the results of that study, along with a proposed future study and possible improvements. Altogether, this paper presents a novel software platform aimed at helping individuals with vision impairments navigate and understand spaces safely, efficiently, and independently, together with the results of a feasibility study analyzing the viability of the approach.
(This article belongs to the Section Nanobiotechnology and Biofabrication)
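
To make the geometry-to-sound mapping concrete, here is a deliberately simplified sketch (not EchoSee's renderer, which would use platform spatial-audio APIs and HRTFs): level falls off with distance and left/right balance follows the source azimuth.

```python
import numpy as np

def stereo_gains(listener_pos, listener_yaw, source_pos):
    """Per-channel gains for one virtual speaker; toy model, not HRTF-based."""
    rel = np.asarray(source_pos, float) - np.asarray(listener_pos, float)
    dist = float(np.linalg.norm(rel))
    azimuth = np.arctan2(rel[0], rel[2]) - listener_yaw   # 0 = straight ahead
    pan = np.clip(np.sin(azimuth), -1.0, 1.0)             # -1 full left ... +1 full right
    attenuation = 1.0 / max(dist, 0.5)                    # inverse-distance level cue
    left = attenuation * np.sqrt((1.0 - pan) / 2.0)       # equal-power panning
    right = attenuation * np.sqrt((1.0 + pan) / 2.0)
    return left, right

l, r = stereo_gains([0, 0, 0], 0.0, [1.0, 0.0, 2.0])      # source ahead-right
print(l, r)                                               # right gain > left gain
```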

19 pages, 8886 KiB  
Article
High-Precision Calibration Method and Error Analysis of Infrared Binocular Target Ranging Systems
by Changwen Zeng, Rongke Wei, Mingjian Gu, Nejie Zhang and Zuoxiao Dai
Electronics 2024, 13(16), 3188; https://doi.org/10.3390/electronics13163188 - 12 Aug 2024
Cited by 2 | Viewed by 1635
Abstract
Infrared binocular cameras, leveraging their distinct thermal imaging capabilities, are well-suited for visual measurement and 3D reconstruction in challenging environments. The precision of camera calibration is essential for realizing the full potential of these infrared cameras. To overcome the limitations of traditional calibration techniques, a novel method for calibrating infrared binocular cameras is introduced. By creating a virtual target plane that closely mimics the geometry of the real target plane, the method refines the feature point coordinates, leading to enhanced precision in infrared camera calibration. The virtual target plane is obtained by inversely projecting the centers of the imaging ellipses, estimated with sub-pixel edge detection, into three-dimensional space, and is then optimized using the RANSAC least squares method. Subsequently, the imaging ellipses are inversely projected onto the virtual target plane, where their centers are identified. The corresponding world coordinates of the feature points are then refined through a linear optimization process. These coordinates are reprojected onto the imaging plane, yielding optimized pixel feature points. The calibration procedure is performed iteratively to determine the final set of calibration parameters. The method has been validated through experiments, demonstrating an average reprojection error of less than 0.02 pixels and a significant 24.5% improvement in calibration accuracy over traditional methods. Furthermore, a comprehensive analysis has been conducted to identify the primary sources of calibration error. Ultimately, the method achieves an error rate of less than 5% in infrared stereo ranging within a 55 m range.
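
The headline figure, an average reprojection error below 0.02 pixels, is the standard calibration residual; a generic OpenCV sketch of how it is computed (not the paper's iterative refinement itself) follows.

```python
import cv2
import numpy as np

def mean_reprojection_error(obj_pts, img_pts, rvecs, tvecs, K, dist):
    """Average pixel distance between detected and reprojected feature points."""
    errs = []
    for op, ip, rv, tv in zip(obj_pts, img_pts, rvecs, tvecs):
        proj, _ = cv2.projectPoints(op, rv, tv, K, dist)
        errs.append(np.linalg.norm(proj.reshape(-1, 2) - ip.reshape(-1, 2), axis=1))
    return float(np.concatenate(errs).mean())
```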

9 pages, 2319 KiB  
Article
Augmented Reality Improved Knowledge and Efficiency of Root Canal Anatomy Learning: A Comparative Study
by Fahd Alsalleeh, Katsushi Okazaki, Sarah Alkahtany, Fatemah Alrwais, Mohammad Bendahmash and Ra’ed Al Sadhan
Appl. Sci. 2024, 14(15), 6813; https://doi.org/10.3390/app14156813 - 4 Aug 2024
Cited by 4 | Viewed by 2676
Abstract
Teaching root canal anatomy has traditionally relied on static methods, but recent studies have explored the potential of advanced technologies like augmented reality (AR) to enhance learning and address the limitations of traditional training methods, such as the requirement for spatial imagination and the inability to fully simulate clinical scenarios. This study evaluated the potential of AR as a tool for teaching root canal anatomy in preclinical endodontic training for predoctoral dental students. Six cone beam computed tomography (CBCT) images of teeth were selected. A board-certified endodontist and a radiologist recorded the tooth type and classification of the root canals. Then, STereoLithography (STL) files of the same images were imported into a virtual reality (VR) application and viewed through a VR head-mounted display. Forty-three third-year dental students were asked questions about root canal anatomy based on the CBCT images and then again after viewing the AR model. The time to respond to each question was recorded, along with student feedback. Student responses were paired, and the difference between CBCT and AR scores was examined using a paired-sample t-test with significance set at p = 0.05. Students demonstrated a significant improvement in their ability to answer questions about root canal anatomy after utilizing the AR model (p < 0.05). Female participants demonstrated significantly higher AR scores compared to male participants; however, gender did not significantly influence overall test scores. Furthermore, students required significantly less time to answer questions after using the AR model (M = 4.09, SD = 3.55) than with the CBCT method (M = 15.21, SD = 8.01) (p < 0.05). This indicates that AR may improve learning efficiency alongside comprehension. In a positive feedback survey, 93% of students reported that the AR simulation led to a better understanding of root canal anatomy than traditional CBCT interpretation. While this study highlights the potential of AR in learning root canal anatomy, further research is needed to explore its long-term impact and efficacy in clinical settings.
(This article belongs to the Special Issue Virtual/Augmented Reality and Its Applications)
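
The statistical comparison is a textbook paired design; a sketch with synthetic scores (not the study data) shows how the test would be run in scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
cbct_scores = rng.integers(3, 7, size=43).astype(float)        # hypothetical, n = 43
ar_scores = np.clip(cbct_scores + rng.integers(0, 3, size=43), 0, 10)

t_stat, p_value = stats.ttest_rel(ar_scores, cbct_scores)      # paired-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")                  # compare against alpha = 0.05
```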

26 pages, 11261 KiB  
Article
A Novel Simulation Method for 3D Digital-Image Correlation: Combining Virtual Stereo Vision and Image Super-Resolution Reconstruction
by Hao Chen, Hao Li, Guohua Liu and Zhenyu Wang
Sensors 2024, 24(13), 4031; https://doi.org/10.3390/s24134031 - 21 Jun 2024
Cited by 4 | Viewed by 2785
Abstract
3D digital-image correlation (3D-DIC) is a non-contact optical technique for full-field shape, displacement, and deformation measurement. Given the high experimental hardware costs associated with 3D-DIC, the development of high-fidelity 3D-DIC simulations holds significant value. However, existing research on 3D-DIC simulation has mainly been carried out through the generation of random speckle images. This study innovatively proposes a complete 3D-DIC simulation method involving optical and mechanical simulation and integrating 3D-DIC, virtual stereo vision, and image super-resolution reconstruction technology. Virtual stereo vision can reduce hardware costs and eliminate camera-synchronization errors. Image super-resolution reconstruction can compensate for the decrease in precision caused by image-resolution loss. An array of software tools, such as ANSYS SPEOS 2024R1, ZEMAX 2024R1, MECHANICAL 2024R1, and MULTIDIC v1.1.0, is used to implement this simulation. Measurement systems based on stereo vision and virtual stereo vision were built and tested for use in 3D-DIC. The results of the simulation experiment show that when the synchronization error of the basic stereo-vision system (BSS) is within 10⁻³ time steps, the reconstruction error is within 0.005 mm, and the accuracy of the virtual stereo-vision system corresponds to a BSS synchronization error of between 10⁻⁷ and 10⁻⁶ time steps. In addition, after image super-resolution reconstruction technology is applied, the reconstruction error is reduced to within 0.002 mm. The simulation method proposed in this study can provide a novel research path for existing researchers in the field while also offering researchers without access to costly hardware the opportunity to participate in related research.
(This article belongs to the Section Sensing and Imaging)
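
The essence of virtual stereo vision, two synthetic cameras that share an exact common clock, can be shown in a few lines of numpy: project known 3D points through both virtual cameras and triangulate them back, so any residual reconstruction error is purely geometric. This sketch is illustrative and is not the paper's SPEOS/ZEMAX pipeline.

```python
import numpy as np

def project(P, X):
    x = np.hstack([X, np.ones((len(X), 1))]) @ P.T
    return x[:, :2] / x[:, 2:3]

def triangulate(P1, P2, uv1, uv2):            # linear (DLT) triangulation
    pts = []
    for (u1, v1), (u2, v2) in zip(uv1, uv2):
        A = np.stack([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                      u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
        X = np.linalg.svd(A)[2][-1]           # null-space solution
        pts.append(X[:3] / X[3])
    return np.array(pts)

K = np.array([[1200.0, 0.0, 512.0], [0.0, 1200.0, 512.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])])   # 0.3 m baseline
X_true = np.random.default_rng(3).uniform([-0.1, -0.1, 1.0], [0.1, 0.1, 1.2], (20, 3))
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(1000 * np.linalg.norm(X_rec - X_true, axis=1).mean(), "mm")   # ~0 by construction
```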

22 pages, 830 KiB  
Review
Content Analysis of Three-Dimensional Model Technologies and Applications for Construction: Current Trends and Future Directions
by Nhien Le, Daniel Tran and Roy Sturgill
Sensors 2024, 24(12), 3838; https://doi.org/10.3390/s24123838 - 13 Jun 2024
Cited by 2 | Viewed by 2085
Abstract
The proliferation of digital technologies is substantially transforming inspection methodologies for construction activities. Although the implementation of three-dimensional (3D) models has emerged as an advantageous, feasible inspection application, the selection of the most suitable 3D models is challenging due to the multiple technology options. The primary objectives of this study were to investigate current trends and identify future technologies for 3D models in the construction industry. This study utilized a systematic review, identifying and selecting quality journals, analyzing the selected articles, and conducting content analysis and meta-analysis to identify dominant themes in 3D models. Results showed that the top technologies used to model construction projects are building information models, remote sensing, stereo vision systems/photo-processing programs, and augmented reality/virtual reality. The main benefits and challenges of these technologies for modeling were also determined. This study identified three areas with significant knowledge gaps for future research: (1) the amalgamation of two or more technologies to overcome project obstacles; (2) solution optimization for inspections in remote areas; and (3) the development of algorithm-based technologies. This research contributes to the body of knowledge by exploring current trends and future directions of 3D model technologies in the construction industry.

30 pages, 10354 KiB  
Article
3D Modelling Approach to Enhance the Characterization of a Bronze Age Nuragic Site
by Stefano Cara, Paolo Valera and Carlo Matzuzzi
Minerals 2024, 14(5), 489; https://doi.org/10.3390/min14050489 - 6 May 2024
Cited by 1 | Viewed by 1999
Abstract
Megalithism in Sardinia (Italy) had its highest expression during the Bronze Age with the creation of the monumental complexes known as Nuraghes. These unique monuments have recently been the subject of in-depth investigations for their potential to be recognized as World Heritage Sites (by UNESCO). The main purpose of our research was to contribute a more in-depth characterization of these monuments by testing a 3D model of a complex Nuraghe, integrated with an analysis of the geolithological context. This work first focused on the geological and typological investigation of the materials used in the monument's construction, which were then compared with the geolithological characteristics of the region. A survey of the outcropping remains was carried out by means of Structure-from-Motion Multi-View Stereo (SfM-MVS) photogrammetry, with ground and UAV aerial acquisition using APS-C photo sensors, georeferenced with an RTK-GNSS ground survey. The level of accuracy of our digital models shows the potential of the proposed method, giving accurate and geometrically consistent 3D reconstructions in terms of georeferencing error, shape, and surface. The survey method allows for the virtualization of the current state of conservation of the Nuraghe, giving a solid basis for setting up further (future) archaeological excavations and contributing to knowledge of the architecture of the structures. This study also provides useful information on the nature and origin of the construction materials and proposes a hypothesis on the original dimensions of the monument, which is often a topic of debate in the world of archaeology.

20 pages, 14427 KiB  
Article
Virtual Simulation Design and Debugging of Lift-and-Transverse Stereo Garage Based on the Digital Twin
by Ke Zhang and Ziyang Ding
Appl. Sci. 2024, 14(9), 3896; https://doi.org/10.3390/app14093896 - 2 May 2024
Cited by 4 | Viewed by 2100
Abstract
In the face of the challenges of limited urban space and the continuous increase in vehicles, stereo garages have been widely used as a solution in cities. To improve the automation and intelligence of the stereo garage, this paper applies digital twin technology to the lift-and-transverse stereo garage. A five-dimensional digital twin model was developed based on an actual stereo garage and combined with S7-PLCSIM Advanced, the Siemens TIA Portal, and NX MCD to build a virtual simulation platform, realizing the virtual simulation design and debugging of the digital twin-based stereo garage. This approach allows operational processes to be tested and optimized without relying on physical equipment, reducing labor and field-debugging costs, shortening deployment cycles, and significantly reducing development costs. In addition, NX MCD allows detected security risks and failures to be monitored and controlled in real time. Finally, the feasibility of the digital twin-based virtual simulation and debugging scheme is verified by comparing operation data from the virtual model and the actual stereo garage, which not only provides new ideas for the intelligent development of stereo garages but can also serve as an important reference for the development of equipment in other areas.
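
Virtual commissioning logic of this kind can be prototyped even without the PLC toolchain; the sketch below is a generic, assumed model of lift-and-transverse behavior (slot layout and rules invented for illustration), of the sort a digital twin would exercise before field deployment.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    row: int     # 0 = ground (transverse) level, 1 = upper (lift) level
    col: int

class VirtualGarage:
    """Toy twin: an upper platform can lower only if the ground slot below is free."""
    def __init__(self, rows=2, cols=3):
        self.slots = {(r, c): Platform(r, c) for r in range(rows) for c in range(cols)}
        self.slots.pop((0, 0))                  # one empty ground slot enables traversal

    def retrieve(self, row, col):
        if row == 0:
            return f"ground {col}: drive out directly"
        below_free = (0, col) not in self.slots
        return f"upper {col}: " + ("lower platform" if below_free else "shift ground row first")

g = VirtualGarage()
print(g.retrieve(1, 0))   # ground slot 0 is empty -> "lower platform"
print(g.retrieve(1, 1))   # blocked below -> "shift ground row first"
```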
