1. Introduction
Building Information Modeling (BIM) is a 3D object-based design methodology that aims to cover the entire lifecycle of facilities, including design, construction, and maintenance. By integrating data generated during design, construction, and operation processes, BIM supports efficient decision-making among project stakeholders [1,2,3]. Traditional facility design primarily used 2D CAD drawings, which were limited by their two-dimensional nature. This limitation led to various views such as elevations and floor plans being perceived as separate rather than interconnected processes [4]. Such disconnection often resulted in design errors remaining undetected until final stages, leading to mistakes, information omission, schedule delays, increased costs, and quality deterioration [5]. The adoption of BIM 3D drawings enables more intuitive and three-dimensional verification, reducing the time required for design comprehension and information recognition [6].
Mixed Reality (MR) technology, which integrates real and virtual spaces through three-dimensional visualization, is emerging as a significant technology in the construction industry. According to Extremera et al. (2022), Reality-Virtuality Technologies (RVTs), including Virtual Reality (VR), Augmented Reality (AR), and MR, are increasingly being adopted in engineering fields [7]. Research focusing on visualizing 3D BIM objects in MR environments has gained attention for its potential to enhance drawing comprehension and reduce construction and inspection errors [8]. The application of MR technology in initial design phases enables improved understanding of design information compared to traditional paper drawings and reduces the time required to detect design errors [9,10]. Additionally, three-dimensional visualization facilitates effective communication among the various stakeholders on construction sites [11], thereby reducing errors caused by miscommunication. Recognizing these advantages, the South Korean Ministry of Land, Infrastructure and Transport has recommended using MR technology to review BIM drawings from a first-person perspective for design and constructability verification [12].
The effective integration of BIM and MR requires accurate positioning of virtual BIM objects in real space [13,14]. While terms such as ‘Placement’ and ‘Alignment’ are used for this positioning, no clear definition has been established [15]. Therefore, this study adopts the term ‘Spatial Alignment’ and proposes an enhanced methodology for this process. Current spatial alignment methods present limitations: marker-based approaches necessitate precise installation at predetermined locations, while drag-based methods rely extensively on user manipulation proficiency.
This research introduces the Two-points Spatial Alignment System (TSAS) utilizing dual reference points, aiming to overcome the limitations of existing methods. The proposed approach was developed and validated through comparison with traditional marker-based and drag-based methods. To evaluate the system’s effectiveness, Microsoft HoloLens2 (Microsoft, Redmond, WA, USA) was used as the MR hardware platform, with the application development conducted using Unity (Version 2020.3.25f1). The implementation process consisted of three primary stages: format conversion from BIM authoring programs, interface development for real-time manipulation and review of BIM models in MR environments, and execution on the MR hardware.
2. Literature Review
Spatial alignment in MR environments typically involves three fundamental processes: (1) Object tracking, (2) Mapping, and (3) Registration [9]. Object tracking, which determines the position of virtual objects within the MR system, can be categorized into two main approaches: sensor-based tracking and computer vision-based tracking [16].
Sensor-based tracking methods utilize various sensors such as GPS (Global Positioning System), IMU (Inertial Measurement Unit), and RFID (Radio-Frequency Identification) to track the location of virtual objects. These methods offer reliable positioning data but may exhibit several limitations in indoor environments or require additional hardware installation. Rehbein et al. (2022) utilized HTC Vive Lighthouse and ARTTRACK5 optical tracking systems for 3D visualization of ultrasonic nondestructive testing (NDT) data, achieving real-time texture mapping on 3D models [17]. Computer vision-based tracking methods are further classified into marker-based and marker-less approaches. The marker-based method, which is widely adopted due to its simplicity and cost-effectiveness, functions by recognizing QR codes through a camera to position virtual objects based on predefined coordinate information. Marker-less tracking, also known as feature-based tracking, utilizes natural features such as points, lines, edges, and textures to track objects. Many construction safety studies have used either sensor-based or vision-based technologies for their research [18].
Mapping involves the recognition of real-world spaces that serve as backgrounds for MR applications [19,20,21]. The Microsoft HoloLens2, utilized in this study, employs depth sensors for spatial mapping through Time of Flight (ToF) technology. ToF operates by measuring the time taken for emitted signals to reflect off objects and return to the sensor [22]. The depth sensor functions in two modes: Short throw and Long throw [23]. The Short throw mode enables hand tracking up to 1 m through AHAT (Articulated Hand Tracking) functionality. The Long throw mode generates 3D meshes by scanning real spaces, enabling the identification of major structural elements such as walls, floors, and ceilings.
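The ToF principle above amounts to a one-line distance calculation. The following is an illustrative sketch of the round-trip computation, not the HoloLens2 sensor pipeline:

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to a reflecting surface from the round-trip time of an
    emitted signal: the signal travels out and back, so the one-way
    distance is half of (speed x elapsed time)."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2
```

For example, a round trip of roughly 6.7 ns corresponds to a surface about 1 m away, which is why depth sensing at room scale demands picosecond-level timing resolution.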
Registration is the process of positioning virtual objects within the real space. After the system tracks object positions and maps the real space through the previous steps, registration enables precise placement of virtual objects at specific real-world locations. The marker-based method, while commonly used, presents several challenges. For accurate registration, markers must be placed precisely in their intended positions, which often proves difficult in practice [24]. Subsequent marker position adjustment is typically necessary, and without proper calibration, alignment accuracy may vary depending on the viewing angle [25].
According to Al-Adhami et al. (2019), marker-based spatial alignment methods may be unsuitable for construction sites due to the potential for marker damage or displacement in environments with diverse workforce activities [26]. This challenge is particularly relevant in dynamic construction environments where multiple trades work simultaneously and physical markers can be easily disturbed or obscured by dust, debris, or equipment.
Current MR systems frequently employ simple drag-based interfaces [27,28]. This method allows users to directly manipulate virtual objects through finger gestures in MR environments, serving as a basic object manipulation approach. However, the drag-based method’s performance heavily depends on user operation, potentially leading to accuracy issues. The challenge of precise manual control in 3D space is compounded when users must simultaneously manage depth perception, object rotation, and position adjustment without physical reference points.
Through the literature review, this study identified limitations in existing positioning methods. The marker-based method requires precise marker positioning in both BIM drawings and physical spaces, with potential errors arising from marker recognition angles. The drag-based method exhibits challenges in accurate object positioning due to its dependence on user manipulation skills [29]. The present study proposes a system that eliminates the need for marker installation while providing clearer alignment reference points than drag-based methods, aiming to enhance user convenience.
3. Methodology
3.1. System Design
This study develops an integrated BIM-MR system designed to visualize and align BIM objects in MR environments. The system architecture consists of three primary modules: MR-BIM data generation module, MR scenario development module, and system implementation module.
Table 1 presents the hardware and software components utilized in each module.
3.1.1. MR-BIM Data Creation Module
The MR-BIM data generation module serves as a core component for converting BIM drawings into formats compatible with MR environments. This module operates based on Autodesk Revit as the primary BIM design platform. Since Revit’s native RVT format is not directly compatible with Unity, this module performs the necessary format conversions. The module converts BIM model geometry information to OBJ format and attribute information to CSV format (Figure 1). This conversion process is executed through a Revit Add-in developed using the C# programming language in Visual Studio, utilizing Revit’s Application Programming Interface (API).
3.1.2. MR Scenario Development Module
The MR scenario development module is built on the Unity3D game engine platform. This module processes the OBJ geometry files and CSV attribute data generated from the previous stage within the Unity3D environment. A key function of this module is the development of user interaction interfaces, allowing users to manipulate and review BIM models within the MR environment. The module includes the creation of interactive buttons and user interface elements designed for MR interaction. The completed development is then compiled into an executable application format compatible with the HoloLens2 platform (Figure 2).
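As a concrete illustration of how the separated geometry and attribute files can be rejoined at runtime, the sketch below indexes CSV attribute rows by element ID. The column names (ElementId, Category, Level) and the keying scheme are hypothetical, since the paper does not specify the add-in's export schema:

```python
import csv
import io

# Hypothetical attribute CSV as exported by the Revit add-in;
# the actual column layout is not specified in the paper.
ATTRIBUTE_CSV = """ElementId,Category,Level
316220,Walls,Level 1
316225,Doors,Level 1
"""

def load_attributes(csv_text: str) -> dict:
    """Index attribute rows by element ID so that a BIM object selected
    in the MR scene can look up its properties."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["ElementId"]: row for row in reader}

attributes = load_attributes(ATTRIBUTE_CSV)
```

A natural convention would be to carry the same element IDs in the OBJ mesh names, giving Unity a join key between geometry and attributes.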
3.1.3. System Implementation Module
The system implementation module represents the practical application phase of the MR scenario. This study enhances the haptic interface system developed by Cho et al. (2022) [30]. Through the HoloLens2 MR headset, users can visualize virtual BIM objects and employ the spatial alignment system to position them accurately within real space. This module integrates the visualization capabilities with precise spatial positioning functionality, enabling practical application in real-world environments.
3.2. System Implementation
3.2.1. Spatial Alignment Principles
Figure 3 shows the step-by-step spatial alignment process of TSAS. The process begins with the user designating two reference points (A, B) on the virtual object and two corresponding target points (A′, B′) in the real space where the object is to be placed. Figure 3a represents the initial state, showing the positions of the designated reference and target points. In stage (b), point A transitions to point A′, followed by stages Figure 3c,d, which demonstrate the rotation process around the Y-axis based on the calculated angle between vectors. The rotation angle is determined through Equations (1) and (2).
$$\vec{u} = \frac{B - A}{\lVert B - A \rVert}, \qquad \vec{v} = \frac{B' - A'}{\lVert B' - A' \rVert} \quad (1)$$
$$\theta = \cos^{-1}(\vec{u} \cdot \vec{v}) \quad (2)$$
where $\vec{u}$ = unit vector of segment $AB$; $\vec{v}$ = unit vector of segment $A'B'$.
In Figure 3e, the object’s scale is adjusted according to the distance ratio between the vectors (Equation (3)):
$$s = \frac{\lVert A'B' \rVert}{\lVert AB \rVert} \quad (3)$$
Finally, Figure 3f demonstrates the final state where the object is precisely positioned at the target location. The complete transformation is achieved through the sequential application of these operations, represented by the following transformation matrix:
$$M = T \cdot R_y(\theta) \cdot S$$
where $M$ is the final 4 × 4 homogeneous transformation matrix, $T$ is the translation matrix derived from the displacement vector $(A' - A)$, $R_y(\theta)$ is the rotation matrix around the Y-axis with angle $\theta$, and $S$ is the uniform scaling matrix with diagonal elements equal to the scale factor $s$.
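The translate, rotate, scale sequence above can be sketched in a few lines. This is an illustrative Python version of the parameter computation, not the authors' Unity implementation; it also substitutes a signed atan2 angle for the arccos of Equation (2) so that the rotation direction is recovered along with its magnitude.

```python
import math

def two_point_alignment(A, B, A_prime, B_prime):
    """Derive the TSAS transform parameters from two reference points
    (A, B) on the virtual object and two target points (A', B') in real
    space. Points are (x, y, z) tuples with y as the vertical axis.

    Returns (translation, theta, scale): the displacement A' - A, the
    rotation angle about the Y-axis, and the uniform scale factor.
    """
    # Translation: move reference point A onto target point A'
    translation = tuple(ap - a for ap, a in zip(A_prime, A))
    # Rotation is about the vertical (Y) axis only, so compare the
    # segment directions projected onto the horizontal XZ plane
    ux, uz = B[0] - A[0], B[2] - A[2]
    vx, vz = B_prime[0] - A_prime[0], B_prime[2] - A_prime[2]
    theta = math.atan2(vz, vx) - math.atan2(uz, ux)  # signed angle
    # Uniform scale: ratio of target to reference segment lengths (Eq. (3))
    scale = math.dist(A_prime, B_prime) / math.dist(A, B)
    return translation, theta, scale
```

The three returned parameters populate the $T$, $R_y(\theta)$, and $S$ factors of the composite homogeneous transform applied to the virtual object.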
3.2.2. System Improvement
To address the limitations of the previous haptic interface in precise point designation, TSAS was enhanced to improve user interaction (Figure 4). While maintaining the system’s core principles, the current implementation increases operational efficiency by allowing users to directly drag existing reference points. To further facilitate precise positioning, the reference point spheres were enlarged and supplemented with a nested center point. This design resolves the fundamental trade-off between ease of selection and positional accuracy. Additionally, transparency gradients visually guide the user’s attention from the grabbable outer sphere to the precise inner point.
This improved interface design addresses previous usability challenges while maintaining the system’s core functionality. The combination of enlarged reference points and center point provides users with clearer visual feedback during the alignment process, contributing to more accurate spatial positioning.
3.3. Usability Evaluation
A usability evaluation was conducted to assess the effectiveness of TSAS. The evaluation approach follows the experimental design used in the work of Cho (2024) [31]. The test objective was to accurately position a laboratory BIM model within an actual laboratory space using three methods: marker-based, drag-based, and TSAS. The test environment was configured using a laboratory (approximately 5 m × 8 m room size) as the target space.
The test involved 30 participants (16 males, 14 females). Prior to testing, participants received comprehensive instruction covering system operation and procedures through posted visual guides and a monitoring display. The monitoring system displayed the real-time HoloLens view on an external screen, allowing participants to familiarize themselves with the interface before their turn. Following the briefing, participants put on the HoloLens device and proceeded with the test and subsequent survey.
The evaluation criteria were established based on international usability standards [32], measuring accuracy, efficiency, and satisfaction. Accuracy was assessed by measuring positioning errors for each method, while efficiency was evaluated through time measurements for completing spatial alignment tasks. For the drag-based method and TSAS, both accuracy and time measurements were automatically recorded when users activated the completion button. For the marker-based method, the system measured the time required for marker recognition through the camera. Satisfaction was assessed through a survey using a 5-point Likert scale, and participants were also asked to indicate their preferred method.
To measure alignment errors, reference points were manually repositioned for each participant due to HoloLens 2’s spatial coordinate system characteristics. The device resets its spatial origin upon restart, using the user’s initial head position as the coordinate origin. Additionally, while HoloLens 2 employs ToF sensors for spatial mapping [23], variability in wall and corner detection necessitated consistent manual reference point placement to ensure measurement reliability across all 30 participants.
Figure 5 illustrates the reference points in the actual laboratory and reference points in the virtual BIM model. Alignment error was calculated by measuring the average distance between wall reference points and the corresponding reference objects in the BIM model.
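The error metric just described reduces to a mean of point-to-point distances. The following is an illustrative computation (the pairing of real and model reference points is manual in the actual procedure, as noted above):

```python
import math

def alignment_error(real_points, model_points):
    """Mean Euclidean distance between reference points measured on the
    real walls and the corresponding reference objects in the aligned
    BIM model (same units in, same units out)."""
    pairs = list(zip(real_points, model_points))
    return sum(math.dist(p, q) for p, q in pairs) / len(pairs)
```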
Figure 6a shows a user performing the spatial alignment task, and Figure 6b shows the MR visualization of the BIM model in the laboratory space.
The test results were systematically analyzed across three evaluation criteria: accuracy, efficiency, and user satisfaction. This comprehensive evaluation provided an objective comparison of each method’s performance, offering insights into their potential for practical application in real-world scenarios.
4. Results
Usability evaluation results measuring accuracy, efficiency, satisfaction, and preference of each method are presented in Table 2.
The accuracy analysis revealed that TSAS achieved an average error of 50.3 mm, while the marker-based method showed a 64.0 mm error and the drag-based method a 199.7 mm error (values are mean ± standard deviation (SD)). A one-way Analysis of Variance (ANOVA) indicated that these differences were statistically significant (p < 0.001). Given the significant differences in standard deviations across groups, which violate the assumption of homogeneity of variances, the Games-Howell post hoc test was utilized for pairwise comparisons. According to Article 20 of the Enforcement Rule of the Building Act [33], construction errors in room partitions must not exceed 100 mm. Both TSAS and the marker-based method satisfied this regulatory requirement.
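The one-way ANOVA reported above compares between-group and within-group variance. The sketch below computes the F statistic from scratch on synthetic error samples (not the study data); the p-value and the Games-Howell follow-up would come from a statistics package (e.g. scipy.stats or pingouin).

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: the ratio of the between-group
    mean square to the within-group mean square."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic per-participant errors (mm) shaped like the reported group
# means; NOT the actual measurements from the study
F = one_way_anova_F([48, 52, 50, 51],      # TSAS-like
                    [62, 66, 63, 65],      # marker-like
                    [195, 204, 198, 201])  # drag-like
```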
In terms of efficiency, TSAS required an average of 370 s, while the drag method required 359 s. The marker-based method recorded significantly shorter operation times, averaging 10 s. However, it should be noted that this measurement excluded the time required for marker installation due to practical testing constraints. Had the marker installation process been included, both accuracy and efficiency results might have shown different patterns.
Satisfaction ratings indicated that TSAS scored 4.30 out of 5, the drag method received 2.30, and the marker method achieved 4.13. In preference evaluation, 16 participants (53.33%) favored TSAS, while 12 participants (40.00%) preferred the marker method, and only 2 participants (6.67%) chose the drag method. Participants who preferred TSAS cited its ease of adjustment and minimal errors as key advantages. Those who favored the marker method appreciated its ease of use, while participants generally indicated difficulties with the drag method for spatial alignment.
5. Discussion
This study has proposed and evaluated TSAS for integrating BIM with MR technology in construction environments. The experimental results indicate that TSAS performs better than existing methods. With a mean alignment error of 50.3 mm, TSAS shows better accuracy than the marker-based (64.0 mm) and drag-based (199.7 mm) methods. This level of accuracy satisfies Article 20 of the Enforcement Rule of the Building Act (100 mm tolerance) [33]. The user satisfaction rating (4.3/5) and preference rate (53.33%) indicate the system’s usability.
The results revealed distinct characteristics of each alignment method. The marker-based method showed high efficiency in operation time but required additional setup procedures; as noted in the Results, the reported times exclude marker installation. This setup requirement could affect its practicality in dynamic construction environments where marker positions need frequent adjustments. The drag-based method provided direct interaction with objects but showed lower accuracy (199.7 mm) and user satisfaction (2.3/5), indicating the difficulty of precise manual control in MR environments. TSAS addressed these limitations through reference point-based positioning.
Previous studies have primarily evaluated spatial alignment performance based on accuracy, efficiency, and user experience. Wu et al. (2022) utilized MR and SLAM-based object detection to detect hazards in real time and monitor worker locations at construction sites [18]. While their study employed SLAM technology for automatic alignment, it required an initial setup time and showed performance variations depending on environmental changes. Fenais et al. (2019) applied a GIS-based AR system for spatial alignment of underground utilities [34]. Although GIS and GPS-based alignment methods are effective in outdoor environments, they exhibit limitations in achieving precise alignment indoors. Tavares et al. (2019) combined Spatial Augmented Reality (SAR) with a laser scanner, achieving a projection error within 3 mm [35]. Their findings demonstrated high automation and precision; however, the method involved complex initial setup and a strong reliance on hardware.
User feedback from the experiments indicated specific factors affecting system usability. Participants who preferred TSAS noted the clear visual feedback and controlled adjustment capabilities as key advantages. Those who favored the marker method appreciated its ease of use, as it required only viewing through the camera with minimal interaction, while participants who chose the drag method appreciated its flexibility, as it allowed direct adjustments. However, this approach required a higher level of manual control for precise placement, making it less reliable overall compared to reference point-based methods. These preferences suggest the importance of balancing precision and operational simplicity in MR interfaces.
The current evaluation was conducted in a controlled laboratory environment (approximately 5 m × 8 m room size). While this allowed for establishing baseline performance metrics, future validation in actual construction sites with varying environmental conditions, larger scales, and dynamic obstacles is necessary. The system’s robustness in such environments is expected to be favorable compared to marker-based methods, as TSAS does not rely on maintaining visual markers that could be damaged or obscured during construction activities.
6. Conclusions
This research presents TSAS for BIM object alignment in MR environments. The experimental results demonstrate that TSAS achieved a 50.3 mm alignment error, satisfying the Building Act’s enforcement rule tolerance (100 mm), while maintaining high user satisfaction.
The study makes three primary contributions. First, it introduces a two-point reference system that improves alignment accuracy while maintaining usability. Second, it develops a feedback mechanism that supports accurate object positioning. Third, it implements an approach that eliminates complex setup procedures typical of existing marker-based methods.
The system has several technical limitations. The rotation mechanism is restricted to the Y-axis, as structures are typically positioned along vertical axes, reducing unnecessary calculations and enabling positioning with only two-point designations. While this simplification improves efficiency, Y-axis-only rotation may limit flexibility in certain scenarios requiring multi-axis adjustments. The current evaluation was conducted in a laboratory environment (approximately 5 m × 8 m room size), necessitating further validation in actual construction environments.
Future research should address these limitations in three directions. First, technical development should investigate additional rotation mechanisms for handling inclined structures or complex geometries, potentially incorporating X- and Z-axis rotations when necessary. Second, comprehensive field studies should test the system in larger construction sites with varying environmental conditions, dynamic obstacles, and different material types to verify its scalability and robustness. Third, research should examine how this alignment method can be effectively integrated into existing construction and BIM workflows.
Author Contributions
Conceptualization, S.K. (Sanghyeok Kang) and S.K. (Sungpyo Kim); methodology, S.K. (Sungpyo Kim) and S.K. (Sanghyeok Kang); software, J.C. and S.K. (Sungpyo Kim); validation, J.C. and S.K. (Sanghyeok Kang); formal analysis, J.C.; investigation, J.C.; resources, J.C.; data curation, J.C.; supervision, S.K. (Sanghyeok Kang); writing—original draft preparation, S.K. (Sanghyeok Kang); writing—review and editing, J.C. and S.K. (Sanghyeok Kang); visualization, S.K. (Sungpyo Kim) and J.C.; project administration, S.K. (Sanghyeok Kang); funding acquisition, S.K. (Sanghyeok Kang). All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by Incheon National University Research Grant in 2021.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Informed consent was obtained from all participants involved in this study.
Data Availability Statement
The data supporting the findings of this study are available from the corresponding author on request.
Acknowledgments
During the preparation of this manuscript/study, the author(s) used ChatGPT (o3 and 5) for the purpose of translating Korean to English. The authors have reviewed and edited the output and take full responsibility for the content of this publication.
Conflicts of Interest
Author Sungpyo Kim was employed by the R&D Business Department, DoIT Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
AHAT | Articulated Hand Tracking
ANOVA | Analysis of Variance
API | Application Programming Interface
AR | Augmented Reality
BIM | Building Information Modeling
GIS | Geographic Information System
GPS | Global Positioning System
IMU | Inertial Measurement Unit
MR | Mixed Reality
NDT | Nondestructive Testing
RFID | Radio-Frequency Identification
RVTs | Reality-Virtuality Technologies
SAR | Spatial Augmented Reality
SD | Standard Deviation
SLAM | Simultaneous Localization and Mapping
ToF | Time of Flight
TSAS | Two-points Spatial Alignment System
VR | Virtual Reality
References
- Li, S.; Zhang, Z.; Mei, G.; Lin, D.; Yu, J.; Qiu, R.; Su, X.; Lin, X.; Lou, C. Utilization of BIM in the construction of a submarine tunnel: A case study in Xiamen city, China. J. Civ. Eng. Manag. 2021, 27, 14–26. [Google Scholar] [CrossRef]
- Chen, L.; Luo, H. A BIM-based construction quality management model and its applications. Autom. Constr. 2014, 46, 64–73. [Google Scholar] [CrossRef]
- Jane, M.; Peter, E.; Sam, H.; Robert, C.; Chris, R.; Oluwole, O. Real time progress management: Re-engineering processes for cloud-based BIM in construction. Autom. Constr. 2015, 58, 38–47. [Google Scholar] [CrossRef]
- Kim, E.Y.; Park, S.H. Educational effects of architectural CAD on BIM design tool acceptance in design and drafting of architectural forms. In Proceedings of the Conference of Society for Computational Design and Engineering, Pyeongchang, Republic of Korea, 31 January–2 February 2017. (In Korean). [Google Scholar]
- Kazaz, A.; Acıkara, T.; Ulubeyli, S.; Koyun, H. Detection of architectural drawings errors in 3 dimension. Procedia Eng. 2017, 196, 1018–1025. [Google Scholar] [CrossRef]
- Chalhoub, J.; Ayer, S. Using Mixed Reality for electrical construction design communication. Autom. Constr. 2018, 86, 1–10. [Google Scholar] [CrossRef]
- Extremera, J.; Vergara, D.; Rodríguez, S.; Dávila, L.P. Reality-Virtuality Technologies in the Field of Materials Science and Engineering. Appl. Sci. 2022, 12, 4968. [Google Scholar] [CrossRef]
- Carrasco, M.; Chen, P. Application of mixed reality for improving architectural design comprehension effectiveness. Autom. Constr. 2021, 126, 103677. [Google Scholar] [CrossRef]
- Prabhakaran, A.; Mahamadu, A.; Mahdjoubi, L.; Manu, P. An approach for integrating mixed reality into BIM for early stage design coordination. In Proceedings of the MATEC Web of Conferences 312, Cape Town, South Africa, 24–26 September 2020. [Google Scholar]
- Wang, X.; Dunston, P. Comparative Effectiveness of Mixed Reality-Based Virtual Environments in Collaborative Design. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Anchorage, AK, USA, 9–12 October 2011. [Google Scholar]
- El Ammari, K.; Hammad, A. Remote interactive collaboration in facilities management using BIM-based mixed reality. Autom. Constr. 2019, 107, 102940. [Google Scholar] [CrossRef]
- Ministry of Land, Infrastructure and Transport. BIM Implementation Guidelines on the Construction Industry; Ministry of Land, Infrastructure and Transport: Sejong-si, Republic of Korea, 2022. (In Korean) [Google Scholar]
- Jiao, Y.; Zhang, S.; Li, Y.; Wang, Y.; Yang, B. Towards cloud augmented reality for construction application by BIM and SNS integration. Autom. Constr. 2013, 33, 37–47. [Google Scholar] [CrossRef]
- Jurado, D.; Jurado, J.M.; Ortega, L.; Feito, F.R. GEUINF: Real-Time Visualization of Indoor Facilities Using Mixed Reality. Sensors 2021, 21, 1123. [Google Scholar] [CrossRef]
- El Barhoumi, N.; Hajji, R.; Bouali, Z.; Ben Brahim, Y.; Kharroubi, A. Assessment of 3D Models Placement Methods in Augmented Reality. Appl. Sci. 2022, 12, 10620. [Google Scholar] [CrossRef]
- Ashwini, K.; Preethi, N.; Sacitha, R. Tracking Methods in Augmented Reality-Explore the Usage of Marker-Based Tracking. In Proceedings of the 2nd International Conference on IoT, Social, Mobile, Analytics & Cloud in Computational Vision & Bio-engineering, Tiruchengodu, India, 29–30 October 2020. [Google Scholar]
- Rehbein, J.; Lorenz, S.J.; Holtmannspötter, J.; Valeske, B. 3D-Visualization of Ultrasonic NDT Data Using Mixed Reality. J. Nondestruct. Eval. 2022, 41, 26. [Google Scholar] [CrossRef]
- Wu, S.; Hou, L.; Zhang, G.K.; Chen, H. Real-time mixed reality-based visual warning for construction workforce safety. Autom. Constr. 2022, 139, 104252. [Google Scholar] [CrossRef]
- Teruggi, S.; Fassi, F. HoloLens 2 Spatial Mapping Capabilities in Vast Monumental Heritage Environments. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Mantua, Italy, 2–4 March 2022. [Google Scholar]
- Brito, C.; Alves, N.; Magalhães, L.; Guevara, M. Bim mixed reality tool for the inspection of heritage buildings. In Proceedings of the International Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Ávila, Spain, 1–5 September 2019. [Google Scholar]
- Azimi, E.; Qian, L.; Navab, N.; Kazanzides, P. Alignment of the Virtual Scene to the Tracking Space of a Mixed Reality Head-Mounted Display. arXiv 2017, arXiv:1703.05834. [Google Scholar] [CrossRef]
- Skurowski, P.; Myszor, D.; Paszkuta, M.; Moroń, T.; Cyran, K.A. Energy Demand in AR Applications—A Reverse Ablation Study of the HoloLens 2 Device. Energies 2024, 17, 553. [Google Scholar] [CrossRef]
- Ungureanu, D.; Bogo, F.; Galliani, S.; Sama, P.; Duan, X.; Meekhof, C.; Stühmer, J.; Cashman, T.; Tekin, B.; Schönberger, J.; et al. HoloLens2 Research Mode as a Tool for Computer Vision Research. arXiv 2020, arXiv:2008.11239. [Google Scholar] [CrossRef]
- Rokhsaritalemi, S.; Sadeghi-Niaraki, A.; Choi, S.M. A review on mixed reality: Current trends, challenges and prospects. Appl. Sci. 2020, 10, 636. [Google Scholar] [CrossRef]
- Kim, S.Y.; Kim, H.S.; Moon, H.S.; Kang, L.S. Field Applicability of Augmented Reality Technology by Marker Mapping for Construction Project (Focused on Measurement Process of Rebar Work). KSCE J. Civ. Environ. Eng. Res. 2013, 33, 2509–2518. (In Korean) [Google Scholar] [CrossRef]
- Al-Adhami, M.; Wu, S.; Ma, L. Extended reality approach for construction quality control. In Proceedings of the International Council for Research and Innovation in Building and Construction (CIB) World Building Congress, Hong Kong SAR, China, 17–21 June 2019. [Google Scholar]
- Chaconas, N.; Höllerer, T. An evaluation of bimannual gestures on the microsoft hololens. In Proceedings of the 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen, Germany, 18–22 March 2018. [Google Scholar]
- Huang, Y.; Shakya, S.; Zhu, L. Capabilities of mixed reality applications for architecture and construction: A comparative review with hololens. In Proceedings of the ASCE International Conference on Computing in Civil Engineering 2019, Atlanta, GA, USA, 17–19 June 2019. [Google Scholar]
- Argelaguet, F.; Andujar, C. A survey of 3D object selection techniques for virtual environments. Comput. Graph. 2013, 37, 121–136. [Google Scholar] [CrossRef]
- Cho, J.; Kim, S.; Kim, N.; Kim, S.; Park, C.; Lim, J.; Kang, S. The Development of a Haptic Interface for Interacting with BIM Elements in Mixed Reality. In Proceedings of the 9th International Conference on Construction Engineering and Project Management, Las Vegas, NV, USA, 20–23 June 2022. [Google Scholar]
- Cho, J. Development of a Spatial Alignment System for Interaction with BIM Objects in Mixed Reality. Master’s Thesis, Incheon National University, Incheon, Republic of Korea, 2024. [Google Scholar]
- ISO 9241-11; Ergonomics of Human-System Interaction-Part 11: Usability: Definitions and Concepts. International Organization for Standardization: Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/63500.html (accessed on 2 March 2020).
- Article 20 of the Enforcement Rule of the Building Act. Available online: https://www.law.go.kr/LSW//lsInfoP.do?lsId=006191&ancYnChk=0#0000 (accessed on 29 August 2023). (In Korean).
- Fenais, A.; Ariaratnam, S.T.; Ayer, S.K.; Smilovsky, N. Integrating geographic information systems and augmented reality for mapping underground utilities. Infrastructures 2019, 4, 60. [Google Scholar] [CrossRef]
- Tavares, P.; Costa, C.M.; Rocha, L.; Malaca, P.; Costa, P.; Moreira, A.P.; Sousa, A.; Veiga, G. Collaborative welding system using BIM for robotic reprogramming and spatial augmented reality. Autom. Constr. 2019, 106, 102825. [Google Scholar] [CrossRef]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).