Article

Augmented Reality for PCB Component Identification and Localization

1 Department of Computer and Geospatial Sciences, University of Gävle, 80176 Gävle, Sweden
2 Hellman Dynamic Gävle AB, 80310 Gävle, Sweden
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(11), 6331; https://doi.org/10.3390/app15116331
Submission received: 30 April 2025 / Revised: 30 May 2025 / Accepted: 31 May 2025 / Published: 4 June 2025

Abstract

This study evaluates the effectiveness of augmented reality (AR), using the Microsoft HoloLens™ 2, for identifying and localizing PCB components compared to traditional PDF-based methods. Two experiments examined the influence of user expertise, viewing angles, and component sizes on accuracy and usability. The results indicate that AR improved identification accuracy and user experience for non-experts, although it was slower than traditional methods for experienced users. Optimal performance was achieved at 90° viewing angles, while accuracy declined significantly at oblique angles. Medium-sized components received the highest confidence scores, suggesting favorable visibility and recognition characteristics within this group, though further evaluation with a broader component distribution is warranted. Participant feedback highlighted the system’s intuitive interface and effective guidance, but also noted challenges with marker stability, visual discomfort, and ergonomic limitations. These findings suggest that AR can enhance training and reduce errors in electronics manufacturing, although refinements in marker rendering and user onboarding are necessary to support broader adoption. This research provides empirical evidence on the role of AR in supporting user-centered design and improving task performance in industrial electronics workflows.

1. Introduction

The accurate identification and placement of printed circuit board (PCB) components remain critical challenges in electronics manufacturing, directly affecting product reliability [1]. Errors in PCB assembly—including incorrect component placement, misidentification of components, and inadequate inspection—frequently occur during the manufacturing process, particularly at the assembly and inspection stages. These errors contribute to various defects, such as solder bridging, component shifts, tombstoning, and pad-related issues, which can compromise product reliability and often become evident during testing [2,3,4]. Miniaturization and complex layouts amplify these challenges. Such errors significantly threaten final product functionality, emphasizing the need for more accurate and reliable methods in modern electronics manufacturing [5,6].
Augmented reality (AR) technology presents a promising solution in this context, enhancing the precision and efficiency of assembly operations, particularly in testing. By providing real-time digital overlays, AR supports accurate component identification and facilitates the detection of errors in PCB assembly, which is critical during the testing stage. AR aids, such as the HoloLens™ 2, offer intuitive, step-by-step guidance, helping to mitigate human errors and accelerate the learning process for technicians. However, AR’s utility is not limited to any one device; many techniques, including visual inspection tools and enhanced training modules, can achieve similar results [7,8,9].
The testing of PCBs presents additional challenges due to human factors, such as fatigue and variable skill levels, which can lead to inconsistencies in component placement and the overlooking of defects [10]. Tools like multimeters, oscilloscopes, and in-circuit testers (ICTs) are essential for identifying defects, but their effective use requires considerable expertise, often resulting in a steep learning curve for new technicians. AR devices such as the Microsoft™ HoloLens™ 2 enable real-time guidance and digital overlays that enhance precision in electronics tasks. While the HoloLens™ 2 has shown benefits, its limitations also highlight the need for further refinement in AR systems to maximize their potential in industrial applications [11]. This study aims to evaluate the effectiveness of augmented reality (AR) using the Microsoft HoloLens™ 2 for identifying and localizing components on a printed circuit board (PCB). Through two controlled experiments, we compare AR performance with traditional methods and examine identification accuracy across varying viewing angles. This paper contributes by empirically quantifying AR’s effectiveness in industrial electronics workflows using a head-mounted display, evaluating its usability across user expertise levels and varying viewing conditions. The dual-experiment approach enhances methodological robustness and practical relevance.

2. Literature Review and Technical Framework

Augmented reality (AR) has shown significant promise across various industrial domains, particularly in improving productivity, accuracy, and training outcomes during complex assembly tasks. For instance, [12] demonstrated how AR reduced task completion times and errors in shipyard environments—insights directly applicable to electronics manufacturing, where minor errors can lead to costly disruptions.
AR has been widely adopted for industrial training, offering immersive and interactive environments that improve knowledge retention and reduce learning curves. Morales Méndez & del Cerro Velázquez [8] and Vidal-Balea et al. [9] demonstrated that AR significantly accelerates the training of assembly-line workers through intuitive digital overlays and real-time guidance. In the context of PCB manufacturing, where training new technicians is challenging, AR can provide step-by-step visual cues that reduce onboarding time and enhance learning outcomes.
Reliable performance in augmented reality (AR) systems is fundamentally dependent on robust and accurate tracking technologies. Among these, marker-based tracking methods have proven particularly effective, especially in precision-driven applications like electronics manufacturing. Studies on AR-assisted manual printed circuit board (PCB) assembly demonstrate that marker-based approaches enable highly precise alignment of virtual elements with physical components [13,14]. This alignment is critical in high-density environments, such as PCB layouts, where even minor deviations in component placement can lead to assembly defects, functional failures, or costly rework.
By leveraging distinct visual markers, these systems maintain spatial coherence between digital overlays and real-world objects, facilitating accurate guidance for assembly operators. This precision not only improves component positioning accuracy but also streamlines workflow by reducing cognitive load and error rates during assembly tasks. The work of Chatterjee et al. [13] underscores these benefits, highlighting how AR-enhanced debugging processes can directly contribute to higher assembly efficiency and quality control.
Accurate tracking technologies, especially marker-based methods, are fundamental to the reliable performance of AR systems in intricate environments like PCB layouts. They ensure precise integration of virtual and physical elements, thereby enhancing efficiency and reducing errors in manufacturing and assembly processes [13,14].
Spatial contextualization in augmented reality (AR) enhances user interaction by providing context-aware information that dynamically adapts to the spatial environment. This functionality overlays digital data relevant to the user’s immediate physical context, thereby improving situational awareness and supporting more intuitive interactions. Recent studies have demonstrated that such spatially contextualized AR systems significantly enhance task performance while reducing cognitive load. For instance, Yang et al. [15] found that AR assistance improved assembly performance and decreased cognitive load across various stages of assembly tasks. Similarly, Tiwari & Bhagat [16] compared different AR approaches and confirmed that marker-based and markerless AR effectively support spatial visualization while minimizing cognitive load in complex technical tasks. Additionally, Alessa et al. [17] provided neurophysiological evidence that AR interactions reduce mental strain during industrial maintenance and assembly operations. Collectively, these findings underscore the importance of spatial contextualization in AR applications, particularly in environments requiring high precision and cognitive efficiency.
Augmented reality (AR) is increasingly recognized as a cornerstone of Industry 4.0, facilitating the integration of cyber-physical systems for more intelligent and adaptive manufacturing processes. Research indicates that AR enhances digital twins and smart factory ecosystems by providing real-time visualization, error detection, and context-aware instructions. For example, a study by Park et al. [18] discusses the role of digital twins in Industry 4.0, emphasizing their application in various sectors to optimize production and control systems. Similarly, Voinea et al. [19] provide an overview of emergent trends in industrial augmented reality, highlighting its integration into manufacturing processes. In the realm of maintenance and remote assistance, Palmarini et al. [20] conducted a systematic review of AR applications, underscoring AR’s value in improving maintenance operations and remote support. These insights are particularly relevant for printed circuit board (PCB) diagnostics and repair workflows, where precision and efficiency are paramount.
Despite these advantages, challenges persist. Integrating AR with existing workflows presents ergonomic constraints and issues, such as marker instability under varied conditions. Usability also varies depending on user expertise and task complexity, making empirical evaluations essential for optimizing AR deployment in real-world manufacturing contexts. Building on these studies, this research explores the performance of AR in real-world PCB component identification tasks using HoloLens™ 2.

3. Research Aims

The primary aim of this study is to evaluate the effectiveness of augmented reality (AR), specifically using the Microsoft™ HoloLens™ 2, in enhancing the accuracy, efficiency, and usability of PCB component identification tasks. To this end, we developed and tested a spatially anchored AR system and compared its performance to traditional PDF-based schematic interpretation.
We formulated the following hypotheses:
H1. AR significantly improves component identification accuracy compared to traditional PDF-based methods.
Rationale: AR overlays minimize the cognitive gap between schematic interpretation and physical action, reducing identification errors.
H2. AR-based identification is faster than traditional methods.
Rationale: Direct spatial guidance and visual cues provided by AR reduce the time needed to locate and confirm components.
H3. User experience is more positive with AR than with traditional methods, particularly for novice users.
Rationale: AR interfaces are more intuitive and reduce the learning curve, which is especially beneficial for users with limited prior experience.
These hypotheses guide our investigation into how AR can be effectively integrated into electronics workflows for both training and real-time task execution.

4. Materials and Methods

To provide a comprehensive overview of the study design, a schematic representation of the research workflow is presented in Figure 1. This diagram outlines the key stages of the investigation—from the initial research motivation and experimental setup to data collection, analysis, and conclusion. It highlights the dual-experiment structure and the integration of both quantitative and qualitative evaluation methods.

4.1. AR System Architecture

The AR system was developed using Unity™ 2021.3.9f1 (Unity Technologies, San Francisco, CA, USA), in conjunction with the Mixed Reality Toolkit (MRTK) (Microsoft Corporation, Redmond, WA, USA) and Vuforia™ Engine (PTC Inc., Boston, MA, USA). This configuration enabled real-time spatial anchoring, gesture-based interaction, and image-based tracking on a Microsoft™ HoloLens™ 2 (Microsoft Corporation, Redmond, WA, USA) headset. Pattern-based image targets were generated from the PCB layout using Vuforia™’s model target generator to achieve a stable and precise alignment between digital overlays and physical components.
The system architecture included the following components:
  • Hardware: Microsoft™ HoloLens™ 2;
  • Development Environment: Unity™ 2021.3.9f1;
  • AR Libraries: MRTK for spatial interaction and UI integration;
  • Tracking Engine: Vuforia™ Engine for image-based marker tracking.
This setup allowed the AR interface to present spatially accurate digital cues and annotations aligned with physical PCB features, facilitating guided component identification.

4.2. Experiment 1: Methods

Experiment 1 focused on evaluating the baseline accuracy and efficiency of AR-based component identification using the HoloLens™ 2 under standard viewing conditions. This experiment served as a foundational assessment, establishing key metrics for comparison in subsequent experiments.

4.2.1. Technical Setup

The augmented reality (AR) system was developed using the Unity™ game engine, widely recognized for supporting complex 3D modeling and real-time AR interactions [21]. A digital twin of the printed circuit board (PCB) was created using a high-resolution image of its front face, overlaid with dynamically rendered 3D markers. These markers were synchronized in real-time with the PCB’s rotation and position, enabling accurate alignment as the user shifted their perspective [13].
The Microsoft™ HoloLens™ 2 served as the optical see-through head-mounted display (OST-HMD), showing only a red marker in the AR interface to guide component identification. The marker updated dynamically to maintain alignment with the physical PCB (Figure 2).
To enable tracking, the Vuforia™ engine treated the PCB image itself as a pattern target rather than relying on attached fiducial markers. This image-based approach supported lightweight deployment, but its effectiveness depended on the visual distinctiveness of the PCB’s textures. In regions with dense or miniaturized components, reduced feature contrast led to tracking instability, particularly at oblique viewing angles. Prior research has demonstrated that marker detection stability is influenced by factors such as marker size, position, and orientation [22,23].
The digital twin in this study comprised only the 2D PCB image and the overlaid markers. The system exchanged only rotation and position data between the digital and physical boards, ensuring persistent spatial alignment [13].
The user interface (UI), developed using Unity™ and the Mixed Reality Toolkit (MRTK), was designed to minimize distractions and enhance operational efficiency during PCB component identification. It integrated gesture recognition, spatial mapping, and pre-built MRTK components to support intuitive interaction [19]. Users could search for components, toggle marker visibility, and reposition interface panels within their field of view. Visual cues indicated successful board recognition, while markers were placed based on precise coordinates from the pick-and-place file to ensure alignment with the physical PCB.
The interface emphasized clarity, minimal visual clutter, and task continuity, aligning with best practices in AR information presentation [7]. These design choices contributed to effective task performance, particularly for non-expert users, by reducing cognitive load and maintaining user engagement.
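Marker placement from pick-and-place coordinates can be sketched as follows. This is an illustrative Python example, not the study's implementation; the column names (`Designator`, `X_mm`, `Y_mm`, `Rotation`) and the simple CSV layout are assumptions, since real pick-and-place exports vary by EDA tool.

```python
import csv
import io

def load_marker_positions(pick_and_place_csv, board_origin=(0.0, 0.0), scale=1.0):
    """Parse a pick-and-place file into 2D marker coordinates.

    Assumes a minimal CSV layout (Designator, X_mm, Y_mm, Rotation);
    coordinates are shifted to a board origin and scaled so that the
    rendered markers align with the physical PCB.
    """
    markers = {}
    for row in csv.DictReader(pick_and_place_csv):
        x = (float(row["X_mm"]) - board_origin[0]) * scale
        y = (float(row["Y_mm"]) - board_origin[1]) * scale
        markers[row["Designator"]] = (x, y, float(row["Rotation"]))
    return markers

# Hypothetical two-component pick-and-place excerpt:
sample = io.StringIO(
    "Designator,X_mm,Y_mm,Rotation\n"
    "R1,12.5,30.0,90\n"
    "C3,40.2,8.7,0\n"
)
positions = load_marker_positions(sample)
print(positions["R1"])  # (12.5, 30.0, 90.0)
```

In the actual system, these coordinates would feed the Unity™ scene so that each 3D marker is anchored at its component's center on the digital twin.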
Together, Unity™, Vuforia™, and MRTK provided a cohesive framework for developing the AR system, enabling a precise, task-specific interaction in electronics manufacturing contexts. Figure 3 shows the texture-mapped digital twin and 3D markers rendered in the Unity™ environment.

4.2.2. Procedure

Participants (n = 10) were electronics engineers from the same local electronics company actively engaged in PCB manufacturing tasks. All had prior experience with traditional schematic-based identification, but no hands-on experience with head-mounted AR systems such as the Microsoft™ HoloLens™ 2. To reduce learning bias, each session began with a briefing and a 15 min familiarization period using the AR application. Participants were divided into two groups based on industry-standard classifications of expertise: experts (≥8 years of experience, n = 6) and non-experts (<8 years, n = 4). The threshold of ≥8 years was selected based on internal classification practices, where professionals with this level of experience are typically considered senior. In this study, participants’ experience ranged from 4 to 15 years, making this threshold a practical and meaningful division for comparing expert and non-expert groups. They were tasked with identifying 10 components on a PCB—including resistors, capacitors, diodes, and integrated circuits—that varied in size from 0.5 mm to 15 mm. The components were intentionally distributed across dense clusters and isolated areas to evaluate the AR system’s tracking performance under spatial variability. All tasks were performed from an overhead view to maintain consistency in angle and minimize depth distortion.
To establish a baseline for comparison, participants also identified the same components using traditional methods by referencing a digital schematic of the PCB in PDF format (viewed on a screen using Adobe Acrobat Reader DC, version 2023.006.20380). This allowed for a direct measurement of the time taken and the accuracy of component identification between the AR method and conventional methods. Upon completion of the tasks, participants filled out a survey to assess various aspects of the AR system’s usability, including ease of use, perceived accuracy, speed, and overall satisfaction. The survey provided qualitative data that complemented the quantitative measurements from the component identification tasks.
The results from Experiment 1, which highlighted the efficiency of the HoloLens™ 2 in comparison to traditional methods, revealed the need for further investigation into the accuracy of the AR system. This motivated Experiment 2, which focuses on how different viewing angles impact the AR system’s accuracy and efficiency in component identification.

4.2.3. Data Collection

Data collection in this study focused on quantitative measures, specifically the time taken for participants to identify components using the AR system versus traditional methods. The key metric was the time required to complete the component identification tasks, with no direct accuracy measurements collected. Qualitative data were also gathered through post-task surveys, which assessed the usability and effectiveness of the HoloLens™ 2. These surveys included structured questions on ease of use, perceived speed, and overall satisfaction, as well as open-ended questions to capture detailed user feedback and suggestions for improvement.

4.2.4. Expected Outcomes

Experiment 1 aims to evaluate the effectiveness of augmented reality (AR) technology, with the HoloLens™ 2 serving as a test case, in improving the speed of PCB component identification compared to traditional methods. The study will explore whether AR systems can significantly reduce the time required for component localization tasks. The findings are expected to highlight the potential of AR technology to enhance operational efficiency and decrease identification times in practical applications. The HoloLens™ 2 is an illustrative example of AR’s capabilities, offering insights that could extend to other AR platforms in industrial settings.

4.3. Experiment 2: Methods

Experiment 2 investigated how different viewing angles affect the accuracy and efficiency of the AR component identification process, extending the findings from Experiment 1. A critical consideration in this study is the use of Vuforia™-based marker tracking, which relies on the PCB texture as a tracked marker. This approach represents a simplified model of the actual 3D structure of the PCB. Consequently, this simplification is expected to degrade 3D tracking accuracy as the viewing angle becomes more off-axis. Different viewing angles are critical for simulating real-world scenarios in which users do not always have a direct, perpendicular view of the PCB. Understanding how these angles affect tracking performance is essential for improving the robustness and reliability of AR systems in practical applications, such as assembly or inspection tasks. This potential limitation motivates a systematic study of angle adjustments to understand how they improve or impair the AR system’s performance in a controlled environment. The procedural flow of Experiment 2 is summarized in Figure 4.

4.3.1. Technical Setup

Building on the Unity™ and Vuforia™-based application used in Experiment 1, the system was enhanced to facilitate the controlled presentation of experimental stimuli at specific viewing angles. The AR application was configured to conduct identification tasks at predetermined angles of 45°, 60°, and 90° relative to the PCB orientation. The stimuli in this context consisted of a picture of the PCB board with designated components to be identified, which served as visual markers for the participants. This setup enabled a systematic evaluation of the experimental AR system’s performance under these controlled visual conditions.
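The expected effect of oblique viewing can be illustrated with a first-order foreshortening model: the apparent area of a planar target shrinks with the sine of the viewing angle measured between the line of sight and the board plane. This sketch is a geometric illustration only, not the study's tracking model.

```python
import math

def apparent_area_fraction(viewing_angle_deg):
    """Fraction of a planar target's area presented to the camera at a
    given viewing angle, measured between the line of sight and the
    board plane (90 deg = perpendicular view). First-order model that
    ignores perspective and lens distortion."""
    return math.sin(math.radians(viewing_angle_deg))

# Fewer visible texture features are available for tracking at oblique angles:
for angle in (90, 60, 45):
    print(angle, round(apparent_area_fraction(angle), 3))
```

Under this approximation, the 45° condition presents roughly 71% of the texture area available at 90°, which is consistent with the expectation that pattern-target tracking degrades off-axis.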

4.3.2. Procedure

The experimental setup involved mounting the PCB on a standard tripod, pre-adjusted to the specified angles of 45°, 60°, and 90° (as shown in Figure 5). Participants were presented with the PCB at these fixed angles, enabling them to perform identification tasks without adjusting their positions. As illustrated in Figure 6, participants interacted with the PCB using the HoloLens™ and a physical stylus to point to and identify specific components. This approach was designed to simulate real-world conditions where technicians or engineers often need to identify components from various viewing angles, such as during assembly, inspection, or maintenance tasks, where physical repositioning may not always be feasible.
For each angle, participants repeated the component identification tasks, enabling a detailed assessment of how changes in viewing angles impacted AR marker visibility and accuracy. The tasks were timed to add complexity and simulate the time-sensitive nature of real-world applications. This method ensured that the study could capture the nuances of how different viewing angles affect the performance and usability of the AR system.

4.3.3. Data Collection and Analysis

Data collection was designed to capture both quantitative performance metrics and qualitative user feedback, providing a comprehensive evaluation of the AR system’s capabilities.

4.3.4. Study Overview

Participants (n = 10) were equipped with the Microsoft™ HoloLens™ 2 and tasked with identifying components on a printed circuit board (PCB). They were non-domain users with no prior experience in electronics manufacturing or PCB handling. All participants were students, some of whom had limited prior exposure to augmented reality. Each session began with a briefing on the experiment’s objectives and an overview of the AR system, followed by a brief training session to familiarize participants with the functionalities of the HoloLens™ 2 and the specific AR application. During the main experiment, each participant completed 12 PCB component identification tasks, with each task treated as an individual trial. Participants selected components from a randomized list displayed in the HoloLens™ application. Upon selection and verbal confirmation, a timer was initiated. Participants used a stylus to identify the component on the PCB, with the process recorded via the HoloLens™ camera for later accuracy verification.

4.3.5. Experimental Setup

The PCB was mounted on a tripod and adjusted to three specific angles: 90°, 60°, and 45°. At each angle, participants performed the same identification tasks in a fixed sequence of components, ensuring consistency across all conditions. This setup allowed for a detailed examination of how different viewing angles influenced AR marker visibility and task efficiency while controlling task order across participants.

4.3.6. Position Definitions

  • Reference Position: The reference position represented the true 2D coordinates of the center of each component on the PCB. These positions were predefined using the original PCB layout or schematic (Figure 7), serving as the baseline for comparison. In Figure 7, yellow squares and circles represent the mapped boundaries of selected components, and the reference position corresponds to the geometric center of each boundary.
  • Augmented Position: The augmented position (Figure 8) refers to the 2D coordinates of the component center as displayed in the AR interface. It reflects the technical performance of the system, specifically marker tracking, spatial co-registration accuracy, and the fidelity of the digital twin. In this context, the coordinates obtained for the red dot rendered on each component represent the augmented position. These positions were extracted from recorded video frames using image analysis techniques, such as centroid detection, calculated for each component, angle, and participant.
  • Identified Position: The identified position indicates where users manually pointed to a component using a stylus. It captures the combined influence of human visual perception and motor coordination, providing insight into real-world usability. In this case, the coordinates represent the calculated location of the stylus tip and were extracted from video frames by locating the stylus tip within the AR display (Figure 9), using similar image processing methods as described above.
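The centroid extraction used for the augmented and identified positions can be sketched as below. This is a minimal illustration: it assumes the colour thresholding step (isolating the red dot or stylus tip pixels in a frame) has already been performed, and operates on the resulting pixel set.

```python
def centroid(pixels):
    """Centroid of a set of (x, y) pixel coordinates, e.g. the pixels
    belonging to the rendered red dot in a video frame after colour
    thresholding (the thresholding itself is omitted here)."""
    if not pixels:
        raise ValueError("no pixels to average")
    xs, ys = zip(*pixels)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Hypothetical mask of a small red dot detected in one frame:
dot_pixels = [(100, 50), (101, 50), (100, 51), (101, 51)]
print(centroid(dot_pixels))  # (100.5, 50.5)
```

Applying this per component, angle, and participant yields the augmented positions; locating the stylus tip in the same frames yields the identified positions.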

4.3.7. Accuracy Calculations

This section evaluates the accuracy of the AR system and user interactions:
  • System Alignment Accuracy: Measures the Euclidean distance between the reference position and the augmented position to quantify the accuracy of AR marker tracking.
  • User Interaction Accuracy: Assesses the user’s ability to interpret and interact with the AR display by measuring the distance between the augmented position and the identified position. These distances are calculated using the Euclidean distance formula, providing a straightforward assessment of accuracy for both system alignment and user interactions.
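The two accuracy measures reduce to Euclidean distances between the three position types. A minimal sketch (coordinates are illustrative, in the same 2D units as the extracted positions):

```python
import math

def euclidean(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def accuracy_metrics(reference, augmented, identified):
    """System alignment error (reference vs. augmented position) and
    user interaction error (augmented vs. identified position)."""
    return {
        "system_alignment": euclidean(reference, augmented),
        "user_interaction": euclidean(augmented, identified),
    }

# Hypothetical positions for one component in one trial:
m = accuracy_metrics(reference=(10.0, 20.0),
                     augmented=(13.0, 24.0),
                     identified=(13.0, 21.0))
print(m)  # {'system_alignment': 5.0, 'user_interaction': 3.0}
```

Large system alignment errors point to tracking or co-registration problems, whereas large user interaction errors reflect perceptual or motor difficulty in acting on the overlay.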

4.3.8. Repetition Across Angles

This process was systematically repeated for all components, participants, and viewing angles (90°, 60°, and 45°). The fixed sequence of components ensured consistency across trials, enabling a detailed analysis of how viewing angles influenced the AR system’s performance.

4.3.9. Data Organization

All data, including task completion times and screenshots for accuracy verification, were systematically recorded and compiled into a structured dataset. The dataset consisted of 360 observations (10 participants × 12 components × 3 viewing angles). Each entry was recorded in a data table (Table 1) with the following attributes.
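The structure of the resulting dataset can be sketched as the cross product of the experimental factors; the attribute names below are illustrative, not the exact column names of Table 1.

```python
from itertools import product

participants = range(1, 11)   # 10 participants
components = range(1, 13)     # 12 components per session
angles = (90, 60, 45)         # three viewing angles

# One row per (participant, component, angle) trial; measurement fields
# are filled in from the timing logs and frame analysis.
rows = [
    {"participant": p, "component": c, "angle": a,
     "time_s": None, "system_alignment": None, "user_interaction": None}
    for p, c, a in product(participants, components, angles)
]
print(len(rows))  # 360
```

This reproduces the stated dataset size of 360 observations (10 × 12 × 3).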

4.3.10. Participant Feedback

After completing the experimental tasks, participants provided qualitative feedback using a structured questionnaire. The questionnaire was designed to assess user experience, comfort, and system performance while also collecting suggestions for improvement. Feedback was gathered across several key areas, including usability, visual comfort, marker precision, and ease of learning.
This approach helped identify both technical limitations and user-centered challenges, offering insights for improving AR tracking accuracy and enhancing the overall user experience under varying conditions.

5. Results

5.1. Experiment 1

The results for Experiment 1 are presented across three runs, each focusing on different variables to evaluate the efficiency of the HoloLens™ system compared to traditional methods in component localization tasks. As shown in Table 2, in Run 1, the impact of the method sequence was analyzed by dividing participants (n = 10) into two groups: Group A performed tasks using the traditional method first, followed by the HoloLens™ method, while Group B followed the reverse sequence. Run 2 investigated the influence of component size (SD card and resistor) on localization times. Finally, Run 3 explored the effect of user expertise, comparing performance between experts (≥8 years of experience) and non-experts (<8 years of experience).
The findings for each run are summarized below, with key trends and statistical comparisons presented in tables and figures.

5.1.1. Impact of Method Sequence on Localization Times (Run 1)

As shown in Figure 10 and summarized in Table 3, the traditional method was completed in less time across the entire population.
  • In Group A, where participants started with the traditional method (Figure 10b), significantly (p = 0.003) shorter localization times were observed, indicating an advantage when beginning with this method.
  • A reverse pattern can be noticed in Group B, where participants started with the HoloLens™ method (Figure 10c) and achieved slightly shorter times (median) when using the HoloLens™ method. However, these differences were not statistically significant, suggesting a minimal impact of method sequence on task completion times for this group.
  • Interpretation: Across the entire population (Figure 10a), there is a weak effect suggesting that the traditional method is more efficient than the HoloLens™-based method. For the subgroup starting with the traditional method, this effect is statistically significant. Subjects performed better, on average, on their first attempt regardless of which method they used, suggesting a loss of attention or fatigue as the experiment progressed.
Figure 10. (a) Comparison of times by method—entire population; (b) comparison of times by method—Group A: traditional first, HoloLens™ second; (c) comparison of times by method—Group B: HoloLens™ first, traditional second. (Red boxplots represent the Traditional method, and blue boxplots represent the HoloLens™ method. Red “+” symbols indicate outliers, defined as values falling outside 1.5 times the interquartile range).
Table 3. Sequence Comparison of Localization Times by Method and Sequence.
Group                       Method        Mean (s)   Median (s)   Wilcoxon Test (Statistic, p-Value)
Entire population (n = 10)  Traditional   17.20      17.00        (342.00, 0.068)
                            HoloLens™     25.75      20.50
Group A (n = 5)             Traditional   13.70      13.00        (65.00, 0.003)
                            HoloLens™     31.50      30.50
Group B (n = 5)             HoloLens™     20.00      15.50        (113.50, 0.544)
                            Traditional   20.70      22.00

5.1.2. Localization Times for Different Component Sizes (Run 2)

This phase of the experiment analyzed localization times for two components, the SD card and the resistor, to examine the effect of component size on localization efficiency between the traditional and HoloLens™ methods:
  • Comparison of Methods: As illustrated in Figure 11, the traditional method resulted in shorter localization times compared to the HoloLens™ method:
    o SD Card: The traditional method recorded a median time of 3 s, while the HoloLens™ method recorded a median time of 32.9 s.
    o Resistor: The traditional method had a median time of 17 s, compared to 22.5 s for the HoloLens™ method.
  • Variability in the HoloLens™ Method: Analysis of the HoloLens™ method revealed increased variability, most prominently for the SD card component, as evidenced by the wider interquartile range in the boxplot. This suggests potential limitations related to unstable marker tracking and subjective differences in user interpretation of the AR interface, which collectively contribute to greater inconsistency in task execution times.
The data show that the traditional method was faster than the HoloLens™ method for both components. For the SD card, this difference was statistically significant (p = 0.00018), with the traditional method demonstrating consistently shorter and more stable localization times. The HoloLens™ method exhibited greater variability and longer completion times, suggesting usability challenges with smaller or less visually prominent components. For the resistor, although mean localization times were similar across methods, the difference was not statistically significant (p = 0.621). These results indicate that, while AR holds potential as an alternative to traditional methods, further refinement is needed, particularly to improve consistency and efficiency for small-component identification tasks such as the SD card.

5.1.3. Impact of User Expertise on Localization Times (Run 3)

The third phase of the experiment analyzed localization times based on participants’ experience levels, categorizing them into experts (≥8 years) and non-experts (<8 years). Each group performed localization tasks using both the traditional and HoloLens™ methods.
  • Comparison of Methods
As summarized in Table 4, the traditional method consistently outperformed the HoloLens™ across the entire population and among expert participants. The difference was statistically significant (p = 0.0013 for the overall population and p = 1.30 × 10⁻⁶ for experts), confirming the superior efficiency of the traditional method for experienced users. Experts achieved a median localization time of 8 s with the traditional method, compared to 16.5 s with the HoloLens™.
For non-experts, performance was slightly better using HoloLens™, which had a median time of 10 s, compared to 16.5 s with the traditional method. However, this difference was not statistically significant (p = 0.4908), indicating that user preference or familiarity may have influenced outcomes rather than inherent methodological advantages.
Additionally, variability in performance, reflected in broader interquartile ranges, was more pronounced with the HoloLens™ method across all groups, suggesting that individual user interactions with AR interfaces varied more widely compared to the traditional setup.

5.1.4. Usability and User Experience

Qualitative Feedback: Survey responses were collected from all 10 participants after completing the localization tasks using both methods. The feedback revealed mixed perceptions regarding the usability of the AR system using HoloLens™. Approximately half of the participants found the system useful and intuitive, while the others expressed neutral or negative opinions. Notably, challenges with navigation and interaction were frequently reported, particularly by participants with limited or no prior exposure to augmented reality. This suggests that familiarity with AR may influence user experience, with more experienced users generally showing greater comfort and engagement with the system (see Questionnaire S1 in Supplementary Materials).
Task Efficiency and Speed: When asked if the HoloLens™ system felt faster than the traditional method, participants were divided. This indicates that, while the AR-based approach offered an alternative method for component localization, it did not provide a clear advantage in terms of perceived task efficiency or speed.
Precision and Marker Visibility: Participants generally considered the precision of the 3D markers adequate for completing the tasks. However, several suggestions for improvement were made to enhance marker visibility and usability.
Feedback emphasized the need for the following:
  • Increasing marker size and contrast.
  • Adding features like blinking or brighter colors for easier identification.
Specific comments highlighted these limitations:
  • “The contrast of the designated area should be higher”.
  • “The marker should blink or shine brighter”.
User Interface Challenges: The user interface was identified as a limiting factor in the system’s usability. Participants reported difficulties with the digital keyboard and precise positioning of AR windows, particularly due to a lack of familiarity with AR-based interactions. Comments included the following:
  • “It was difficult to place windows exactly where needed”.
  • “The biggest challenge was getting used to the system. Once familiar, it felt more manageable”.
Key Areas for Improvement: Participants suggested the following enhancements to improve the system’s overall usability:
  • Better marker visibility (e.g., improved size, contrast, and dynamic visual cues like blinking).
  • Refinements to the interface for smoother navigation and interaction.
  • Improved tools for managing AR elements like windows and input mechanisms.
While the AR system demonstrated potential, usability challenges, particularly related to marker visibility, user interface interaction, and unfamiliarity with AR environments, limited its effectiveness. Addressing these areas through system refinements could significantly enhance its practical application and overall user experience.

5.2. Experiment 2

Expanding on Experiment 1, which examined differences in component localization times between the HoloLens™ and traditional methods, Experiment 2 was designed to evaluate the accuracy of PCB component identification using the Microsoft™ HoloLens™ 2, with a focus on its sensitivity to varying viewing angles.
Participants (n = 10) were equipped with the HoloLens™ 2 and tasked with identifying components on a printed circuit board (PCB). They were non-domain users with no prior experience in electronics manufacturing or PCB handling. All participants were students, some of whom had limited prior exposure to augmented reality. Each session began with a briefing on the experiment’s objectives, followed by a training session to familiarize them with the HoloLens™ 2 and the AR application used.
Participants were asked to identify the same set of twelve PCB components at three predefined viewing angles—90°, 60°, and 45°—in a fixed sequence. For each participant, all twelve components were identified at one angle before proceeding to the next. This design ensured systematic and consistent progression through the task and avoided angle changes between individual components, reducing potential variability and cognitive load.
Accuracy in this study was assessed by calculating the Euclidean distances between the augmented positions and indicated positions relative to the reference positions, as described in Section 4. The results were visualized using scatter plots and box plots. Additionally, the average time participants took to confirm the stable augmentation or rendering of each component was recorded and analyzed.
Definition of Key Metrics:
For clarity, the following terms are used distinctly in this study:
Accuracy denotes the spatial precision of component localization, measured by the Euclidean distance between augmented/indicated positions and the reference position.
Correctness refers to a binary classification of whether the user-indicated position falls within the target component’s footprint.
Confidence reflects the average proportion of correct identifications made by participants per component size category and angle.
These definitions support consistent evaluation of user interaction and system performance.
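The three metrics defined above can be expressed in a few lines. The sketch below is illustrative only; in particular, the rectangular footprint in `is_correct` is a simplifying assumption, as the study's component footprints also include other shapes.

```python
import math

def accuracy_error(pos, ref):
    """Accuracy: Euclidean distance between an augmented or indicated
    position and the reference position (same units as the coordinates)."""
    return math.hypot(pos[0] - ref[0], pos[1] - ref[1])

def is_correct(indicated, footprint):
    """Correctness: True if the indicated point falls inside or on the
    boundary of the component footprint, here assumed to be an
    axis-aligned rectangle (xmin, ymin, xmax, ymax)."""
    x, y = indicated
    xmin, ymin, xmax, ymax = footprint
    return xmin <= x <= xmax and ymin <= y <= ymax

def confidence(outcomes):
    """Confidence: proportion of correct identifications in a group."""
    return sum(outcomes) / len(outcomes)
```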

5.2.1. Positional Accuracy Analysis

Position accuracy measures the alignment of augmented (system-provided) and indicated (user-identified) positions with the reference (actual component) position, calculated using the Euclidean distance.
Figure 12 illustrates the average error vectors for augmented and indicated positions at 90°, 60°, and 45° viewing angles. Average error vectors represent the mean displacement in X and Y coordinates between the reference position and the augmented or indicated positions for each component. These vectors indicate the magnitude and direction of error, with shorter vectors representing higher accuracy and longer vectors indicating larger deviations. At 90°, both augmented and indicated positions are closely clustered around the reference position, with minimal error vectors, demonstrating the highest accuracy. At 60°, augmented positions remain close to the reference, but indicated positions show greater variability, reflecting a slight decline in user interaction accuracy. At 45°, both augmented and indicated positions exhibit significant deviations for the augmented positions.
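The average error vectors of the kind shown in Figure 12 amount to the mean per-axis displacement from the reference position; a minimal sketch:

```python
def mean_error_vector(positions, ref):
    """Mean displacement (dx, dy) of a set of measured positions from a
    reference point; the vector's length gives the magnitude of the
    systematic error, and its direction the dominant bias."""
    n = len(positions)
    dx = sum(x - ref[0] for x, _ in positions) / n
    dy = sum(y - ref[1] for _, y in positions) / n
    return dx, dy
```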
Figure 13 summarizes distance errors across all angles in more detail. The 90° angle yields the highest accuracy (lowest distance errors) for both augmented and indicated positions. There is a notable difference between augmentation accuracy and indicated position accuracy for decreasing viewing angles. While indication accuracy remains almost constant at moderate levels for all viewing angles, augmentation accuracy is very high at perpendicular viewing angles; however, it drops dramatically at a 45° angle. This finding is not unexpected, as the marker visibility (the actual PCB) decreases at low viewing angles. Also, the assumption that the texture image of the PCB is a good representation of the object to be tracked holds only for flat structures (like the PCB) under almost perpendicular viewing directions. Indicated position accuracy is, on average, about 2 mm, regardless of the viewing angle. Whereas marker position accuracy is far better than 2 mm at 90°, it seems human perceptual and neuro-motoric factors set limits for indication accuracy. On the other hand, while marker augmentation accuracy decreased at low viewing angles, users still maintained better accuracy when indicating component positions. In other words, users were able to compensate for errors in the augmentation using contextual cues other than only the markers.

5.2.2. Component Identification Correctness Analysis

Identification correctness evaluates participants’ ability to correctly identify PCB components based on the AR system’s augmented positions. The correctness of users’ responses was determined by assessing whether the indicated position fell inside or on the boundary of each component’s mapped area (center, size, and shape). A component was deemed correctly identified if the indicated position was within the mapped boundary; otherwise, it was considered incorrect (see Figure 14).
The highest correctness was observed at 90°, where participants correctly identified an average of 10 out of 12 components. At this angle, the augmented and indicated positions were closely aligned with the reference position, resulting in minimal errors and enabling more accurate identifications. At a 60° viewing angle, correctness decreased to an average of 7 out of 12 components and was lowest at 45°, with only 5 out of 12 components correctly identified on average. The scatter plot in Figure 13 shows observations at 45° and illustrates the earlier finding: users seemed to compensate for large errors in the augmented marker positions (see, e.g., component 22), allowing them to still determine the correct component. Also, the spatial pattern and distribution of indicated positions are, quite expectedly, similar for all components regardless of component size, meaning that the likelihood of correctly identifying a component increases with component size.
The analysis also reveals that components placed near the edges of the workspace, such as P7 and U4, were identified with greater correctness, possibly due to reduced visual clutter. In contrast, centrally located components in denser regions showed higher variability in indicated positions. Altogether, these observations showcase the influence of component size, shape, spatial arrangement, and viewing angle on identification correctness, with larger, circular, and isolated components being easier to identify, even under oblique viewing angles.
The component identification correctness, as defined in this analysis, serves as an intuitive measure to assess the usability of the AR-based augmentation. By comparing the spatial “footprint” of a component with the user-indicated positions, this metric tends to favor larger components with a uniform, convex structure (e.g., P7 and U4), which are more easily aligned within the AR overlay. However, it penalizes smaller or elongated components, such as resistors or SD card connectors, which often have narrow footprints and are more sensitive to angle and orientation. This introduces a geometric bias that should be considered when interpreting identification accuracy across diverse component types.
The alternative to not correctly identifying a given component on the PCB is either not identifying a component at all or mistakenly identifying another component near the intended component. The likelihood of opting for such an incorrect nearby component depends naturally on the component density of a given PCB and how far away the identified positions are from the intended components’ borders.
We, therefore, include in our analysis another measure that indicates the shortest distances of identified positions from the intended component boundaries.
The plot in Figure 15 provides an account of the distances from component boundaries for the identified and augmented positions at different viewing angles. As the viewing angle becomes more oblique, the distances and variabilities increase. For the most unfavorable viewing condition at a 45° angle, distances from the component borders are substantially closer for the identified positions as compared with the augmented positions, suggesting again that users seem to compensate for the marker positioning errors. The upper quartile for the distance of the indicated positions is, in all conditions, approximately 0.1 cm or less. In other words, when components were not identified correctly based on the measure above, the indicated position was within 1 mm of the component boundary in 75% of the cases.
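The boundary-distance measure can be sketched as a point-to-rectangle distance, again under the simplifying assumption of rectangular footprints (the study's actual footprints may differ in shape):

```python
import math

def distance_outside(point, rect):
    """Shortest distance from a point to a rectangular component footprint
    (xmin, ymin, xmax, ymax); returns 0.0 if the point is inside or on
    the boundary."""
    x, y = point
    xmin, ymin, xmax, ymax = rect
    dx = max(xmin - x, 0.0, x - xmax)   # horizontal overshoot, if any
    dy = max(ymin - y, 0.0, y - ymax)   # vertical overshoot, if any
    return math.hypot(dx, dy)
```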

5.2.3. Component Identification Accuracy and Confidence

In this study, identification confidence refers to the proportion of correctly identified components averaged across participants for each component size category and viewing angle. It serves as a quantitative measure of how reliably users can locate and identify components using the AR interface. Components were grouped into three size categories: Small (Area ≤ 0.05 cm²), Medium (0.05 cm² < Area ≤ 0.15 cm²), and Large (Area > 0.15 cm²). The dataset included seven Small, four Medium, and one Large component. However, this size distribution was a consequence of the available PCB layout rather than a controlled design choice, resulting in an unintentional imbalance, particularly the presence of only one large component, which limits the generalizability of size-related findings.
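A minimal sketch of the size binning and per-group confidence computation follows; the thresholds match the category definitions above, while the record format (area, angle, correctness flag) is an assumption for illustration.

```python
from collections import defaultdict

def size_category(area_cm2):
    """Size bins as defined in the study (component area in cm^2)."""
    if area_cm2 <= 0.05:
        return "Small"
    if area_cm2 <= 0.15:
        return "Medium"
    return "Large"

def confidence_by_group(records):
    """records: iterable of (area_cm2, angle_deg, correct) tuples.
    Returns {(size_category, angle): proportion of correct identifications}."""
    tally = defaultdict(lambda: [0, 0])          # key -> [hits, trials]
    for area, angle, correct in records:
        key = (size_category(area), angle)
        tally[key][0] += int(correct)
        tally[key][1] += 1
    return {k: hits / n for k, (hits, n) in tally.items()}
```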
The grouped bar chart (Figure 16) shows that Medium-sized components maintained consistently high confidence scores across all angles, suggesting they are reliably identified regardless of viewing perspective. Small components exhibited a decrease in confidence at more oblique angles, with the lowest confidence observed at 45°, indicating that identification becomes more difficult as visibility decreases. The single Large component demonstrated a trend of increasing confidence at steeper viewing angles. However, this observation remains preliminary and cannot support generalizable conclusions due to the limited representation of large components in the dataset. While the results overall suggest that both component size and viewing angle influence identification performance—medium-sized components exhibiting greater robustness and small components being more angle-sensitive—interpretations regarding large components should be viewed as indicative rather than conclusive.

5.2.4. Time Analysis Across Viewing Angles

While no statistically significant differences were found in task completion times across the three viewing angles (p > 0.05), descriptive analysis offers additional insight. Table 5 summarizes the mean, standard deviation, and percentile ranges for each angle. Task performance was broadly consistent, with only modest variation across angles. Notably, the 90° angle exhibited slightly higher variability (SD = 1.81 s), which may reflect early-stage familiarization effects, as this condition was always presented first. Participants may have required additional time to adjust to the AR interface and interaction mechanisms during their initial exposure. In contrast, the 60° angle showed the most consistent performance (SD = 0.69 s), suggesting that once familiarized, participants executed the task more uniformly. The 45° condition showed a moderate spread in times, possibly due to minor ergonomic or marker visibility challenges at steeper angles. These trends are illustrated in Figure 17 (boxplot) and Table 5 (descriptive summary), supporting the overall finding of comparable mean performance with limited but interpretable variation.
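The descriptive summary reported in Table 5 (mean, standard deviation, and percentile ranges) can be reproduced with Python's standard `statistics` module; a minimal sketch with made-up times:

```python
import statistics

def describe(times):
    """Mean, sample standard deviation, and quartiles of task times (s)."""
    q1, median, q3 = statistics.quantiles(times, n=4)  # default 'exclusive' method
    return {"mean": statistics.mean(times),
            "sd": statistics.stdev(times),
            "q1": q1, "median": median, "q3": q3}

summary = describe([1.0, 2.0, 3.0, 4.0, 5.0])
```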

5.2.5. Usability and User Experience

The usability of the HoloLens™ 2 AR headset for identifying PCB components across three viewing angles (45°, 60°, and 90°) was evaluated through a detailed participant feedback questionnaire. The survey addressed usability, interface design, physical and visual discomfort, marker precision, learning experience, and overall limitations. Responses were collected using a 5-point Likert scale, where participants evaluated statements or questions as follows:
  • Very Poor (1): Completely inadequate or unsatisfactory.
  • Poor (2): Significant issues or limitations.
  • Neutral (3): Acceptable, neither positive nor negative.
  • Good (4): Effective with minor limitations.
  • Very Good (5): Highly effective and satisfactory.
The full questionnaire used for this evaluation is provided in Questionnaire S2 in the Supplementary Materials.
Open-ended responses were also gathered to supplement quantitative findings with qualitative insights.
Seven out of ten participants rated the usability of the HoloLens™ and Tracker application as either “Good” (4) or “Neutral” (3), indicating moderate intuitiveness with clear potential for improvement. Participants with prior AR experience navigated the interface more easily, while first-time users often rated usability as “Somewhat Difficult” (2), highlighting challenges related to interaction mechanisms. Several novice users emphasized the need for better onboarding resources, suggesting that clearer navigation guidance and a more intuitive interface could significantly enhance the user experience.
Feedback on the Tracker application—the AR interface used for marker-based component identification—was mixed. While many participants appreciated its utility in guiding users to the correct components through visual overlays, some noted issues, such as delayed responsiveness and difficulty achieving stable marker alignment, which affected the precision of the AR overlay on the physical PCB. Suggested improvements included enhancing the application’s responsiveness and incorporating clearer, more intuitive interaction cues to support more reliable tracking and guidance.

Physical and Visual Discomfort

Discomfort related to both the physical and visual aspects of the HoloLens™ headset was frequently reported. On the physical side, participants mentioned the weight of the device and the pressure from the straps around the head and nose. Five out of ten participants expressed concerns regarding prolonged use, with comments such as “The headset felt heavy after a few minutes” and “The straps were difficult to adjust for comfort”.
Visual discomfort was also a recurring issue. Participants reported difficulty focusing, compatibility problems with glasses, and flickering or misaligned markers in the AR overlay. These issues were particularly noticeable at oblique viewing angles (e.g., 45°), where tracking instability often led to reduced performance. As one participant noted, “The flickering markers made it difficult to maintain focus”. Collectively, these responses highlight the need for improved ergonomics and visual stability in AR systems to support longer, more complex tasks.

Precision of 3D Markers

The precision and stability of 3D markers were critical to successful task completion. While markers were accurate under optimal conditions, their precision decreased significantly at steeper angles. Common issues included flickering, delays, and misalignment of markers. Suggestions for improvement include the following:
  • Increasing marker contrast and size to enhance visibility.
  • Reducing flickering for more stable positioning.
A participant remarked, “The markers need to glow or blink more clearly to make identification easier at sharper angles”.

Learning and Ease of Use

Participants with prior experience in AR technology adapted quickly, rating the learning process as “Good” (4) or “Very Good” (5). However, novice users struggled more, with many rating the experience as “Neutral” (3) or “Somewhat Difficult”. Feedback emphasized the need for interactive tutorials, step-by-step guides, and better onboarding resources to help first-time users gain proficiency.

Overall Feedback and Limitations

The participant feedback underscores both the potential and limitations of the HoloLens™ 2 AR system for PCB component identification tasks. While it demonstrates promise in terms of usability and functionality, areas such as physical discomfort, visual clarity, marker precision, and onboarding for novices require significant improvement to enhance its practicality for prolonged use. Addressing these issues would improve user experience and broaden its applicability.

Results Summary

To support a clearer synthesis of the study’s outcomes, Table 6 provides a structured summary of the two experiments, highlighting their primary focus, analyzed variables, and key findings. This overview is intended to facilitate rapid comprehension of the study design and results for future researchers and practitioners considering AR integration in electronics manufacturing workflows.

6. Discussion

This study evaluated the effectiveness of augmented reality (AR) technology, specifically the Microsoft™ HoloLens™ 2, for identifying and localizing PCB components through two complementary experiments. Experiment 1 compared AR with traditional methods, revealing that the traditional approach enabled faster localization times, particularly for experienced users and smaller components. In contrast, the AR system offered comparable performance in some cases, notably among non-expert users. These findings align with previous observations that traditional methods benefit from user familiarity and well-established workflows [20,24]. However, challenges with AR, such as marker stability, ergonomic comfort, and variability in task execution times, were evident, corroborating concerns raised in recent industrial AR evaluations [25].
Experiment 2 focused exclusively on the AR system to assess identification correctness across varying viewing angles. The results demonstrated that AR allowed for accurate component identification overall, though performance varied noticeably with viewing angle and component size. Specifically, oblique viewing angles and smaller components were associated with reduced accuracy, highlighting the critical role of visual alignment and marker recognition [26,27]. These findings align with prior research on AR tracking, which has identified marker visibility and alignment as significant determinants of accuracy in industrial contexts [27,28]. The study underscores AR’s potential to support PCB localization tasks, especially for users with limited domain expertise, if improvements are made in tracking stability, user interface design, and marker visibility [19].
The analysis of viewing angles in Experiment 2 identified 90° as the optimal condition for AR performance, with both augmented and indicated positions closely aligned to the reference positions. At this angle, error distances and variability were minimized, demonstrating the system’s effectiveness under conditions of optimal visibility and stability [28,29]. At 60°, performance declined, with increased variability in indicated positions, reflecting challenges in user interaction and marker alignment. The 45° condition presented the most substantial limitations, with elevated error rates and pronounced variability in both augmented and indicated positions. Similar findings have been reported in marker-based AR research, where oblique angles impair tracking performance due to surface visibility and occlusion issues [27,29]. Notably, despite these limitations, participants maintained strong identification performance at challenging angles, suggesting they compensated using contextual spatial cues beyond the augmented markers, consistent with established spatial cognition theories [28].
Nonetheless, it is important to acknowledge that the current system relies solely on Vuforia™-based image-target tracking using a texture image of the PCB. While this approach avoided the need for dedicated fiducial markers, it proved sensitive to visual complexity and lighting. Densely packed or miniaturized component regions reduced marker stability due to insufficient distinctive texture features, as noted in Vuforia's best practices [27]. These limitations were particularly evident under oblique viewing angles, where loss of tracking precision was reported by users. This represents a technical limitation of the current study, as alternative tracking modalities—such as spatial anchors, depth sensing, or SLAM-based tracking—were not explored. Future work may compare these hybrid approaches to assess whether they can improve robustness and reduce angular sensitivity, ultimately enhancing AR performance in real-world electronics manufacturing environments [30].
In Experiment 1, traditional methods outperformed AR in task completion times, particularly among expert users, likely due to their familiarity with established workflows and reliance on cognitive shortcuts [20,29]. In contrast, the AR system introduced additional interaction complexity, which may have contributed to slower performance [16]. Prior studies have confirmed that AR interfaces, while supportive for novices, impose higher cognitive loads during initial use, especially under conditions of marker tracking variability [17,31,32]. Moreover, participants performed tasks faster during their initial sequence, irrespective of method, indicating possible attention decline or cognitive fatigue in subsequent phases [32,33].
Component size was a significant factor influencing performance in Experiment 2. Medium-sized components yielded the highest identification confidence across all viewing angles, likely due to their optimal balance between visibility and spatial footprint. In contrast, small components consistently posed challenges, particularly at 45°, where confidence was lowest, aligning with prior findings on visual resolution and perception limits in AR environments [32]. Confidence for the single Large component was comparatively high at steeper angles; however, this result is based on a single data point and cannot support general conclusions. The limited and unbalanced representation of component sizes, especially the small number of Medium components (n = 4) and only one large component, further restricts the validity of size-based trends. Additionally, the use of a binary correctness metric based on point-in-polygon inclusion may have introduced geometric bias, disproportionately benefiting larger or convex-shaped components. This metric does not account for footprint area or shape complexity, potentially confounding component-wise comparisons. To improve statistical rigor and interpretation in future work, studies should incorporate a more balanced distribution of component geometries and adopt alternative or shape-adjusted performance measures.
Although no significant time differences were observed across the three viewing angles, this does not preclude the presence of increased cognitive effort, particularly at oblique angles, where component identification was more visually demanding. Participants may have adapted their strategies (e.g., leaning, eye focus adjustment) to maintain performance, which would not be reflected in raw task time. This highlights the need for future studies to incorporate complementary measures, such as eye-tracking, head movement analysis, or subjective cognitive load assessments (e.g., NASA-TLX), to more holistically evaluate effort and mental demand across spatial conditions [34].
Marker stability was a critical factor impacting AR accuracy and user trust, particularly in Experiment 2. Participants reported flickering, delays, and misalignment under oblique viewing conditions, aligning with recent findings on AR marker reliability in industrial settings [23,27]. At 45°, error distances from component boundaries increased for augmented positions, while indicated positions remained comparatively stable, suggesting reliance on contextual spatial cues to compensate for marker instability. This aligns with established cognitive compensation strategies observed in AR environments [28,35]. Improvements in marker precision through adaptive rendering and enhanced tracking algorithms would strengthen AR reliability for complex tasks [27,30].
Usability feedback revealed multiple ergonomic and interaction-related limitations. Half of the participants reported physical discomfort from the HoloLens™ 2 headset, particularly regarding weight and strap pressure. Visual issues included difficulty focusing, flickering overlays, and challenges for glasses wearers, especially at oblique angles. Such discomforts have been recognized in ergonomic evaluations of head-mounted displays, where device weight and interface stability significantly influence prolonged usability [36,37]. These concerns raise important implications for industrial settings where technicians may be required to wear AR headsets for extended periods. Although HoloLens™ 2 improves weight distribution over earlier models, several participants reported discomfort from forehead pressure, strap tension, and general fatigue after only brief use. These findings are consistent with established ergonomic design guidance, which emphasizes limiting head-mounted device weight to under 500 g, ensuring balanced front-back load distribution, and avoiding continuous use beyond 30–60 min without breaks to prevent fatigue and neck strain [38,39]. In real-world long-duration workflows, such physical discomfort may contribute to reduced concentration, increased cognitive load, or abandonment of AR-based support tools. These findings align with recent studies on head-mounted display ergonomics, which highlight the importance of minimizing neck torque, balancing device weight, and enhancing fit to ensure sustained comfort and performance [36,39]. Future iterations of AR systems should prioritize lighter materials, modular strapping mechanisms, and improved ventilation to enable extended industrial usability.
The user interface also posed challenges, particularly for first-time users, who struggled with precise interaction and navigation. Participants with prior AR experience adapted more quickly, while novice users emphasized the need for better onboarding and clearer interaction cues. Similar challenges have been reported in AR usability studies, where steep learning curves impact initial task performance [40]. Suggested improvements included larger, higher-contrast markers, interactive tutorials, and more intuitive interface designs.
While the current study focused exclusively on component identification, we acknowledge that PCB assembly encompasses a broader range of tasks, including placement, soldering, and inspection. The findings reported here are most applicable to identification and visual referencing tasks, where spatial guidance is critical. Future research should extend this framework to evaluate the applicability of AR systems in more interactive and sequential assembly steps, where motor precision, timing, and multi-modal feedback are involved. Recent work has demonstrated the potential of AR to support such complex operations by enhancing multi-modal perception and interaction [40] and optimizing task performance in precise manual assembly scenarios [41].
The precision and stability of 3D markers were essential for successful task completion. While markers performed well under optimal conditions, their precision declined at steeper angles. Participants recommended increasing marker size and contrast and reducing flickering to improve stability. These findings align with previous ergonomic research emphasizing display clarity and visual marker stability as key to effective AR deployment [27,36].
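One plausible first-order explanation for the angle effect is geometric foreshortening: the projected area of a planar marker shrinks with the sine of the viewing angle measured from the PCB plane, leaving fewer usable pixels for tracking. The model below is our own simplification for intuition only; it is not the tracking algorithm used by the HoloLens™ 2 or Vuforia.

```python
import math

def apparent_area_ratio(viewing_angle_deg):
    """Fraction of a flat marker's area presented to the camera, with the
    viewing angle measured from the PCB plane (90° = perpendicular)."""
    return math.sin(math.radians(viewing_angle_deg))

for angle in (90, 60, 45):
    print(f"{angle} deg: {apparent_area_ratio(angle):.2f}")
# 90 deg: 1.00, 60 deg: 0.87, 45 deg: 0.71 — roughly mirroring the
# observed decline in marker precision at oblique angles.
```

Under this assumption, enlarging markers or boosting contrast, as participants suggested, compensates for the reduced effective resolution at shallow angles.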
A key limitation of this study is the relatively small sample size (n = 10 per experiment), which constrains the statistical power of both within-group and between-group comparisons. Subgroup analyses, such as those based on user expertise or component size, lack sufficient statistical resolution to support definitive conclusions and may reflect individual variability or task-specific factors. In addition, long-term adoption of AR systems in industrial settings depends not only on performance metrics but also on user acceptance and training effectiveness—factors increasingly recognized in immersive technology research [42]. While the findings offer preliminary insights into AR-assisted component identification performance, they require confirmation through studies with larger and more diverse participant cohorts to enable statistically robust inferences and broader generalizability across industrial applications [43].
In addition, Experiment 1 participants were recruited from the same industrial company, and the PCB used in both experiments represented a single, moderately complex design. While this allowed for consistency in experimental control and reduced confounding variables, it limits the ecological validity of the findings. Likewise, the non-domain participants in Experiment 2 represent a narrow user profile, which may not reflect the diversity of skills and roles found in real industrial environments. Future research should incorporate a broader range of PCB types and layouts, as well as participant groups from multiple industrial domains, to better evaluate the generalizability and robustness of AR-based component identification systems across varied production scenarios.
A further limitation in Experiment 1 is the use of a fixed task sequence, with all participants performing the traditional method before the AR-based task. This design may have introduced order effects, such as fatigue or reduced attention in the second phase, potentially confounding the observed performance differences. While the faster performance in Run 1 may reflect such effects, it cannot be disentangled from method-specific influences. Future research may consider using counterbalanced or randomized task sequences, such as Latin square designs [44], Williams designs [45], or randomized complete block designs [46] to better isolate genuine method effects and reduce sequence-related bias.
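For an even number of conditions, a Williams design can be generated with a short procedure: build a first row of the form 0, 1, n-1, 2, n-2, ..., then shift it cyclically. The sketch below is a generic illustration of the counterbalancing idea referenced above, not code from the study.

```python
def williams_design(n):
    """Balanced Latin square for an even number n of conditions: every
    condition appears once per row and column, and each condition
    immediately precedes every other condition exactly once."""
    first, low, high = [0], 1, n - 1
    take_low = True
    while len(first) < n:
        if take_low:
            first.append(low)
            low += 1
        else:
            first.append(high)
            high -= 1
        take_low = not take_low
    # Each subsequent row is the first row shifted by +1 modulo n.
    return [[(c + r) % n for c in first] for r in range(n)]

# With four task orders, participants are assigned to rows in rotation:
print(williams_design(4))
# → [[0, 1, 3, 2], [1, 2, 0, 3], [2, 3, 1, 0], [3, 0, 2, 1]]
```

Assigning participants evenly across rows balances both position and carry-over effects, which would address the order confound described above.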
While the present study provides controlled insights into AR-based component identification, translating these findings into industrial-scale deployment involves additional challenges. In actual manufacturing environments, PCBs are often more densely populated, with components varying in size, shape, and labeling clarity [47,48]. Lighting conditions can also fluctuate, affecting marker detection and spatial mapping accuracy [47]. Moreover, technicians may use AR systems for extended periods during multi-step assembly or inspection tasks, where cumulative physical strain and marker drift may become more pronounced [20]. These contextual factors were not fully replicated in the current experimental setup. As such, future research should validate AR performance in representative industrial settings—integrating more complex PCBs, varied ambient conditions, and extended use durations—to ensure system robustness, ergonomic viability, and sustained task accuracy over time.
Overall, this study demonstrates the promise of AR systems like the HoloLens™ 2 in supporting non-expert users with guided identification tasks on PCBs. However, traditional methods remain faster and more consistent for experienced users. The variability in AR performance across viewing angles and component types highlights the need for targeted improvements to position AR as a viable complement in real-world workflows.
While this study focused on the use of a head-mounted AR device (Microsoft™ HoloLens™ 2), it is essential to contextualize its performance relative to alternative AR platforms, such as tablet- or phone-based systems. Handheld AR devices are generally more affordable and easier to deploy, making them attractive for broader adoption in field-based or cost-sensitive environments. However, they often suffer from reduced spatial anchoring accuracy, lack of hands-free interaction, and ergonomic challenges during prolonged use, which can limit their effectiveness in precision-critical manufacturing and inspection workflows [49]. In contrast, head-mounted displays like the HoloLens™ 2 offer hands-free operation, robust spatial mapping, and immersive visualization, which better align with tasks requiring real-time spatial awareness and uninterrupted manual engagement [50]. Comparative studies have further emphasized that HMDs can lead to improved task performance, reduced error rates, and better user experience in guided assembly contexts due to their enhanced interaction capabilities and spatial fidelity [49]. Furthermore, the integration of such devices supports the human-centric vision of Industry 5.0, where seamless collaboration between digital tools and operators is prioritized [51]. Future research should explore hybrid or mobile-first AR frameworks that balance cost, spatial precision, and user comfort for varying industrial needs.
Future research should address both the technical and ergonomic challenges identified here. Priorities include enhancing marker stability through hardware and software advances, improving ergonomic design for prolonged use, and developing onboarding resources tailored to novice users. Design enhancements should follow established AR usability heuristics [39], focusing on visibility, feedback, and cognitive load minimization. Additionally, larger, more diverse participant groups and complex PCB layouts should be included to validate and extend current findings. Another limitation to acknowledge is the unbalanced distribution of component sizes used in this study, which restricts the generalizability of size-related observations. Through iterative improvements and focused research, AR systems can evolve into reliable, efficient tools in electronics manufacturing and beyond, effectively bridging the gap between traditional expertise and emerging digital workflows.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15116331/s1, Questionnaire S1: Participant Feedback Questionnaire–Experiment 1: Component Identification Using AR and Traditional Methods; Questionnaire S2: Participant Feedback Questionnaire–Experiment 2: AR-Based PCB Identification Using HoloLens 2.

Author Contributions

Conceptualization, S.S.; methodology, S.S., K.C. and A.R.; application development: A.R.; investigation Experiment 1: A.R.; investigation Experiment 2: K.C.; data analysis Experiment 1: A.R. and K.C.; data analysis Experiment 2: K.C. and J.Å.; writing—original draft preparation: K.C.; writing—review and editing, K.C., S.S. and J.Å. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly funded by the European Regional Development Fund (through the Swedish Agency for Economic and Regional Growth) and the Gävleborg Region (contract ID 20201871).

Institutional Review Board Statement

This study involved adult participants in a usability testing context, where they interacted with augmented visualizations of PCB components. No physical intervention, health-related procedures, or collection of sensitive personal data were involved. According to the Swedish Act concerning the Ethical Review of Research Involving Humans (SFS 2003:460, Chapter 2, Sections 1–4), ethical approval is required only when research involves physical intervention, psychological impact, traceable biological material, or sensitive personal data. As this study did not include any of these elements, it is exempt from mandatory ethical review. The study was conducted in accordance with the Declaration of Helsinki (2013 revision) and good research practice guidelines.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article/Supplementary Materials. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Andreas Roghe was employed by Hellman Dynamic Gävle AB. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Sharma, M.D.; Kiran, K.K.; Surya, M.P.; Singh, N.K.; Choudhary, P.R. Printed Circuit Board Defect Detection Using Machine Learning. Int. J. Tech. Res. Sci. 2024, 9, 27–35.
  2. Akhtar, M.B. The Use of a Convolutional Neural Network in Detecting Soldering Faults from a Printed Circuit Board Assembly. HighTech Innov. J. 2022, 3, 1–14.
  3. Dai, W.; Mujeeb, A.; Erdt, M.; Sourin, A. Soldering defect detection in automatic optical inspection. Adv. Eng. Inform. 2020, 43, 101004.
  4. Sankar, V.U.; Lakshmi, G.; Sankar, Y.S. A Review of Various Defects in PCB. J. Electron. Test. 2022, 38, 1498–1504.
  5. Radamson, H.H.; He, X.; Zhang, Q.; Liu, J.; Cui, H.; Xiang, J.; Kong, Z.; Xiong, W.; Li, J.; Gao, J.; et al. Miniaturization of CMOS. Micromachines 2019, 10, 293.
  6. Lukacs, P.; Rovensky, T.; Otahal, A. A Contribution to Printed Circuit Boards’ Miniaturization by the Vertical Embedding of Passive Components. J. Electron. Packag. Trans. ASME 2024, 146, 011004.
  7. Gattullo, M.; Laviola, E.; Evangelista, A.; Fiorentino, M.; Uva, A.E. Towards the Evaluation of Augmented Reality in the Metaverse: Information Presentation Modes. Appl. Sci. 2022, 12, 12600.
  8. Morales Méndez, G.; del Cerro Velázquez, F. Augmented Reality in Industry 4.0 Assistance and Training Areas: A Systematic Literature Review and Bibliometric Analysis. Electronics 2024, 13, 1147.
  9. Vidal-Balea, A.; Blanco-Novoa, O.; Fraga-Lamas, P.; Vilar-Montesinos, M.; Fernández-Caramés, T.M. Creating Collaborative Augmented Reality Experiences for Industry 4.0 Training and Assistance Applications: Performance Evaluation in the Shipyard of the Future. Appl. Sci. 2020, 10, 9073.
  10. Yeow, P.H.P.; Sen, R.N. Ergonomics improvements of the visual inspection process in a printed circuit assembly factory. Int. J. Occup. Saf. Ergon. 2004, 10, 369–385.
  11. Park, S.; Bokijonov, S.; Choi, Y. Review of Microsoft HoloLens applications over the past five years. Appl. Sci. 2021, 11, 7259.
  12. Vidal-Balea, A.; Blanco-Novoa, O.; Fraga-Lamas, P.; Vilar-Montesinos, M.; Fernández-Caramés, T.M. A Collaborative Augmented Reality Application for Training and Assistance during Shipbuilding Assembly Processes. Proceedings 2020, 54, 4.
  13. Chatterjee, I.; Pforte, T.; Tng, A.; Salemi Parizi, F.; Chen, C.; Patel, S. ARDW: An Augmented Reality Workbench for Printed Circuit Board Debugging. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (UIST 2022), Bend, OR, USA, 29 October–2 November 2022.
  14. Hahn, J.; Ludwig, B.; Wolff, C. Augmented reality-based training of the PCB assembly process. ACM Int. Conf. Proceeding Ser. 2015, 395–399.
  15. Yang, X.; Yang, J.; He, H.; Chen, H. A Hybrid 3D Registration Method of Augmented Reality for Intelligent Manufacturing. IEEE Access 2019, 7, 181867–181883.
  16. Tiwari, A.S.; Bhagat, K.K. Comparative analysis of augmented reality in an engineering drawing course: Assessing spatial visualisation and cognitive load with marker-based, markerless and Web-based approaches. Australas. J. Educ. Technol. 2024, 40, 19–36.
  17. Alessa, F.M.; Alhaag, M.H.; Al-harkan, I.M.; Ramadan, M.Z.; Alqahtani, F.M. A Neurophysiological Evaluation of Cognitive Load during Augmented Reality Interactions in Various Industrial Maintenance and Assembly Tasks. Sensors 2023, 23, 7698.
  18. Park, S.; Maliphol, S.; Woo, J.; Fan, L. Digital Twins in Industry 4.0. Electronics 2024, 13, 2258.
  19. Voinea, G.D.; Gîrbacia, F.; Duguleană, M.; Boboc, R.G.; Gheorghe, C. Mapping the Emergent Trends in Industrial Augmented Reality. Electronics 2023, 12, 1719.
  20. Palmarini, R.; Erkoyuncu, J.A.; Roy, R.; Torabmostaedi, H. A systematic review of augmented reality applications in maintenance. Robot. Comput. Integr. Manuf. 2018, 49, 215–228.
  21. Chatterjee, I.; Khvan, O.; Pforte, T.; Li, R.; Patel, S. Augmented Silkscreen: Designing AR Interactions for Debugging Printed Circuit Boards. In Proceedings of the 2021 ACM Designing Interactive Systems Conference (DIS 2021): Nowhere and Everywhere, Virtual, 28 June–2 July 2021; Association for Computing Machinery, Inc.: New York, NY, USA, 2021; pp. 220–233.
  22. Fernández del Amo, I.; Erkoyuncu, J.A.; Roy, R.; Palmarini, R.; Onoufriou, D. A systematic review of Augmented Reality content-related techniques for knowledge transfer in maintenance applications. Comput. Ind. 2018, 103, 47–71.
  23. Haraguchi, D.; Miyahara, R. High Accuracy and Wide Range Recognition of Micro AR Markers with Dynamic Camera Parameter Control. Electronics 2023, 12, 4398.
  24. Syberfeldt, A.; Danielsson, O.; Gustavsson, P. Augmented Reality Smart Glasses in the Smart Factory: Product Evaluation Guidelines and Review of Available Products. IEEE Access 2017, 5, 9118–9130.
  25. Roberts, J.; Christian, S. User Comfort in VR/AR Headsets: A Mathematical Investigation into Ergonomic and Functional Limitations of Eye Tracking Technology. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2024, 68, 1235–1239.
  26. Ariwa, M.; Itamiya, T.; Koizumi, S.; Yamaguchi, T. Comparison of the observation errors of augmented and spatial reality systems. Appl. Sci. 2021, 11, 12076.
  27. Vuforia Developer Library. Best Practices for Designing and Developing Image-Based Targets. Vuforia. 2024. Available online: https://developer.vuforia.com/library/vuforia-engine/images-and-objects/image-targets/best-practices/best-practices-designing-and-developing-image-based-targets/ (accessed on 20 May 2024).
  28. Deshpande, A.; Kim, I. The effects of augmented reality on improving spatial problem solving for object assembly. Adv. Eng. Inform. 2018, 38, 760–775.
  29. Hou, L.; Wang, X.; Truijens, M. Using Augmented Reality to Facilitate Piping Assembly: An Experiment-Based Evaluation. J. Comput. Civ. Eng. 2015, 29, 05014007.
  30. Xu, M.; Shu, Q.; Huang, Z.; Chen, G.; Poslad, S. ARLO: Augmented Reality Localization Optimization for Real-Time Pose Estimation and Human–Computer Interaction. Electronics 2025, 14, 1478.
  31. Seeliger, A.; Merz, G.; Holz, C.; Feuerriegel, S. Exploring the Effect of Visual Cues on Eye Gaze during AR-Guided Picking and Assembly Tasks. In Proceedings of the 2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct 2021), Virtual, 4–8 October 2021; pp. 159–164.
  32. Yang, Z.; Shi, J.; Jiang, W.; Sui, Y.; Wu, Y.; Ma, S.; Kang, C.; Li, H. Influences of augmented reality assistance on performance and cognitive loads in different stages of assembly task. Front. Psychol. 2019, 10, 1703.
  33. Devos, H.; Gustafson, K.; Ahmadnezhad, P.; Liao, K.; Mahnken, J.D.; Brooks, W.M.; Burns, J.M. Psychometric properties of NASA-TLX and index of cognitive activity as measures of cognitive workload in older adults. Brain Sci. 2020, 10, 994.
  34. Kim, J.Y.; Choi, J.K. Effects of AR on Cognitive Processes: An Experimental Study on Object Manipulation, Eye-Tracking, and Behavior Observation in Design Education. Sensors 2025, 25, 1882.
  35. Qian, X.; He, F.; Hu, X.; Wang, T.; Ipsita, A.; Ramani, K. ScalAR: Authoring Semantically Adaptive Augmented Reality Experiences in Virtual Reality. In Proceedings of the Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022.
  36. Ito, K.; Tada, M.; Ujike, H.; Hyodo, K. Effects of the weight and balance of head-mounted displays on physical load. Appl. Sci. 2021, 11, 6802.
  37. Laviola, J.J. A Discussion of Cybersickness in Virtual Environments. ACM SIGCHI Bull. 2000, 32, 47–56.
  38. ISO/TS 9241-411:2012; Ergonomics of Human-System Interaction—Part 411: Evaluation Methods for the Design of Physical Input Devices. ISO: Geneva, Switzerland, 2012.
  39. Seeliger, A.; Netland, T.; Feuerriegel, S. Augmented Reality for Machine Setups: Task Performance and Usability Evaluation in a Field Test. Procedia CIRP 2022, 107, 570–575.
  40. Chen, L.; Zhao, H.; Shi, C.; Wu, Y.; Yu, X.; Ren, W.; Zhang, Z.; Shi, X. Enhancing Multi-Modal Perception and Interaction: An Augmented Reality Visualization System for Complex Decision Making. Systems 2024, 12, 7.
  41. Zhang, X.; He, W.; Bai, J.; Billinghurst, M.; Qin, Y.; Dong, J.; Liu, T. Evaluation of Augmented Reality instructions based on initial and dynamic assembly tolerance allocation schemes in precise manual assembly. Adv. Eng. Inform. 2025, 63, 102954.
  42. Bachmann, M.; Subramaniam, A.; Born, J.; Weibel, D. Virtual reality public speaking training: Effectiveness and user technology acceptance. Front. Virtual Real. 2023, 4, 1242544.
  43. Montgomery, D.C. Design and Analysis of Experiments; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2013.
  44. Jones, B.; Kenward, M.G. Design and Analysis of Cross-Over Trials, 3rd ed.; Chapman and Hall/CRC: New York, NY, USA, 2014; pp. 1–438.
  45. Toutenburg, H.; Shalabh. Statistical Analysis of Designed Experiments, 3rd ed.; Springer Texts in Statistics; Springer: New York, NY, USA, 2009; pp. 1–615.
  46. Masood, T.; Egger, J. Augmented reality in support of Industry 4.0—Implementation challenges and success factors. Robot. Comput. Integr. Manuf. 2019, 58, 181–195.
  47. Nee, A.Y.C.; Ong, S.K.; Chryssolouris, G.; Mourtzis, D. Augmented reality applications in design and manufacturing. CIRP Ann. Manuf. Technol. 2012, 61, 657–679.
  48. Leins, N.; Gonnermann-Müller, J.; Teichmann, M. Comparing head-mounted and handheld augmented reality for guided assembly. J. Multimodal User Interfaces 2024, 18, 313–328.
  49. Teruggi, S.; Fassi, F. HoloLens 2 Spatial Mapping Capabilities in Vast Monumental Heritage Environments. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 46, 489–496.
  50. Bazel, M.A.; Mohammed, F.; Baarimah, A.O.; Alawi, G.; Al-Mekhlafi, A.-B.A.; Almuhaya, B. The Era of Industry 5.0: An Overview of Technologies, Applications, and Challenges. In Advances in Intelligent Computing Techniques and Applications; Saeed, F., Mohammed, F., Fazea, Y., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 274–284.
  51. Stadler, S.; Cornet, H.; Frenkler, F. Assessing Heuristic Evaluation in Immersive Virtual Reality—A Case Study on Future Guidance Systems. Multimodal Technol. Interact. 2023, 7, 19.
Figure 1. Overview of the research design and workflow.
Figure 2. A red 3D marker for a component rendered on the PCB.
Figure 3. Texture-mapped digital twin of the PCB in the Unity™ game engine.
Figure 4. Flow chart outlining the step-by-step methodology of Experiment 2.
Figure 5. Close-up of the PCB mounted on a tripod at a 45° angle.
Figure 6. A participant pointing at the identified component.
Figure 7. Reference position.
Figure 8. Augmented position.
Figure 9. Identified position.
Figure 11. Comparison of localization times for SD card and resistor across methods.
Figure 12. Average error vectors for (a) 90-degree viewing angle, (b) 60-degree viewing angle, and (c) 45-degree viewing angle.
Figure 13. Errors in augmented and identified positions across all angles.
Figure 14. Component identification correctness at the 45-degree viewing angle. (Colored dots represent clusters of user-indicated positions for each component. Each color corresponds to a different component, while black squares and circles represent the true component boundaries as defined in the PCB layout).
Figure 15. Distribution of distances from component boundaries for augmented and identified positions across different viewing angles. Red boxplots indicate the spread of distances (median, IQR, and whiskers), and blue “+” symbols represent statistical outliers.
Figure 16. Average confidence by component size category at different viewing angles.
Figure 17. Task completion time (in seconds) across viewing angles (90°, 60°, and 45°). (Each blue boxplot represents the distribution of completion times, with the red horizontal line indicating the median. Red “+” symbols denote statistical outliers).
Table 1. Structure of the Dataset Collected During the Experiment.

Attribute | Description
Participant ID | Unique identifier for each participant (P1–P10)
Viewing Angle | One of the three predefined viewing angles
Component | Identifier for the PCB component being located
Time Taken | Time (in seconds) to complete the identification task
Screenshot Filename | File name of the screenshot captured
Table 2. Experiment 1 Overview and Analysis Focus.

Run | Focus | Groups | Participants | Variable | Analysis Focus
Run 1 | 10 Components (All) | Group A: Traditional First, HoloLens™ Second (n = 5) | 5 | Method Sequence: Traditional → HoloLens™ | Impact of method sequence on localization time.
Run 1 | 10 Components (All) | Group B: HoloLens™ First, Traditional Second (n = 5) | 5 | Method Sequence: HoloLens™ → Traditional | Impact of method sequence on localization time.
Run 2 | 2 Components (SD Card and Resistor) | Same Participants (n = 10) | 10 | Component Size: SD Card vs. Resistor | Effect of component size on localization time.
Run 3 | 10 Components (All) | Experts (≥8 years, n = 6) and Non-Experts (<8 years, n = 4) | 10 | Expertise: Experts vs. Non-Experts | Effect of expertise on localization time.
Table 4. Comparison of Localization Times Across Methods and Expertise.

Group | Method | Mean (s) | Median (s) | Wilcoxon Test (Statistic, p-Value)
Entire Population (n = 10) | Traditional | 16.61 | 10 | (3210.5, 0.0013)
Entire Population (n = 10) | HoloLens™ | 21.57 | 13.5 |
Experts (n = 6) | Traditional | 14.87 | 8 | (1480, 1.30e-06)
Experts (n = 6) | HoloLens™ | 23.60 | 16.5 |
Non-Experts (n = 4) | Traditional | 19.23 | 16.5 | (323, 0.4908)
Non-Experts (n = 4) | HoloLens™ | 18.53 | 10 |
Table 5. Descriptive Statistics of Task Completion Times by Viewing Angle (in seconds).

Viewing Angle | Mean | Std Dev | Variance | Min | 25th Percentile | Median | 75th Percentile | Max
90° | 3.26 | 1.81 | 3.28 | 1.26 | 2.16 | 2.59 | 3.40 | 8.46
60° | 2.48 | 0.69 | 0.47 | 1.36 | 1.96 | 2.46 | 2.83 | 4.22
45° | 2.45 | 1.25 | 1.57 | 0.97 | 1.31 | 2.28 | 3.10 | 6.25
Table 6. Summary of Experimental Results, Analyzed Variables, and Key Findings.

Experiment | Focus | Key Variables | Main Findings
Exp. 1 | AR vs. Traditional Method | Method, Component Size, User Expertise | Traditional method was faster overall, especially for experts. AR slightly better for non-experts. Component size and method order affected outcomes.
Exp. 1 | Usability and Qualitative Feedback | AR usability, marker precision, interface | Mixed usability ratings. Issues included marker flicker, interface learning curve, and headset discomfort. Suggestions made for improving UI and comfort.
Exp. 2 | Accuracy at Different Viewing Angles | Viewing Angle (90°, 60°, 45°), Position Error | Highest accuracy at 90°. Significant decline in AR augmentation accuracy at 45°. Users compensated for AR drift using spatial cues.
Exp. 2 | Component Identification Correctness | Correct/Incorrect identification, Position | Correctness highest at 90°, lowest at 45°. Larger components and edge placement improved results. Geometric bias noted for smaller components.
Exp. 2 | Confidence and Component Size | Component Size (Small, Medium, Large) | Medium components identified with highest confidence. Small components most affected by viewing angle. Limited generalizability for large components.
Exp. 2 | Time Analysis by Angle | Task Completion Time at 3 Angles | No significant difference. Slight variability across angles. Highest SD at 90° likely due to learning curve.
Exp. 2 | Usability Feedback | Physical, visual, interface, learning curve | Participants cited headset discomfort, visual strain, and usability limitations. Training and ergonomic improvements recommended.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chandel, K.; Seipel, S.; Åhlén, J.; Roghe, A. Augmented Reality for PCB Component Identification and Localization. Appl. Sci. 2025, 15, 6331. https://doi.org/10.3390/app15116331
