Article

Comparative Analysis of Different Display Technologies for Defect Detection in 3D Objects

Department of Computer Science, University of Ruse, 7017 Ruse, Bulgaria
*
Author to whom correspondence should be addressed.
Technologies 2025, 13(3), 118; https://doi.org/10.3390/technologies13030118
Submission received: 20 December 2024 / Revised: 27 February 2025 / Accepted: 8 March 2025 / Published: 14 March 2025
(This article belongs to the Section Information and Communication Technologies)

Abstract

This paper begins with an overview of current methods for displaying 3D objects. Two technologies are compared: a glasses-free 3D laptop that uses stereoscopy, and a system that uses front projection onto a silver-impregnated fabric screen that diffracts light to achieve a holographic effect. The research question is which of them is suitable for use by specialists. A methodology for an experiment was designed, and a scenario for solving the problem during the experiment was created. An experiment environment was set up with a separate workstation for each technology, plus an additional reference workstation with a standard screen. Three-dimensional CAD models from the field of mechanical engineering were chosen, and several categories of defects were introduced to make the models usable for the scenario—finding the defects at each of the different workstations. A survey for participant feedback, using several categories of questions, was created, refined, and administered during the experiment. After the experiment was completed, short discussions were held with each participant, and their feedback was analyzed. The participant categories were discussed, the experiment results were analyzed, and statistical analysis was performed on the survey results. The applicability of the experiment to other fields was considered. Conclusions were drawn, and the comparative advantages and specifics of each technology were discussed based on the analysis results and the experience gained during the experiment.

1. Introduction

As visualization technologies advance, two new ways of displaying 3D objects have recently become available. One is the glasses-free stereoscopic display laptop, which uses cameras integrated into its front panel, right above its monitor. It displays 3D images through stereoscopy, tracking the viewer's eyes and thus removing the need for the special glasses that its all-in-one predecessors required. This type of technology is currently seeing applications in education [1,2] and training [3]. The other is a holographic simulation that creates holographic illusions using front projection. The technology embeds thin silver threads in its screen, which reflect light strongly enough to create bright 3D projections and holographic effects. The method depends on the light conditions; in an ideal environment, the parts of the image that contain no objects disappear for the viewers. This in turn creates the illusion that the objects actually exist in reality and are not part of a display.
Microsoft HoloLens was considered for evaluation as a fourth workstation, as there are interesting applications for mixed reality and augmented reality that could be explored [4], but these necessitate their own dedicated research and experiments. Virtual reality was also considered, as there are already compelling applications for the technology, such as cultural tourism [5] and investigating mental stress [6]; in general, 3D technology has introduced various changes to the ways we view data [7].
Researchers are searching for ways to enhance visualization, and some have successfully integrated large displays in virtual reality environments to enhance user experience [8]. The effects of stereoscopic 3D displays on visual fatigue and operating performance have been researched [9]. Scientists have optimized techniques to limit user fatigue when watching such displays by calculating the positions of the viewers' eyes during movement and correcting the 3D projections accordingly [10]. Techniques for glasses-free holographic 3D displays, and their bottlenecks, have been discussed, and possible algorithmic solutions have been investigated [11]. Innovative holographic near-eye display systems have been created, and prototypes have been tested, expanding the lightweight options that may make the technology easier to use [12]. The field of 3D visualization is advancing tremendously, and there is room to grow in the direction of physical representation of objects using shape displays [13]. There is a significant variety of technologies for volumetric visualization [14].
Researchers have also experimented with visualization technologies for mechanical engineering. Some described their experience with using 3D virtual reality environments and mobile augmented reality applications for teaching mechanical engineering students [15]. Others presented similar mobile augmented reality applications for interacting with 3D objects in engineering education [16]. Details on near-eye display and tracking technologies and their application in visualizing 3D objects have been documented as well [17]. A low-cost prototype, called EducHolo, was developed to enable the visualization of holograms on tablets, which students used to view holograms of mechanical parts [18].
Most of the technologies that can help mechanical engineers for the maintenance and repair of automobile and mechanical equipment are reviewed in detail elsewhere [19].
The goal of the presented qualitative research is to use an experiment to evaluate how useful these new technologies are in comparison to traditional monitor displays and to answer the following research question: can they be used to find defects in 3D objects, and to what degree? The general recommendations for qualitative research in [20] are taken into account. The two chosen technologies do not require additional equipment and are thus practical for use in a professional routine. Some of the criteria that categorize 3D display technologies are as follows: illusion, depth, resolution, size, and interactivity. The main purpose of the experiment is to determine whether mechanical engineers and IT specialists can optimize their work with these technologies and to see how the different display methods compare in practice. Participants with varying degrees of experience were included, in the hope that their ideas and input might reveal other interesting ways to use these display technologies. The expected outcome of the experiment is valid data from subjective user experience, objective experiment results, and an analysis of their intersection. Newer technologies can be expected to face pushback from established professionals in their respective fields, and such valid concerns must be noted and analyzed. Different analytical techniques are used in order to refine the results and examine them from different angles.

2. Experiment Design and Implementation

In order to solve the formulated problem and create a suitable, reproducible experiment, a general methodology was created, and a plan for action (a scenario) was developed following and implementing that methodology. To ensure uniformity of instructions for each participant, the scenario was followed closely, and the instructions were provided in paper format and clarified verbally.
Figure 1 shows the complete methodology. Each of its steps is explained in detail below:
1. The display technologies were tested and observed, and the peculiarities of working with each display were discussed.
For example, at this stage, the stylus for the laptop with the stereoscopic display was initially deemed an unnecessary complication that would introduce a significant challenge for users and could influence the experiment results, so it was removed and a standard computer mouse was used. Upon further testing with the pilot participant, however, the stylus proved to be a practically insignificant problem, requiring only a few seconds of demonstration per participant, so it was decided to keep it and instruct the participants on its use. There was one pilot participant, whose role was to help streamline the entire experimental process. The person chosen for that pilot role was within the 31 to 40 years age range, had 10 years of wide-ranging practical experience in the field of IT, and had average experience in working with 3D modeling. The participant was deemed well placed to recognize potential problems with the visualization methods we deployed. Additionally, other more experienced researchers were consulted about further improvements to the process. Their suggestions were considered and adopted, but they themselves were not participants in the experiment, as they would have been biased.
For the holographic display, the workstation was standardized to look like the other workstations present in the experimental room to prevent unnecessary confusion of the users, which could influence the results. In practice, this resulted in a traditional environmental setup, using a wireless mouse and a chair near one of the tables in the room.
The software used was the same or similar in terms of object manipulation across all the workstations. An online 3D viewer was used for two of the workstations, while the stereoscopic workstation required its own compatible viewer—the stereoscopic effect could be achieved only in its bundled software.
2. Categories of 3D object defects were discussed and decided on by consulting experts in the field of mechanical engineering.
The main defect types that were discussed were cosmetic, functional, tool-specific, manufacturing-specific, damage (due to wear and due to applied external force), and incompatibility with standard mechanical engineering principles. As the supplier for the objects was an expert in the field, they were consulted heavily on the different defects and their introduction and classification in each of the 3D objects.
3. Creating 3D objects
Three-dimensional scanning was considered as a way to supply more realistic results, since such a process naturally introduces the kinds of defects and artifacts this study aims to detect. Its practical limitations, however, led to the use of preexisting mechanical engineering parts. Defects were introduced into those objects, which were afterwards exported in the STL and OBJ file formats.
4. Creating a scenario for the experiment (described in detail in a separate section of this paper)
5. Creating and categorizing questions fit for further analysis
We created categories of questions based on the types of errors and defects in the objects, the specialization of participant groups, the number of technologies used, and the 3D object types. We brainstormed and discussed questions for each category in an iterative way, reaching a satisfactory set of questions.
6. Performing the experiment
This was mainly a logistical issue of finding enough qualified specialists for each participant category, creating a schedule for both participants and researchers, and handling various peculiarities during the experiment itself (such as swapping the workstation and object order to negate random factors as much as possible).
7. Performing the interviews
We arranged a group of researchers to accommodate each participant and interview them after they had completed the experiment. A rotation of researchers was ensured in order to eliminate bias and minimize mistakes due to repetition or distraction.
8. Compiling data and deciding on the types of analysis that must be performed
We digitized all the gathered data, classified the answer types for the interview questions, and selected the analytical methods: Cronbach's alpha for reliability, the one-sample Kolmogorov–Smirnov test for normality, and the mean and standard deviation for descriptive statistics. We discussed how to visualize the data in the best possible way for other researchers to gain sufficient understanding and value from the experiment, ensuring reproducibility of the experiment as much as possible.
The split-half reliability method was also considered for testing survey reliability, but Cronbach's alpha was chosen as more suitable for the analysis due to the bias that splitting the data could introduce [21]. The Kolmogorov–Smirnov test was chosen over the Shapiro–Wilk test in the SPSS 26 analysis process as the more appropriate normality check for this dataset. The null hypothesis was that the data were normally distributed. It was rejected for the entire dataset, which necessitated the analysis of descriptive statistics (the mean and standard deviation) for each question category.
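For illustration, the reliability computation can be sketched in a few lines of Python (the published analysis was performed in SPSS 26). The file name "survey.csv" and the table layout (one row per participant, one column per Likert question) are assumptions made here for the example.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a participants-by-questions score matrix."""
    item_vars = items.var(axis=0, ddof=1)      # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    k = items.shape[1]                         # number of questions
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical input: rows = participants, columns = Likert-scale questions.
answers = pd.read_csv("survey.csv")
print(f"Cronbach's alpha: {cronbach_alpha(answers):.3f}")  # the paper reports 0.783
```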

2.1. Experiment Scenario

The experiment scenario was developed to ensure a transparent, robust, and invariable experimental process.
Figure 2 displays the chosen experiment scenario. Each of the steps in this scenario is explained in detail below:
1. The participant enters the room along with a researcher.
The role of the researcher was only to help the participant track time with their chronometer for each object and clarify any workstation questions that arose. The researcher was not an expert in the field of the 3D objects in order to ensure no bias was present. Instructions were printed on paper and given to the participant. Only one participant worked in the experiment room at a time. Although this required more time, it led to better results that would be isolated from outside influences as much as possible.
2. The participant starts working at a workstation (the researcher shuffles the order of workstations and objects for each participant; a sketch of such a counterbalancing plan is given after this list).
There were three workstations in total: one with a normal monitor (the control workstation), and two experimental workstations with the other visualization technologies—a glasses-free stereoscopic display laptop and a front projection holographic simulation.
3. The participant starts working with three 3D objects. The researcher shuffles their order for each participant.
Each workstation had all nine objects, but only three were provided for each participant. It was crucial to rotate through all the objects for all the workstations to ensure minimal outside factors that could influence the experimental results. The workstations were set up with similar software to ensure as little interference based on technological peculiarities as possible. The only exception was the stereoscopic workstation, as it was impossible to use non-compatible software with it. The choice for the other two workstations was made with that in mind to find the most similar environments.
4. The participant finishes finding defects and writes their findings in the provided template.
The researcher notes the time it took the participant to finish finding the defects in each workstation using a chronometer. The accompanying researcher gives the participant a template that has the different projections of each 3D object in turn. The participant is instructed to circle the defects that they have found using a marker and write a short description for each of them using a pen.
5. The participant moves to the next workstation, eventually passing through all three workstations and all nine 3D objects.
This is a standard iterative process that ensures all objects and technologies are visited; a repetition of the previous steps until completion.
6. The participant is escorted out of the experiment room and led to the interview room.
The researcher closes the files on each workstation and presents the next three objects.
7. An interview is conducted between a researcher and the participant, and a short survey is completed.
8. The scenario is completed, and the participant's experiment results are compiled. Possible improvements are discussed.
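The shuffling of workstations and objects mentioned in steps 2 and 3 can be expressed as a simple counterbalancing plan. Below is a minimal sketch, assuming per-participant seeding; in the experiment itself, the accompanying researcher performed the shuffling, so the structure and names here are illustrative only.

```python
import random

WORKSTATIONS = [1, 2, 3]       # stereoscopic laptop, holographic display, control PC
OBJECTS = list(range(2, 11))   # parts 2..10 (part 1 is reserved for the introduction)

def participant_plan(seed: int) -> list[tuple[int, list[int]]]:
    """Return a shuffled workstation order, each paired with three shuffled objects."""
    rng = random.Random(seed)  # a per-participant seed keeps each plan reproducible
    stations = WORKSTATIONS[:]
    rng.shuffle(stations)
    objects = OBJECTS[:]
    rng.shuffle(objects)
    # Split the nine objects into three batches of three, one batch per workstation.
    return [(ws, objects[i * 3:(i + 1) * 3]) for i, ws in enumerate(stations)]

print(participant_plan(seed=7))  # one (workstation, objects) pair per visit
```

Seeding per participant keeps every individual plan reproducible while still varying the order across participants.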
To reduce interference from outside factors such as light, the glass windows and the glass door panels were covered with black foil. The aim was to achieve almost complete darkness, in line with the light-condition recommendations for the hologram workstation. Several photos of the room with each experiment workstation, as well as the experiment environment, can be seen in Figure 3.
Figure 3 shows each workstation for each of the three display technologies. Workstation 1 is the stereoscopic display laptop, workstation 2 is the holographic simulation display, and workstation 3 is the control workstation—a normal computer.
The hardware specifications can be seen in Table 1. Workstation 1 is a stereoscopic glasses-free laptop that comes with a stylus (a pen that helps manipulate 3D objects within the native environment). Workstation 2 includes a PC that sends the image to a powerful projector, which illuminates a display with silver-infused threads. The setup also includes side-covers that help eliminate direct light coming from other directions. Workstation 3 is a PC with a large display. Specifically, for workstation 2, room modifications were needed in order to achieve better results.
Figure 4 shows the room light insulation, as well as the difference when light is present for workstation 2. The degree to which light affects the experience of working with workstation 2 is apparent, and that made the modifications to the room a necessity.
Figure 5 shows some of the participants working at each of the workstations. During the experiment, a small USB-powered lamp was installed near workstation 2 to help the participants mark the defects on their paper template after finishing defect detection for each object. The participants were always asked whether they wanted the lights on during the marking process, but most chose to work in the dark, as the change in light level took time to get used to.

2.2. Software Choice

The software used to visualize the 3D objects on workstations 2 and 3 was the same—3Dviewer.net—to ensure uniformity and ease of use. For workstation 1's stereoscopy effect to work, its own proprietary 3D viewer was used (Studio A3, version 5.2.0). Other options included Blender 3.5.0, AutoCAD 24.0, and SolidWorks 2024, but their complexity and the plethora of functionalities they provide were judged likely to introduce too much noise into the experiment results, so a simpler solution was chosen. Microsoft Paint 3D was also considered, as it provides an effective way to mark the defects directly on the object in its native 3D view; however, workstation 1's stereoscopy did not work there. This necessitated the use of projections printed on paper for the users to mark their results.

2.3. Defect Description

The artificially introduced defects in the CAD models are the crucial part of the conducted experiment. They should reliably reveal the capabilities of the studied screen technologies, differing in size, type, and position, while remaining typical for 3D models in mechanical engineering.
Examples of the chosen types of defects, marked with circles in Figure 6, are as follows:
  • Cutting out: It may result in unacceptable or strange perforation, such as a hole in some area (Figure 6a).
  • Cropping: The resulting shape is irregular and is illogical for a professional (Figure 6b).
  • Insertion: The additional geometric element is functionally unacceptable and clearly redundant (Figure 6c).
  • Missing triangle: Typical for 3D printing, this looks like a gap that exposes the inside of the shell (Figure 6d).
The size of each defect is classified as large, medium, or small compared to the part’s volume:
  • Below 1% is small.
  • Between 1% and 5% is medium.
  • Above 5% is large.
The total number of part models is 10. The objects are numbered from 1 to 10. Part number 1 is used for the introduction phase, and others are numbered from 2 to 10 in the following text. There are between 4 and 6 defects in each model, which are randomly distributed. The type of defect, size, recurrence, and presence are summarized in Table 2.
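For illustration, a "missing triangle" defect and the size classification above can be sketched in code. This sketch uses the trimesh library; the file names and the face index are hypothetical, and in the study the defects were introduced in the CAD models in consultation with the domain expert.

```python
import numpy as np
import trimesh

# Introduce a "missing triangle" defect: drop one face so that the inside of the
# shell becomes visible, then export in one of the formats used in the experiment.
mesh = trimesh.load("part2.stl")                 # hypothetical input model
keep = np.ones(len(mesh.faces), dtype=bool)
keep[0] = False                                  # arbitrary face chosen for the example
mesh.update_faces(keep)
mesh.export("part2_defective.obj")               # STL export works the same way

def classify_defect(defect_volume: float, part_volume: float) -> str:
    """Map a defect's volume share to the small/medium/large categories above."""
    share = 100.0 * defect_volume / part_volume
    if share < 1.0:
        return "small"
    if share <= 5.0:
        return "medium"
    return "large"
```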
A form shown in Figure 7 is presented to each participant during the introduction phase of the experiment. A detailed description of possible defects and short instructions are also presented. The goal is to familiarize the person with the types of defects, the upcoming experiment, and the completion of a similar template for each of the parts.

2.4. Survey Composition

The complete survey composition is shown in Table 3. Brooke's System Usability Scale [22] was considered, but a simplified version of the NASA-TLX categorical distribution [23] proved more helpful in constructing the survey.
The interview question classifications can be seen in Table 3 (answer types are as follows: MC = multiple choice, OA = open answer, LIKERT = Likert scale).
Most of the questions (34 out of 42) required answers that could be placed on a Likert scale. The rest (8 of 42) had either a multiple-choice or an open answer. This gave the research team an easy-to-understand metric for judging the likely outcomes of the experimental results during the process itself. The idea of in-depth qualitative research is that the number of participants is sufficient to ensure data validity [24,25]. This is in stark contrast to quantitative research, where the number of participants is assumed to be high enough to be representative of an entire population. Successful qualitative studies are available in different fields, for example, psychology [26], medical diagnostics [27,28,29], and statistics and interview research [30,31]. The format of this survey was inspired by studies in sustainability [32] and good practices in higher education [33].
According to the referenced sources, 30 people were statistically considered enough to reach data saturation for the current focused qualitative research.
Due to the nature of the experiment, as well as the participant categories, it is difficult and time-consuming in practice to find people in these categories (specialists in their own fields with substantial experience who are currently in the active workforce). One of the main reasons for this difficulty is the logistical problem of finding timeslots and organizing both participants and researchers around their schedules.
After the pilot participant completed their work, several changes were made, most of them concerning the number of questions in the survey. During the interviews with the first and second participants, almost half of the initial questions were deemed unnecessary, duplicative, or unclear. The conversations during the interviews helped to streamline the survey questions, refining the final version presented in this paper.
The reliability of the questionnaire was assessed using Cronbach's alpha, a measure of internal consistency. With a Cronbach's alpha of 0.783, the survey was deemed acceptable, and its statistical data were judged to be of appropriate quality for the experiment [34]. It can therefore be concluded that the survey was reliable.
The inter-item correlation in Table 4 helps to recognize whether the questions were strongly related to each other. A high correlation coefficient signifies that two survey questions had similar or duplicate meaning. After the SPSS analysis, 8 out of 561 pairs of questions (1.43%) were found to have a high correlation coefficient (absolute value above 0.7). The second band, average to high (absolute values between 0.4 and 0.7), included 126 out of 561 pairs, or 22.46%. The remaining 76.11% of the pairs had a weak correlation (absolute values below 0.4); therefore, it can be concluded that the survey was successful in its purpose of gathering relevant information from the participants, and that the gathered survey data are reliable.
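The inter-item screening can be reproduced as follows; this is a minimal sketch assuming the same hypothetical participants-by-questions table as before (the published coefficients come from SPSS). With 34 Likert questions there are 34 × 33 / 2 = 561 unordered pairs.

```python
import numpy as np
import pandas as pd

answers = pd.read_csv("survey.csv")      # hypothetical Likert answers table
corr = answers.corr().abs()

# Keep each unordered pair of distinct questions exactly once (upper triangle).
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(mask).stack()         # Series of |r| values, one per question pair

print("high (above 0.7):           ", (pairs > 0.7).sum())
print("average to high (0.4-0.7):  ", pairs.between(0.4, 0.7).sum())
print("weak (below 0.4):           ", (pairs < 0.4).sum())
```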

2.5. Participants

The participant categories are shown in Table 5. The target group was defined as mainly working adults in their prime working age, as this study aims to explore whether the display technologies would be beneficial for this category of people. Most of the participants were highly qualified professionals, well educated, or people with an enormous amount of practical experience. It should be noted that five participants had a significant amount of experience working with 3D projections.
It should be noted that no one from the research team was included as a participant in the experiment in order to ensure the integrity of the data.
A detailed statistical distribution of the different categories of participants is shown in Figure 8. It was concluded that the participants' age groups, gender, fields of work, and experience in those fields are sufficiently representative for the current qualitative study.

3. Results

The collated experiment results of all the participants can be seen in Table 6.
The “Times done” row for each workstation is marked with a different color if it is below the threshold of 6. It must be noted that the data are variable enough that such instances do not influence the results in a significant manner. It can be concluded that the data gathered from the experiment are reliable for conducting qualitative research.
Figure 9 displays the distribution of correctly found defects at each workstation, per object. Workstation 1 generally has the fewest correctly found defects; only with object 6 does its result surpass the control workstation 3, and only by a small margin. The same is valid for workstation 2, although its results consistently show a closer gap to the control workstation 3. It can be concluded that the control workstation 3 had consistently better defect detection results in the experiment.
Figure 10 displays the percentage of incorrectly found defects at each workstation per object. The mistakes are calculated as a percentage of the total number of defects for each object. The entire category, with its absolute collated numbers, can be seen in Table 6. It can be observed that there is no strong relationship between the workstation and the number of mistakes made. These results are more likely related to a lack of the expertise required to recognize whether a feature is a defect, which would explain the observed distribution.
Figure 11 shows the time taken for each object at each workstation. The measurements show consistently competitive times for all three workstations. This was also observed during the experiment itself—once the participants became accustomed to the workstations, their work became consistently fast. There were only small differences in the time it took each participant to locate defects at each workstation, but workstation 1 took the most time overall. It can be concluded that the time taken made little meaningful difference for defect detection on workstations 2 and 3, while workstation 1 required more time for objects 4, 8, 9, and 10.
In Table 7 and Figure 12, the collated data for each category of participant experience can be observed. The data show that people with less experience in the field marked fewer defects incorrectly. The most- and least-experienced people made the fewest mistakes (marking an incorrect defect) on workstation 1. On average, the time taken to find the defects was similar across all experience groups on all workstations. Workstation 1 required the most time for all groups, while workstations 2 and 3 competed for second place depending on the group—less-experienced participants required less time on workstation 2, while more-experienced participants worked fastest on workstation 3. It can be concluded that, in terms of finding the correct defects, the best workstation was the third—workstation 3.
An observation was made that most participants with work experience were consistent in the time they took to find the object defects, and their average percentage of correctly found defects followed a smoother distribution than the collective average. This can be observed in Table 8 and Figure 13, which compare the sample data to the average results of all the participants.

Survey Results

The survey results were tested for normality using the Kolmogorov–Smirnov one-sample test for each question category. The results are shown in SPSS tables, including the descriptive statistics data, which were analyzed afterwards.
The answers from every relevant category of questions were subjected to the Kolmogorov–Smirnov one-sample test, as seen in Figure 14. The test results were statistically significant, as the p-value (“Asymp. Sig.”) was below 0.05 for every category. The K-S statistic (D) measures the maximum deviation of the sample from the theoretical distribution; it ranges between 0 and 1, and a value close to 0 indicates a close match between them.
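As an illustration of this per-category check, the sketch below fits a normal distribution to each category's pooled answers, runs the one-sample Kolmogorov–Smirnov test against it, and reports the descriptive statistics. The "categories" mapping is abbreviated and hypothetical; SPSS 26 produced the published values shown in Figure 14.

```python
import pandas as pd
from scipy import stats

answers = pd.read_csv("survey.csv")                        # hypothetical answers table
categories = {"Difficulty evaluation": ["Q12A", "Q12B", "Q12C", "Q13"]}  # abbreviated

for name, questions in categories.items():
    scores = answers[questions].to_numpy().ravel()         # pool the category's answers
    mean, sd = scores.mean(), scores.std(ddof=1)
    d, p = stats.kstest(scores, "norm", args=(mean, sd))   # H0: normally distributed
    print(f"{name}: D = {d:.3f}, p = {p:.4f}, mean = {mean:.2f}, sd = {sd:.2f}")
```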
Analyzing the results by category:
Overall experience evaluation (Q6–Q11): All questions' results, with the sole exception of Q7, have a mean value close to the high end, showing the participants' positive reception of the experiment. The standard deviation for Q6, Q7, and Q10 is relatively low, and for Q8 and Q9, it is low. This indicates that the answers are grouped around the mean value.
Difficulty evaluation (Q12A–Q14C): The mean value for Q12B, Q12C, Q13, and Q14 is high, which indicates that participants “felt” that working with the workstations was easy and did not encounter difficulties while searching for defects. The standard deviation for Q12A, Q12B, Q13, Q14B, and Q14C is relatively low, and for Q12C, it is low. Along with the high mean value, this indicates that the workstation assumed easiest to work with was workstation 3. This was one of the main reasons to examine the correlation between the participants' perceived ease of use of the workstations and their actual experiment results; these data were used to check whether the difficulty perceived in the survey was confirmed by the experiment results. The answers in this category are grouped around the mean value, with the sole exception of Q14A, where the answers deviate more.
Certainty evaluation (Q15A–Q15C): The mean value for all questions is around choice number 4, which indicates that the participants felt relatively confident in their defect identification. The standard deviation for all three questions is relatively low and shows that the answers are grouped around the mean value.
Visualization methodology evaluation (Q16A–Q18C): The mean value is high for Q17B, Q17C, and Q18C, which shows that most of the participants found the visualization methodology very intuitive and accurate. Q17A, Q18A, and Q18B have their mean around choice 4, and this indicates that some of the participants found the visualization methodology relatively intuitive and accurate. The mean of Q16A, Q16B, and Q16C was around 3—in the middle—which signifies that the participants evaluated the size of the defects as normal: neither too big nor too small. This is important, as these answers also indicate the participants’ ability to quickly grasp how to work with the workstations and manipulate 3D objects in a way that will render the size of defects irrelevant, at least for the three workstations in this experiment. The standard deviation for Q16A, Q16B, Q16C, Q18B, and Q18C showed an average level of variance. Q17B and Q17C had low variance, and the mean value was the highest, meaning that the participants evaluated workstations 2 and 3 as being most intuitive to work with. Q17A and Q18A display a moderate deviation, which means there is a large variety of responses, meaning that workstation 1 (the stereoscopic laptop) received the lowest evaluation for “intuitive to work with” by the participants.
Experiment, defect, and 3D object evaluation (Q19–Q22): The mean value for the answers for Q19 was high, with an average deviation. Q20 had a mean value nearing 3, but it had a moderate deviation, indicating a higher variance in responses. The mean for Q21 was 2.90 and it had a moderate deviation, which shows that the participants did not see unique defects (defects that were present only once during the entire experiment). Q22’s deviation was small and the mean was high, which indicates that participant attention and interest during the experiment remained high.
Visibility evaluation (Q25A–Q26): Q25A, Q25B and Q25C had means that varied from 3.63 to 4.43, indicating that the objects were visible from most angles for the participants. The overall visibility, Q26, was evaluated as relatively good. The deviation in questions Q25A and Q25B was larger, indicating that some of the participants experienced visibility-related issues in workstations 1 and 2.
Feedback (Q27 and free answers in Q28): Nearly all the participants were fully immersed in the experiment and had positive feedback to share. The constructive criticism by the participants was directed at the technologies themselves and not the experiment. It must be noted that several participants were uncomfortable to different degrees during their work on workstation 1. During the interview discussion part (Q28), when asked whether they had an eye condition, it turned out that every person who had been uncomfortable had a varying degree of astigmatism or was of an advanced age. This must be taken into account when considering using this technology on a large scale, and more research on the topic must be carried out by medical professionals.
From the category for the difficulty evaluation, question Q12 was examined in detail (“How easy was the work during each stage of the experiment?” Q12A was related to workstation 1, Q12B to workstation 2, and Q12C to workstation 3). All participants who answered with 5 (that the workstation was extremely easy to work with) had their experiment results analyzed in detail in order to juxtapose their subjective view to their objective results. The number of each participant and their count by question can be observed in Table 9. The analysis of Q12 is of significant importance, as it is closely related to the main goal of this research.
This slice of the participants' results was collated in separate tables for closer analysis. The goal of this part of the analysis was to determine their level of success compared to the average of the entire participant population. Three similar tables were introduced, one per workstation. The number of defects denotes all of the defects present in all of the objects presented to the participant at the workstation; “correct” indicates the number of defects that were successfully recognized, and “not found” the number of defects not found (together, “correct” and “not found” comprise 100% of the total number of defects). The number of incorrectly found defects is not part of the total defect count; these are features that were marked as defects but were in fact original parts of the functioning object. The results can be observed in the three following tables: workstation 1—Table 10; workstation 2—Table 11; workstation 3—Table 12.
For workstation 1 (Table 10), three participants (43%) had less than 70% correct and four (57%) had more than 70%.
For workstation 2 (Table 11), five participants (26%) had less than 70% correct and fourteen (74%) had more than 70%.
For workstation 3 (Table 12), two participants (7%) had less than 70% correct, and twenty-six (93%) had more than 70%.
Figure 15 compares the average of the sample and the average of the entire participant population. It can be observed that the fewest participants responded that workstation 1 was extremely easy to work with, and from their results, it can be concluded that they were faster and had more correct responses compared to the average. The same can be concluded for workstation 3. For workstation 2, however, the averages of the sample and of the entire participant population were very close.
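The slice-versus-population comparison can be reproduced with a few lines of analysis code. Below is a minimal sketch, assuming a hypothetical results table with one row per participant and workstation, holding the Q12 answer, the defect counts, and the time taken.

```python
import pandas as pd

# Hypothetical layout: participant, workstation, q12, correct, total, seconds.
results = pd.read_csv("results.csv")
results["pct_correct"] = 100.0 * results["correct"] / results["total"]

for ws in (1, 2, 3):
    ws_rows = results[results["workstation"] == ws]
    easy = ws_rows[ws_rows["q12"] == 5]   # answered "extremely easy" for this workstation
    below = (easy["pct_correct"] < 70).sum()
    print(f"workstation {ws}: {below}/{len(easy)} of the sample below 70% correct; "
          f"sample mean {easy['pct_correct'].mean():.1f}% "
          f"vs population {ws_rows['pct_correct'].mean():.1f}%")
```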

4. Discussion

Over the course of the experiment, the noise in the experiment environment was minimized as much as possible. In order to ensure as much impartiality as possible, each participant was uninformed about the nature of the study. There was a “trick” question in the survey, Q7, about their “expectations” and whether the experiment was as they expected it to be. The answers showed a satisfactory trend—most did not know what to answer, as they had no expectations, which confirms their lack of prior information and, consequently, the absence of bias it might have introduced.
The types of defects and their numbers varied from object to object; they were not treated as a statistical variable, as the defects themselves were not the study's main goal. This variation was intentional from the beginning, in order to prevent participants from recognizing patterns in either the number or the type of the defects. One possible explanation of the result shown in Figure 9 is that using a large-screen hologram results in fewer distractions in a limited-light environment and helps with concentration. However, this is not enough to outweigh preexisting experience with already adopted technologies. Another factor could be the resolution of the screen; this should be included in future research. The harsh change in light conditions in the environment could also contribute to the possible differences in the experiment results, although the questionnaire and interview results did not reveal this. During the discussions, the participants noted that they were more immersed in the task at workstation 2.
On the other hand, the practical limitations of workstation 2—screen size and light requirements—made it difficult to apply in practice. The difference in results is not significant enough to warrant widespread adoption.
Experiments that involve humans and their live performance are naturally subject to unpredictable and undefined influences. For this reason, several measures for consistency were taken:
  • The exact following of the scenario (Figure 2) was supervised by the designated member of the research team.
  • Over the course of the experiment, the surrounding noise in the room was minimized.
  • In order to ensure objectivity, each participant was uninformed about the nature of the study.
Possible reasons for the results shown in Figure 12 and Figure 13 are overconfident behavior by more experienced participants, underestimating the task at hand due to familiarity with the technology. It took them more time to complete their tasks using the other newer technologies that they were less familiar with.
The average time it took for each participant to complete the experiment was around one hour. Although the researchers’ expectations were that this would be too long and taxing on everyone, according to the feedback, this was an acceptable time range, as no negative opinions were presented during the interviews. Most of the participants were surprised that an hour had passed by the time they finished the experiment, which leads to the conclusion that the time range was appropriate and the experiment was interesting to them.

5. Conclusions

In general, it is easy to come up with a conclusion about how useful a technology will be based on common sense. This is a dangerous line of thinking, however, because an unproven opinion carries no weight. It is thus necessary to conduct an experiment (and in the best case, a series of both quantitative and qualitative experiments) to test such a view. In this case, the experiment produced results that were unexpected by its designers, as the varying opinions on the different technologies led to serious considerations on the topic. Each visualization method had its own positive and negative sides, but applying them in practice helped us to further understand exactly how useful each would be in a given field.
After analyzing the data and combining them with the conclusions made during the experiment itself, taking into account all the participants' feedback, responses, and opinions, it can be said with confidence that both technologies have a future, but the glasses-free stereoscopic effect must be heavily researched from the medical side before being applied to large groups of people. Medical staff were also considered for the experiment; time constraints necessitated that their feedback be presented in a separate paper, as the considerations for the 3D objects fall into different categories, and the measurement of participant defect detection is fundamentally different from objects in the mechanical engineering field.
The conclusions about the experiment results were confirmed both by an objective method (the experiment results) and a subjective one (the survey answers). More-experienced participants were more consistent in their experiment results. The size and variation of the study group were deemed sufficient for conducting qualitative research. The sole 3D artist and the various engineers were chosen to validate the researchers' thought process for the experiment and to recognize whether the experiment was executed correctly; they are noted in the current research mainly for completeness. The survey data and questions were analyzed and found to be reliable. The answers were found not to be normally distributed and were skewed towards the positive. Younger participants and some of the older ones appreciated workstation 1 more, although it was, in general, more difficult to work with. The participants who were more confident during the survey achieved better results on 2 of the 3 workstations (compared to the average of all participants). The different categories in the survey showed that an overwhelming majority of the participants had a positive opinion both of the experiment and of the technologies themselves. Most of the participants did not find it very challenging to perform the assigned tasks, meaning all the technologies could be used for the purpose of finding defects in 3D objects. The short interview discussions after the completion of each experiment all concluded in a positive manner, and feedback was positive. Workstation 3—the control workstation—had the best participant results for finding defects. All workstations had similar results for incorrectly found defects—the technologies themselves did not influence the mistakes that the participants made.
It can be concluded that this research was successful in reaching an answer to the research question, and that more similar experiments are necessary for different fields of study to explore the different possible applications of these types of display technologies.

Author Contributions

Conceptualization, V.K.; methodology, V.K. and E.M.; validation, V.K. and M.A.; formal analysis, V.K. and M.A.; investigation, E.M.; resources, V.K. and R.R.; data curation, V.K., E.M. and M.A.; writing—original draft, V.K. and M.A.; writing—review and editing, V.K., E.M., M.A., T.V. and R.R.; visualization, V.K.; supervision, V.K., T.V. and R.R.; project administration, V.K.; funding acquisition, V.K. and T.V. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financed by the European Union—NextGenerationEU, through the National Recovery and Resilience Plan of the Republic of Bulgaria, project No. BG-RRP-2.013-0001-C01.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are available upon request.

Acknowledgments

This study was financed by the European Union—NextGenerationEU through the National Recovery and Resilience Plan of the Republic of Bulgaria, project No. BG-RRP-2.013-0001-C01.
This publication was developed with the support of Project BG05M2OP001-1.001-0004 UNITe, under the Operational Programme “Science and Education for Smart Growth”, co-financed by the European Union through the European Structural and Investment Funds.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Petrov, P.D.; Atanasova, T.V. The effect of augmented reality on students' learning performance in STEM education. Information 2020, 11, 209.
  2. Aljumaiah, A.; Kotb, Y. The impact of using zSpace system as a virtual learning environment in Saudi Arabia: A case study. Educ. Res. Int. 2021, 2021, 2264908.
  3. Zhou, Z.; Yang, Z.; Jiang, S.; Jiang, B.; Xu, B.; Zhu, T.; Ma, S. Personalized virtual reality simulation training system for percutaneous needle insertion and comparison of zSpace and Vive. Comput. Biol. Med. 2022, 146, 105585.
  4. Palumbo, A. Microsoft HoloLens 2 in medical and healthcare context: State of the art and future prospects. Sensors 2022, 22, 7709.
  5. Kontogiorgakis, E.; Zidianakis, E.; Kontaki, E.; Partarakis, N.; Manoli, C.; Ntoa, S.; Stephanidis, C. Gamified VR Storytelling for Cultural Tourism Using 3D Reconstructions, Virtual Humans, and 360° Videos. Technologies 2024, 12, 73.
  6. Lebamovski, P.; Gospodinova, E. Investigating the Impact of Mental Stress on Electrocardiological Signals through the Use of Virtual Reality. Technologies 2024, 12, 159.
  7. Triviño-Tarradas, P.; García-Molina, D.F.; Rojas-Sola, J.I. Impact of 3D Digitising Technologies and Their Implementation. Technologies 2024, 12, 260.
  8. Wu, Y.; Wang, Y.; Lou, X. A large display-based approach supporting natural user interaction in virtual reality environment. Int. J. Ind. Ergon. 2024, 101, 103591.
  9. Chao, C.J.; Yau, Y.J.; Lin, C.H.; Feng, W.Y. Effects of display technologies on operation performances and visual fatigue. Displays 2019, 57, 34–46.
  10. Solari, F.; Chessa, M.; Garibotti, M.; Sabatini, S.P. Natural perception in dynamic stereoscopic augmented reality environments. Displays 2013, 34, 142–152.
  11. Pi, D.; Liu, J.; Wang, Y. Review of computer-generated hologram algorithms for color dynamic holographic three-dimensional display. Light Sci. Appl. 2022, 11, 231.
  12. Kim, J.; Gopakumar, M.; Choi, S.; Peng, Y.; Lopes, W.; Wetzstein, G. Holographic glasses for virtual reality. In Proceedings of the ACM SIGGRAPH 2022 Conference, Vancouver, BC, Canada, 7–11 August 2022; pp. 1–9.
  13. Johnson, B.K.; Naris, M.; Sundaram, V.; Volchko, A.; Ly, K.; Mitchell, S.K.; Acome, E.; Kellaris, N.; Keplinger, C.; Correll, N.; et al. A multifunctional soft robotic shape display with high-speed actuation, sensing, and control. Nat. Commun. 2023, 14, 4516.
  14. Geng, J. Three-dimensional display technologies. Adv. Opt. Photonics 2013, 5, 456–535.
  15. Ivanova, G.; Ivanov, A.; Zdravkov, L. Virtual and augmented reality in mechanical engineering education. In Proceedings of the 2023 46th MIPRO ICT and Electronics Convention (MIPRO), Opatija, Croatia, 22–26 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1612–1617.
  16. Waskito, W.; Fortuna, A.; Prasetya, F.; Wulansari, R.E.; Nabawi, R.A.; Luthfi, A. Integration of mobile augmented reality applications for engineering mechanics learning with interacting 3D objects in engineering education. Int. J. Inf. Educ. Technol. (IJIET) 2024, 354–361.
  17. Koulieris, G.A.; Akşit, K.; Stengel, M.; Mantiuk, R.K.; Mania, K.; Richardt, C. Near-eye display and tracking technologies for virtual and augmented reality. Comput. Graph. Forum 2019, 38, 493–519.
  18. Figueiredo, M.J.; Cardoso, P.J.; Gonçalves, C.D.; Rodrigues, J.M. Augmented reality and holograms for the visualization of mechanical engineering parts. In Proceedings of the 2014 18th International Conference on Information Visualisation, Paris, France, 16–18 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 368–373.
  19. Prathibha, S.; Palanikumar, K.; Ponshanmugakumar, A.; Kumar, M.R. Application of augmented reality and virtual reality technologies for maintenance and repair of automobile and mechanical equipment. In Machine Intelligence in Mechanical Engineering; Academic Press: Cambridge, MA, USA, 2024; pp. 63–89.
  20. Johnson, P.; Harris, D. Qualitative and quantitative issues in research design. In Essential Skills for Management Research; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2002; pp. 100–116.
  21. Field, A. Discovering Statistics Using IBM SPSS Statistics; Sage Publications Limited: Thousand Oaks, CA, USA, 2013; pp. 115–121. Available online: https://vlb-content.vorarlberg.at/fhbscan1/330900091084.pdf (accessed on 10 January 2025).
  22. Brooke, J. SUS—A quick and dirty usability scale. In Usability Evaluation in Industry; CRC Press: Boca Raton, FL, USA, 1996; pp. 4–7.
  23. Hart, S.G. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Adv. Psychol. 1988, 52, 139–183.
  24. Crouch, M.; McKenzie, H. The logic of small samples in interview-based qualitative research. Soc. Sci. Inf. 2006, 45, 483–499.
  25. Creswell, J.W.; Poth, C.N. Qualitative Inquiry and Research Design: Choosing Among Five Approaches; Sage Publications: Thousand Oaks, CA, USA, 2016.
  26. Prizeman, K.; McCabe, C.; Weinstein, N. Stigma and its impact on disclosure and mental health secrecy in young people with clinical depression symptoms: A qualitative analysis. PLoS ONE 2024, 19, e0296221.
  27. Abendstern, M.; Davies, K.; Chester, H.; Clarkson, P.; Hughes, J.; Sutcliffe, C.; Poland, F.; Challis, D. Applying a new concept of embedding qualitative research: An example from a quantitative study of carers of people in later stage dementia. BMC Geriatr. 2019, 19, 227.
  28. Lilleheie, I.; Debesay, J.; Bye, A.; Bergland, A. A qualitative study of old patients' experiences of the quality of the health services in hospital and 30 days after hospitalization. BMC Health Serv. Res. 2020, 20, 446.
  29. Ames, H.; Glenton, C.; Lewin, S. Purposive sampling in a qualitative evidence synthesis: A worked example from a synthesis on parental perceptions of vaccination communication. BMC Med. Res. Methodol. 2019, 19, 26.
  30. Weller, S.C.; Vickers, B.; Bernard, H.R.; Blackburn, A.M.; Borgatti, S.; Gravlee, C.C.; Johnson, J.C. Open-ended interview questions and saturation. PLoS ONE 2018, 13, e0198606.
  31. Guest, G.; Namey, E.; Chen, M. A simple method to assess and report thematic saturation in qualitative research. PLoS ONE 2020, 15, e0232076.
  32. Kamranfar, S.; Damirchi, F.; Pourvaziri, M.; Abdunabi Xalikovich, P.; Mahmoudkelayeh, S.; Moezzi, R.; Vadiee, A. A Partial Least Squares Structural Equation Modelling Analysis of the Primary Barriers to Sustainable Construction in Iran. Sustainability 2023, 15, 13762.
  33. Olmos-Gómez, M.D.C.; Luque-Suárez, M.; Ferrara, C.; Cuevas-Rincón, J.M. Analysis of psychometric properties of the Quality and Satisfaction Questionnaire focused on sustainability in higher education. Sustainability 2020, 12, 8264.
  34. Elnabawi, M.H.; Jamei, E. The thermal perception of outdoor urban spaces in a hot arid climate: A structural equation modelling (SEM) approach. Urban Clim. 2024, 55, 101969.
Figure 1. General experiment methodology.
Figure 2. Experiment scenario.
Figure 3. From left to right, workstations 1, 2, and 3.
Figure 4. Workstation 2 with comparison of lights on and off, achieving the desired holographic effect.
Figure 5. Left to right: scenes of workstation 1, workstation 2, and workstation 3.
Figure 6. Types of defects: (a) cutting out, (b) cropping, (c) insertion, (d) missing triangle.
Figure 7. Example of a completed form. A similar template is given for each separate 3D object.
Figure 8. Participant statistical distribution.
Figure 9. Distribution of correctly found defects in each workstation for each object.
Figure 10. Distribution of incorrectly found defects in each workstation for each object.
Figure 11. Average time (in seconds) it took for each object in each workstation.
Figure 12. Results for each category of participant experience.
Figure 13. Chart view of performance of participants with more than 20 years of experience in their field (sample) compared to the average result of all participants.
Figure 14. Results from normality analysis for each question category in the survey based on the Kolmogorov–Smirnov one-sample test. The superscript “a” signifies that the analysis was made with the assumption that the test distribution is normal, while superscript “b” shows that the values were computed directly from the experiment dataset.
Figure 15. Comparison between the average of the sample and the average of the entire participant population.
Table 1. Workstation hardware specifications.

Workstation | Technology | Specifications
1 | Laptop with a stereoscopic display using two integrated cameras and a stylus | CPU: Intel i5-11400H, 2.70 GHz; GPU: Nvidia RTX 3060 Laptop GPU; RAM: 16 GB; OS: Windows 11 Pro (23H2)
2 | Holographic display, a projector connected to a PC | CPU: AMD Ryzen 9 5900X, 12 cores, 3.70 GHz; GPU: Nvidia RTX 3060; RAM: 64 GB; OS: Windows 11 Pro (23H2)
3 | PC and monitor (control workstation) | CPU: AMD Ryzen 9 5900X, 12 cores, 3.70 GHz; GPU: Nvidia RTX 4070; RAM: 64 GB; OS: Windows 11 Pro (23H2)
Table 2. Distribution of defects in the CAD models.
Type | Size | Object | Recurring
Cutting out | Small | 1, 2, 8, 10 | 1
Cutting out | Small | 7 | 2
Cutting out | Small | 3 | 3
Cutting out | Small | 4 | 6
Cutting out | Medium | 8, 9 | 1
Cutting out | Medium | 5, 6, 7 | 2
Cutting out | Large | - | -
Cropping | Small | 6, 8, 10 | 1
Cropping | Small | 2, 9 | 2
Cropping | Small | 1 | 3
Cropping | Medium | 6, 10 | 1
Cropping | Medium | 5 | 2
Cropping | Large | 8 | 1
Insertion | Small | 2, 8, 9 | 1
Insertion | Medium | 1, 3, 5, 6, 10 | 1
Insertion | Large | - | -
Missing triangle | Small | - | -
Missing triangle | Medium | 1, 3 | 1
Missing triangle | Large | - | -
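To make the defect categories concrete, the sketch below shows one way a "missing triangle" defect could be injected into a triangulated model by deleting faces from its face list. This is an assumed illustration, not the authors' pipeline; the face array is a toy placeholder for a mesh that would normally be loaded from a real CAD file.

```python
# Minimal sketch of injecting a "missing triangle" defect into a mesh.
# Assumed illustration only: a real model would be loaded from STL/OBJ;
# here the face list is a toy numpy array of vertex-index triples.
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical face list: each row is one triangle (three vertex indices).
faces = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]])

def remove_random_triangles(faces: np.ndarray, count: int) -> np.ndarray:
    """Return a copy of the face list with `count` random triangles removed."""
    doomed = rng.choice(len(faces), size=count, replace=False)
    keep = np.ones(len(faces), dtype=bool)
    keep[doomed] = False
    return faces[keep]

defective = remove_random_triangles(faces, count=1)
print(f"faces before: {len(faces)}, after: {len(defective)}")
```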
Table 3. Survey questions, their categories, and answer types.
Statistical:
Q1 | In which age group are you? | MC
Q2 | Gender | MC
Q3 | What field do you work in? | OA
Q4 | How much experience do you have in the specified field? | MC
Q5 | What experience do you have in using tools for working with 3D objects? | MC
Overall experience evaluation:
Q6 | How satisfied are you with your participation in the experiment? | MC
Q7 | To what extent did the experiment meet your expectations? | LIKERT
Q8 | How well were you able to focus on your tasks during the experiment? | LIKERT
Q9 | How do you evaluate the organizational process during the experiment? | LIKERT
Q10 | How understandable were the instructions given during the experiment? | LIKERT
Q11 | How easy was it to navigate through the experiment? | LIKERT
Difficulty evaluation:
Q12A | How easy was the work during each stage of the experiment? (Workstation 1) | LIKERT
Q12B | How easy was the work during each stage of the experiment? (Workstation 2) | LIKERT
Q12C | How easy was the work during each stage of the experiment? (Workstation 3) | LIKERT
Q13 | How difficult was it to complete the assigned tasks? | LIKERT
Q14A | How often did you have difficulties identifying defects in the objects? (Workstation 1) | LIKERT
Q14B | How often did you have difficulties identifying defects in the objects? (Workstation 2) | LIKERT
Q14C | How often did you have difficulties identifying defects in the objects? (Workstation 3) | LIKERT
Certainty evaluation:
Q15A | How often did you feel uncertain about the defects you identified? (Workstation 1) | LIKERT
Q15B | How often did you feel uncertain about the defects you identified? (Workstation 2) | LIKERT
Q15C | How often did you feel uncertain about the defects you identified? (Workstation 3) | LIKERT
Visualization methodology evaluation:
Q16A | What do you think about the size of the defects (faults) in the objects? (Workstation 1) | LIKERT
Q16B | What do you think about the size of the defects (faults) in the objects? (Workstation 2) | LIKERT
Q16C | What do you think about the size of the defects (faults) in the objects? (Workstation 3) | LIKERT
Q17A | How intuitive did you find the workstation for visualizing the object when identifying defects? (Workstation 1) | LIKERT
Q17B | How intuitive did you find the workstation for visualizing the object when identifying defects? (Workstation 2) | LIKERT
Q17C | How intuitive did you find the workstation for visualizing the object when identifying defects? (Workstation 3) | LIKERT
Q18A | How would you rate the workstation for accurate (quality) visualization and detection of defects? (Workstation 1) | LIKERT
Q18B | How would you rate the workstation for accurate (quality) visualization and detection of defects? (Workstation 2) | LIKERT
Q18C | How would you rate the workstation for accurate (quality) visualization and detection of defects? (Workstation 3) | LIKERT
Experiment, defect, and 3D object evaluation:
Q19 | How would you evaluate the impact of the defects on the overall integrity of the objects? (Is their presence detrimental to the functionality of the objects, in your opinion?) | LIKERT
Q20 | Were the defects consistent across all objects? | LIKERT
Q21 | Were there many unique defects? | LIKERT
Q22 | How well did your interest hold during the experiment? | LIKERT
Q23 | When did you get bored and start to lose focus? (Workstation number, object number, elapsed time) | OA
Q24 | How often did the design of the object influence your ability to identify defects? | LIKERT
Visibility evaluation:
Q25A | How visible were the defects from different angles or perspectives? (Workstation 1) | LIKERT
Q25B | How visible were the defects from different angles or perspectives? (Workstation 2) | LIKERT
Q25C | How visible were the defects from different angles or perspectives? (Workstation 3) | LIKERT
Q26 | How would you describe the overall visibility of defects during the experiment? | LIKERT
Feedback:
Q27 | How likely are you to provide additional feedback to improve future experiments? | LIKERT
Q28 | Do you have any comments or recommendations for the study? | OA
(MC = multiple choice; OA = open answer.)
Table 4. Inter-item correlation table. Legend is below the table.
[34 × 34 matrix of pairwise correlation coefficients between the Likert items Q6–Q22, Q25A–Q25C, Q26, and Q27, with questions Q12, Q14, Q15, Q16, Q17, and Q18 appearing as their A/B/C workstation variants and 1s on the diagonal; the individual coefficients are omitted here.]
Legend: low correlation, below 0.3; moderate correlation, between 0.3 and 0.7; high correlation, above 0.7.
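As an illustration of how such a matrix and its legend can be produced, the sketch below computes pairwise Pearson correlations over Likert responses and labels each coefficient with the thresholds above. The data layout (one row per participant, one column per item) and the values are assumptions for illustration, not the study's dataset.

```python
# Minimal sketch of building an inter-item Pearson correlation matrix and
# bucketing each coefficient with the legend of Table 4. The responses are
# hypothetical; the real matrix covers 34 items (Q6-Q27 with A/B/C variants).
import pandas as pd

answers = pd.DataFrame({          # toy 1-5 Likert responses
    "Q6":  [4, 5, 3, 4, 2, 5],
    "Q7":  [4, 4, 3, 5, 2, 5],
    "Q11": [3, 5, 2, 4, 2, 4],
})

corr = answers.corr(method="pearson")  # symmetric, with 1s on the diagonal

def bucket(r: float) -> str:
    """Classify a coefficient's magnitude using the legend above."""
    r = abs(r)
    if r < 0.3:
        return "low"
    if r <= 0.7:
        return "moderate"
    return "high"

print(corr.round(3))
print(corr.apply(lambda col: col.map(bucket)))  # per-cell legend labels
```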
Table 5. Participant categories and reasoning for the choice of participants in each category.
Category | No. of Participants | Category Description and Choice Reasoning
IT (working in the field of IT) | 23 | This field was chosen because of the general level of technical expertise with different classes of technologies. These participants can provide extensive feedback after interacting with hardware and software from an expert perspective. The bulk of the participants came from this category, representing the main qualitative part of the research data.
Mechanical engineering | 4 | This group comprises people with an in-depth understanding of the subject matter who are experienced in working with traditional CAD software and visualization methods. They can provide invaluable views on how reasonable all the introduced methods are for their work. They were added as a small control group.
3D model experts | 1 | This group includes people with 3D model creation experience from companies that work in this field. They can provide expert views on how these different visualization methods can be useful for businesses and what limitations and challenges can occur during practical work with these technologies.
Other engineering | 2 | This group includes people with engineering backgrounds who can provide invaluable feedback on the experimental process, the objects used, and the relevance of the workstations to other engineering fields.
Table 6. Collated experiment results per workstation.
Workstation 1—Stereoscopic Display
Object # | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Correct | 77 | 33 | 30 | 3 | 43 | 32 | 44 | 34 | 16
Correct, % | 81.05 | 60.00 | 83.33 | 50.00 | 59.72 | 41.03 | 91.67 | 77.27 | 80.00
Incorrect | 2 | 6 | 2 | 1 | 2 | 2 | 1 | 5 | 1
Incorrect, % | 2.11 | 10.91 | 5.56 | 16.67 | 2.78 | 2.56 | 2.08 | 11.36 | 5.00
Not found | 18 | 22 | 6 | 1 | 29 | 20 | 4 | 10 | 4
Not found, % | 18.95 | 40.00 | 16.67 | 16.67 | 40.28 | 25.64 | 8.33 | 22.73 | 20.00
Average time, s | 96.84 | 99.55 | 143.3 | 30.00 | 87.08 | 75.38 | 96.67 | 95.45 | 104.0
Times repeated | 19 | 11 | 6 | 1 | 12 | 13 | 12 | 11 | 5

Workstation 2—Holographic Display
Object # | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Correct | 19 | 41 | 35 | 54 | 18 | 57 | 39 | 28 | 38
Correct, % | 95.00 | 68.33 | 83.33 | 77.14 | 50.00 | 89.06 | 88.64 | 77.78 | 86.36
Incorrect | 1 | 2 | 2 | 5 | 3 | 6 | 4 | 3 | 2
Incorrect, % | 5.00 | 3.33 | 4.76 | 7.14 | 8.33 | 9.38 | 9.09 | 8.33 | 4.55
Not found | 1 | 19 | 7 | 16 | 18 | 7 | 5 | 8 | 6
Not found, % | 5.00 | 31.67 | 16.67 | 22.86 | 50.00 | 10.94 | 11.36 | 22.22 | 13.64
Average time, s | 115.0 | 111.3 | 87.86 | 70.36 | 95.00 | 80.00 | 55.45 | 72.78 | 48.64
Times done | 4 | 12 | 7 | 14 | 6 | 16 | 11 | 9 | 11

Workstation 3—Computer (control)
Object # | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Correct | 32 | 25 | 97 | 54 | 42 | 4 | 28 | 40 | 45
Correct, % | 91.43 | 71.43 | 95.10 | 72.00 | 58.33 | 100.0 | 100.0 | 90.91 | 86.54
Incorrect | 5 | 2 | 10 | 10 | 6 | 0 | 3 | 2 | 0
Incorrect, % | 14.29 | 5.71 | 9.80 | 13.33 | 8.33 | 0.00 | 10.71 | 4.55 | 0.00
Not found | 3 | 10 | 5 | 21 | 30 | 0 | 0 | 4 | 7
Not found, % | 8.57 | 28.57 | 4.90 | 28.00 | 41.67 | 0.00 | 0.00 | 9.09 | 13.46
Average time, s | 78.14 | 85.00 | 86.47 | 66.67 | 97.50 | 120.00 | 62.86 | 73.18 | 73.46
Times done | 7 | 7 | 17 | 15 | 12 | 1 | 7 | 11 | 13
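The collation in Table 6 is, in essence, a grouped aggregation over raw detection records. The sketch below shows one assumed way to derive the per-object statistics; the record layout and field names are illustrative, not the study's actual data format.

```python
# Minimal sketch of collating raw detection records into per-object
# statistics like those in Table 6. Assumed layout: one record per
# participant attempt at one object on one workstation; values are toys.
import pandas as pd

records = pd.DataFrame([
    {"ws": 1, "obj": 2, "shown": 5, "correct": 4, "incorrect": 0, "time_s": 96},
    {"ws": 1, "obj": 2, "shown": 5, "correct": 5, "incorrect": 1, "time_s": 84},
    {"ws": 2, "obj": 2, "shown": 5, "correct": 5, "incorrect": 0, "time_s": 115},
])

per_object = records.groupby(["ws", "obj"]).agg(
    times_done=("time_s", "size"),       # how many attempts the object received
    shown_total=("shown", "sum"),        # defects presented in total
    correct_total=("correct", "sum"),    # defects correctly found
    incorrect_total=("incorrect", "sum"),
    avg_time_s=("time_s", "mean"),
)
per_object["correct_pct"] = 100 * per_object["correct_total"] / per_object["shown_total"]
per_object["not_found_pct"] = 100 - per_object["correct_pct"]
print(per_object)
```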
Table 7. Results for each participant experience category.
Workstation 1:
Experience in the specified field, years | Number of flaws | Correct | Not found | Incorrect | Average time, s
0–5 | 69 | 72.46% | 27.54% | 1.45% | 88
6–10 | 128 | 76.56% | 23.44% | 7.03% | 110
11–15 | 72 | 59.72% | 40.28% | 11.11% | 92
16–20 | 12 | 75.00% | 25.00% | 8.33% | 103
More than 20 | 145 | 77.24% | 22.76% | 2.07% | 87
Workstation 2:
Experience in the specified field, years | Number of flaws | Correct | Not found | Incorrect | Average time, s
0–5 | 64 | 79.69% | 20.31% | 3.13% | 68
6–10 | 130 | 84.62% | 15.38% | 6.15% | 85
11–15 | 71 | 69.01% | 30.99% | 8.45% | 72
16–20 | 17 | 64.71% | 35.29% | 11.76% | 82
More than 20 | 134 | 80.60% | 19.40% | 7.46% | 81
Workstation 3:
Experience in the specified field, years | Number of flaws | Correct | Not found | Incorrect | Average time, s
0–5 | 82 | 89.02% | 10.98% | 6.10% | 84
6–10 | 129 | 83.72% | 16.28% | 4.65% | 90
11–15 | 72 | 70.83% | 29.17% | 12.50% | 71
16–20 | 14 | 85.71% | 14.29% | 7.14% | 66
More than 20 | 150 | 82.00% | 18.00% | 11.33% | 71
Table 8. Results of experienced (20+ years) participants compared to average results: % correctly found defects and average time taken to complete their work.
Metric | Workstation 1 | Workstation 2 | Workstation 3
Avg. correctly found (20+ years of experience) | 77.4% | 78.7% | 80%
Avg. correctly found (for all) | 73% | 79% | 82%
Avg. time taken, s (20+ years of experience) | 81 | 72 | 71
Avg. time taken, s (for all) | 95 | 78 | 79
Table 9. Participants who selected the maximum value of 5 for questions Q12A, Q12B, and Q12C, respectively.
Question # | Participant # | Count
Q12A | 17, 18, 23, 25, 27, 28, 30 | 7
Q12B | 1, 3, 4, 5, 8, 9, 11, 14, 15, 16, 18, 19, 20, 21, 22, 23, 25, 27, 28 | 19
Q12C | 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30 | 28
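The samples in Tables 10–12 below follow directly from this table: filter the participants who gave the top rating for a workstation, then average their results for it. The sketch shows one assumed way to do this cross-referencing; both frames are toy data (participant 99 is fictional), and the field names are assumptions.

```python
# Minimal sketch of cross-referencing survey answers with experiment
# results: keep participants who rated Q12A at the maximum of 5, then
# average their workstation-1 performance. Toy data, not the study's.
import pandas as pd

survey = pd.DataFrame({"participant": [17, 18, 99], "Q12A": [5, 5, 3]})
results = pd.DataFrame({
    "participant": [17, 18, 99],
    "correct_pct": [64.29, 92.31, 70.00],
    "avg_time_s": [78, 110, 50],
})

top_raters = survey.loc[survey["Q12A"] == 5, "participant"]
sample = results[results["participant"].isin(top_raters)]

print(f"n = {len(sample)}, "
      f"mean correct = {sample['correct_pct'].mean():.2f}%, "
      f"mean time = {sample['avg_time_s'].mean():.0f} s")
```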
Table 10. A sample of participant experiment results for workstation 1, for those who answered that it was extremely easy to work with workstation 1 (Q12A = 5).
Participant # | Number of Defects | Correct | Correct, % | Not Found | Not Found, % | Incorrect | Av. Time, s
17 | 14 | 9 | 64.29 | 5 | 35.71 | 0 | 78
18 | 13 | 12 | 92.31 | 1 | 7.69 | 0 | 110
23 | 14 | 8 | 57.14 | 6 | 42.86 | 0 | 133
25 | 13 | 8 | 61.54 | 5 | 38.46 | 0 | 97
27 | 12 | 10 | 83.33 | 2 | 16.67 | 2 | 30
28 | 13 | 11 | 84.62 | 2 | 15.38 | 0 | 88
30 | 15 | 13 | 86.67 | 2 | 13.33 | 0 | 33
Average correct: 75.70% | Average time: 81 s
Table 11. A sample of participant experiment results for workstation 2, for those who answered that it was extremely easy to work with workstation 2 (Q12B = 5).
Participant # | Number of Defects | Correct | Correct, % | Not Found | Not Found, % | Incorrect | Av. Time, s
1 | 15 | 12 | 80.00 | 3 | 20.00 | 3 | 180
3 | 13 | 12 | 92.31 | 1 | 7.69 | 0 | 70
4 | 13 | 9 | 69.23 | 4 | 30.77 | 2 | 78
5 | 13 | 12 | 92.31 | 1 | 7.69 | 2 | 23
8 | 15 | 8 | 53.33 | 7 | 46.67 | 0 | 80
9 | 14 | 9 | 64.29 | 5 | 35.71 | 0 | 32
11 | 14 | 13 | 92.86 | 1 | 7.14 | 0 | 108
14 | 12 | 10 | 83.33 | 2 | 16.67 | 1 | 18
15 | 12 | 5 | 41.67 | 7 | 58.33 | 5 | 60
16 | 12 | 10 | 83.33 | 2 | 16.67 | 0 | 67
18 | 13 | 11 | 84.62 | 2 | 15.38 | 0 | 90
19 | 12 | 9 | 75.00 | 3 | 25.00 | 0 | 77
20 | 13 | 10 | 76.92 | 3 | 23.08 | 0 | 133
21 | 13 | 13 | 100.00 | 0 | 0.00 | 2 | 97
22 | 15 | 14 | 93.33 | 1 | 6.67 | 1 | 30
23 | 15 | 14 | 93.33 | 1 | 6.67 | 0 | 90
25 | 13 | 7 | 53.85 | 6 | 46.15 | 0 | 93
27 | 15 | 11 | 73.33 | 4 | 26.67 | 1 | 32
28 | 13 | 12 | 92.31 | 1 | 7.69 | 1 | 72
Average correct: 78.70% | Average time: 75 s
Table 12. Sample of participant experiment results for workstation 3, for those who answered that it was extremely easy to work with workstation 3 (Q12C = 5).
Participant # | Number of Defects | Correct | Correct, % | Not Found | Not Found, % | Incorrect | Av. Time, s
3 | 16 | 15 | 93.75 | 1 | 6.25 | 0 | 87
4 | 16 | 14 | 87.50 | 2 | 12.50 | 2 | 103
5 | 16 | 15 | 93.75 | 1 | 6.25 | 1 | 107
2 | 15 | 11 | 73.33 | 4 | 26.67 | 1 | 120
1 | 12 | 12 | 100.00 | 0 | 0.00 | 2 | 180
6 | 12 | 12 | 100.00 | 0 | 0.00 | 1 | 52
7 | 12 | 10 | 83.33 | 2 | 16.67 | 0 | 87
8 | 12 | 11 | 91.67 | 1 | 8.33 | 0 | 65
9 | 12 | 9 | 75.00 | 3 | 25.00 | 0 | 90
10 | 12 | 12 | 100.00 | 0 | 0.00 | 0 | 40
11 | 17 | 13 | 76.47 | 4 | 23.53 | 1 | 120
12 | 14 | 10 | 71.43 | 4 | 28.57 | 0 | 38
14 | 17 | 10 | 58.82 | 7 | 41.18 | 0 | 27
16 | 17 | 13 | 76.47 | 4 | 23.53 | 0 | 30
17 | 17 | 14 | 82.35 | 3 | 17.65 | 1 | 103
18 | 17 | 15 | 88.24 | 2 | 11.76 | 1 | 62
19 | 15 | 11 | 73.33 | 4 | 26.67 | 2 | 70
20 | 15 | 14 | 93.33 | 1 | 6.67 | 1 | 105
21 | 15 | 13 | 86.67 | 2 | 13.33 | 2 | 47
22 | 14 | 11 | 78.57 | 3 | 21.43 | 0 | 33
23 | 14 | 11 | 78.57 | 3 | 21.43 | 0 | 63
24 | 14 | 11 | 78.57 | 3 | 21.43 | 0 | 83
25 | 17 | 10 | 58.82 | 7 | 41.18 | 2 | 110
26 | 15 | 13 | 86.67 | 2 | 13.33 | 5 | 130
27 | 15 | 13 | 86.67 | 2 | 13.33 | 2 | 37
28 | 17 | 15 | 88.24 | 2 | 11.76 | 2 | 120
29 | 16 | 15 | 93.75 | 1 | 6.25 | 1 | 65
30 | 15 | 14 | 93.33 | 1 | 6.67 | 3 | 12
Average correct: 83.88% | Average time: 78 s
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
