Article

The Influence of Display Parameters and Display Devices over Spatial Ability Test Answers in Virtual Reality Environments

1 Department of Electrical Engineering and Information Systems, Faculty of Information Technology, University of Pannonia, 8200 Veszprem, Hungary
2 Department of Mathematics, Faculty of Information Technology, University of Pannonia, 8200 Veszprem, Hungary
3 Department of Basic Technical Studies, Faculty of Engineering, University of Debrecen, 4028 Debrecen, Hungary
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(2), 526; https://doi.org/10.3390/app10020526
Submission received: 2 December 2019 / Accepted: 8 January 2020 / Published: 10 January 2020
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Abstract

This manuscript analyzes the influence of display parameters and display devices on the spatial skills of users in virtual reality environments. For this, the authors developed a virtual reality application that tests the spatial skills of the users. 240 students used an LG desktop display and 61 students used the Gear VR for the tests. Statistical data are generated while the users take the tests, and the following factors are logged by the application and evaluated in this manuscript: virtual camera type, virtual camera field of view, virtual camera rotation, contrast ratio parameters, the existence of shadows, and the device used. The probabilities of correct answers were analyzed with respect to these factors using the logistic regression (logit) analysis method. The influences and interactions of all factors were analyzed. The perspective camera, a lighter contrast ratio, no or large camera rotations, and the use of the Gear VR greatly and positively influenced the probability of correct answers on the tests. Therefore, for the assessment of spatial ability in virtual reality, the use of these parameters and this device presents the optimal user-centric human–computer interaction practice.

1. Introduction

Spatial ability is an important skill in the modern age, as many jobs require well-developed spatial ability [1]. A well-developed spatial ability allows a person to understand the spatial relations between objects and space. It is possible to improve this skill by solving simple geometric problems, and tests were created with this goal in mind during the last century in paper-based formats. Multiple types of these tests exist, but the authors chose three of them for this research: the Mental Rotation Test (MRT) [2], where the user has to rotate objects in their mind; the Purdue Spatial Visualization Test (PSVT) [2,3], where the user also has to rotate objects in their mind; and the Mental Cutting Test (MCT) [2,4], where the user has to cut and also rotate objects in their mind.
Since these tests mostly exist on paper, and virtual reality can improve the learning skills [5] and even the spatial skills [6,7,8] of students, a question arises: What happens when these tests are taken in VR? When doing the tests in VR, new factors must be considered such as Human–Computer Interaction (HCI) and the display parameters in the virtual environment. While the latter makes it possible to see the application with different graphical and display settings, the former differs from application to application. The behavior and interaction of humans towards the computer depends on the tasks, the available devices and even the design of the application [9]. With the use of HCI principles, applications for different purposes can be designed, such as learning applications [10,11], mobile applications [12], helping with assistive technologies [13], entertainment applications [14], interfaces in VR [15], and even the virtual environments themselves [16]. The latter paper also states that there is no perfect HCI principle for VR, as this greatly depends on the type of the application. However, according to them, user-centric development proved to be useful in the past [17].
While the authors of this manuscript did not follow a user-centric development process, their aim was to find an optimal preference that is the most user-centric for virtual environments. For this, the authors developed an application for the mentioned spatial ability tests and added options that allow users to change the display parameters on each device. Since the developed application is in VR, its interaction differs greatly from the paper-based methods used previously: instead of paper, the tests are solved with a keyboard and a mouse on a desktop display, and with the touchpad on the Samsung Gear VR.
In addition to measuring spatial ability, the application also logs which display factors are used in the tests with each device. These factors are the virtual camera type, its field of view, its rotation, the contrast ratio between the foreground object and the background, and lastly whether shadows are turned on or off. These factors were examined to determine the most user-centric virtual environment and to see how these new factors in the virtual environment influence the users in achieving better results during the spatial ability tests.
After gathering data from 240 students who used a desktop display and 61 who used the Gear VR, the authors analyzed them. In the first round, the authors evaluated each factor on its own. The examination then continued with pairs of factors; however, factors which did not have a significant influence on the results were deliberately left out from further examination. After completing the analysis in pairs, the authors continued by evaluating triplets and, lastly, four factors together. The maximum of four was due to only four factors having significant influences. With these analyses, the authors determined the effects of each display parameter and device separately. However, these factors depend on and even influence each other; therefore, the interactions between them were also investigated.
When the analyses were complete, the authors concluded that with the Gear VR the users had a higher probability of correct answers, and that the perspective camera type, lighter contrast, and no or large rotations also increased this probability. These were not the only factors affecting the users' results but, according to the analysis, this combination is the optimal user-centric option in VR for assessing spatial skills.
This manuscript is structured as follows: In the next section, the research questions (RQs) and hypotheses (Hs) are stated. In Section 3, the materials and methods are presented. Section 4 deals with the results, while Section 5 discusses them. In the last section, the conclusions are summarized.

2. Research Questions and Hypotheses

The goal of the authors with this manuscript is to see whether these display parameters and devices positively or negatively affect the interaction of the human with the computer. During the research, the authors set up seven RQs and Hs. The RQs are the following:
  • RQ1: Does the change of camera type influence the probability of correct answers on the tests?
  • RQ2: Does the change of camera field of view influence the probability of correct answers on the tests?
  • RQ3: Does the change of camera rotation influence the probability of correct answers on the tests?
  • RQ4: Does the change of contrast ratio influence the probability of correct answers on the tests?
  • RQ5: Does turning the shadows on or off influence the probability of correct answers on the tests?
  • RQ6: Does changing the device used influence the probability of correct answers on the tests?
  • RQ7: What are the optimal preferences for these factors for achieving the largest probability of correct answers on the tests?
The following are the Hs:
  • H1: The camera type used does not affect the probability of correct answers; opposite to: the perspective type positively influences the probability of correct answers on the tests.
  • H2: Changing the camera field of view has no effect on the probability of correct answers; opposite to: changing the camera field of view to a higher degree can positively influence the probability of correct answers on the tests.
  • H3: Camera rotation does not affect the probability of correct answers; opposite to: changing the camera rotation increases the probability of correct answers on the tests.
  • H4: The contrast ratio does not affect the probability of correct answers; opposite to: changing the contrast ratio from higher to lower values can positively influence the probability of correct answers on the tests.
  • H5: The presence of shadows does not affect the probability of correct answers on the tests; opposite to: the probabilities of correct answers differ in the presence and in the absence of shadows.
  • H6: Using a desktop display or the Gear VR, the probabilities of correct answers are equal; opposite to: using the Gear VR, the probability of correct answers is larger.
  • H7: Based on the previous hypotheses, the optimal preferences are the perspective camera type, a higher field of view, some rotation, and a lower contrast ratio, while also using the Gear VR.

3. Materials and Methods

3.1. The Applied Device

The application was developed at the University of Pannonia in the C# programming language, using the Unity game engine [18], version 2018.3.14f1. The development phase was carried out during the first half of 2019. During development, a problem arose: Unity could not provide the correct contrast values for the background and the object, because of differing color spaces. Unity uses the sRGB color space, which first had to be converted to the linear RGB space so that the relative luminance values could be calculated. First, let us define the sRGB and RGB spaces:
$$R_{sRGB} \in [0;1], \quad G_{sRGB} \in [0;1], \quad B_{sRGB} \in [0;1]$$
$$R_{RGB} \in [0;1], \quad G_{RGB} \in [0;1], \quad B_{RGB} \in [0;1]$$
After defining this, the authors determined the albedo color of a Unity object using a built-in function. However, there is ambient lighting in a Unity scene, and the albedo color of the object does not contain the ambient lighting; therefore, the color values had to be corrected accordingly. To get the correct color of the object, the transformation
$$w_{corr} = w \cdot w_{ambient\,light} \cdot Intensity_{ambient\,light}$$
was used, where w is the color of the object and w_corr is its corrected value. The next step was the conversion to the RGB color space. A new variable q was defined, containing the R, G, B values just as w contained the sR, sG and sB values. The conversion was performed by the following equation:
$$q = \begin{cases} \dfrac{w_{corr}}{12.92} & \text{if } w_{corr} \le 0.0405 \\[6pt] \left( \dfrac{w_{corr} + 0.055}{1.055} \right)^{2.4} & \text{otherwise} \end{cases}$$
After obtaining the correct R, G, B values of each object, the relative luminance values (L) can be calculated according to the following equation:
$$L = 0.2126 R + 0.7152 G + 0.0722 B$$
When the relative luminance of both the background and the (foreground) object has been calculated, the contrast ratio can finally be computed as:
$$contrast = \frac{L_{foreground} + 0.05}{L_{background} + 0.05}$$
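Expressed in code, the conversion pipeline above looks roughly as follows. This is a minimal C# sketch assuming Unity's Color type; the helper names and the way the ambient correction values are passed in are illustrative, not the authors' actual implementation.

```csharp
using UnityEngine;

public static class ContrastUtil
{
    // Convert one gamma-encoded sRGB channel to linear RGB (the q equation above).
    static float SrgbToLinear(float w)
    {
        return w <= 0.0405f
            ? w / 12.92f
            : Mathf.Pow((w + 0.055f) / 1.055f, 2.4f);
    }

    // Relative luminance L = 0.2126 R + 0.7152 G + 0.0722 B of a corrected color.
    static float RelativeLuminance(Color c)
    {
        return 0.2126f * SrgbToLinear(c.r)
             + 0.7152f * SrgbToLinear(c.g)
             + 0.0722f * SrgbToLinear(c.b);
    }

    // Contrast ratio between the foreground object and the background.
    public static float ContrastRatio(Color foreground, Color background,
                                      Color ambientLight, float ambientIntensity)
    {
        // w_corr = w * w_ambientlight * Intensity_ambientlight (per channel).
        Color fg = foreground * ambientLight * ambientIntensity;
        Color bg = background * ambientLight * ambientIntensity;

        return (RelativeLuminance(fg) + 0.05f) / (RelativeLuminance(bg) + 0.05f);
    }
}
```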
When this contrast problem was solved, the development of the application was finished. As mentioned in the Introduction, the authors focused on three types of tests: the MRT, the MCT and the PSVT. Therefore, the application contains these three test types; three examples can be seen in Figure 1. Each test features ten rounds of questions about spatial ability. The application runs on two operating systems: the desktop version requires Windows 7 or newer, and the Samsung Gear VR (SM-R322 [19]) version requires Android 7.0 or newer. The authors used a Samsung Galaxy S6 Edge+ smartphone [20] for the Gear VR version of the tests.

3.2. Data Collection

Testing and data gathering with the application were conducted at the University of Pannonia and at the University of Debrecen in September 2019. At the University of Pannonia, the Gear VR was used, and the tests were carried out by 61 students. At the University of Debrecen, 240 students used an LG 20M37A (19.5″) desktop display [21] for the tests. The VR testers consisted of Information Technology (IT) and non-IT students, while the students who used the desktop display were either Mechanical Engineering (ME) or Architectural Engineering (AE) students, mostly in their first year.
The VR testing took three weeks, and the students tested in sequential order, as only one Gear VR device was available at the University of Pannonia. A different number of students tested each day: the smallest number of testers was two and the largest was eight. As more desktop displays were available at the University of Debrecen, testing there was different: the tests were done in a computer laboratory. Because the laboratory was small, students were divided into twenty groups of twenty students each. Each test type had to be done three times: first, the students did the MRT test type once, then the MCT once, then the PSVT once. After that, they started from the beginning with different display parameters.
The testers had to repeat the test types three times because each test option had different display parameters. However, testing three times was not enough to cover all parameters for influences and interactions. Therefore, to test all parameters, the authors used randomization: each test randomized two or three different parameters. The authors believe that, with such a large number of testers, a sufficiently large number of results was obtained for each parameter.
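The paper does not include the assignment code, so the following is only a hypothetical C# sketch of how such per-test randomization could look in Unity. The parameter pools match the levels reported in Section 4; the class and method names, the rotation axis, and the choice of which parameters a given test randomizes are assumptions.

```csharp
using UnityEngine;

// Hypothetical per-test display-parameter randomizer (not the authors' code).
public class DisplayParameterRandomizer : MonoBehaviour
{
    // Parameter levels as reported in Section 4 of the paper.
    static readonly float[] FieldsOfView   = { 45f, 60f, 75f, 90f };
    static readonly float[] Rotations      = { -45f, -30f, -15f, 0f, 15f, 30f, 45f };
    static readonly float[] ContrastRatios = { 1.5f, 3f, 7f, 14f, 21f };

    public void ApplyRandomParameters(Camera cam)
    {
        // Camera type: orthographic or perspective.
        cam.orthographic = Random.value < 0.5f;

        // Field of view only exists for the perspective camera type.
        if (!cam.orthographic)
            cam.fieldOfView = FieldsOfView[Random.Range(0, FieldsOfView.Length)];

        // Camera rotation (which axis is rotated is an assumption here).
        float rot = Rotations[Random.Range(0, Rotations.Length)];
        cam.transform.rotation = Quaternion.Euler(0f, rot, 0f);

        // The chosen contrast ratio would be applied to the background and
        // object materials, and the shadows toggled on the scene lights.
    }
}
```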
The following information was saved into a .csv file by the application in real time (a hypothetical example row is shown after this list):
  • The technical information about the display parameters in each test: the virtual camera type, its field of view, its rotation, the contrast ratio in the scene, and whether the shadows are turned on or off.
  • The user-related information: Their gender, age, primary hand, number of years at a university and what their major is. This category is not focused on in this manuscript.
  • The test type, its completion time and the number of correct and incorrect answers.
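For illustration, one plausible layout of such a log row is shown below; the column names, their ordering and the sample values are hypothetical, as the paper does not specify the exact schema.

```text
camera_type,field_of_view,rotation,contrast_ratio,shadows,device,gender,age,primary_hand,years_at_university,major,test_type,completion_time_s,correct,incorrect
Perspective,60,0,1.5,on,GearVR,female,20,right,1,IT,MRT,312,7,3
```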

3.3. Data Analysis

In this manuscript, the influence of the display parameters was investigated by focusing on the probabilities of the correct answers.
The problems were grouped into sets of ten (one test of ten questions per set), and every tester completed nine of these sets. The display parameters belonging to the tests within a given set were fixed. Therefore, the authors had 240 × 9 + 61 × 9 = 2709 relative frequencies for estimating the probability of correct answers given the parameter values. The aim was to clarify the effects of the parameters. As the focus was on probabilities, the authors used logistic regression analysis to verify the influence of the parameters [22].
Logistic regression is a well-established statistical method for detecting the effects of factors on their own, additively, or by taking interactions into account. The probabilities are transformed by a monotonically increasing, invertible transformation into the interval (−∞, ∞), and linear regression models are fitted to the transformed values. The estimated coefficients of the variables are tested as to whether they can be considered zero (no effect) or whether they differ significantly from zero (an effect exists). The sign of the estimated value also reveals the direction of the effect, i.e., an increase or decrease in probability. The authors investigated the effects of the variables one by one, in pairs, in triplets, and as a quartet. The numerical calculations were carried out with the statistical program package R [23]. The results are presented in the next section.
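Concretely, the transformation referred to here is the logit (as the Abstract's "logistic regression (logit) analysis" indicates). For a probability p and explanatory variables x₁, …, x_k (the display parameters), the fitted model has the form

$$\operatorname{logit}(p) = \ln\frac{p}{1-p} \in (-\infty, \infty), \qquad \operatorname{logit}(p) = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k,$$

so each "no effect" null hypothesis corresponds to testing whether the coefficient β_i equals zero.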

4. Results

This section is broken into four subsections. In the first subsection, the effect of each single factor is analyzed on its own. In the second subsection, the interactions of two factors are allowed and analyzed. The third subsection investigates the interactions of triplets of factors, and the fourth subsection the interactions of all factors.

4.1. Results of the Analyses of a Single Factor’s Effects

In this subsection the authors considered the effects of the display parameters and devices separately. This subsection is broken into six subsubsections, each featuring a different factor.

4.1.1. Analysis of the Camera Type

The first factor to be analyzed was the virtual camera type. Virtual cameras can be one of two types. The first is the perspective camera, which works similarly to the human eye. The second is the orthographic camera, which uses orthographic projection, representing 3D objects in two dimensions. Due to the random choice of camera type, 1418 tests were done with the perspective camera and 1291 with the orthographic camera. The numerical results of the users can be seen in Table A1. The authors applied logistic regression analysis to the probability of correct answers; the results can be seen in Table 1.
On the basis of p-value = 2.57 × 10⁻¹², the difference is significant. The authors conclude that the type of the camera has an influence on the probability of correct answers: the perspective camera type produces better results than the orthographic camera type. The authors numerically computed the average rates of correct answers for the orthographic and perspective cameras, obtaining 0.606 and 0.642, respectively, as can be seen in Table A1.

4.1.2. Analysis of the Camera Field of View

The second factor to be analyzed was the virtual camera's field of view. The default field of view in Unity is 60°, but this value can easily be changed in the application. The authors were interested in multiple fields of view: 45°, 60°, 75° and 90°. 1049 tests were done with 45°, 120 with 60°, 134 with 75°, and 115 with 90°; a further 1291 tests were done with the orthographic camera. The numerical rates belonging to these groups are shown in Table A2. The results of the logistic regression analysis can be seen in Table 2.
The base value was −1, denoting the orthographic camera type. All coefficients are estimated to be positive. The p-values (in the Pr(>|z|) column) suggest that the probability is significantly greater for every field-of-view option of the perspective camera (45°, 60°, 75°, 90°). Moreover, from the results presented in Table 1, the authors already know that the perspective camera produces better probabilities. Therefore, the authors eliminated the data belonging to the value −1 and investigated whether the different field-of-view levels of the perspective camera result in different probabilities. The results of the analysis based on this restricted data set are contained in Table 3.
The base level was 45°. The signs of the estimated coefficients show that each further level is better but, except for 90°, the difference is not significant. On the basis of p-value = 0.0225, the 90° field of view presents a significantly better probability of correct answers than the others at the 0.05 level. However, the difference is not significant at the 0.01 level. As the amount of data is quite large, the authors accept that the effect of the variable named Field of View is not significant in the case of the perspective camera type.

4.1.3. Analysis of the Camera Rotation

The next factor to be analyzed was the camera rotation. The authors wanted to see whether the rotation of the virtual scene influenced the results of the users. 106 tests were performed with a rotation of −45°, 294 with −30°, 294 with −15°, 1251 with no rotation, 312 with 15°, 313 with 30° and 139 with 45°. The numerical values of the average rates are presented in Table A3. The results of the logistic regression analysis are summarized in Table 4.
The base level was −15°. According to the p-values, the −45°, 0° and 45° rotations presented significant increases in the probability of correct answers compared to −15°. The last of these is not significant at the 1% level, but it is close to being significant. To further validate the results, the authors grouped the rotations into two groups. The first group, named "INC_R", contained the rotations that positively affect the probabilities: −45°, 0° and 45°. The other group, named "NO_R", contained the rotations that did not have a significant positive effect.
1496 tests fell into the INC_R group and 1213 into NO_R. The numerical results are presented in Table A4. Analyzing the results of these two groups through logistic regression yields the results presented in Table 5.
The reference point was NO_R. Table 5 indicates that the two groups previously defined in this subsubsection had significantly different probabilities. These results prove that the groups can be distinguished from each other, and the authors will use these groups later in the case of variable camera rotation.

4.1.4. Analysis of the Contrast Ratio

After analyzing the results of the camera rotation, the influence of the contrast ratio between the foreground object and the background was measured. The authors considered five contrast ratio values: 1.5:1, 3:1, 7:1, 14:1 and 21:1, with 1066, 167, 1121, 164 and 191 tests, respectively. See Table A5 for the numerical average rates. Similarly to the previous cases, the regression coefficients were computed by logistic regression; the test statistics (testing whether the coefficients are zero) and the appropriate p-values can be seen in Table 6.
Compared to the 1.5:1 contrast ratio, the 7:1 and 14:1 contrast ratios produce significantly worse probabilities, and even the 21:1 contrast ratio is worse at the 0.05 level. Therefore, the authors grouped the contrast ratios into two groups: INC_C, which contains 1.5:1 and 3:1, and NO_C, which contains 7:1, 14:1 and 21:1. INC_C has 1233 tests and NO_C has 1476, as seen in Table A6. For the check of the equality of the probabilities of correct answers by logistic regression analysis, see Table 7.
According to p-value = 2.56 × 10⁻⁶, the INC_C contrast ratio group gives a significantly better probability of correct answers than the NO_C group. This means that the brighter scenes give better results.

4.1.5. Analysis of the Shadows

The next factor analyzed was the shadows in the scene. This variable has only two levels: the shadows are either turned on or turned off. 1414 tests were done with the former and 1295 with the latter. The numerical data (average rates and dispersions) are presented in Table A7, and the results of the logistic regression analysis in Table 8.
The reference point was Turned off. According to p-value = 0.204, the shadows do not affect the probability of correct answers on the tests.

4.1.6. Analysis of the Device Used

The last factor to analyze was the device used. Two devices were used in the tests, an LG 20M37A (19.5”) desktop display and the Samsung Gear VR. 2160 tests ran on the desktop display and 549 on the Gear VR as can be seen from Table A8. The results obtained using the logistic regression method are presented in Table 9.
The reference point was the desktop display. According to p-value = 0.00677 and the estimated coefficient of 0.07595, the probability of correct answers is significantly larger in the case of the Gear VR.

4.2. Results of Analyses of Effects of Two Factors

In this subsection, the authors analyzed the effects of the display parameters in pairs. Variables that in themselves do not affect the probabilities were excluded; therefore, neither the influence of shadows nor that of the camera field of view (see Table 3 and Table 8) is examined further. The authors analyzed the effects of camera type, camera rotation, contrast ratio, and device used, investigating their influences in pairs. This section is broken into six subsubsections, each analyzing one pair of factors using logistic regression analysis and the ANOVA method.

4.2.1. Analysis of the Pair Camera Type and Rotation

The first pair was camera type and camera rotation. To make the calculations easier and more precise, the same camera rotation groups were used as previously described in Section 4.1.3. Every level of camera type was paired with every level of camera rotation group. The number of groups involved is equal to 4: the first, named "Orthographic, NO_R", contains 611 tests. The others are "Orthographic, INC_R", with 680 tests; "Perspective, NO_R", with 602 tests; and "Perspective, INC_R", with 816 tests. For the numerical average rates and dispersions belonging to the groups, see Table A9; for the results of the logistic regression analysis, see Table 10.
According to the p-values, every other group is significantly better than "Orthographic, NO_R". There is no significant difference between "Orthographic, INC_R" and "Perspective, NO_R"; this difference was calculated by means of a t-test and has a p-value of 0.7126. The group "Perspective, INC_R" has the best results, and the improvement compared to "Perspective, NO_R" is significant (p-value = 0.0033).
Moreover, the authors investigated an additive model on the basis of the variables camera type and camera rotation by allowing their interaction. The result can be seen in Table 11.
The estimated coefficients indicate that both camera type and camera rotation have an influence, and p-value = 0.0459 means that there is an interaction as well. The negative sign of −0.08995 was a surprise to the authors, but it reflects the rate of improvement: the improvements of the two factors cannot simply be summed, and the combined result is slightly lower than the purely additive prediction. Table 12 shows the results of their interactions.
According to p-value = 0.0459, the authors concluded that the model that takes the interactions into account provides a better probability.
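To make the fitted model concrete, the interaction model of this subsubsection can be written as follows (0/1 indicator coding assumed; the coefficient labels are illustrative):

$$\operatorname{logit}(p) = \beta_0 + \beta_1 x_{persp} + \beta_2 x_{INC\_R} + \beta_{12}\, x_{persp}\, x_{INC\_R},$$

where x_persp indicates the perspective camera and x_INC_R the INC_R rotation group; per Table 11, the estimated interaction coefficient is the −0.08995 whose negative sign is discussed above.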

4.2.2. Analysis of the Pair Camera Type and Contrast Ratio

The second pair to be analyzed was the camera type and the contrast ratio. As above, the previously formed contrast ratio groups were used. In "Orthographic, NO_C", 696 tests were performed; in "Orthographic, INC_C", 595 tests; in "Perspective, NO_C", 780; and in "Perspective, INC_C", 638 tests. The numerical results are presented in Table A10, and the results of the logistic regression analysis in Table 13.
“Orthographic, NO_C”, on the basis of Table 13, was found to be significantly worse than the others. There are no significant differences among the other three.
The results of the additive model allowing interactions are summarized in Table 14.
As can be seen, both variables have a significant influence on the probability of correct answers, and interaction also exists (p-value = 8.91 × 10⁻⁵). Table 15 shows the results of their interaction.
Due to p-value = 8.895 × 10⁻⁵, it was concluded that the model that takes interactions into account provided a significantly better probability.

4.2.3. Analysis of the Pair Camera Type and Device Used

The next pair examined was the camera type and the device used. 1065 tests were performed in “Orthographic, desktop display”, 226 tests in “Orthographic, Gear VR”, 1095 tests in “Perspective, desktop display”, and 323 tests in “Perspective, Gear VR”. The numerical data and the results of the logistic regression analysis are presented in Table A11 and Table 16, respectively.
Using a desktop display with an orthographic virtual camera produces the worst results. With the Gear VR and an orthographic virtual camera, there is no significant improvement. However, performing the tests with a perspective virtual camera, whether on a desktop display or a Gear VR, is significantly better than on the desktop display with an orthographic camera. The desktop display and the Gear VR cannot be distinguished from each other when a perspective camera is used. Table 17 presents the results of the logistic regression analysis of the additive model allowing interactions:
As can be seen, beyond the camera type, the device used has no significant influence (p-value = 0.0647), and there is no significant interaction (p-value = 0.6164). Table 18 shows the results of their interactions.
According to p-value = 0.6164 in the column of Pr (>Chi), the authors concluded that the model that takes the interactions into account does not provide a better probability than the additive model.

4.2.4. Analysis of the Pair Camera Rotation and Contrast Ratio

The next pair to be analyzed was the camera rotation and the contrast ratio. For this, the authors used groups for both variables as defined in Section 4.1.3 and Section 4.1.4. The “NO_R, NO_C” group comprised 1005 tests, the “NO_R, INC_C” group 208 tests, the “INC_R, NO_C” group 471 tests, and the “INC_R, INC_C” group 1025 tests, as can be seen from Table A12. The results of the logistic regression analysis can be seen in Table 19.
According to the results, every other group has a higher estimated probability than "NO_R, NO_C". However, the t-test does not indicate a significant difference between "NO_R, NO_C" and "NO_R, INC_C", while "INC_R, NO_C" and "INC_R, INC_C" do differ significantly from "NO_R, NO_C". These last two groups are not distinguishable from each other. Table 20 shows the results while also taking into account the interaction of the variables.
As can be seen, the influence of INC_R is strong. The influence of INC_C is smaller (the estimated coefficient is 0.09323), and there is no significant interaction (p-value = 0.1667).
Table 21 shows the results of their interactions.
According to p-value = 0.1661, the authors concluded that the model which takes the interactions into account does not provide a better probability than the additive model.

4.2.5. Analysis of the Pair Camera Rotation and Device Used

The next pair to be examined was the camera rotation and device used. For the camera rotation, the authors used the same groups that had been formed previously, NO_R and INC_R. 1062 tests were carried out for the group named “NO_R, desktop display”, 151 for the group “NO_R, Gear VR”, 1098 for the group “INC_R, desktop display”, and 398 for the group “INC_R, Gear VR”.
Similarly to earlier comparisons, the numerical values of the average rates in the mentioned groups can be seen in Table A13, and the results of the logistic regression analysis are contained in Table 22.
The reference point was "NO_R, desktop display". As the sign of the estimated coefficient is negative, the probabilities of correct answers are slightly smaller in the group "NO_R, Gear VR", but the difference is not significant (p-value = 0.385). The other two groups present significantly greater probabilities. There is a significant difference between "NO_R, Gear VR" and "INC_R, Gear VR". Applying the additive model with interactions shows the same phenomenon, as presented in Table 23:
As can be seen, the camera rotation has a significant influence (p-value = 2.97 × 10⁻⁶), but the influence of the device used is not significant in itself (p-value = 0.3847). However, there is a significant interaction at the 0.05 level of significance (p-value = 0.0326). The reader can check the numerical values of the average rates of the involved groups in Table A14. Table 24 shows the results of their interactions.
Due to p-value = 0.0329, the authors concluded that the model that takes the interactions into account provides a significantly better probability than the additive model.

4.2.6. Analysis of the Pair Contrast Ratio and the Device Used

The last pair to be examined is the contrast ratio and the device used. For the contrast ratio, the authors used the same groups as before, “NO_C” and “INC_C”. After creating the pairings, the “NO_C, desktop display” comprised 1183 tests, “NO_C, Gear VR” comprised 293 tests, “INC_C, desktop display” comprised 977 tests, and “INC_C, Gear VR” comprised 256 tests, as can be seen from Table A14. The results of the logistic regression analysis are presented in Table 25.
The point of reference was "NO_C, desktop display". According to the results, the Gear VR does not present a significant improvement with the NO_C contrast group. With the INC_C group, however, the results are significantly better. Moreover, comparing "INC_C, desktop display" and "INC_C, Gear VR", although the difference between the average rates is large (see Table A14), it is not significant (p-value = 0.06332). This is due to the relatively small number of tests (256).
The difference between the average rates of "NO_C, desktop display" and "INC_C, desktop display" is significant because the sample sizes are larger; the authors suspect that, with more Gear VR tests, the corresponding difference would also become significant. Performing the logistic regression analysis for the additive model containing the variables contrast ratio and device used on the available data set shows the same phenomenon (Table 26).
Table 26 indicates that the contrast ratio has a significant effect (p-value = 0.000583), but aside from the described influence, the influence of the device is not significant (p-value = 0.396699) and interaction has not been detected (p-value = 0.094199). Table 27 shows the results of their interactions.
Due to p-value = 0.09403, the authors concluded that the model that takes the interactions into account does not provide a significantly better probability in contrast to the additive model.

4.3. Results of Analyses Investigating the Effects of Three Factors

After examining the factors in pairs, the next step was to analyze them in triplets. The first triplet to be analyzed was the camera type, rotation, and contrast ratio; as in the previous sections, the latter two are grouped.
As in the previous section, the authors performed the analyses in two different ways. The first was as follows: the authors created all possible triplets from the levels of the variables and performed the logistic regression analysis for these triplets.
Accordingly, the authors created subsubsections for these triplets, each analyzing a different one; four such subsubsections were needed.

4.3.1. Analysis of the Triplet Camera Type, Rotation and Contrast Ratio

The first triplet to be examined was the camera type, camera rotation and the contrast ratio. The authors used the same rotation and contrast ratio groups as before. The numerical results of these groups as defined by the triplets are presented in Table A15, and the results of the logistic regression analysis are presented in Table 28.
The reference point was "Orthographic, NO_R, NO_C". As can be seen, the values of "Orthographic, INC_R, INC_C" and of all Perspective groups are significantly better than those of "Orthographic, NO_R, NO_C". The authors checked and concluded that the "Orthographic, INC_R, INC_C" and the Perspective rows could not be distinguished from one another.
With the ANOVA dispersion analysis method, a comparison was made between the additive logistic regression models without interactions (I), the models with interactions between two variables (II), and the models allowing interactions among all three variables (III). When comparing I and II, the model taking into account the interactions of camera type and camera rotation, as well as of camera type and contrast ratio, gave significantly better results than the logistic regression model without interactions (p-value = 0.001258). The interaction of camera rotation and contrast ratio was not significant, as presented in the previous section; therefore, the authors omitted it in this section.
The case is the same when comparing I and III (p-value = 0.002387).
However, when comparing II and III, it can be concluded that the model in which the interaction of all three variables is built in does not give better results than model II (p-value = 0.2049). The results of the ANOVA can be seen in Table 29.
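The paper does not spell out the test statistic behind these ANOVA comparisons; assuming the standard likelihood-ratio comparison of nested logistic regression models (R's anova with a χ² test), the statistic is the deviance difference

$$\chi^2 = D_{\mathrm{I}} - D_{\mathrm{II}} \;\sim\; \chi^2_{\,df_{\mathrm{I}} - df_{\mathrm{II}}},$$

where D denotes the residual deviance and df the residual degrees of freedom of each model; the quoted p-values would then be the tail probabilities of this distribution.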
Therefore, the authors present the results of the logistic regression analysis in the case of the most appropriate model, which is model II. In Table 30, the effects of the factors according to model II can be seen.
This means that every variable has an influence, and the interaction of camera type and contrast ratio is significant. To double-check the results, the authors compared the model containing the interactions of camera type and rotation, as well as of camera type and contrast ratio, with the model containing only the interaction of camera type and contrast ratio. This reduction did not yield a worse result (p-value = 0.99).

4.3.2. Analysis of the Triplet Camera Type, Rotation and Device Used

The next triplet to be examined was the camera type, rotation, and device used. The camera rotation factor used the same groups as before, defined in Section 4.1.3. Then, 8 groups were formed based on all possible levels of the variables.
As before, the numerical values of average ratios and further statistical characteristics are presented in Table A16. Performing the logistic regression analysis, the authors obtained the results presented in Table 31.
As can be seen, "Orthographic, NO_R, Gear VR" shows a slight, but not significant, decrease compared to "Orthographic, NO_R, desktop display". The others are significantly better. The smallest improvement is in "Perspective, NO_R, Gear VR", and the greatest improvements are in "Perspective, INC_R, desktop display" and "Perspective, INC_R, Gear VR". The difference between the two best cases is not significant (p-value = 0.3012).
Comparing the model that does not allow interactions (I), the model that allows the interactions of two variables (II), and the model that allows the interactions of all variables (III): model II is significantly better than model I (p-value = 0.01175), while model III is not significantly better than model I (p-value = 0.0502). Between model II and model III, there is no significant difference (p-value = 0.7445). Analyzing model II by logistic regression gives the results presented in Table 32:
When looking at the results, it can be concluded that the influence of the device itself disappeared (p-value = 0.2649), but its interactions are still relevant (see p-values 0.0322 and 0.0286). This means that the device should be taken into account when analyzing the data.

4.3.3. Analysis of the Triplet Camera Type, Contrast Ratio, and Device Used

The next triplet to be examined was the camera type, contrast ratio, and the device used. As before, the contrast ratio is encoded into the groups defined previously in Section 4.1.4.
The numerical values of the descriptive statistics for the resulting groups are presented in Table A17, and the results of the logistic regression analysis are presented in Table 33.
The reference point was "Orthographic, NO_C, desktop display". The logistic regression analysis indicates that "Orthographic, NO_C, Gear VR" was not significantly better than "Orthographic, NO_C, desktop display". The remaining groups were significantly better; however, they were not distinguishable from each other.
When comparing the model with the variables camera type, contrast ratio, and device used without interactions (I), the model allowing interactions in pairs (II), and the model allowing interactions among all three variables (III), the results were as follows: II is significantly better than I (p-value = 0.0001132), III is significantly better than I (p-value = 0.000546), and there is no significant difference between II and III (p-value = 0.1792). When using model II, the logistic regression analysis results are the following, as seen in Table 34:
This means that the influence of every parameter is significant, and the interaction of the camera type and contrast ratio is also detectable (p-value = 0.000113).

4.3.4. Analysis of the Triplet Camera Rotation, Contrast Ratio, and Device Used

The last triplet to be analyzed is the camera rotation, contrast ratio and the device used. The former two are grouped as before. First, the authors formed 2 × 2 × 2 = 8 groups according to the levels of the variables. The statistical characteristics of the resulting groups are presented in Table A18. The results of the logistic regression analysis for these 8 groups are presented in Table 35.
The results show that there is no detectable difference between "NO_R, NO_C, desktop display" and "NO_R, NO_C, Gear VR" (p-value = 0.82046). The difference is also not significant in the row of "NO_R, INC_C, Gear VR". This requires clarification, as the average rates of correct answers are 0.602 and 0.561; however, very little data is available for "NO_R, INC_C, Gear VR" (23 tests), and the dispersion is high. Further tests would be needed to reject the hypothesis of equal average rates. In the other cases, there is a significant improvement. The largest improvement is in the case of "INC_R, INC_C, Gear VR"; moreover, the results of this case are significantly better than those of any other case.
When comparing the model investigating the effects of the variables without interactions (I), the model allowing the interaction of rotation and device used (II), and the model allowing the interactions of all three variables (III), the following conclusions can be drawn: II is significantly better than I (p-value = 0.02568), and III is significantly better than I (p-value = 0.0001853) and also significantly better than II (p-value = 0.0006448). When using model III, the logistic regression analysis yields the results shown in Table 36:
According to this table, the unique influence of the device disappears, but it has an interaction with the contrast ratio, and even triple interactions can be detected.
Concluding the analyses of the effects of triplets of variables, the following factors should be considered: camera type, rotation, contrast ratio, and the device used. The results show that the most important interactions are those between:
  • Camera type–Camera rotation
  • Camera type–Contrast ratio
  • Camera rotation–Device used
  • Camera rotation–Contrast ratio–Device used

4.4. Results of the Analyses of the Effects of Four Variables

After concluding the analysis of triplets of factors, one more analysis remains: the analysis of all four significant factors together, i.e., the camera type, camera rotation, contrast ratio, and the device used. Constructing the groups from all possible quartets of the variable levels yields 2⁴ = 16 groups. The numerical values of the descriptive statistics belonging to these groups are presented in Table A19. Applying logistic regression analysis, the authors obtained the results presented in Table 37.
The reference point was "Orthographic, NO_R, NO_C, desktop display". According to the logistic regression analysis, there is a significant improvement, compared to "Orthographic, NO_R, NO_C, desktop display", in every Perspective group except "Perspective, NO_R, INC_C, Gear VR" (the groups "Orthographic, NO_R, INC_C, desktop display" and "Orthographic, INC_R, INC_C, desktop display" likewise show no significant improvement). The greatest improvements are in "Perspective, INC_R, NO_C, desktop display" and "Perspective, INC_R, INC_C, Gear VR"; these two cases represent the best numeric values in Table A19, and their average rates can be considered equal (p-value = 0.7627).
Next, a comparison was carried out between the different additive models: the model that uses the four variables without any interactions (I), the model that allows pairwise interactions (II), the model that allows interactions of three variables (III), and finally the model that allows the interactions of all variables (IV). After comparison on the basis of ANOVA, model II was significantly better than model I (p-value = 0.0004441), model III was significantly better than model I (p-value = 2.147 × 10⁻⁶), and model III was also significantly better than model II (p-value = 0.0003342). Finally, model IV was not significantly better than model III (p-value = 0.1701).
According to model III, the logistic regression analysis results are as presented in Table 38:
Now it can be concluded that the influence of the device itself cannot be detected (p-value = 0.987212); however, its interactions can be. In the end, after examining each factor on its own and in pairs, triplets and fours, "Perspective, INC_R, INC_C, Gear VR" gave the optimal results.

5. Discussion

The transition from paper to virtual is always difficult. The virtual environment presents a different type of interaction to the user: interacting with virtual spaces is not the same as interacting with real objects, as these environments are built differently and have graphics unlike reality. The goal of the authors was to make this HCI easier and, in order to achieve that, to present an optimal solution.
According to the research data, an optimal solution was found, and the research questions were clearly answered. The authors demonstrated the effects of the parameters: H5 was accepted, H1, H4 and H6 were rejected, and H2, H3 and H7 presented mixed cases. These are elaborated in the following subsections.

5.1. Rejected Hypotheses—Detected Influences

The first rejected hypothesis to discuss is H1. Originally, the authors suspected that the perspective camera type influenced the probability of correct answers. The null hypothesis was that there is no effect, and the alternative hypothesis was that there exists some effect. According to Table 1, the latter proved to be the case: the perspective camera type positively influenced the probability of correct answers. However, as can be seen from Table 10, Table 11, Table 13, Table 14, Table 16, Table 17, Table 28, Table 30, Table 31, Table 32, Table 33, Table 34, Table 37 and Table 38, the results changed slightly when multiple factors were taken into account. This is due to VR being a complex, synthetic environment: in virtual reality, no scene exists with only a single factor. Therefore, it is safe to assume that when the users are taken into the virtual space, multiple factors should be considered; that is why the authors analyzed all factors in pairs and triplets. After examining everything, the perspective camera was demonstrated to be superior in all tests, and it was always an important factor. This forms T1: when using a perspective camera, the performance of the users was significantly (p-value = 2.57 × 10⁻¹²) influenced in terms of increasing their probability of answering correctly; in pairs, it exhibited significant (p-value = 0.0459) interactions with the −45°, 0°, 45° camera rotations and significant (p-value = 8.91 × 10⁻⁵) interactions with the 1.5:1 and 3:1 contrast ratios; in triplets, it had no significant interactions; but in fours, it exhibited significant (p-value = 0.000133) interactions with the −45°, 0°, 45° camera rotations, the 1.5:1, 3:1 contrast ratios, and the Gear VR.
T1 is interesting because, on the paper-based tests, the objects are drawn using orthographic projection. This raises the question: if the tests on paper were redrawn in perspective projection, would that also change the probability of the testers' correct answers?
The next rejected hypothesis to look at is H4, which deals with contrast ratios. According to Table 6, smaller contrast ratios (1.5:1 and 3:1) produce a better probability of correct answers than larger ones. This was also confirmed by Table 7, in which the contrast ratios were grouped into two groups. In Table 13, Table 14, Table 19, Table 20, Table 25, Table 26, Table 28, Table 30 and Table 33, Table 34, Table 35, Table 36, Table 37 and Table 38, the contrast ratios were examined in detail. Since this factor has a great influence on virtual environments, the interaction of the contrast ratio groups was also assessed. These facts comprise T4: the contrast ratios of 1.5:1 and 3:1 significantly (p-value = 2.56 × 10⁻⁶) influence the performance of the users by increasing their probability of answering correctly; in pairs, they significantly (p-value = 8.91 × 10⁻⁵) interact with the perspective camera type; in triplets, they significantly (p-value = 0.000237) interact with the −45°, 0°, 45° camera rotations and the Gear VR; and in fours, they significantly (p-value = 0.000133) interact with the perspective camera type, the −45°, 0°, 45° camera rotations, and the Gear VR.
After forming T4, let us think back to the paper-based tests. There are no contrast ratios on the paper-based tests: everything is white, and only the edges of the objects are black. The authors wanted to make the virtual environment similar to the paper-based tests, with brighter colors. However, the idea of using even brighter contrast ratios was discarded, as the testers who used the Gear VR with the 1.5:1 ratio said that their eyes hurt after a few minutes. It is interesting to note that this contrast ratio provided the best results, even numerically.
The last rejected hypothesis is H6. This was one of the most interesting hypotheses to the authors, as they wanted to compare the desktop display and the Gear VR headset. According to Table 9, the Gear VR had a significant influence (improvement) over the desktop display, as its HCI level was different. Since the display device used was one of the most important and influential factors, it was also examined in pairs, triplets and fours, and its interactions were assessed in Table 16, Table 17, Table 22, Table 23, Table 25, Table 26 and Table 31, Table 32, Table 33, Table 34, Table 35, Table 36, Table 37 and Table 38. When the display device is investigated together with more than one other factor, its unique influence disappears in most cases, but its interactions remain. On the basis of these facts, T6 is formed: in contrast to the desktop display, the use of the Gear VR significantly (p-value = 0.00677) increased the probability of correct answers on the tests; in pairs, it significantly (p-value = 0.0326) interacted with the −45°, 0°, 45° camera rotations; in triplets, it significantly (p-value = 0.000237) interacted with the −45°, 0°, 45° camera rotations and the 1.5:1 and 3:1 contrast ratios; and in fours, it significantly (p-value = 0.000133) interacted with the perspective camera type, the −45°, 0°, 45° camera rotations, and the 1.5:1, 3:1 contrast ratios.

5.2. Mixed Cases

The first mixed-case hypothesis is H2. Recall that the null hypothesis is that there is no effect; rejecting it therefore means that an effect exists. This case is interesting because the camera has two types: for orthographic cameras, the field of view is undefined, while for perspective cameras the authors analyzed the 45°, 60°, 75° and 90° fields of view. Since T1 states that the perspective camera type is better than the orthographic type, and Table 2 also proves this with respect to the fields of view, the main comparison was carried out only among the fields of view of the perspective camera, as seen in Table 3. From these results, T2 is formed: the field of view of 90° influenced the performance of the users by increasing their probability of answering correctly. This difference is significant at the 0.05 level but not at the 0.01 level (p-value = 0.0225).
The second mixed hypothesis was H3. It is mixed because the authors hypothesized that some rotation would help the users. This proved true for large rotations, and also when no rotation occurred; but for rotations smaller than 45° in a given direction, the results showed the hypothesis to be false. It can be stated that the greatest influence on the results occurs with no rotation or with a large rotation; for this, see Table 4 and Table 5. Since the rotation is influential, it was also analyzed in pairs, triplets and fours, and its interactions were assessed in Table 10, Table 12, Table 19, Table 20, Table 22, Table 23, Table 28, Table 30, Table 31, Table 32 and Table 35, Table 36, Table 37 and Table 38. T3 was formed: rotating the camera by −45°, 0° or 45° significantly (p-value = 1.12 × 10⁻¹⁰) increases the users' probability of answering correctly; in pairs, it significantly (p-value = 0.0459) interacts with the perspective camera type; in triplets, it significantly (p-value = 0.000237) interacts with the 1.5:1, 3:1 contrast ratios and the Gear VR; and in fours, it has significant (p-value = 0.000133) interactions with the perspective camera type, the 1.5:1, 3:1 contrast ratios, and the Gear VR.
The third and final mixed-case hypothesis was H7, which concerns the optimal preferences in virtual environments for achieving the best HCI results. It is only a mixed case because the authors originally included the camera's field of view in this hypothesis. However, because the fields of view of the perspective camera produced similar results, the field of view was discarded from the optimal preferences and only the camera type was kept. Therefore, T7 is formed: based on the previous theses, the optimal preference for virtual environments to positively influence the correct answers on spatial ability tests, by affecting the human–computer interaction, is a perspective camera type, a camera rotation of −45°, 0° or 45°, a contrast ratio of 1.5:1 or 3:1, and the Gear VR display device.

5.3. Accepted Hypothesis—No Differences Detected

The first and only accepted hypothesis is H5, which deals with the presence of shadows in the virtual environment. The paper-based tests have no shadows; thus, the authors investigated whether their presence changed the probability of correct answers. According to Table 8, the shadows did not significantly influence the probability of correct answers; therefore, they were omitted from the multiple-factor analyses. On this basis, T5 is formed: the shadows of the object do not significantly (p-value = 0.204) influence the performance of the users.

6. Conclusions

Designing virtual environments is not an easy task, even when the virtual environment is a virtual version of something that exists in reality. The aim of the authors was to identify the factors that positively influence users in VR.
The authors analyzed the virtual camera type, field of view and rotation, the contrast ratio between the foreground object and the background, the presence of shadows, and the display device used. 240 students carried out the tests on a desktop display and 61 on the Gear VR, each performing each test three times. The results were analyzed using the logistic regression analysis method.
The measurements and results show that these display factors and devices can influence the interaction between humans and the computer. While each factor has a unique influence, it must be kept in mind that no virtual environment consists of only one of them; the factors always act together with several others. On this basis, the factors were also analyzed in pairs, in triplets and in fours. Some factors lost their unique influence, but interactions emerged. These interactions, however, change with the number of factors examined: many persist, but most lose their significance. Virtual environments should therefore be designed carefully.
In conclusion, a carefully designed virtual environment can positively influence the users in their tasks: the results show that the perspective camera type, a camera rotation of −45°, 0° or 45°, a contrast ratio of 1.5:1 or 3:1, and the Gear VR display device proved to be the optimal factors in virtual environments. When users work in a virtual space configured with these factors and this display device, their probability of correct interaction, and consequently their results, increases.

Author Contributions

Conceptualization, T.G. and C.S.-L.; methodology, T.G., C.S.-L., E.O.-M. and E.P.; software, T.G.; validation, T.G., C.S.-L., E.O.-M. and E.P.; formal analysis, T.G. and E.O.-M.; investigation, T.G. and E.O.-M.; resources, T.G. and E.P.; data curation, T.G. and E.O.-M.; writing—original draft preparation, T.G.; writing—review and editing, T.G., C.S.-L., E.O.-M. and E.P.; visualization, T.G.; supervision, C.S.-L.; project administration, T.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Mónika Szeles and Lóránt Horváth for their help in developing the application and creating the 3D models for the MCT test mode, respectively. The authors would also like to thank everyone who helped by testing the application.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Numerical results of the users regarding the camera type.

Camera Type | Number of Tests | Average Rate | Dispersion
Orthographic | 1291 | 0.606 | 0.240
Perspective | 1418 | 0.642 | 0.247

Table A2. Numerical results of the users regarding the camera field of view.

Field of View | Number of Tests | Average Rate | Dispersion
−1 ¹ | 1291 | 0.606 | 0.240
45° | 1049 | 0.639 | 0.245
60° | 120 | 0.637 | 0.280
75° | 134 | 0.640 | 0.256
90° | 115 | 0.673 | 0.218

¹ Orthographic camera. Field of view is undefined in this case.

Table A3. Numerical results of the users regarding the camera rotation.

Camera Rotation | Number of Tests | Average Rate | Dispersion
−45° | 106 | 0.651 | 0.235
−30° | 294 | 0.603 | 0.241
−15° | 294 | 0.606 | 0.232
0° | 1251 | 0.639 | 0.247
15° | 312 | 0.611 | 0.247
30° | 313 | 0.607 | 0.240
45° | 139 | 0.632 | 0.259

Table A4. Numerical results of the users regarding the camera rotation groups.

Camera Rotation Group | Number of Tests | Average Rate | Dispersion
IMP_R | 1496 | 0.639 | 0.247
NO_R | 1213 | 0.607 | 0.240

Table A5. Numerical results of the users regarding the contrast ratio.

Contrast Ratio | Number of Tests | Average Rate | Dispersion
1.5:1 | 1066 | 0.639 | 0.247
3:1 | 167 | 0.628 | 0.249
7:1 | 1121 | 0.615 | 0.239
14:1 | 164 | 0.609 | 0.247
21:1 | 191 | 0.616 | 0.248

Table A6. Numerical results of the users regarding the contrast ratio groups.

Contrast Ratio Group | Number of Tests | Average Rate | Dispersion
IMP_C | 1233 | 0.637 | 0.241
NO_C | 1476 | 0.614 | 0.247

Table A7. Numerical results of the users regarding the shadows in the scene.

Shadows | Number of Tests | Average Rate | Dispersion
Turned on | 1414 | 0.628 | 0.242
Turned off | 1295 | 0.621 | 0.247

Table A8. Numerical results of the users regarding the device used.

Device Used | Number of Tests | Average Rate | Dispersion
Desktop display | 2160 | 0.620 | 0.242
Gear VR | 549 | 0.643 | 0.252

Table A9. Numerical results of the users regarding the camera type and camera rotation.

Camera Type and Rotation | Number of Tests | Average Rate | Dispersion
Orthographic, NO_R | 611 | 0.583 | 0.235
Orthographic, INC_R | 680 | 0.626 | 0.243
Perspective, NO_R | 602 | 0.631 | 0.243
Perspective, INC_R | 816 | 0.650 | 0.249

Table A10. Numerical results of the users regarding the camera type and contrast ratio.

Camera Type and Contrast Ratio | Number of Tests | Average Rate | Dispersion
Orthographic, NO_C | 696 | 0.585 | 0.235
Orthographic, INC_C | 595 | 0.630 | 0.244
Perspective, NO_C | 780 | 0.640 | 0.244
Perspective, INC_C | 638 | 0.644 | 0.250

Table A11. Numerical results of the users regarding the camera type and device used.

Camera Type and Device Used | Number of Tests | Average Rate | Dispersion
Orthographic, desktop display | 1065 | 0.601 | 0.237
Orthographic, Gear VR | 226 | 0.626 | 0.252
Perspective, desktop display | 1095 | 0.638 | 0.246
Perspective, Gear VR | 323 | 0.655 | 0.251

Table A12. Numerical results of the users regarding the camera rotation and contrast ratio.

Camera Rotation and Contrast Ratio | Number of Tests | Average Rate | Dispersion
NO_R, NO_C | 1005 | 0.603 | 0.237
NO_R, INC_C | 208 | 0.625 | 0.253
INC_R, NO_C | 471 | 0.638 | 0.248
INC_R, INC_C | 1025 | 0.640 | 0.246

Table A13. Numerical results of the users regarding the camera rotation and device used.

Camera Rotation and Device Used | Number of Tests | Average Rate | Dispersion
NO_R, Desktop display | 1062 | 0.607 | 0.238
NO_R, Gear VR | 151 | 0.603 | 0.251
INC_R, Desktop display | 1098 | 0.632 | 0.245
INC_R, Gear VR | 398 | 0.659 | 0.251

Table A14. Numerical results of the users regarding the contrast ratio and device used.

Contrast Ratio and Device Used | Number of Tests | Average Rate | Dispersion
NO_C, Desktop display | 1183 | 0.611 | 0.239
NO_C, Gear VR | 293 | 0.626 | 0.249
INC_C, Desktop display | 977 | 0.630 | 0.245
INC_C, Gear VR | 256 | 0.663 | 0.254

Table A15. Numerical results of the users regarding the camera type, rotation and contrast ratio.

Camera Type, Rotation and Contrast Ratio | Number of Tests | Average Rate | Dispersion
Orthographic, NO_R, NO_C | 511 | 0.579 | 0.234
Orthographic, NO_R, INC_C | 100 | 0.606 | 0.235
Orthographic, INC_R, NO_C | 185 | 0.603 | 0.235
Orthographic, INC_R, INC_C | 495 | 0.635 | 0.246
Perspective, NO_R, NO_C | 494 | 0.628 | 0.238
Perspective, NO_R, INC_C | 108 | 0.643 | 0.268
Perspective, INC_R, NO_C | 286 | 0.661 | 0.254
Perspective, INC_R, INC_C | 530 | 0.644 | 0.247

Table A16. Numerical results of the users regarding the camera type, rotation and device used.

Camera Type, Rotation and Device Used | Number of Tests | Average Rate | Dispersion
Orthographic, NO_R, desktop display | 541 | 0.584 | 0.232
Orthographic, NO_R, Gear VR | 70 | 0.574 | 0.255
Orthographic, INC_R, desktop display | 524 | 0.619 | 0.242
Orthographic, INC_R, Gear VR | 156 | 0.650 | 0.248
Perspective, NO_R, desktop display | 521 | 0.631 | 0.243
Perspective, NO_R, Gear VR | 81 | 0.629 | 0.246
Perspective, INC_R, desktop display | 574 | 0.644 | 0.248
Perspective, INC_R, Gear VR | 242 | 0.664 | 0.252

Table A17. Numerical results of the users regarding the camera type, contrast ratio and device used.

Camera Type, Contrast Ratio and Device Used | Number of Tests | Average Rate | Dispersion
Orthographic, NO_C, desktop display | 585 | 0.583 | 0.232
Orthographic, NO_C, Gear VR | 111 | 0.595 | 0.251
Orthographic, INC_C, desktop display | 480 | 0.623 | 0.242
Orthographic, INC_C, Gear VR | 115 | 0.657 | 0.251
Perspective, NO_C, desktop display | 598 | 0.639 | 0.243
Perspective, NO_C, Gear VR | 182 | 0.645 | 0.247
Perspective, INC_C, desktop display | 497 | 0.637 | 0.248
Perspective, INC_C, Gear VR | 141 | 0.668 | 0.257

Table A18. Numerical results of the users regarding the camera rotation, contrast ratio and device used.

Camera Rotation, Contrast Ratio and Device Used | Number of Tests | Average Rate | Dispersion
NO_R, NO_C, desktop display | 877 | 0.602 | 0.236
NO_R, NO_C, Gear VR | 128 | 0.611 | 0.245
NO_R, INC_C, desktop display | 185 | 0.633 | 0.248
NO_R, INC_C, Gear VR | 23 | 0.561 | 0.287
INC_R, NO_C, desktop display | 306 | 0.638 | 0.246
INC_R, NO_C, Gear VR | 165 | 0.638 | 0.253
INC_R, INC_C, desktop display | 792 | 0.630 | 0.245
INC_R, INC_C, Gear VR | 233 | 0.673 | 0.248

Table A19. Numerical results of the users regarding the camera type, rotation, contrast ratio and device used.

Camera Type, Rotation, Contrast Ratio and Device Used | Number of Tests | Average Rate | Dispersion
Orthographic, NO_R, NO_C, desktop display | 450 | 0.579 | 0.232
Orthographic, NO_R, NO_C, Gear VR | 61 | 0.578 | 0.251
Orthographic, NO_R, INC_C, desktop display | 91 | 0.612 | 0.230
Orthographic, NO_R, INC_C, Gear VR | 9 | 0.544 | 0.296
Orthographic, INC_R, NO_C, desktop display | 135 | 0.598 | 0.229
Orthographic, INC_R, NO_C, Gear VR | 50 | 0.615 | 0.253
Orthographic, INC_R, INC_C, desktop display | 389 | 0.626 | 0.246
Orthographic, INC_R, INC_C, Gear VR | 106 | 0.667 | 0.246
Perspective, NO_R, NO_C, desktop display | 427 | 0.626 | 0.238
Perspective, NO_R, NO_C, Gear VR | 67 | 0.641 | 0.236
Perspective, NO_R, INC_C, desktop display | 94 | 0.654 | 0.264
Perspective, NO_R, INC_C, Gear VR | 14 | 0.571 | 0.291
Perspective, INC_R, NO_C, desktop display | 171 | 0.670 | 0.255
Perspective, INC_R, NO_C, Gear VR | 115 | 0.648 | 0.253
Perspective, INC_R, INC_C, desktop display | 403 | 0.633 | 0.244
Perspective, INC_R, INC_C, Gear VR | 127 | 0.679 | 0.252

References

1. Best Jobs with Good Visual and Spatial Skills | LoveToKnow. Available online: https://jobs.lovetoknow.com/Best_Jobs_with_Good_Visual_and_Spatial_Skills (accessed on 23 November 2019).
2. Ault, H.K.; John, S. Assessing and enhancing visualization skills of engineering students in Africa: A comparative study. Eng. Des. Graph. J. 2010, 74, 12–20.
3. Branoff, T.J.; Connolly, P.E. The addition of coordinate axes to the Purdue Spatial Visualization Test—Visualization of rotations: A study at two universities. In Proceedings of the ASEE Annual Conference, Charlotte, NC, USA, 20–23 June 1999.
4. Bosnyák, Á.; Nagy-Kondor, R. The spatial ability and spatial geometrical knowledge of university students majored in mathematics. Acta Didact. Univ. Comen. 2008, 8, 1–25.
5. Wilson, A. Analysis of Current Virtual Reality Methods to Enhance Learning in Education. Sel. Comput. Res. Pap. 2019, 8, 61–66.
6. Torner, J.; Apliste, F.; Brigos, M. Virtual Reality application to improve spatial ability of engineering students. In Proceedings of the WSCG 2016—24th Conference on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic, 30 May–3 June 2016.
7. Molina-Carmona, R.; Pertegal-Felices, M.L.; Jimeno-Morenilla, A.; Mora-Mora, H. Assessing the Impact of Virtual Reality on Engineering Students' Spatial Ability. In The Future of Innovation and Technology in Education: Policies and Practices for Teaching and Learning Excellence; Emerald Publishing Limited: Bingley, UK, 2018; pp. 171–185.
8. Molina-Carmona, R.; Pertegal-Felices, M.L.; Jimeno-Morenilla, A.; Mora-Mora, H. Virtual Reality learning activities for multimedia students to enhance spatial ability. Sustainability 2018, 10, 1074.
9. Kortum, P. HCI beyond the GUI: Design for Haptic, Speech, Olfactory, and Other Nontraditional Interfaces; Elsevier: Burlington, MA, USA, 2008.
10. Mirauda, D.; Capece, N.; Erra, U. StreamflowVL: A Virtual Fieldwork Laboratory that Supports Traditional Hydraulics Engineering Learning. Appl. Sci. 2019, 9, 4972.
11. Al Mahdi, Z.; Naidu, V.R.; Kurian, P. Analyzing the Role of Human Computer Interaction Principles for E-Learning Solution Design. In Smart Technologies and Innovation for a Sustainable Future; Springer: Cham, Switzerland, 2019; pp. 41–44.
12. Liu, P.; Fels, S.; West, N.; Görges, M. Human Computer Interaction Design for Mobile Devices Based on a Smart Healthcare Architecture. arXiv 2019, arXiv:1902.03541.
13. Zhu, Z.; Pan, W.; Ai, X.; Zhen, R. Research on Human-Computer Interaction Design of Bed Rehabilitation Equipment for the Elderly. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Washington, DC, USA, 24–28 July 2019; pp. 275–286.
14. Ding, T.; Zhu, D. Applications of the human-computer interaction interface to MOBA mobile games. In Proceedings of the 10th International Conference on Signal Processing Systems, Singapore, 17 April 2019.
15. Kharoub, H.; Lataifeh, M.; Ahmed, N. 3D User Interface Design and Usability for Immersive VR. Appl. Sci. 2019, 9, 4861.
16. Sutcliffe, A.G.; Poullis, C.; Gregoriades, A.; Katsouri, I.; Tzanavari, A.; Herakleous, K. Reflecting on the Design Process for Virtual Reality Applications. Int. J. Hum. Comput. Interact. 2019, 35, 168–179.
17. Drettakis, G.; Roussou, M.; Reche, A.; Tsingos, N. Design and evaluation of a real-world virtual environment for architecture and urban planning. Presence Teleoper. Virtual Environ. 2007, 16, 318–332.
18. Unity Real-Time Development Platform | 3D, 2D VR & AR Visualizations. Available online: https://unity.com/ (accessed on 24 November 2019).
19. Gear VR SM-R322 Support & Manual | Samsung Business. Available online: https://www.samsung.com/us/business/support/owners/product/gear-vr-sm-r322/ (accessed on 24 November 2019).
20. Samsung Galaxy S6 Edge Plus—The Official Samsung Galaxy Site. Available online: https://www.samsung.com/global/galaxy/galaxy-s6-edge-plus/ (accessed on 24 November 2019).
21. LG LED Monitor 20M37A | 19.5 LG LED Monitor—LG Electronics UK. Available online: https://www.lg.com/uk/monitors/lg-20M37A (accessed on 24 November 2019).
22. Hosmer, D.W., Jr.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression; John Wiley & Sons: Hoboken, NJ, USA, 2013; Volume 398.
23. R: The R Project for Statistical Computing. Available online: https://www.r-project.org/ (accessed on 24 November 2019).
Figure 1. (a) The MRT test with an orthographic camera, 7:1 contrast ratio, shadows turned on and no extra rotation; (b) The MCT test with a perspective camera, a 60° field of view, 3:1 contrast ratio, shadows turned on and no extra rotation; (c) The PSVT test with a perspective camera, a 45° field of view, 1.5:1 contrast ratio, shadows turned off and no extra rotation.
Table 1. The results of logistic regression by investigating the effect of camera type.

Camera Type | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.62452 | 0.01597 | 39.113 | <2 × 10⁻¹⁶
Perspective | 0.15670 | 0.02239 | 6.999 | 2.57 × 10⁻¹²

Table 2. The results of the logistic regression analysis concerning the camera field of view as variable.

Field of View | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.62452 | 0.01597 | 39.113 | <2 × 10⁻¹⁶
45° | 0.14423 | 0.02423 | 5.952 | 2.64 × 10⁻⁹
60° | 0.14988 | 0.05595 | 2.679 | 0.00739
75° | 0.15676 | 0.05351 | 2.930 | 0.00339
90° | 0.27871 | 0.05830 | 4.781 | 1.75 × 10⁻⁶

Table 3. Logistic regression analysis results of the camera field of view without the orthographic field of view.

Field of View | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.768746 | 0.018226 | 42.179 | <2 × 10⁻¹⁶
60° | 0.005655 | 0.056640 | 0.100 | 0.9205
75° | 0.012530 | 0.054223 | 0.231 | 0.8172
90° | 0.134485 | 0.058957 | 2.281 | 0.0225

Table 4. Logistic regression analysis results of the camera rotation.

Camera Rotation | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.620184 | 0.033839 | 18.327 | <2 × 10⁻¹⁶
−45° | 0.176147 | 0.065570 | 2.686 | 0.00722
−30° | −0.009832 | 0.047978 | −0.205 | 0.83763
0° | 0.147985 | 0.037690 | 3.926 | 8.62 × 10⁻⁵
15° | 0.015393 | 0.047061 | 0.327 | 0.74360
30° | 0.012121 | 0.046995 | 0.258 | 0.79646
45° | 0.145807 | 0.059162 | 2.465 | 0.01372

Table 5. The results of the logistic regression analysis by investigating the camera rotation groups.

Camera Rotation Group | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.62498 | 0.01664 | 37.57 | <2 × 10⁻¹⁶
IMP_R | 0.14503 | 0.02248 | 6.45 | 1.12 × 10⁻¹⁰

Table 6. Logistic regression analysis results of the contrast ratio.

Contrast Ratio | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.77059 | 0.01801 | 42.779 | <2 × 10⁻¹⁶
3:1 | −0.05359 | 0.04876 | −1.099 | 0.2717
7:1 | −0.11437 | 0.02493 | −4.588 | 4.47 × 10⁻⁶
14:1 | −0.12867 | 0.04912 | −2.620 | 0.0088
21:1 | −0.09264 | 0.04554 | −2.034 | 0.0419

Table 7. Logistic regression analysis results of the contrast ratio groups.

Contrast Ratio Group | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.65750 | 0.01504 | 43.712 | <2 × 10⁻¹⁶
IMP_C | 0.10584 | 0.02250 | 4.703 | 2.56 × 10⁻⁶

Table 8. Results of the logistic regression analysis of the shadows.

Shadows | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.69046 | 0.01612 | 42.830 | <2 × 10⁻¹⁶
Turned on | 0.02864 | 0.02239 | 1.271 | 0.204

Table 9. Logistic regression results of the device used.

Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.69002 | 0.01249 | 55.231 | <2 × 10⁻¹⁶
Gear VR | 0.07595 | 0.02805 | 2.708 | 0.00677

Table 10. Logistic regression analysis results by pairing the camera type and rotation.

Camera Type and Rotation | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.52784 | 0.02309 | 22.864 | <2 × 10⁻¹⁶
Orthographic, INC_R | 0.18323 | 0.03199 | 5.728 | 1.02 × 10⁻⁸
Perspective, NO_R | 0.19943 | 0.03334 | 5.982 | 2.21 × 10⁻⁹
Perspective, INC_R | 0.29271 | 0.03102 | 9.437 | <2 × 10⁻¹⁶

Table 11. Logistic regression results concerning the variables camera type and rotation by allowing interactions.

Camera Type and Rotation | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.52784 | 0.02309 | 22.864 | <2 × 10⁻¹⁶
Perspective | 0.19943 | 0.03334 | 5.982 | 2.21 × 10⁻⁹
INC_R | 0.18323 | 0.03199 | 5.728 | 1.02 × 10⁻⁸
Perspective and INC_R | −0.08995 | 0.04507 | −1.996 | 0.0459

Table 12. Comparison of the variables camera type and rotation by ANOVA.

Model | Resid. Df | Resid. Dev | Df | Deviance | Pr(>Chi)
1 | 2706 | 10,070 | - | - | -
2 | 2705 | 10,066 | 1 | 3.9852 | 0.0459

Table 13. Logistic regression analysis results of the effects by pairing the camera type and contrast ratio.

Camera Type and Contrast Ratio | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.53467 | 0.02150 | 24.873 | <2 × 10⁻¹⁶
Orthographic, INC_C | 0.19765 | 0.03215 | 6.148 | 7.85 × 10⁻¹⁰
Perspective, NO_C | 0.23707 | 0.03014 | 7.867 | 3.64 × 10⁻¹⁵
Perspective, INC_C | 0.25819 | 0.03181 | 8.117 | 4.80 × 10⁻¹⁶

Table 14. Logistic regression analysis results of the additive model using the variables camera type and contrast ratio, allowing interaction.

Camera Type and Contrast Ratio | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.53467 | 0.02150 | 24.873 | <2 × 10⁻¹⁶
Perspective | 0.23707 | 0.03014 | 7.867 | 3.64 × 10⁻¹⁵
INC_C | 0.19765 | 0.03215 | 6.148 | 7.85 × 10⁻¹⁰
Perspective and INC_C | −0.17653 | 0.04505 | −3.919 | 8.91 × 10⁻⁵

Table 15. Comparison of the variables camera type and contrast ratio by ANOVA.

Model | Resid. Df | Resid. Dev | Df | Deviance | Pr(>Chi)
1 | 2706 | 10,085 | - | - | -
2 | 2705 | 10,069 | 1 | 15.358 | 8.895 × 10⁻⁵

Table 16. Logistic regression analysis results by pairing the camera type and the device used.

Camera Type and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.61105 | 0.01752 | 34.870 | <2 × 10⁻¹⁶
Orthographic, Gear VR | 0.07859 | 0.04255 | 1.847 | 0.0647
Perspective, desktop display | 0.15872 | 0.02501 | 6.347 | 2.20 × 10⁻¹⁰
Perspective, Gear VR | 0.20891 | 0.03735 | 5.593 | 2.23 × 10⁻⁸

Table 17. Logistic regression analysis results of the additive model of camera type and device used, allowing interactions.

Camera Type and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.61105 | 0.01752 | 34.870 | <2 × 10⁻¹⁶
Perspective | 0.15872 | 0.02501 | 6.347 | 2.2 × 10⁻¹⁰
Gear VR | 0.07859 | 0.04255 | 1.847 | 0.0647
Perspective and Gear VR | −0.02841 | 0.05672 | −0.501 | 0.6164

Table 18. Comparison of the variables camera type and device used by ANOVA.

Model | Resid. Df | Resid. Dev | Df | Deviance | Pr(>Chi)
1 | 2706 | 10,103 | - | - | -
2 | 2705 | 10,102 | 1 | 0.25097 | 0.6164

Table 19. Logistic regression analysis results concerning the camera rotation and contrast ratio.

Camera Rotation and Contrast Ratio | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.60941 | 0.01821 | 33.469 | <2 × 10⁻¹⁶
NO_R, INC_C | 0.09323 | 0.04483 | 2.080 | 0.0376
INC_R, NO_C | 0.14922 | 0.03235 | 4.613 | 3.97 × 10⁻⁶
INC_R, INC_C | 0.16594 | 0.02584 | 6.421 | 1.36 × 10⁻¹⁰

Table 20. Logistic regression analysis results of the additive model using the variables camera rotation and contrast ratio, allowing interactions.

Camera Rotation and Contrast Ratio | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.60941 | 0.01821 | 33.469 | <2 × 10⁻¹⁶
INC_R | 0.14922 | 0.03235 | 4.613 | 3.97 × 10⁻⁶
INC_C | 0.09323 | 0.04483 | 2.080 | 0.0376
INC_R and INC_C | −0.07651 | 0.05533 | −1.383 | 0.1667

Table 21. Comparison of the variables camera rotation and contrast ratio by ANOVA.

Model | Resid. Df | Resid. Dev | Df | Deviance | Pr(>Chi)
1 | 2706 | 10,112 | - | - | -
2 | 2705 | 10,111 | 1 | 1.9179 | 0.1661

Table 22. Logistic regression analysis results, investigating the pair of camera rotation and the device used.

Camera Rotation and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.63039 | 0.01778 | 35.465 | <2 × 10⁻¹⁶
NO_R, Gear VR | −0.04388 | 0.05049 | −0.869 | 0.385
INC_R, Desktop display | 0.11682 | 0.02500 | 4.673 | 2.97 × 10⁻⁶
INC_R, Gear VR | 0.20364 | 0.03461 | 5.883 | 4.02 × 10⁻⁹

Table 23. Logistic regression analysis results of the interactions between camera rotation and the device used.

Camera Rotation and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.63039 | 0.01778 | 35.465 | <2 × 10⁻¹⁶
INC_R | 0.11682 | 0.02500 | 4.673 | 2.97 × 10⁻⁶
Gear VR | −0.04388 | 0.05049 | −0.869 | 0.3847
INC_R and Gear VR | 0.13070 | 0.06115 | 2.137 | 0.0326

Table 24. Comparison of the variables camera rotation and device used by ANOVA.

Model | Resid. Df | Resid. Dev | Df | Deviance | Pr(>Chi)
1 | 2706 | 10,113 | - | - | -
2 | 2705 | 10,108 | 1 | 4.5511 | 0.0329

Table 25. Logistic regression analysis results by pairing the contrast ratio and the device used.

Contrast Ratio and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.65115 | 0.01679 | 38.786 | <2 × 10⁻¹⁶
NO_C, Gear VR | 0.03204 | 0.03780 | 0.848 | 0.396699
INC_C, Desktop display | 0.08647 | 0.02514 | 3.440 | 0.000583
INC_C, Gear VR | 0.21296 | 0.04108 | 5.184 | 2.18 × 10⁻⁷

Table 26. Logistic regression analysis results of the additive model with the variables contrast ratio and device used, allowing interactions.

Contrast Ratio and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.65115 | 0.01679 | 38.786 | <2 × 10⁻¹⁶
INC_C | 0.08647 | 0.02514 | 3.440 | 0.000583
Gear VR | 0.03204 | 0.03780 | 0.848 | 0.396699
INC_C and Gear VR | 0.09445 | 0.05644 | 1.674 | 0.094199

Table 27. Comparison of the variables contrast ratio and device used by ANOVA.

Model | Resid. Df | Resid. Dev | Df | Deviance | Pr(>Chi)
1 | 2706 | 10,128 | - | - | -
2 | 2705 | 10,125 | 1 | 2.804 | 0.09403

Table 28. Logistic regression analysis results investigating the effects of the camera type, rotation, and contrast ratio.

Camera Type, Rotation and Contrast Ratio | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.51019 | 0.02512 | 20.309 | <2 × 10⁻¹⁶
Orthographic, NO_R, INC_C | 0.11228 | 0.06379 | 1.760 | 0.0784
Orthographic, INC_R, NO_C | 0.09065 | 0.04857 | 1.866 | 0.0620
Orthographic, INC_R, INC_C | 0.24358 | 0.03629 | 6.712 | 1.92 × 10⁻¹¹
Perspective, NO_R, NO_C | 0.20626 | 0.03651 | 5.649 | 1.61 × 10⁻⁸
Perspective, NO_R, INC_C | 0.26714 | 0.06260 | 4.268 | 1.98 × 10⁻⁵
Perspective, INC_R, NO_C | 0.35544 | 0.04310 | 8.246 | <2 × 10⁻¹⁶
Perspective, INC_R, INC_C | 0.28576 | 0.03593 | 7.952 | 1.83 × 10⁻¹⁵

Table 29. Comparison of models II and III by ANOVA.

Model | Resid. Df | Resid. Dev | Df | Deviance | Pr(>Chi)
1 | 2703 | 10,053 | - | - | -
2 | 2701 | 10,050 | 2 | 3.1703 | 0.2049

Table 30. Logistic regression analysis results of the additive model using the variables camera type, rotation and contrast ratio, allowing interactions of two variables.

Camera Type, Rotation and Contrast Ratio | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.5062478 | 0.0238618 | 21.216 | <2 × 10⁻¹⁶
Perspective | 0.2259740 | 0.0344853 | 6.553 | 5.65 × 10⁻¹¹
INC_R | 0.1054279 | 0.0388003 | 2.717 | 0.006584
INC_C | 0.1378076 | 0.0389859 | 3.535 | 0.000408
Perspective and INC_R | 0.0006636 | 0.0528045 | 0.013 | 0.989973
Perspective and INC_C | −0.1653400 | 0.0528044 | −3.131 | 0.001741

Table 31. Logistic regression analysis results investigating the variables camera type, rotation and device used.

Camera Type, Rotation and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.53653 | 0.02449 | 21.907 | <2 × 10⁻¹⁶
Orthographic, NO_R, Gear VR | −0.07870 | 0.07340 | −1.072 | 0.2836
Orthographic, INC_R, desktop display | 0.15114 | 0.03508 | 4.308 | 1.65 × 10⁻⁵
Orthographic, INC_R, Gear VR | 0.25496 | 0.05299 | 4.811 | 1.50 × 10⁻⁶
Perspective, NO_R, desktop display | 0.19570 | 0.03564 | 5.490 | 4.01 × 10⁻⁸
Perspective, NO_R, Gear VR | 0.15942 | 0.06935 | 2.299 | 0.0215
Perspective, INC_R, desktop display | 0.26674 | 0.03473 | 7.680 | 1.59 × 10⁻¹⁴
Perspective, INC_R, Gear VR | 0.32541 | 0.04549 | 7.154 | 8.44 × 10⁻¹³

Table 32. Logistic regression analysis results with the variables camera type, rotation, and device used, allowing interactions of pairs.

Camera Type, Rotation and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.53406 | 0.02376 | 22.479 | <2 × 10⁻¹⁶
Perspective | 0.20095 | 0.03337 | 6.022 | 1.72 × 10⁻⁹
INC_R | 0.15942 | 0.03339 | 4.774 | 1.80 × 10⁻⁶
Gear VR | −0.05639 | 0.05059 | −1.115 | 0.2649
Perspective and INC_R | −0.09672 | 0.04515 | −2.142 | 0.0322
Perspective and Gear VR | 0.13418 | 0.06130 | 2.189 | 0.0286

Table 33. Logistic regression analysis results of the effects of the camera type, contrast ratio, and device used.

Camera Type, Contrast Ratio and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.53024 | 0.02342 | 22.644 | <2 × 10⁻¹⁶
Orthographic, NO_C, Gear VR | 0.02804 | 0.03534 | 0.475 | 0.635
Orthographic, INC_C, desktop display | 0.18131 | 0.03534 | 5.130 | 2.90 × 10⁻⁷
Orthographic, INC_C, Gear VR | 0.29212 | 0.06042 | 4.835 | 1.33 × 10⁻⁶
Perspective, NO_C, desktop display | 0.24482 | 0.03365 | 7.275 | 3.45 × 10⁻¹³
Perspective, NO_C, Gear VR | 0.23070 | 0.04936 | 4.674 | 2.96 × 10⁻⁶
Perspective, INC_C, desktop display | 0.23317 | 0.03533 | 6.600 | 4.11 × 10⁻¹¹
Perspective, INC_C, Gear VR | 0.36797 | 0.05587 | 6.587 | 4.50 × 10⁻¹¹

Table 34. Logistic regression results of the additive model with the variables camera type, contrast ratio, and the device used, allowing interactions of pairs.

Camera Type, Contrast Ratio and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.52553 | 0.02194 | 23.951 | <2 × 10⁻¹⁶
Perspective | 0.23264 | 0.03021 | 7.700 | 1.36 × 10⁻¹⁴
INC_C | 0.19581 | 0.03216 | 6.088 | 1.14 × 10⁻⁹
Gear VR | 0.05810 | 0.02816 | 2.063 | 0.039088
Perspective and INC_C | −0.17397 | 0.04507 | −3.860 | 0.000113

Table 35. Logistic regression analysis results of the effects of the variables camera rotation, contrast ratio, and device used.

Camera Rotation, Contrast Ratio and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.60785 | 0.01946 | 31.238 | <2 × 10⁻¹⁶
NO_R, NO_C, Gear VR | 0.01252 | 0.05517 | 0.227 | 0.82046
NO_R, INC_C, desktop display | 0.13403 | 0.04789 | 2.799 | 0.00513
NO_R, INC_C, Gear VR | −0.20239 | 0.11945 | −1.694 | 0.09020
INC_R, NO_C, desktop display | 0.16645 | 0.03855 | 4.317 | 1.58 × 10⁻⁵
INC_R, NO_C, Gear VR | 0.12203 | 0.04894 | 2.494 | 0.01265
INC_R, INC_C, desktop display | 0.12882 | 0.02841 | 4.534 | 5.78 × 10⁻⁶
INC_R, INC_C, Gear VR | 0.30462 | 0.04418 | 6.895 | 5.37 × 10⁻¹²

Table 36. Results of the logistic regression analysis of the model with the variables camera rotation, contrast ratio and the device used, allowing interactions of all variables.

Camera Rotation, Contrast Ratio and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.60785 | 0.01946 | 31.238 | <2 × 10⁻¹⁶
INC_R | 0.16645 | 0.03855 | 4.317 | 1.58 × 10⁻⁵
INC_C | 0.13403 | 0.04789 | 2.799 | 0.005131
Gear VR | 0.01252 | 0.05517 | 0.227 | 0.820459
INC_R and Gear VR | −0.05694 | 0.07853 | −0.725 | 0.468439
INC_R and INC_C | −0.17166 | 0.06188 | −2.774 | 0.005539
INC_C and Gear VR | −0.34893 | 0.13729 | −2.542 | 0.011033
INC_R and INC_C and Gear VR | 0.56915 | 0.15483 | 3.676 | 0.000237

Table 37. Logistic regression analysis results investigating the effects of the camera type, rotation, contrast ratio, and device used.

Camera Type, Rotation, Contrast Ratio and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.51385 | 0.02670 | 19.246 | <2 × 10⁻¹⁶
Orthographic, NO_R, NO_C, Gear VR | −0.03202 | 0.07884 | −0.406 | 0.6847
Orthographic, NO_R, INC_C, desktop display | 0.14107 | 0.06717 | 2.100 | 0.0357
Orthographic, NO_R, INC_C, Gear VR | −0.22087 | 0.19458 | −1.135 | 0.2563
Orthographic, INC_R, NO_C, desktop display | 0.07053 | 0.05559 | 1.269 | 0.2045
Orthographic, INC_R, NO_C, Gear VR | 0.13040 | 0.08391 | 1.554 | 0.1202
Orthographic, INC_R, INC_C, desktop display | 0.21036 | 0.03966 | 5.305 | 1.13 × 10⁻⁷
Orthographic, INC_R, INC_C, Gear VR | 0.35379 | 0.06417 | 5.513 | 3.52 × 10⁻⁸
Perspective, NO_R, NO_C, desktop display | 0.19790 | 0.03904 | 5.069 | 4.00 × 10⁻⁷
Perspective, NO_R, NO_C, Gear VR | 0.23262 | 0.07695 | 3.023 | 0.0025
Perspective, NO_R, INC_C, desktop display | 0.31400 | 0.06772 | 4.637 | 3.54 × 10⁻⁶
Perspective, NO_R, INC_C, Gear VR | −0.04214 | 0.15152 | −0.278 | 0.7809
Perspective, INC_R, NO_C, desktop display | 0.41798 | 0.05303 | 7.882 | 3.22 × 10⁻¹⁵
Perspective, INC_R, NO_C, Gear VR | 0.25528 | 0.06062 | 4.211 | 2.54 × 10⁻⁵
Perspective, INC_R, INC_C, desktop display | 0.23513 | 0.03959 | 5.940 | 2.86 × 10⁻⁹
Perspective, INC_R, INC_C, Gear VR | 0.43645 | 0.06032 | 7.236 | 4.62 × 10⁻¹³

Table 38. Logistic regression analysis results of the four-variable model allowing interactions.

Camera Type, Rotation, Contrast Ratio and Device Used | Estimate | Standard Error | z Value | Pr(>|z|)
Intercept | 0.4997428 | 0.0252612 | 19.783 | <2 × 10⁻¹⁶
Perspective | 0.2281487 | 0.0345391 | 6.606 | 3.96 × 10⁻¹¹
INC_R | 0.1501627 | 0.0475111 | 3.161 | 0.001575
INC_C | 0.2106181 | 0.0547860 | 3.844 | 0.000121
Gear VR | 0.0008861 | 0.0552796 | 0.016 | 0.987212
Perspective and INC_R | −0.0028185 | 0.0532272 | −0.053 | 0.957770
Perspective and INC_C | −0.1660764 | 0.0532649 | −3.118 | 0.001821
INC_R and Gear VR | −0.0746026 | 0.0788879 | −0.946 | 0.344313
INC_R and INC_C | −0.1535519 | 0.0619693 | −2.478 | 0.013217
INC_C and Gear VR | −0.3450059 | 0.1374857 | −2.509 | 0.012094
INC_R and INC_C and Gear VR | 0.5920143 | 0.1374857 | 3.821 | 0.000133
