A Combination of Real-World Experiments and Augmented Reality When Learning about the States of Wax—An Eye-Tracking Study

Burning candles show the solid and liquid states of wax on a macroscopic level. With augmented reality, the submicroscopic and symbolic levels of all three states of wax can be shown. The augmented reality environment developed in this study lets students test their knowledge about the position of the three states of wax on a burning candle. So far, how the design parameters of augmented reality learning environments influence users' eye movements and learning performance has not been researched. Twenty-three German students between the ages of 9 and 15 form the randomized sample of this study with three different groups. AR learning scenarios were created that vary in only one design parameter: 'congruence with reality'. Our analysis of audio, video, and eye-tracking data showed that all the participants learned largely the same content and that the participants who saw the real experiment on screen experienced the highest degree of immersion. This study indicates that the presented AR learning environment is an opportunity to learn which exact part of a candle is burning, with the submicroscopic level shown alongside; before using the learning environment, the students were uncertain about what substance burns when a candle is lit and what function the wick has. This study suggests that teachers should consider implementing learning environments such as this to help students connect different levels of representation.


Introduction
In recent years, augmented reality (AR)-supported teaching and learning environments have gained significance both in chemistry as a school subject and in other scientific school subjects. In this context, AR represents an extension of physical reality by providing digital information about a subject with the use of various techniques. This has already been discussed in multiple publications. In the field of educational research, AR is of interest, since it is said to have positive effects on relevant teaching parameters, such as motivation [1-3], interest [4], self-efficacy [5], cognitive load [2,6-8], attitude [9-11], and learning performance [6,8,12,13]. In addition, more and more software solutions for creating AR applications are now available to people without coding experience (e.g., Reality Composer, ZapWorks Designer), and the necessary hardware for using this technology is available in many schools. These developments highlight the need for profound research on how to design effective educational AR learning environments, since most studies so far have compared AR learning environments with non-AR learning environments (e.g., AR apps vs. e-books or textbooks) [14]. The research focus of this study is therefore to compare AR learning environments that differ only in individual parameters within the AR learning scenario. Designing the optimal AR learning environment is a field of research with many variables (field of application, design parameters, and relevant teaching parameters). This study focused on AR learning environments that digitally enrich real-world experiments in the laboratory with relevant information [15]. This empirical research study focused on the teaching parameters 'learning performance' and 'optical activity', with a differentiation between '3D registration' and 'photorealism', two indicators of the design parameter 'congruence with reality' (see Section 2). These two indicators were used to design similar scenarios with differing parameters. An AR learning scenario about the states of wax was chosen as the learning environment, as the states of matter are an essential part of chemistry curriculums. The designed learning environment is fit for application in primary or secondary schools. The experiment is error-resistant and should be standard in all curriculums [15].

Theoretical Background
Mixed reality (MR) is one of the technological changes that has characterized the 21st century thus far. This technological revolution has also found its way into chemical research. For example, MR has been recognized as one of the top ten emerging technologies in chemistry by the IUPAC: 'Through virtual spaces, researchers explore interactive collaborations that enhance the possibilities of computational chemistry and molecular dynamics. Thanks to these innovative interactions with molecules, researchers reinforce their spatial reasoning, as well as improve their understanding of quantum chemistry' [16] (p. 11). MR ranges in Milgram's [17] reality-virtuality continuum from Augmented Reality (AR) to Augmented Virtuality (AV) to Virtual Reality (VR). These new technologies have implications not only for professional research but also for education in chemistry: 'The students' understanding of macroscopic and microscopic phenomena seems to be [more positive than with traditional techniques] as well, thanks to direct observation of atoms and molecules. Therefore, reducing the split-attention can be considered [18]. Moreover, digital tools open possibilities for remote education, thus enabling teachers to share their lessons with virtually anybody, anywhere' [16] (p. 12).
In their review, Krug et al. [19,20] examined papers with AR learning scenarios that not only technically present the apps but also include empirical research data about their impact on learning parameters (such as motivation, self-regulation, self-efficacy, and learning performance). They found that almost all AR learning scenarios fell into three categories: AR expands reality in paper-based learning, opens so-called black boxes by making the invisible visible (chemistry models) [1,21], or presents experiments and their virtual execution [22]. All three possibilities can transform experiments, since learners do indeed need external representations (such as chemical symbols, models, and diagrams) to understand chemical concepts and phenomena [23]. They can open black boxes to make 'sense of the invisible and untouchable' [24] (p. 949) and connect the molecular level to the corresponding symbolic and macroscopic levels [25]. Combinations of multiple representations are used for learning and problem solving; they provide different perspectives on the given phenomenon or interpretations of the material and help to build students' deeper understanding [26].
In their review, Krug et al. [20] identified seven design parameters for constructing AR learning environments. Three of these parameters can be divided into different levels (adaptivity: levels 1-4; interactivity: levels 1-6; complexity: levels 1-5) [20]. Adaptivity describes a reaction, such as the dynamic adjustment of software elements or services, to the activities of a user or the program itself, adapting the application to different situations [19,20]. According to Krug et al. [19,20], interactivity is defined as an intended interaction with a device, object, or the content of a digital media component. Complexity describes the level of content-related and cognitive structures [19,20]. The other four design parameters (immersion, game elements, content proximity to reality, and congruence with reality) are differentiated using various indicators and the resulting sum [20]. The parameter 'immersion' describes how many of a user's senses (visual, auditory, haptic, olfactory, gustatory) a digital medium can influence, specifically how it creates an 'encompassing, surrounding and living illusion' for the user [20]. Eight indicators (rules/goals, conflict/challenge, control, assessment, action language, human interaction, environment, and gamification) describe the parameter 'game elements' [20]. AR's proximity to reality regards causal, local, and temporal factors, as well as the plausible use and depiction of the tracking method [19,20]. The indicators of 'congruence with reality' are social realism, plausibility, proximity to life and social interactions, light and shadow effects, 3D registration, proportions, and photorealism [20]. This study focused on the two indicators '3D registration' and 'photorealism' because of their impact on perceptual realism, as this can affect optical activity.
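The distinction between the two kinds of parameters can be sketched in a few lines. This is purely our illustration: the parameter and indicator names follow Krug et al. [19,20], but the scoring function is a hypothetical reconstruction, not the authors' actual instrument.

```python
# Hypothetical sketch of the two kinds of design parameters described by
# Krug et al. [19,20]: some are rated on a single level, others are scored
# as the sum of fulfilled binary indicators.

# Leveled parameters and their possible levels (from the text).
LEVELED = {
    "adaptivity": range(1, 5),     # levels 1-4
    "interactivity": range(1, 7),  # levels 1-6
    "complexity": range(1, 6),     # levels 1-5
}

# The seven indicators of 'congruence with reality' (from the text).
CONGRUENCE_INDICATORS = (
    "social realism", "plausibility",
    "proximity to life and social interactions",
    "light and shadow effects", "3D registration",
    "proportions", "photorealism",
)

def congruence_score(fulfilled: set) -> int:
    """Indicator score = number of fulfilled indicators."""
    return sum(ind in fulfilled for ind in CONGRUENCE_INDICATORS)

# A scenario fulfilling five of the seven indicators scores 5:
score = congruence_score({
    "plausibility", "light and shadow effects", "3D registration",
    "proportions", "photorealism",
})
print(score)  # 5
```

In this scheme, a leveled parameter is reported as a single value, while an indicator-based parameter collapses several yes/no judgments into one sum.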
Eye-tracking is a promising method to explore where participants set their optical focus and what their actions are. One area of eye-tracking research is assessments in which participants take tests or solve problems [27-30]. The results of such studies can help create better tasks or analyze problem-solving strategies [31]. To create better learning environments, eye-tracking can also provide helpful recommendations [30,32-34]. A third research area is laboratory work [29,30]. The results of video experiments have given us the first indications that eye movement can explain the results of paper/pencil tests and help teachers create demonstration experiments [30,35]. This study combines two research areas of eye-tracking: creating better learning environments and investigating learning environments with AR and experiments.

Research Questions and Hypothesis
Research suggests AR can be used successfully in chemistry classes [21]. AR learning environments can be constructed according to different design parameters and can reduce split attention [19,20]. No studies have yet been conducted that compare AR learning environments for science experiments with different AR design parameters based on eye-tracking data. In this preliminary exploratory study, an AR learning environment was created with different design parameters in an AR learning scenario to answer the following research questions (RQs):
1. What hypotheses do students formulate about what substance burns when a candle is lit?
2. How are the transfer comprehension tasks in the burning candle experiment affected when using a learning environment with different augmented reality learning scenarios?
3. How do the participants' eyes move when using different augmented reality learning scenarios of the burning candle?
Regarding RQ1 and RQ2, the following assumptions were made: all the students could propose hypotheses and complete the transfer comprehension tasks. The assumption regarding RQ2 that there would be no change in cognitive load (divided attention) was supported by the fact that the same theoretical learning environment was taught, the model animations were the same for all the scenarios, and only the design parameter 'congruence with reality' was changed via the indicators '3D registration' and 'photorealism'. The assumption for the third research question was that eye movement would vary between the different augmented reality learning scenarios in relation to the candle but would stay the same in relation to the models. The basis for this assumption was the difference in the visibility of the real candle through the virtual one, combined with the identical view of the models. The obscuring of the real candle on the screen by the virtual candle affects immersion and thus users' eye movements, leading them to focus more on the real object. Therefore, hypothetically, nobody would look at the real object when the real candle can be seen on screen.

Learning Environment
The first step was to create a learning environment in which students are taught that gaseous wax burns when a candle is lit. We started with a quiz about the multiple representations of the three states of matter to familiarize all 23 students with the three states of matter, the particle model, and the use of the AR app. This quiz included the states solid (s), liquid (l), and gas (g) as symbols and as 3D AR models (Zappar app), as well as multiple pictures with associated captions. When taking the quiz, the participants came into contact with AR while learning chemistry for the first time (according to their self-reports). The quiz also tested their content knowledge about the three states of matter. At this point, the students were asked to identify what substance actually burns when a candle is lit. If students said the wax was burning, they were then asked to specify which state of wax was burning. Students who said that the wick was burning were instructed to conduct the first experiment and burn a wick without wax. The experiment showed that a wick burns quickly and sometimes stops burning before it is used up. After this experiment, the same students could begin to think about the wax and which state of wax was burning. To check whether their hypotheses about which state of wax burns were right, the participants lit a candle and started the AR. Each of the three AR scenarios ((a), (b), (c)) consisted of four scenes.
First scene: The virtual candle is placed in front of the real candle on the table using world tracking within the Zappar app. Second scene: Three different versions are shown, which are the same in the design parameters 'adaptivity' (level 2), 'interactivity' (level 3), 'complexity' (level 2), 'immersion' (indicator score of 1), 'game elements' (indicator score of 3), and 'proximity to reality' (indicator score of 2) but differ in 'congruence with reality': scenario (a) reaches an indicator score of 5, scenario (b) reaches an indicator score of 5, and scenario (c) reaches an indicator score of 4.
Second scene (a): The real candle is displayed on the screen (3D registration a); to the right of it are the solid, gas, and liquid buttons, and to the right of those is the virtual candle. Second scene (b): The real candle is not displayed on the screen but is instead concealed by the virtual candle (3D registration b). To the right of the virtual candle are the solid, gas, and liquid buttons. Second scene (c): The real candle is displayed on the screen and is overlaid by a transparent virtual candle (3D registration b and reduced photorealism). To the right of it are the solid, gas, and liquid buttons.
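The parameter settings of the three scenarios can be made explicit in a small sketch. The values are taken from the text; the dictionary layout and the comparison are our illustration, not part of the Zappar app. Note that scenarios (a) and (b) share the same 'congruence with reality' sum (5) but differ in which indicator fulfils it (3D registration variant a vs. b); the sketch records only the sums.

```python
# Design-parameter settings of the three AR scenarios as reported in the
# text. Only 'congruence with reality' varies; everything else is constant.
scenarios = {
    "a": {"adaptivity": 2, "interactivity": 3, "complexity": 2,
          "immersion": 1, "game elements": 3, "proximity to reality": 2,
          "congruence with reality": 5},
    "b": {"adaptivity": 2, "interactivity": 3, "complexity": 2,
          "immersion": 1, "game elements": 3, "proximity to reality": 2,
          "congruence with reality": 5},
    "c": {"adaptivity": 2, "interactivity": 3, "complexity": 2,
          "immersion": 1, "game elements": 3, "proximity to reality": 2,
          "congruence with reality": 4},
}

# Which parameters take a value different from scenario (a) anywhere?
varying = {param for config in scenarios.values()
           for param, value in config.items()
           if value != scenarios["a"][param]}
print(varying)  # {'congruence with reality'}
```

This is the sense in which the study varies "only one design parameter": every other parameter value is held constant across the three groups.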
Third scene: The model of a solid (i), a liquid (ii), or a gas (iii) is displayed, depending on which of the three states of matter is selected by pressing a button. The models of the three states of matter are placed at the position where the two lower buttons are visible in the second scene. The button that is pressed moves to the highest button position, as in the second scene (see Figures 1-3). Fourth scene: If the participant presses the correct area for solid, liquid, or gas on the virtual candle in the third scene, a positive feedback message ('great perfect') appears on the screen. All scenes and their connections are shown in Figure 4.
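The third- and fourth-scene logic described above can be summarized in a minimal sketch. This is our illustration of the described behavior, not the actual Zappar implementation; the function names and return values are assumptions made for readability.

```python
# Minimal sketch of the scene logic described in the text: pressing a
# state button shows the matching particle model and moves that button to
# the top position, and tapping the corresponding region of the virtual
# candle yields the positive feedback message.

STATES = ("solid", "liquid", "gas")

def third_scene(pressed: str) -> dict:
    """Return which model is shown and the resulting button order."""
    assert pressed in STATES
    # The pressed button moves to the highest position; the models of the
    # states replace the two lower buttons.
    order = [pressed] + [s for s in STATES if s != pressed]
    return {"model": pressed, "button_order": order}

def fourth_scene(pressed: str, tapped_region: str):
    """Positive feedback only when the tapped candle region matches."""
    return "great perfect" if tapped_region == pressed else None

print(third_scene("gas")["button_order"])  # ['gas', 'solid', 'liquid']
print(fourth_scene("gas", "gas"))          # great perfect
```

The point of the check in the fourth scene is that the learner must locate the selected state of matter on the candle itself, not just press a button.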
After this initial check, the participants were given the opportunity to conduct an experiment in which gaseous wax is collected and then lit, showing first the change in state from gas to liquid and then from liquid to solid. To test the participants' current state of knowledge, they were then asked: What would happen if we had a room full of candles, all the candles were extinguished at the same time, and at that moment someone entered the room carrying an open flame? If they understood the experiment, they would conclude that the gaseous wax would burn and all the candles would light up again. At the end, the 'candle jump' is shown and the level of understanding is checked once again. The effect of this experiment is as follows: if you blow out a candle and then hold a burning match next to it, the gaseous wax in the air will burn down to the wick and light up the candle again.

Context and Participants
The research was conducted in the participants' home environment in Germany in July 2022. The participants were recruited by being approached directly in meetings or through their parents or acquaintances. No monetary compensation was given, in order to create the prerequisites for an independent study. A total of 26 students and their parents gave their consent to participate in this study and to have the collected data analyzed, evaluated, and published. They were informed about their data privacy as well as their right to withdraw from participation at any time. Participant pseudonyms were assigned, and no identifying information was recorded or scanned. Spoken data were collected in German, and the results have been translated into English for this publication. Three students were excluded from the analysis due to technical problems (no recording, no gaze data recording). The remaining 23 students were familiarized with the three states of matter, the particle model, and the use of the AR app through the 'states of matter' quiz mentioned above at the beginning of the implementations; the students were randomly assigned to the groups. The gender distribution was 8 male and 15 female participants. Students from different schools, with ages ranging from 11 to 15 and grades from 5 to 9, were chosen; further details can be seen in Figure 5.

Data Collection
The study followed a qualitative and quantitative approach and used audio, video, and eye-tracking data to answer the research questions. The learning environment was executed face-to-face, with one student and the investigator present. Communication with the students followed a protocol. The students knew that their eye movement, video, and audio data would be tracked during the learning environment, but they did not know what would be looked for. The eye-tracking data were collected with Tobii Glasses 3 (100 Hz) for the duration of the learning environment. This mobile eye-tracker served as a tool to capture the participants' audio, video, and eye movements during the learning environment, which lasted between 20 and 42 min. The parts of the audio and video data that were important for the research questions were transcribed into Excel, and the answers were categorized deductively and inductively. The deductive categories included the relationship between 'wick burns' and 'wax burns' as well as the state of matter. The inductive categories included all other answers, such as 'oxygen', 'wax makes burning go slower', or 'kind of substance'. No optimization of the eye-tracking data collection was performed, and the data were manually mapped using the software Tobii Pro Lab. The first fixations in defined areas of interest (AOIs) were used as a metric. The percentage for each AOI was calculated by dividing the number of participants who had a first fixation in the corresponding AOI by the number of participants in the specific group. The Tobii Glasses 3 were individually calibrated and validated for each participant. For people wearing glasses, corrective lenses were used.
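The first-fixation metric described above reduces to a single ratio. The following sketch is our illustration of that calculation (the actual mapping was done manually in Tobii Pro Lab); the function name and the example numbers are assumptions, not data from the study.

```python
# Sketch of the first-fixation percentage metric: the share of a group's
# participants whose gaze produced a first fixation in a given AOI.

def first_fixation_percentage(group_size: int, n_with_fixation: int) -> float:
    """Percentage of participants in a group with a first fixation in an AOI."""
    if not 0 <= n_with_fixation <= group_size:
        raise ValueError("fixation count must lie between 0 and group size")
    return 100.0 * n_with_fixation / group_size

# Illustrative example: 6 of 9 participants in a group fixating an AOI.
print(round(first_fixation_percentage(9, 6)))  # 67
```

Because group sizes differ between scenarios, reporting percentages rather than raw counts makes the groups comparable.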

Results and Discussion
To answer our research questions, various data analyses were used. Inductive and deductive content analyses were used mainly for RQ1, RQ3, and the sub-questions of RQ2. A descriptive analysis of the AOIs was used for RQ2.

RQ1 Hypotheses-Burning Candle
The audio and video data were reviewed to find the time when the investigator asked, 'What part of the candle is burning?' The correct answer was that the wick burns first and, when the temperature is high enough, the gaseous wax begins to burn as well.
Nineteen of the 23 participants answered that the wick burns. One participant also said that oxygen is needed for the wick to burn. Seven of the 19 participants also noted that the wax burns in addition to the wick. One participant said that wax makes the burning process go slower, and another used the technical word 'fuel' when talking about the wax. Two participants thought liquid wax burns, and one thought gaseous wax burns. One participant thought wax burns in the solid state and another in the liquid state. Another participant answered: 'There is also some flammable substance; if you light the candle up, there is a reaction. But what kind of substance it is, I don't know.' One participant explained, before the hypothesis question was asked, that all wax states are present with a burning candle and that the 'smoke' burns.
Only one participant, one of the youngest, knew the right answer to 'Which part of a candle is burning?' before the learning environment. The oldest knew there had to be a reaction but did not know which substance actually burns. Only a few participants used technical words such as 'fuel'. The participants said that the wick and smoke burn because that is what they saw on the macroscopic level, but their knowledge about the submicroscopic level was missing. The most common answer to the question was 'the wick burns.' This is not wrong, but it is only one part of the entire process. The overall function of a wick and the process of the wax burning were not known by 22 of the 23 participants.
The function of a wick should definitely be covered in learning environments, as this is fundamental for understanding what happens with candle wax and knowing which part of a candle burns. Based on the students' responses during the implementation of this learning environment, the investigator communicated this foundation using a model experiment in which water and a paper towel served as a model for the function of a wick. This model could also be added as part of future AR environments.
Age did not affect the answers. According to the education plans in Germany, everyone should have known the answers. Therefore, the question arises: Is the problem that the students have never learned this and therefore could not understand it? Or is it that they cannot transfer what they have learned to everyday life and therefore learn superficially in school? This study cannot answer these questions.

RQ2 Using AR
The eye-tracking data were mapped and analyzed via the first fixation in the defined areas of interest (AOIs). The AOIs were the areas of the different scenes (the virtual candle, the real candle on-screen, and the models) and the real-world candle off-screen.
In the second scene, the AOIs analyzed were the real candle off-screen, the real candle on-screen, and the virtual candle (see Table 1). If, hypothetically, nobody looked at the real world when the real candle was on the screen (scenarios (a) and (c)), the finding that 50% of the participants in group scenario (c) looked at the real candle should be questioned. The question was therefore whether there was a trigger for the participants to look at the real candle. The answer was that two participants had no trigger for looking at the real candle in the real world, and one looked at the real candle in the real world due to the intervention of the investigator. We expected no participants to look at the AOI real candle on-screen in group scenario (b), but 63% of the participants actually looked at this AOI. This happened because, for those participants, the virtual candle did not overlay the whole real candle on the screen. Hypothetically, if the virtual candle is not superimposed on the real candle on the screen, participants should look at the real world more than if they were able to see the real candle fully on-screen. The data showed that when the real candle was on the screen, the participants looked at it, and 100% also looked at the real candle off-screen. If the real candle was not on the screen, 67% looked at the real candle off-screen. The hypothesis that students would look at the real world only when it is not on the screen is refuted by these results.

Table 1. First fixations in the second scene per scenario in the AOIs 'real candle off-screen', 'real candle on-screen', and 'virtual candle' (scenario (a): n = 9).

To go to the third scene, one of the buttons named after the three states of matter (solid (i), liquid (ii), or gas (iii)) had to be pressed. Therefore, there were three possible third scenes (solid, liquid, or gas) for each of the three scenarios ((a), (b), or (c)). Most of the participants (18/23) pressed the solid button first and then proceeded with liquid (9/18) or gas (9/18). Four participants started with gas (4/23), and one started with liquid (1/23). In the third scenes, the AOIs analyzed were the real candle off-screen, the real candle on-screen, and the virtual candle and models (solid, liquid, and gas) (see Table 2). The participants looked at the real candle off-screen much less than they did in the second scene. In four of the third scenes (ai, aii, ci, ciii), no one looked at the real candle off-screen. As in the second scene, most of the views into the real world came from scenario (b). Not all participants looked at the models. In the second scene, all participants looked at the real candle off- or on-screen; this did not happen in the third scene. These phenomena can be explained by the fact that this area was no longer interesting and useful enough for continued interaction with the AR learning environment. The participants with AR scenario (a) looked at the real candle off-screen much less than those with scenario (b) or (c). The most views of the real candle off-screen occurred in the group with AR scenario (c). This result shows that AR scenario (a) had the highest immersion of the three AR scenarios and AR scenario (c) had the lowest. No systematic difference could be found in the participants' focus on the models. Further research should look at the button arrangement (solid, liquid, and gas). Most participants chose solid first, but why? Was it just because solid is the most familiar state, or did the arrangement of the buttons influence their choice?

RQ3 Transfer Question
Audio and video were reviewed to find the moments when the investigator asked, 'What happens if we have a room full of candles and, hypothetically, we extinguished all the flames at the same time, and at this exact moment someone came in with an open flame?' and 'What happens, or why is the "leap of fire" possible?' The 'leap of fire' had been shown as a real-life experiment beforehand. Correct answers to question 1 include the following arguments: one big flame appears, all the candles light up again, or the flame of the burning object becomes bigger because of the gaseous wax in the air. The correct answer to question 2 is that gaseous wax is burning.
Seventeen participants answered the investigator's question. Fourteen participants answered question 1 directly and correctly, without explanation; three did not respond immediately and correctly. Two of these three could answer correctly and directly after the 'leap of fire' experiment; one said that wax would be everywhere and become liquid with time. None of the participants answered that it was a combination of the open flame and the gaseous wax. Ten participants correctly explained why the phenomenon occurs. Of the others, one thought the liquid wax would burn; another thought that nothing would burn and the gaseous wax would simply change back into the liquid and solid states; the other two gave no explanation at all. Thirteen participants were able to explain the 'leap of fire' correctly. Of the remaining participants, one said, 'I don't know,' another said, 'the flame is transported by the liquid gas,' and the last said, 'the high temperature is enough.' Seven participants did not see the 'leap of fire' during the execution of the learning environment.
When the transfer question 'What happens if we have a room full of candles and, hypothetically, we extinguished all the flames at the same time, and at this exact moment someone came in with an open flame?' was asked, most participants answered it correctly, either immediately or after they saw the 'leap of fire'. Only one participant did not identify the gaseous wax as the burning substance behind these phenomena; it was not clear why, because the investigator had given the correct explanation. The participant who said that the 'liquid gas' burned was referred back to the previous experiment so that they could infer that it was the gaseous wax.

Conclusions and Limitations
This study indicates that the presented AR learning environment is one way to learn about which part of a candle burns; only two participants could not give the correct answer to the transfer question. The different AR learning scenarios, which varied only in the design parameter 'congruence with reality', did not affect the learning outcome but did affect the immersion experienced by the participants. At the beginning of the learning environment, the participants were unsure about what substance burns when a candle is lit and what function the wick has; only one participant knew this beforehand. Teachers could implement learning environments such as this one to help students learn which part of a candle is burning. Another conclusion of this study is that teachers could use any of the presented AR learning scenarios, because the learning effect is the same; if teachers want the highest degree of immersion for their students, they should use scenario (a). This study presents the first results of comparing different AR learning environments that digitally enrich the same real-world experiment.
However, the results of this descriptive pilot study are subject to limitations, so no generalized conclusions can be drawn. First, the results only provide indications of the effects of the scenarios investigated and cannot be generalized to all AR enrichments of real-world experiments. Likewise, only a small sample of different ages was examined, so age or gender differences, for example, could not be taken into account. Second, the pilot study did not measure effects on latent constructs such as motivation or self-efficacy; the observed differences in the immersion experience could, however, impact these latent constructs.
Further studies are needed to investigate different AR learning environments for real-world experiments. Additional questions could be answered using eye-tracking with this AR learning environment by focusing on different AOIs: How did the participants use the 3D models? What happened in the learning scenario when participants used the correct area, when nothing happened, or when they chose an incorrect area? It would also be interesting to examine new AR learning environments in which the introduction of the wick is planned more explicitly and a model of it is shown.

Figure 5. Participants split by group, age, and class.