A virtual reality molecular

Abstract: Virtual reality provides a powerful way to visualize the three-dimensional, atomic-level structures of molecules and materials. We present new virtual reality software for molecular modeling and for testing the use of virtual reality in organic chemistry education. The open-source software, named VRChem, was developed primarily for building, visualizing and manipulating organic molecules using a head-mounted virtual reality system. The design goal of the VRChem software was to create an easy-to-use and entertaining user experience for molecular modeling in virtual reality. We discuss the design and implementation of VRChem, together with real-life user experiences collected from students and academic research staff.


Introduction
The idea of an artificial, computer-generated world realistic enough to delude our senses has captivated the minds of authors, consumers and institutions for decades, with promises of lifelike virtual experiences in settings ranging from the mundane to the otherwise unimaginable. This idea led to the development of virtual reality (VR) systems beginning in the 1960s, only 14 years after the first transistor-based computer [1]. Early VR devices were of the head-mounted display (HMD) type, where a digital display is attached to a helmet or headset worn by the user. An alternative approach to VR is to project the virtual world onto the physical walls surrounding a user, which requires a large empty space and expensive projection equipment, such as the set of up to six rear-projection screens and projectors used in CAVE installations [2]. Recent developments in display manufacturing, driven in part by the enormous demand in the smartphone and smartwatch markets, have for the first time produced affordable displays for consumer-grade VR head-mounted displays. This technological progress has been funded primarily by the entertainment industry, but the educational and instructional fields are also viewing virtual reality with great interest for its promise as an educational tool.
Here, we present a new virtual reality software project for molecular modeling and for testing the use of virtual reality in organic chemistry education. The software, named VRChem, was developed primarily for building, visualizing and manipulating organic molecules using a head-mounted virtual reality system. The design goal of the VRChem software was to create an easy-to-use and entertaining user experience (UX). We believe that, in terms of user experience, VRChem is the quickest and most practical molecule-building VR software to date, and it has received highly encouraging feedback from our test audience.

Immersive Virtual Reality
The term "immersive virtual reality" encompasses technologies and devices meant to produce an authentic or immersive experience of a believable virtual world. The concept is a familiar fictional trope, but has until recently been unaffordable for the average electronic entertainment and video game consumer. However, current and next-generation HMDs are introducing lower-cost platforms for VR experiences, and the market is growing with new releases of both improved VR devices and new software.
Modern immersive virtual reality systems consist of at least a head-mounted display and some type of human interface device, typically a pair of handheld controllers, which enables interactions with the virtual environment via hand movements and simulated hand gestures. The HMD renders the virtual environment using stereoscopy, thus producing an accurate illusion of depth in the scene. The prevalent method of hand tracking uses handheld wireless controllers with input buttons and sensors for tracking each controller's motion and orientation with six degrees of freedom, i.e., position in three dimensions and rotation around three axes. The HMD itself is tracked to an equal degree, and together this allows for an accurate, natural-feeling method of moving in and interacting with the virtual world. Users can, e.g., grab objects from behind their backs with minimal practice by utilizing kinesthesia and their innate knowledge of their bodies. Immersive VR could utilize other human senses, but commonly focuses on vision, sound and kinesthesia. The "immersive" qualifier is sometimes used to separate the systems described above from simple three-dimensional computer games and worlds such as Second Life [3], but the qualifier is usually omitted. Later mentions of virtual reality in this article refer exclusively to devices and experiences that provide an immersive experience to the user.
Assuming good display quality and accurate positional tracking of the VR devices, virtual reality can provide the user with a highly credible illusion of a real world, even one that appears completely alien to us. This deep immersion is the main draw of the technology, and its appeal in electronic entertainment is clear, but a growing body of evidence, research and experience suggests that it could also have value in several areas of education and training.

Virtual Reality as an Educational Tool
Interest in the educational use of VR is based mainly on the lifelike three-dimensional appearance and manipulation of a virtual environment that the technology can enable. This capability can be harnessed to visualize phenomena and perform exercises that are impractical or impossible with physical models or two-dimensional visualization aids [4]. The technology is already used to train, e.g., vehicle maintenance and product assembly personnel [5,6] and medical personnel [7], and could be beneficial in education universally. The benefit could be more prominent in subjects such as chemistry, where the three-dimensional intricacies of molecular symmetry, chirality and solid-state structures are difficult to display on paper or computer screens and can be unclear to students who find three-dimensional visualization difficult. Students could benefit from access to a lifelike virtual reality representation in which the three-dimensionality of the structure is clear and can be interacted with and examined from any angle. The novelty of VR might also engage pupils and students more than current alternatives. Educational use of VR could thus have a motivational impact on learning. It should be noted that while educational and scientific VR software do have different requirements, well-designed and functional VR chemical modeling software can fulfill both roles to an extent.
The educational use of VR has been studied to some degree. The studies involving the largest test groups focused on the effects of VR training on improving the general spatial thinking and spatial abilities of 15-17-year-old students. The studies measured success in solving two-dimensional spatial thinking tests before and after undergoing VR training. A study by Hauptman [8] involved 192 students, of which 104 used VR to train general spatial thinking, and concluded that VR provided a moderate and clear benefit to learning, as shown by higher spatial ability test results over the control groups. Another large-scale study, by Dünser et al. [9], however, saw only a small positive effect among the 215 students tested and could not claim a clear relationship between VR training and enhanced spatial thinking. Dünser et al. did speculate that three-dimensional training using VR might not translate to solving two-dimensional test problems, and that the tests selected for the study might therefore not accurately measure skills gained from the VR training.
In chemistry education, adequate spatial reasoning skills enhance students' success in learning [10][11][12], and training of such general spatial skills, both with traditional methods [13] and in VR [8], is expected to be beneficial for chemistry learning. A number of studies have also examined the use of VR for teaching chemistry specifically. For example, Limniou et al. used a CAVE virtual reality room to display animations of chemical reactions to a group of students (N = 14, age 16-18) who, after using VR, reportedly understood the three-dimensionality of molecules better, achieved improved scores in a questionnaire and were "enthusiastic with the CAVE presentations" [14]. Barrow et al. developed an augmented reality (AR) application for life science and biochemistry education that was used alongside the written material by a small student group (N = 7, age 19-24) undertaking a biochemistry course. While learning gains were not measured, all of the students agreed or strongly agreed that AR motivated them to understand the processes displayed and that AR was entertaining and fun; 58% agreed or strongly agreed that AR enhanced their learning; and 86% wanted AR to be used more in education [15]. Bennie et al. used the Narupa [16] VR software to visualize the rearrangement reaction catalyzed by the enzyme chorismate mutase for third-year university students (N = 22) and compared the experience with using a combination of the traditional desktop applications CHARMM [17] and VMD [18]. While all software systems were seen as having aided the students' understanding of the reaction nearly equally, the VR application was seen as much more enjoyable, was better for improving the students' understanding of molecular structure and increased their interest in computational chemistry. Edwards et al. tasked a small group of pupils and their teacher (N = 13, ages 12-36) with building molecules using a virtual reality HMD and haptic gloves in a gamified VR application. Again, learning outcomes were not measured, but participants reported an increase in learning motivation and engagement towards chemistry and saw value in VR as a tool for teaching [19]. Finally, Ferrell et al. had their university students (N = 70) compare the sizes of molecules and carbon nanotubes in VR and then assess the bulkiness of similar compounds in a questionnaire. The results agreed with those of the earlier studies, with students considering the workshop enjoyable and useful, and several wished for more VR exercises. However, no conclusion could be drawn about the benefits of the workshop, possibly due to the scope and formatting of the questionnaire [20]. Other studies have reported positive results or outlooks as well [21][22][23][24].
Overall, the current view of using VR for tuition is generally positive, though not all studies fully support this view. For example, in one of their studies, Bernholt et al. saw an initial increase in motivation and a high perceived value of VR training after one three-hour VR workshop, but the effect decreased rapidly with more exercises, and VR use was ultimately considered a "fun add-on thing of very little value for the course" in a follow-up interview [22]. It should be noted that the exercises had limited interactivity and underutilized the capabilities of VR, which could explain the reactions. Bernholt et al. concluded that VR can be of use in teaching, but that, being a new teaching method, it requires more carefully thought-out exercises. Additionally, a study by Makransky et al. reported that using immersive VR for science laboratory simulations is disadvantageous to learning when compared to a normal computer interface, despite the increased sense of presence [25]. The study, however, was not explicitly chemistry-related and was conducted using a relatively unsophisticated VR device and an awkward-to-use control method, leading to a poor user experience that could have factored into the results.
In conclusion, while most of the studies on using VR as an educational tool used small test groups and often did not attempt to quantify any learning gains, they mostly agreed that VR has the potential to be an advantageous teaching aid and at the very least is not counterproductive in chemistry education. Whether the benefit arises from the immersive experience, the stereoscopic three-dimensional molecular models or simply from using an entertaining technological device that helps motivate students is uncertain, but the question is seen as worth exploring.

Existing Molecular Modeling VR Applications
Various VR applications for molecular modeling, visualization and other tasks have been developed previously [18,26-28]. We examined some of these applications for inspiration and assessed their functionality and usability before beginning our own development. We have likewise inspected most new software released after the start of VRChem development in late 2016. Here we focus mainly on software geared towards molecular systems; VR learning environments focusing on crystallography and solid-state materials have been reviewed recently [29].
The two most notable pieces of software we examined before our own development were Molecular Rift [30] and the Caffeine molecular viewer [31]. Both were among the first pieces of chemical visualization and modeling software to target modern consumer-grade VR devices. Molecular Rift is an open-source application and features a novel user interface based on hand gestures that lets the user load a molecule from a file or an Internet repository and examine it in VR. The Caffeine molecular viewer is intended for similar use and features an impressive array of visualization modes and strong rendering performance. However, despite being seemingly well-developed applications, both were built for visualization only and do not feature any molecule editing capabilities. Given that we believe molecular interactivity to be important in educational use, this shortcoming drove us to develop our own software instead.
Since the inception of VRChem, several new applications have been released. ChemPreview [32] is a prototype piece of visualization software for the unreleased Meta augmented reality glasses that has some basic editing capabilities. NomadVR [33] is another piece of visualization software, designed to work natively on all VR platforms, but it does not allow editing of the displayed structure. The very capable and interesting Narupa [34] software allows some structural editing, but was not originally designed for building molecules from scratch. However, the separate Narupa builder [35] software is much better suited for molecular modeling in VR, and is one of the two applications, alongside VRChem, that we have found to feature robust structure editing. Similarly to VRChem, Narupa builder enables energy minimization of molecular structures with the OpenBabel library [36]. The workflow of Narupa builder differs from what is commonly used in other modeling software, but it is quick to learn, and the difference is not a detriment to its usage. The builder includes a built-in library of common molecular fragments (rings, amino acids, etc.) for composing large structures. Additionally, Narupa builder was released under an open-source license, like VRChem. Nanome [37] is another piece of software with full structure editing capabilities, and it has been used in a relatively large-scale undergraduate biochemistry class with tetherless VR technology (Oculus Quest) [38]. Though the molecule-building UI is unnecessarily menu-dependent and in our view could be streamlined, the software has superior visualization, protein editing and multi-user support. The software, however, is neither free nor open-source, and is thus not fully available for free educational or general use.

Implementation
VRChem was developed primarily as prototype molecule-building software for the purpose of testing the practicality of molecular modeling in virtual reality. Two nontraditional user input methods and control schemes were evaluated for their suitability in scientific computational chemistry research and tuition. The initial goal laid out for the project was to develop a VR application replicating the features and workflows of commonly used organic molecule editor software. VRChem was developed by a two-person team of one chemist (O. Pietikäinen) and one UX designer (D. Krupakar).

Hardware
The VR platform chosen for VRChem development was HTC's Vive virtual reality system. Like most contemporary HMDs, the Vive uses a pair of digital displays and Fresnel lenses to render the digital world and display it with a comfortable field of view and at a good focal distance. The positional tracking system of the Vive uses external devices that send out timed laser pulses for the headset and controllers to receive. These timing data are used to calculate the position and rotation of both hand controllers and the headset. The handheld controllers are wireless, but the HMD requires a wired connection to a PC and mains power, although a wireless adapter for the Vive is available. The connected PC is responsible for the graphically intensive task of image rendering and should therefore be fitted with a reasonably fast graphics processing unit to avoid any nausea or disorientation caused by a frame rate lower than the recommended 90 frames per second [39,40]. The software could be adapted to work on newer, fully integrated and truly wireless VR devices as well, but likely with limited features and performance. A wireless HMD would be more comfortable and convenient to use, and wireless HMDs are easier to use concurrently in a shared space. However, they have reduced computational capacities and often less accurate tracking. While the HTC Vive has been used as the main development platform, an early version of the software was developed for the Oculus Rift [41], and the current version could also be ported to recent, untethered VR hardware.
VR applications typically use handheld controllers for user input. With VRChem, we also experimented with creating a UX relying on tracking the user's bare hands from visual data alone. This method of input was expected to be more convenient and less cumbersome than one based on handheld controllers due to the reduction in weight and our natural familiarity with using our hands. This was achieved by using a Leap Motion Controller [42], a small USB device attached to the front of the HMD and containing a pair of IR-sensitive cameras and an IR LED. The camera pair captures a stereoscopic video feed of the user's hands, illuminated by the IR light, which is analyzed by the accompanying device driver in real time to determine the locations and orientations of the palms, hands and all joints of each finger. The left image in Figure 1 illustrates how the device functions. The driver software creates digital 3D skeletal models of the user's hands and exposes them to external applications through an API; the data can then be used to register actions such as pressing a virtual button, or a hand gesture, such as a thumbs-up. The right image in Figure 1 displays how the hand models are presented in our software.
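As a rough illustration of how such skeletal tracking data can be turned into a gesture event, the sketch below detects a pinch from the distance between the thumb and index fingertips. It is written in Python for brevity (the actual implementation is C#/Unity), and the frame layout, field names and 25 mm threshold are illustrative assumptions, not the Leap Motion API.

```python
import math

def distance(a, b):
    # Euclidean distance between two 3D points (coordinates in millimetres).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_pinching(hand, threshold_mm=25.0):
    """Register a pinch when the thumb and index fingertips nearly touch.

    `hand` is a hypothetical per-frame record of fingertip positions; a real
    driver would expose equivalent joint data through its own API.
    """
    return distance(hand["thumb_tip"], hand["index_tip"]) < threshold_mm

# Mock frame of tracking data: fingertips ~19 mm apart, so a pinch registers.
hand = {"thumb_tip": (0.0, 0.0, 0.0), "index_tip": (10.0, 15.0, 5.0)}
print(is_pinching(hand))  # True
```

In practice, a gesture recognizer would also require the pinch to persist for a few frames before firing, to filter out single-frame tracking glitches.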

Design
Initially, VRChem was designed for both scientific and academic use, and to augment or partially replace conventional modeling software used in tuition. In one of the considered usage scenarios, a computational chemist would plan and set up their experiment parameters with ordinary desktop software, but would then switch to VR to build or load the molecular models for the experiment. Later, after a successful calculation, VR would also be used to analyze the three-dimensional models and reaction steps of the calculation, along with any graphs and logs produced. However, in early development, this workflow was found to be impractical for day-to-day scientific work as a result of the frequent switching between VR and desktop use, and it was consequently seen as unlikely to be adopted. Despite this, seeing molecules and reactions as three-dimensional, macro-sized and nearly tangible was still an impressive experience that could motivate and teach students, and hence the target focus was shifted towards educational and demonstrational use in universities and upper secondary schools. Software requirements were adjusted accordingly towards a simplified, fluid and more entertaining user experience, with the final requirements list for the minimum viable product comprising:
• A quick and simple system for building organic molecules. The modeling mechanics should conform with those of common modeling software for faster learning.

Software Structure
VRChem is based on the Unity 3D game engine for core functionality and rendering. The Unity engine has good computational and memory performance and visual quality for our demands, and is widely supported by VR system developers and other third parties. The scripting language used in the project was C#, which was advantageous for its .NET underpinnings, enabling support for other third-party software and APIs based on the .NET framework. Unity also has a plugin system and digital storefront for extending the engine's functionality, which was used to integrate several plugin packages. Most notably, the Virtual Reality ToolKit (VRTK) plugin provides support for most HMDs with minimal additional configuration; an official plugin for the Vive HMD is used to support some features specific to the device; and a Leap Motion plugin allows access to the Leap Motion hand tracking data and enables configuration of customized gestures. Additionally, a .NET wrapper for the OpenBabel [36] chemistry toolkit is included for integrating some of OpenBabel's features into VRChem. VRChem was developed for and tested in a Windows environment with the HTC Vive VR system, but should be capable of running on other platforms and VR devices with small changes.
A detailed description of the software structure falls outside the scope of this article, though it is open for examination in the form of open-source code. One implementation detail that warrants discussion is that we chose to handle each atom and bond as a separate Unity object. Unity objects each have a location and rotation, belong to a hierarchical object tree, and are essentially containers for Unity components that determine the functionality of the object. Unity components are properties such as 3D meshes, collision meshes for physics calculations, rendering instructions and sound emitters. Treating each atom and bond as a separate object simplifies the program structure and is easier to maintain, but can lead to reduced performance when rendering large molecules. In VRChem, each atom and bond is a separate object and has a separate ball or cylinder mesh, material and rendering instructions, increasing the draw call count. VRChem uses geometry instancing with an instanced shader and object batching to reduce the draw call count, but with large molecules, the number of draw calls is expected to rise to a level where performance is affected and the application frame rate drops below a comfortable level. Therefore, VRChem might not be suitable in its current state for, e.g., displaying large biomolecules, and exploring other rendering alternatives is likely necessary to achieve greater performance. Quadric surface rendering [43] and the billboard rendering technique used in, e.g., QuteMol [44] are prime candidates for this. However, a laptop meeting the minimum performance requirements for VR use (4-core Intel i5-7300HQ at 3.5 GHz, 8 GB of DDR4-2133 memory, Nvidia GTX 1060 3 GB) running VRChem manages to render scenes with 5000 bonded atoms without performance issues. This performance is considered adequate for a program focusing mainly on building molecular models by hand.
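The core idea behind the instancing and batching mentioned above can be sketched language-agnostically: atoms that share the same mesh and material (i.e., the same element) are grouped so that each group can be submitted as a single instanced draw call instead of one call per atom. The sketch below uses Python for brevity; the actual renderer is Unity/C#, and the data layout here is an illustrative assumption.

```python
from collections import defaultdict

def batch_atoms(atoms):
    """Group atom positions by element; one instanced draw call per group.

    A renderer would pass each group's transform list to a single instanced
    draw of the shared ball mesh, rather than issuing one call per atom.
    """
    batches = defaultdict(list)
    for atom in atoms:
        batches[atom["element"]].append(atom["position"])
    return batches

# Three atom objects collapse into two batches (C and H):
atoms = [
    {"element": "C", "position": (0.0, 0.0, 0.0)},
    {"element": "H", "position": (1.1, 0.0, 0.0)},
    {"element": "H", "position": (-1.1, 0.0, 0.0)},
]
batches = batch_atoms(atoms)
print(len(batches))  # 2 draw calls instead of 3
```

The saving grows with molecule size: a hydrocarbon with thousands of atoms still collapses into only a handful of element batches, which is why instancing delays, but does not eliminate, the draw-call bottleneck described above.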

User Experience
Designing a good user interface and experience for a VR application, especially one featuring actions that lack obvious real-life analogues, presents a very different set of design challenges and demands rather different approaches from those used in desktop application design. The low button count of the controllers sets strict limits on quickly selectable actions and shortcuts, and usually forces the user to use a slow, pointing-based method of text input. Menus are therefore necessary for most tasks, and need to feature larger buttons to account for any inaccuracies in the 3D tracking of the controllers. Furthermore, most controllers are not designed to take advantage of the fine motor skills of the human fingers, opting instead for a palm grasp and a wrist-based method of pointing that is often more applicable to entertainment use, adding to the need for larger menu items. A different controller design could increase the accuracy and speed of pointing and selecting menu items in VR [45], whereas abandoning the controllers entirely might result in a less unwieldy experience in general.
To test the practicality of controller-free operation, we picked the Leap Motion Controller as the primary method of user input during early development. In our testing, the physical controllers were more accurately tracked in space and gave unambiguous signals of button presses, but were slightly heavy to hold, more cumbersome to use for scientific work and shaped for a grip that did not appear optimal for precise work. Using hand tracking was less awkward and enabled text input using a keyboard without needing to take off the head-mounted display. Additionally, the Leap Motion Controller's hand tracking was more accurate than anticipated. A challenge with the Leap Motion Controller was mapping different user actions to suitable hand gestures that would be reliably registered by the device while simultaneously feeling natural to perform and being easy to learn and remember for new users. The gesture recognition difficulties arise from the hand tracking device being fixed to the front of the HMD and thus having only a single, fixed viewpoint of the user's hands. This occasionally leads to situations where the view of the user's fingers is obscured by their palm. The hand tracking software attempts to analyze other hand elements to continue tracking the positions of the obscured fingers, but the accuracy has yet to reach a satisfactory level, which renders many gestures too unreliable for use due to input inconsistency and subsequent frustration in the user. The most natural-feeling gestures with the highest recognition accuracy were a pinch using an index finger and a thumb, a full-hand grab and extending a single finger. The gestures are pictured in Figure 2.
Due to the limited quantity of feasible control gestures, a system was designed that allows a single gesture to perform different actions depending on the user-controlled state of the software. The state system is intended to be transparent to the user; i.e., the user does not need to consider the current state of the system, but rather the state should be obvious at all times. An alternative system would be one where one gesture performs different actions based on, e.g., which tool is selected from a menu. However, such a system can lead to mode errors, in which the user performs a correct sequence of actions to reach their goal, but in the wrong mode, leading to an unintended result. Additionally, our intent was to reduce the need for menu interactions to a minimum to limit any attention shifting away from the primary modeling task and to speed up the usage of the software. At present, the software state depends on the selection of atoms and bonds, which modifies the functionality of some control gestures. A further increase in the number of possible user inputs was achieved with an asymmetric control scheme, where the same gesture performed with the left or right hand can induce distinct actions. The motivation behind this was that while our hands are anatomically symmetrical and could logically perform the same actions, most of us are more skilled in using one over the other and are already familiar with the asymmetric user interface of a mouse and keyboard, as well as many instruments, vehicles and more.

During the implementation of VRChem, we carried out user studies comparing the use of a traditional desktop application and VRChem for building molecules [41]. A user test group consisting of 11 students in chemistry, game design and graphic design carried out three different molecule-building tasks, which were performed with both a regular keyboard-and-mouse user interface and the VRChem software using the Leap Motion Controller. The molecules that the users had to build from scratch included ethylene glycol, 3,3′,3″-(benzene-1,3,5-triyl)tripyridine and 3-(N-methyl-N-ethynylamino)-1,2-propadien-1-ol. In the case of the simplest molecule (ethylene glycol), the students built the molecule in approximately the same time with the VR and regular user interfaces. The students built the two more complex molecules 20-30% faster with the regular interface. Not surprisingly, the students with little previous VR experience struggled most with the VR interface. Notably, one student could not complete the test, as the hand gesture VR interface had been designed for right-handed users and the student was left-handed. While the VR user interface did not lead to significant advantages over the regular keyboard-and-mouse builder interface, the overall feedback from non-chemistry students suggested that a VR environment is good for understanding molecular 3D structures. Some chemistry students also appreciated the powerful 3D visualization in virtual reality, even if the manipulation of the molecules with hand gestures was not as convenient as with a traditional keyboard-and-mouse interface.
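The state- and hand-dependent gesture dispatch described above can be sketched as a lookup keyed by hand, gesture and selection state. The sketch is in Python for brevity (the implementation is C#/Unity), and the action and state names are illustrative assumptions rather than VRChem's actual identifiers.

```python
def make_dispatch():
    # One physical gesture maps to different actions depending on which hand
    # performs it and on the current selection state of the scene.
    return {
        ("right", "pinch", "nothing_selected"): "create_atom",
        ("right", "pinch", "atom_selected"):    "create_bond",
        ("left",  "pinch", "nothing_selected"): "grab_view",
        ("right", "grab",  "nothing_selected"): "move_molecule",
    }

def handle_gesture(dispatch, hand, gesture, state):
    # Unmapped combinations are simply ignored rather than raising errors,
    # so stray or misrecognized gestures cause no unintended edits.
    return dispatch.get((hand, gesture, state), "no_op")

dispatch = make_dispatch()
print(handle_gesture(dispatch, "right", "pinch", "atom_selected"))  # create_bond
```

A table like this makes the asymmetric scheme explicit and auditable: adding a new input is a single entry, and the "no_op" default embodies the design goal that the wrong mode should never produce a destructive surprise.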
Overall, the early testing showed that the Leap Motion device's hand tracking could not provide an adequate level of accuracy for precise VR use. Most test users experienced occasional tracking errors that resulted in missing an intended target. Another common failure was related to gesture recognition, where a pinch-and-move gesture would be released prematurely. Updates to the device driver could improve the experience, as could the improved hardware that is now being integrated into some VR devices, but due to these problems the development focus was shifted to a UI based on the HTC Vive controller, pictured in Figure 3. The hand gestures chosen previously were adapted to the controller buttons with relative ease. The pinch was mapped to the index-finger-activated trigger button, and the grab was mapped to a button on the side of the controller. Functions previously tied to touching objects with an extended finger were implemented using the controller trackpad. The asymmetric design was maintained by creating a pair of controller modes, one for modeling and another for viewing and menu manipulation, which can be toggled using the menu button. Object selection when using hand tracking is done by touching objects with the virtual hand model, whereas the controller mode uses a ray cast from the controller that acts as a pointer. Both input implementations use comparable motions and hand muscles, and familiarity with one should be transferable to the other. The controller-based scheme also presents a slight accessibility improvement, as it can be used with a single controller by switching between interaction modes. Figure 4 shows a typical view from the HMD when using the controllers in a modeling task. When first loading into the application, the user is presented with a plain, empty virtual room with a floating menu panel. The purpose of the room is to give the illusion of standing on firm ground instead of floating in an empty void, which has caused discomfort or nausea in some users. The starting view is shown in Figure 5.

Application Menu
Functions that are used infrequently or that were difficult to map to hand gestures were implemented as virtual buttons on the floating menu panel shown in Figure 5. In hand tracking mode, the menu floats on the left side of the user at arm's length and follows the user as they walk around. In controller mode, the menu is attached to one of the controllers, akin to a painter's palette, but can also be frozen in place. The menu is implemented using Unity's built-in menu objects and can be used by touching the menu items with the skeletal hand model or by pointing and pressing the trigger on a hand controller.
The top section of the menu contains high-priority features that affect the entire scene. The Undo and Redo buttons are used for traversing the editing history of the scene; each step in the history is currently implemented by saving the entire scene in memory, but an incremental history is under consideration. The top section also has a button for erasing all scene objects permanently. Selecting the Remove all function launches a short physics-based animation in which the atom and bond objects are decoupled from each other and physics simulation is enabled on each. The objects are given a small impulse away from the center of the scene, giving the appearance of the molecule exploding and falling to the ground before disappearing. This effect has been surprisingly well received and has often been seen as delightful by test audiences.
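The whole-scene snapshot history behind Undo and Redo can be sketched with a pair of stacks. This is a minimal Python sketch under the assumption that a scene can be captured as a single value; the actual implementation stores Unity scene state in C#.

```python
class History:
    """Snapshot-based undo/redo: each edit pushes the prior scene state."""

    def __init__(self):
        self.undo_stack = []
        self.redo_stack = []

    def record(self, scene):
        # Called before an edit, with the scene state being replaced.
        self.undo_stack.append(scene)
        self.redo_stack.clear()  # new edits invalidate the redo branch

    def undo(self, current):
        if not self.undo_stack:
            return current  # nothing to undo
        self.redo_stack.append(current)
        return self.undo_stack.pop()

    def redo(self, current):
        if not self.redo_stack:
            return current  # nothing to redo
        self.undo_stack.append(current)
        return self.redo_stack.pop()

h = History()
h.record("empty scene")            # snapshot taken before building methane
print(h.undo("methane built"))     # back to "empty scene"
```

Because every step stores a full copy, memory use grows linearly with edit count, which is precisely the motivation for the incremental (delta-based) history mentioned above.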
The main part of the menu contains building- and editing-related controls. The central part contains a quick-select panel for common chemical elements and a button for opening a full periodic table. Each element has a unique color and a size based on its van der Waals radius. Several elements also feature VSEPR parameters based on their electron structures, and parameters for additional elements can easily be adjusted or added via a CSV file. Two check boxes toggle the automatic saturation of created atoms and the snapping of bond rotation to 15° increments instead of free rotation. When using hand tracking, a slider controls the size of the object removal volume. A button for loading molecules opens a file system explorer for selecting a supported molecular model file for loading.
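As an illustration of how per-element parameters might be read from such a CSV file, the sketch below parses a small table into a lookup dictionary. The column names and values are assumptions for the example, not VRChem's actual file layout.

```python
import csv
import io

# Hypothetical CSV layout; VRChem's actual column names may differ.
ELEMENT_CSV = """symbol,vdw_radius,valence,vsepr_angle
H,1.20,1,0
C,1.70,4,109.5
N,1.55,3,107.0
O,1.52,2,104.5
"""

def load_element_parameters(text):
    """Parse per-element parameters (van der Waals radius in angstroms,
    valence, ideal VSEPR bond angle in degrees) into a lookup table."""
    table = {}
    for row in csv.DictReader(io.StringIO(text)):
        table[row["symbol"]] = {
            "vdw_radius": float(row["vdw_radius"]),
            "valence": int(row["valence"]),
            "vsepr_angle": float(row["vsepr_angle"]),
        }
    return table
```

Keeping these parameters in a plain CSV file, as the text describes, lets users extend element support without recompiling the application.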
The bottom section contains options for turning on or off a force field structure minimization routine and a drop-down menu for switching between supported input methods.

Functions and Usage
The usage of the software is meant to imitate familiar patterns from commonly used software to make it more accessible and faster to master. A suite of modeling software was studied, including Avogadro [46], GaussView [47], ChemDraw [48] and MolView [49], which inspired the design of the software's functionality.

Atom and Bond Creation and Removal
New atoms and bonds are created with a right-hand pinch gesture or trigger press, emulating a mouse left click, as demonstrated in Figure 6. When performing the create action in empty space, a new atom is spawned either at the tip of the index finger of the virtual hand model or circa 5 cm in front of the controller. The element of the created atom can be selected from the menu. For elements with the necessary parameters set in a configuration file, the created atom is saturated with bonded hydrogen atoms in accordance with VSEPR theory, if this functionality is turned on from the application menu. Performing the create action while pointing at or touching an existing atom replaces that atom with one of the user-selected element. The new atom inherits all bonds from the replaced atom; no existing bonds are removed, but if the inherited bond count is less than dictated by VSEPR theory for the new element, extra hydrogen atoms are added automatically. Finally, starting the create action while pointing at an atom, then dragging and releasing over another atom, creates a new single bond between the two. Bonds are modified through the same touch-and-pinch or point-and-click gestures, which cycle between a single, a double and a triple bond. With "Auto Saturation" enabled, hydrogen atoms are added or removed as necessary, and other bonded atoms are left unmodified. This building logic is used in many applications and allows quick molecule modeling. The mechanics of this scheme are already familiar to most chemists, and the familiarity should transfer between software for most users. The control scheme also minimizes menu interactions while building molecules and offers a fluid workflow.

Atoms and bonds are deleted by pointing at them and pressing the upper quadrant of the controller touchpad or by extending the right thumb and touching the target object, as shown in Figure 7. When using hand tracking, a red cube around the tip of the extended thumb indicates readiness for object destruction. Atoms and bonds are removed on collision with the cube, and the size of the deleting volume can be adjusted from the application menu. All atoms and bonds can also be removed via a menu button, as shown in Figure 8.
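The replace-and-saturate behavior can be illustrated with a minimal sketch, assuming a toy data model of atom ids and bond pairs. VRChem's actual Unity/C# data structures differ, and the names and valence table here are illustrative.

```python
# Toy data model: atoms maps atom id -> element symbol; bonds is a set of
# frozenset pairs of atom ids. VALENCE gives the target bond count per element.
VALENCE = {"H": 1, "O": 2, "N": 3, "C": 4}

def replace_atom(atoms, bonds, atom_id, new_element, next_id):
    """Swap an atom's element, keep all inherited bonds, and top up with
    hydrogens when the inherited bond count is below the new valence.
    Returns the next unused atom id."""
    atoms[atom_id] = new_element
    bonded = sum(1 for b in bonds if atom_id in b)
    for _ in range(max(0, VALENCE[new_element] - bonded)):
        atoms[next_id] = "H"
        bonds.add(frozenset({atom_id, next_id}))
        next_id += 1
    return next_id
```

For example, replacing the oxygen of a water molecule with carbon keeps both O-H bonds and adds two more hydrogens, yielding methane, which matches the bond-inheritance rule described above.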

Editing and Measuring the Molecule Structure
The molecular structure can be edited with a right-hand grab gesture or controller side button press, which, combined with a system for selecting atoms and bonds, gives the user robust control over the molecule structure. The editing flow is illustrated in Figure 9. Objects are selected for editing by touching the object with an extended index finger or pointing at the object while pressing the lower quadrant of the controller touchpad. Selected objects are highlighted with a green outline and overlaid with a measurement of the selected bond length or angle. Selecting different features of the molecule switches the function of the grab gesture. In a state with a single bond selected, grabbing either of the bonded atoms adjusts the length of the selected bond, moving the connected residue as a whole. Selecting a bond and either of the bonded atoms before grabbing one of the adjacent bonded atoms adjusts the rotational angle of the selected bond. The dihedral angle between two bonds is adjusted by selecting two adjacent bonds and grabbing one of the atoms at the ends of the selected chain. This control scheme allows the structure of the molecule to be edited without menu interaction and without switching between build, measure and edit tools, and should therefore be quicker to use and more practical than other solutions. The main drawback of the approach is the lack of feature discovery via menu searching, which has to be addressed with tooltips and tutorials.
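The bond-length edit that moves "the connected residue as a whole" amounts to translating every atom reachable from one end of the bond without crossing back through the other end. The following is a minimal one-dimensional Python sketch with hypothetical names; the real application works on 3-D positions.

```python
from collections import deque

def residue_atoms(bonds, start, blocked):
    """Collect atom ids reachable from `start` without crossing `blocked`,
    i.e. the residue that moves as a unit when the bond start-blocked
    is stretched. `bonds` is a set of frozenset atom-id pairs."""
    seen = {start}
    queue = deque([start])
    while queue:
        a = queue.popleft()
        for bond in bonds:
            if a in bond:
                (other,) = bond - {a}
                if other != blocked and other not in seen:
                    seen.add(other)
                    queue.append(other)
    return seen

def stretch_bond(positions, bonds, a, b, delta):
    """Translate the residue on atom b's side of bond a-b by `delta`
    (a 1-D stand-in for moving along the bond axis in 3-D)."""
    for atom in residue_atoms(bonds, b, blocked=a):
        positions[atom] += delta
```

The same reachability search also identifies which atoms rotate together when a bond's torsion or dihedral angle is adjusted.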

Scene Rotation
One of the immersive features of virtual reality is the ability to physically walk around the virtual objects.For practical purposes, the objects can also be rotated by a left hand grab-and-move gesture or pressing the side controller button and moving the controller.With this asymmetric system, the user can simultaneously rotate the molecule with one hand and build or edit it with the other.

OpenBabel, Energy Minimization and File Handling
VRChem includes support for the OpenBabel [36] computational chemistry toolkit for coarse structure optimization and molecule loading. The energy minimization method VRChem currently uses is based on OpenBabel's MMFF94 [50] force field implementation. The force field is computationally lightweight and reasonably accurate for the small organic molecules VRChem is envisioned to work with. The feature is toggled on or off from the menu and, when turned on, calculates a single minimization step before rendering each frame. The calculation is fast and can handle molecules with up to 200 atoms before impacting the frame rate. To support real-time optimization of larger molecules, the calculation could be decoupled from the frame rate with a few modifications. The resulting animation of a structure changing shape is visually pleasant without interpolation while being reasonably accurate. The optimization is stopped when another animation is in progress or the molecule is being edited, and resumes thereafter, producing an interactive molecule editing experience. OpenBabel is also utilized for file format parsing when opening and loading chemical file formats. OpenBabel supports parsing of 84 file formats, and thus in theory VRChem should be able to read most chemical file formats currently in use, but not all have been tested.
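The per-frame scheduling of the minimization can be sketched as follows. A stub callable stands in for OpenBabel's MMFF94 steepest-descent step, and all names are illustrative rather than VRChem's actual code.

```python
# Sketch of the one-minimization-step-per-frame scheme described above.
# `step_fn` stands in for a single force-field step (in the real
# application, an OpenBabel MMFF94 steepest-descent iteration).
class InteractiveMinimizer:
    def __init__(self, step_fn):
        self.step_fn = step_fn
        self.enabled = True   # toggled from the application menu
        self.paused = False   # set while the molecule is being edited
                              # or another animation is in progress

    def on_frame(self, molecule):
        """Called once per rendered frame: advance the optimization by a
        single step so the structure relaxes as a smooth animation."""
        if self.enabled and not self.paused:
            self.step_fn(molecule)
```

Because one step runs per frame, the animation speed is tied to the frame rate; decoupling, as the text suggests, would mean running `step_fn` on a background worker and syncing coordinates to the renderer.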

Discussion
The finalized minimum viable version of the software has been demonstrated to several hundred BSc, MSc and PhD level students, researchers and pupils from secondary education. The feedback from these sessions has been generally positive. The sessions have been either one-on-one feedback sessions or public demonstrations where volunteers could test the software and bystanders could follow the VR view from a TV screen. The latter were either public poster sessions at various university events or small-group tuition sessions for students or pupils. For example, in one session, members of small 4-5-person groups of visitors from a local upper secondary school would each use the software while the rest of the group followed the VR activity from a TV screen. The viewers were tasked to identify a drug molecule for treating a given disease, e.g., the migraine drug Frovatriptan, and then guide the person using the software to build the molecule step by step. Afterwards, the group would work together to identify molecular features, such as chiral centers or functional groups. The VR activity was part of a three-day course in medicinal chemistry. In anonymous feedback collected from 14 students, over 90% of the students mentioned that the VR activity was highly interesting and motivating. In more specific comments, the students found the possibility to build the molecules themselves a highly exciting aspect of the program.

While the results of other workshops and demonstrations have not been evaluated statistically, anecdotal observations support the findings of other studies conducted on virtual reality-based learning. Most test users have been visibly excited about the software, and we have seen volunteers queue to try it out themselves at events. Virtual reality and seeing molecules as interactive objects has clearly increased interest in chemistry in many users, and the technology has been seen as having great potential for at least visualizing chemical phenomena. Several users have even expressed a wish that something similar had existed during their time in school. Virtual reality is naturally heavily tied to computing and the Internet, and can therefore easily be combined with other ideas, such as gamification [51,52], to perhaps achieve even greater learning results [53].
However, in its current developmental stage, the VRChem software would likely be most beneficial as a tool to motivate students to learn chemistry instead of a full-fledged teaching aid. We believe that educational use would require designing novel training exercises and learning material to specifically take advantage of the strengths of virtual reality, and simply adapting the existing material for VR use would not improve learning to any significant degree. Before such VR-native learning material exists, research results on the effectiveness of using VR in education will likely be limited in accuracy, and thus assessing the effectiveness of the VRChem application in educational use requires further experimentation.
Going forward, our plan is to take advantage of increasingly affordable computing power and Internet connectivity to create even better tools for the envisioned novel chemistry learning exercises. The next goal is to combine immersive three-dimensional visualization and molecular interactivity with fast ab initio [54] computational methods to enable real-time simulations and examinations of chemical reactions in an immersive setting. Additionally, we are planning a switch to augmented reality (AR) [55] in place of virtual reality. A major obstacle to using virtual reality in an educational setting is that it isolates the user from others in their own virtual world and complicates cooperation, observing one's surroundings and searching for information from other sources, including books or the Internet. AR devices such as Microsoft's HoloLens improve upon these aspects markedly and should therefore be more suitable for learning. Unlike virtual reality, augmented reality does not aim to transport the user to a virtual world but rather brings parts of the virtual world into the real one by superimposing the two. With AR, instead of building the molecular model inside a virtual room, the user is able to build it on top of their physical desk, for instance. In one of our visions for a novel AR exercise, a group of students using AR glasses connected through a wireless network collaborates on designing a chemical reaction. Using augmented reality, the group is able to discuss and cooperate, sharing opinions and ideas while working on the shared virtual reaction model. The virtual environment is wirelessly synchronized between the group's devices, letting the group build and edit the reagents together, then launch the reaction calculation and inspect the reaction progress and products. We firmly believe that such an application and exercise would be genuinely helpful for educational purposes, provided that affordable AR glasses become available in the future.

Conclusions
The VRChem project has reached its initial goal of developing a fully functional molecular modeling virtual reality application, albeit with a limited feature set. The software is one of the first VR molecular modeling applications focusing on building molecular models from the ground up and features workflows adapted from commonly used traditional modeling software to provide a quick and familiar user experience for molecular modeling. User tests indicate that VR use in academic chemistry research is unlikely to provide many advantages due to unwieldy virtual reality devices. However, the software could be impactful in educational use, and the program has been demonstrated to a large audience and has been positively received by university faculty and students.
As part of the project, we evaluated the practicality of registering user inputs with handheld controllers and via hand tracking using a Leap Motion camera. We found hand tracking conceptually intriguing, as it allowed rapid switching from VR to desktop use and did not prevent typing on a keyboard. However, actual use was jarring due to the imperfect tracking accuracy of the Leap Motion device. Until the accuracy of hand and gesture tracking improves, we will focus development on a physical controller-based UI. The Vive Wand controllers feel unnecessarily unwieldy for small-scale precision work, but have superior features and accuracy compared to hand tracking.
VRChem has low system requirements and has been tested to work on an HTC Vive VR system on a Microsoft Windows operating system. The software is free to use and modify under the GNU GPLv2 [56] license, and is available on GitHub at https://github.com/XRChem/VRChem (accessed on 13 November 2021) [57] in both binary and source code form.

Figure 1. The left image shows a still from the Leap Motion Controller's infrared camera overlaid with the interpreted hand model. The image on the right shows how the hands appear in VRChem.

Figure 2. Hand gestures used in the hand tracking based control scheme. (1) Pinch gesture: create new atoms and bonds. (2) Point gesture: select atoms and bonds for measuring and editing; interact with menus. (3) Thumb up: delete atoms or bonds. (4) Grab gesture: move atoms, rotate bonds or rotate the molecule.

Figure 3. Overview of the Vive Wand wireless controllers. (1) Trigger: create new atoms and bonds; interact with menus. (2) Grip button: move atoms, rotate bonds, or rotate the molecule. (3) Menu button: cycle the controller between menu and build mode. (4) Touchpad, top quadrant: remove atoms or bonds. (5) Touchpad, bottom quadrant: select atoms and bonds for measuring and editing.

Figure 4. Building a molecule in VRChem using wireless hand controllers.

Figure 5. The default starting environment of VRChem. The right controller is in build mode and the left controller is in menu mode. The application menu is attached to the left controller, but can be switched over to the right one or set to hover in place.

Figure 6. Atoms and bonds are created with a right-hand pinch gesture (above) or a controller trigger press (below). Touching existing atoms or molecules with the pinch or pointing at them with the controller replaces or modifies them.

Figure 7. Atoms and bonds can be deleted by touching them with an extended thumb or pointing at them with a controller and using the delete button.

Figure 8. All atoms and bonds can be removed by using a menu button.

Figure 9. (1) Activate structure measuring and editing by selecting a bond. Selected objects are highlighted in green and the relevant measure appears. (2) To edit the highlighted property, such as bond length, grab one of the highlighted atoms. Other properties can be edited by making further selections. The dihedral angle in (3) can be edited by selecting two adjacent bonds.