Computers 2018, 7(2), 34; doi:10.3390/computers7020034

Recommendations for Integrating a P300-Based Brain Computer Interface in Virtual Reality Environments for Gaming
IHMTEK (Interface Homme-Machine Technologie) Company, 38200 Vienne, France
Department of Image and Signal, University Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
Author to whom correspondence should be addressed.
Received: 8 March 2018 / Accepted: 18 May 2018 / Published: 28 May 2018


The integration of a P300-based brain–computer interface (BCI) into virtual reality (VR) environments is promising for the video games industry. However, it faces several limitations, mainly due to hardware constraints and to the stimulation required by the BCI. The main limitation remains the low transfer rate achievable with current BCI technology. The goal of this paper is to review current limitations and to provide application creators with design recommendations for overcoming them. We also overview current VR and BCI commercial products in relation to the design of video games. An essential recommendation is to use the BCI only for non-complex and non-critical tasks in the game. Also, the BCI should be used to control actions that are naturally integrated into the virtual world. Finally, adventure and simulation games, especially cooperative (multi-user) ones, appear to be the best candidates for designing an effective VR game enriched by BCI technology.
brain–computer interface (BCI); virtual reality (VR); design; game design

1. Introduction

A video game can be defined as “a mental contest, played with a computer according to certain rules, for amusement, recreation, or winning a stake”. It has also been defined briefly as “story, art, and software” [1]. In some cases, for example in serious games, amusement is not the main goal; however, to date amusement still plays a major role in the video game industry. Although by completely different means, virtual reality (VR) and the brain–computer interface (BCI) are both excellent candidates for enhancing the possibilities of entertainment and satisfaction in video games. Indeed, both enhance immersion, and it is a common belief that this encourages the feeling of amusement. The concept of immersion was defined in [2], observing that anybody may enjoy a game with immersion, even if the gaming control seems to play the main role in the user’s enjoyment. According to [1], this feeling of immersion is created by computer graphics, sound, haptics, affective computing and advanced user interfaces that increase the sense of presence. Virtual reality is a collection of devices and technologies enabling the end user to interact in three dimensions (3D) [3], e.g., spatialized sounds and haptic gloves (for example Dexmo, Dexta Robotics, Shenzhen, China). The particular type of experience created by VR is emphasized in [4]. Such an experience is named telepresence, defined as the experience of presence in an environment by means of a communication medium [4], echoing the concept of presence of [1]. A BCI can also enhance the feeling of presence in the virtual world since it can replace or enhance mechanical inputs. According to [2], immersive games are played using three different kinds of inputs: visual, auditory and mental; since a BCI may transform ‘mental’ signals into input commands, such an interface may play a unique role in the mentalization process involved in the feeling of immersion.
However, considering the limitations of a BCI system (to be analysed later), it is still not clear to what extent current BCI technology may improve immersion. As pointed out in [2], “engagement, and therefore enjoyment through immersion, is not possible if there are usability and control problems”.
An element of amusement derives from the originality and futuristic aspect of BCI technology as compared to traditional inputs, such as a mouse, a joystick or a keyboard. Nonetheless, as often happens in the technology industry, BCI technology risks being dropped by the general public if the improvement it brings is not worth the effort needed to use it. Virtual reality has already enjoyed the “wow-factor”, and VR systems nowadays tend to be employed in commercial events especially for raising this effect (Feel Wimbledon by Jaguar, Coca Cola’s Santa’s Virtual Reality Sleigh Ride, McDonald’s Happy Meal VR Headset and Ski App, Michelle Obama’s VR Video, XC90 Test Drive by Volvo, etc.). The “wow-factor” is defined in the Cambridge dictionary as “a quality or feature of something that makes people feel great excitement or admiration”, and was previously studied in the domains of marketing and education (e.g., [5,6]). The recent development of dedicated VR headsets, that is, head-mounted devices (HMDs, e.g., the Oculus, Facebook, Menlo Park, CA, USA; HTC Vive, HTC, Taoyuan, Taiwan; Google Cardboard, Google, Mountain View, CA, USA) has paved the way to the commercialization of combined BCI+VR technology. Indeed, HMDs provide an already built-in structure that can support the embedding of EEG (electroencephalography) electrodes, which are needed for the BCI. The Neurable Company (Cambridge, MA, USA) has recently announced a product combining an HTC Vive (Taoyuan, Taiwan) with an EEG cap. The HTC Vive, as well as other HMDs such as the SamsungGear (Samsung, Seoul, Korea), uses on-board electronics, thus herein we refer to them as active devices. On the contrary, passive HMDs consist of a simple mask with lenses in which a smartphone is inserted (Figure 1). Passive HMDs are particularly promising for the BCI+VR field since they are very affordable, and smartphones are nowadays ubiquitous.
Prototypes of BCI-based video games already exist [7,8,9,10,11,12,13,14,15,16,17,18]. They are mainly based on three different BCI paradigms: the steady-state visually evoked potential (SSVEP), the P300 event-related potential (ERP) and mental imagery (MI). The first two require sensory stimulation of the user, usually visual, and are defined as synchronous because the application decides when to activate the stimulation so that the user can give a command [19]. In this article we focus on P300-based BCIs. As compared to MI-based BCIs, P300-based BCIs require shorter training, achieve a higher information transfer rate (amount of information sent per unit of time) and allow a higher number of possible commands [20,21]. As compared to SSVEP-based BCIs, they feature a lower information transfer rate; however, the flickering used for eliciting SSVEPs is annoying and tiring, besides presenting an increased risk of eliciting epileptic seizures [22]. P300-based BCIs are based on the so-called oddball paradigm, an experimental design consisting of the successive presentation of discrete stimuli, most of which are neutral (non-TARGET) and a few (rare) of which are TARGET stimuli. In the case of the P300-based BCI, items are flashed on the screen, typically in groups. A sequence of flashes covering all available items is named a repetition. The goal of the BCI is to analyze the ERPs over one or more repetitions to identify which item has produced a P300, a positive ERP that appears around 300–600 ms after the flash of the item the user wants to select (TARGET). The typical accuracy of P300-based BCIs has risen over the past years from about 75% after 15 repetitions of flashes [23] to about 90% after 3 repetitions using modern machine-learning algorithms based on Riemannian geometry [24,25,26]. In practice, this means that at least one second is necessary for such a BCI to issue a command, and more may be needed to issue reliable commands.
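The trade-off between the number of repetitions and the selection accuracy can be illustrated with a toy Monte-Carlo simulation. This is a sketch under simplifying assumptions of ours (Gaussian per-flash classifier scores, independent repetitions); it does not model any specific classifier, but it reproduces the qualitative behaviour described above:

```python
import random

def simulate_accuracy(n_items=6, snr=1.0, n_reps=3, n_trials=2000, seed=42):
    """Monte-Carlo estimate of P300 selection accuracy.

    Each flash yields a classifier score: Gaussian with mean `snr`
    for the TARGET item and mean 0 otherwise. Scores are summed
    over repetitions and the item with the highest sum is selected.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        target = rng.randrange(n_items)
        totals = [0.0] * n_items
        for _ in range(n_reps):
            for item in range(n_items):
                mean = snr if item == target else 0.0
                totals[item] += rng.gauss(mean, 1.0)
        if max(range(n_items), key=totals.__getitem__) == target:
            correct += 1
    return correct / n_trials

# Accuracy grows with the number of repetitions, at the cost of
# selection time (one repetition = one flash of every item).
for reps in (1, 3, 15):
    print(reps, round(simulate_accuracy(n_reps=reps), 2))
```

The point of the sketch is the accuracy/time trade-off: each extra repetition averages out more noise but delays the command by one full flashing cycle.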
We anticipate that the integration of BCIs in VR games, thanks to the development of integrated HMD-EEG devices, will foster the acceptance of this technology by both the video game industry and gamers, thus pushing the technology into the real world. However, the development of a concrete application for the general public faces several limitations. In the domain of virtual reality, motion sickness appears to be one of the most severe limitations. However, this limitation seems relatively weak in comparison to those raised by the BCI system. Above all, BCIs are often unsightly, and the electrodes are not easy to use. Also, users in virtual reality may move a lot, and this jeopardizes the quality of the EEG signal. In this work we analyse these limitations and give recommendations in order to circumvent them. This work has been inspired by previous contributions along these lines concerning BCI technology [12,27,28] and its use in VR [29]. Here we integrate these previous works with similar guidelines found in the literature on VR [30,31,32,33] and spatial mobility [14,34,35,36], focusing on P300 technology for BCI and gaming applications for VR. In the following section, we present the limitations concerning general public use, divided according to whether they are introduced by (1) the HMD, (2) the BCI system (in general or by the P300) or (3) both BCI and VR. For the purpose of clarity, we present the recommendations directly after each corresponding limitation. Numerous limitations and recommendations already discussed in the literature are considered and others are added here. Since there is not the same level of evidence for all of them, the limitations and recommendations we report are labelled according to the following taxonomy:
Level of Evidence (LoE) A: The recommendation or limitation is a fact, or there is a strong evidence supporting it; for instance, it has been reported in a review paper or in several studies.
LoE B: The evidence supporting the limitation or recommendation is weak for one or more of the following reasons:
It appears relevant, but it is still not currently exploited in BCI or VR.
The limitation or recommendation was stated in some papers but challenged in others.
The limitation or recommendation has been sparsely reported.
The limitation or recommendation has appeared in old publications and is now possibly outdated in light of technological improvements.
LoE C: The limitation or recommendation is introduced here by the authors, thus it requires independent support and validation.
The authors acknowledge that not all listed limitations and recommendations are equally relevant when designing a BCI+VR system. For example, the limitation of the field of view (discussed later) is very specific to the VR domain; it is not a major concern for a BCI+VR system in comparison to other limitations such as the need for an ergonomic EEG cap. In parallel to the LoE just defined, we therefore also label the limitations and recommendations according to their Level of Interest (LoI), that is, their pertinence in the process of designing a BCI+VR system:
LoI 1: The recommendation or limitation deeply impacts the conception of a BCI coupled with VR.
LoI 2: The recommendation or limitation is relevant for the field, but might be ignored for a prototypical version of a BCI+VR.
LoI 3: The recommendation or limitation is secondary.

2. Limitations and Recommendations

2.1. Limitations of the Head-Mounted Device (HMD)

2.1.1. Inertial Measurement Unit (IMU) Accuracy


The Inertial Measurement Unit (IMU) is not accurate enough (LoE A; LoI 3). The smartphone's position and rotation in space are determined by the IMU, the accuracy of which varies widely across smartphone models (see [37] for a benchmark of different smartphones, and [38] for an in-depth review of this problem). As a consequence, a VR device may detect movement when the user is not moving. This creates the perception that the virtual world moves slightly around the user, forcing the user to rotate his/her head to compensate and follow the scene. In turn, these movements generate artifacts in the EEG signal.


This problem seems restricted to passive HMDs (where the smartphone is the only virtualisation device). It is solved in the active SamsungGear device (Samsung, Seoul, Korea), which incorporates a good-quality IMU. It may also be solved by tracking the user's head by means of an external tracker (LoE C; LoI 3). A new generation of VR devices based on sensor fusion (the combination of sensory data or data derived from disparate sources) may solve this problem, as well as positional tracking, by correcting the IMU bias according to a video camera input (LoE B: sensor fusion is not yet mature; LoI 2). The main commercial products are:
The Daydream SDK (Google, Mountain View, US), which is associated with Lenovo (Lenovo, Hong Kong, China) and Vive (HTC, Taipei, Taiwan; Valve, Washington, US).
The Structure Sensor (Apple, Cupertino, US).
The ZED camera (Stereolabs, San Francisco, US).
The Windows Mixed Reality platform (Microsoft, Washington, US). (Among others, Lenovo (Hong Kong, China), HP (California, US), Acer (Taipei, Taiwan) and Samsung (Seoul, South Korea) have already built headsets for the Windows Mixed Reality platform.)
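The core sensor-fusion idea—correcting gyroscope drift with an absolute but noisier reference such as a camera—can be sketched with a one-dimensional complementary filter. This is an illustrative sketch of ours; the filter structure, sample rate and bias value are assumptions, not details of any of the products listed above:

```python
def complementary_filter(gyro_rates, camera_yaws, dt=0.01, alpha=0.98):
    """1-D sketch of sensor fusion: integrate the (drifting) gyroscope
    and continuously pull the estimate toward an absolute but noisy
    camera-based yaw measurement.

    gyro_rates  : angular rates in deg/s (may carry a constant bias)
    camera_yaws : absolute yaw measurements in deg (drift-free)
    alpha       : trust placed in the gyroscope at each step (0..1)
    """
    yaw = camera_yaws[0]
    estimates = []
    for rate, cam in zip(gyro_rates, camera_yaws):
        yaw = alpha * (yaw + rate * dt) + (1.0 - alpha) * cam
        estimates.append(yaw)
    return estimates

# A stationary user: true yaw is 0, but the gyro has a 5 deg/s bias.
# Pure integration would drift by 5 deg after one second; the fused
# estimate stays bounded well below that.
n = 100  # one second at 100 Hz
fused = complementary_filter([5.0] * n, [0.0] * n)
print(round(fused[-1], 2), "deg of residual error (vs 5.0 deg pure drift)")
```

This is exactly the failure mode described above: without the absolute reference, the integrated bias makes the virtual world slowly rotate around a motionless user.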

2.1.2. Locomotion in Virtual Reality


Tracking the user's position is problematic (LoE A; LoI 3). Tracking a user’s position in an indoor environment such as a room is a global issue for VR devices. Low-cost or portable devices use a gyroscope to track the user’s head rotation; however, they cannot determine the user’s position. More expensive devices, like the Vive or the Oculus, can track the user's position in a specific area. This area is limited by the size of the room where the game takes place as well as by the position of the motion capture sensors (around 25 m² for the Vive and 5 m² for the Oculus). Often game designers and application creators want to allow movement in a virtual world that is substantially bigger than the room where the user is. This limitation applies to VR systems only; indeed, the use of locomotion with a BCI is not recommended (see the limitations of the BCI system in general).


We propose here five solutions for locomotion in VR. The reader is referred to [39] for a systematic review of locomotion in virtual reality.
Teleportation (LoI 1: very relevant for BCI since this technique does not require any movement), used for example in the following games: the Portal Series in VR (Valve, Bellevue, WA, USA), Robo Recall (Epic Games, Cary, NC, USA) or Raw Data (Survios, Los Angeles, CA, USA); the user focuses on the area of interest and then clicks to teleport to the place s/he has selected. This solution is simple to implement, but its effect on motion sickness is not clear. On the one hand, the appearance of an unnatural scene cut could induce motion sickness (LoE C). On the other hand, teleportation may also reduce motion sickness since it does not involve visible motion (a hypothesis presented in [40], with mixed results). (LoE B: According to the systematic review [39], teleportation is a mainstream technique, but there is a lack of empirical studies about it.)
Walk-in-place [30,41], e.g., VR-Step (VRMersive, Reno, NV, USA) and RIPMotion (RIPMotion, Raleigh, CA, USA). The user first focuses on the area where s/he wants to move and then walks in place to execute the movement and arrive at the selected destination. This solution reduces motion sickness because the user receives a sensation of movement as the virtual world moves. However, it is more complicated to implement because accelerometer data are needed in order to detect the user's vertical movements (walking). The accelerometers ordinarily employed are not always sufficiently accurate to estimate the step length, and the movement may appear unnatural, again possibly resulting in motion sickness. Moreover, the walk-in-place input is restricted to situations where the user has to walk, since the in-place gesture cannot easily be mimicked for applications where the player swims or flies, for example. (LoE A: Stated in many games and studies; LoE C: limited to a few situations; LoI 3: this technique is secondary since motion is not recommended with BCI.)
Gesture recognition, implemented by Raptor-lab (Lyon, France). For example, this reproduces the way skiers push on their poles to move forward. The company claims that their solution “offers human locomotion in VR with full freedom of movement, ‘as real life’ agility and liberty of action, which means, walking, running, climbing, jumping, crawling, and even swimming. And all that while avoiding motion sickness”. The gesture system was used in a game called “The Art of Fight” for the HTC Vive that has reached 10,000 players. (LoE B: Used in games, but not evaluated in scientific studies; LoI 3: it requires motion.)
Motion platforms (similar to a fitness treadmill) allow the use of movement in order to navigate within a restricted area. There are already commercial motion platforms such as the Virtuix Omni (Virtuix, Austin, TX, USA), WalkOVR (WalkOVR, Istanbul, Turkey) or VR Motion Simulators (Virtec Attractions, Balerna, Switzerland)—Figure 2. So far, this technology has been restricted by its bulk, by the fact that different physical movements require different platforms, and by sometimes unnatural movements—in particular, the Virtuix Omni was criticized because it reproduces an unnatural walking motion. Thus, these VR motion platforms are likely restricted to arcade rooms. (LoE A; LoI 3.)
Sensor fusion (LoI 3: sensor fusion is used when the player is expected to physically move). Currently employed accelerometers are not sufficiently accurate to determine the position of the user by twice integrating the sensor’s output. By adding information from a gyroscope, a magnetometer and a camera (image recognition), a next generation of devices may be able to accurately determine relative position without the use of any external sensors. This will allow the expansion of the game area and reduce the encumbrance of the system. Such incorporation of elements from the real world into the virtual world is known as augmented virtuality (Figure 3) and is part of the mixed reality domain. The reader may refer to [31,42] for a classification of virtualisation technology and a description of mixed reality. (LoE B: sensor fusion is still in an early research phase.)
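As an illustration of why the walk-in-place technique above is harder to implement than teleportation, a minimal step detector over vertical accelerometer samples might look as follows. This is a toy sketch of ours (threshold and refractory values are arbitrary); shipping products such as VR-Step certainly use more robust filtering and per-user calibration:

```python
def count_steps(accel_z, threshold=1.5, refractory=10):
    """Count steps from vertical accelerometer samples (in g) with a
    simple threshold crossing plus a refractory period, so that one
    bounce is not counted twice. Illustrative only.
    """
    steps = 0
    cooldown = 0
    for sample in accel_z:
        if cooldown > 0:
            cooldown -= 1
        elif sample > threshold:
            steps += 1
            cooldown = refractory
    return steps

# Synthetic trace: baseline 1 g with three spikes (three in-place steps).
trace = ([1.0] * 20 + [2.0] + [1.0] * 20 + [2.1]
         + [1.0] * 20 + [1.9] + [1.0] * 20)
print(count_steps(trace))  # → 3
```

Even in this idealized form, the detector says nothing about step length, which is precisely the information needed for natural-looking locomotion.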

2.1.3. Motion Sickness


Motion sickness (LoE A; LoI 1: the comfort of the user is a main concern). The HMD may provoke motion sickness [43,44]. In general, motion sickness arises when there is a mismatch between the visual and the vestibular systems, for example when travelling on a ship or a car without seeing the horizon and the road [45]. Sensory conflict is commonly used to explain such a sensation of discomfort in a VR context [33,46]; however, this is still unproven [33,47] (LoE B: There is no agreement concerning the cause of motion sickness.) Practical factors that have been found to induce motion sickness in virtual environments include:
The user observes a movement that does not happen in the real world [44]. (LoE B; LoI 1.)
The lag between the movement of the user and the movement of the avatar in the virtual world [43]. For instance, this lag may be due to the refresh rate of the screen and/or computation time due to high-quality graphics. (LoE B; LoI 1.)
Motion sickness increases with a wider field of view, with an asymptote starting at 140° [48]. This figure is criticized in [49], but [49] did not use the same measure of motion sickness as [48]. (LoE B; LoI 2.)
Postural instability [50,51]. (LoE A; LoI 3: since movements are limited in BCI this factor does not really apply here.)
In evaluating the risk of motion sickness, the following factors should be considered:
Mental rotation ability and field dependence/independence; better mental rotation ability and a weak tendency towards field dependence or independence may result in less motion sickness. (LoE A; LoI 3: this limitation applies to the end user, independently of the application design.)
People feeling motion sickness in real life are more likely to experience motion sickness in virtual reality environments [52]. (LoE B; LoI 3.)
Age and gender are correlated to virtual sickness; motion sickness is less common in the age range 21–50 [33,45,53] and women are more exposed than men [54,55,56]. (LoE B; LoI 3.)
The motion sickness limitation especially applies to VR. However, this limitation appears to be a major limitation for a BCI+VR system since people feeling sick will not be able to use the system.


The following recommendations are useful to reduce motion sickness in a VR setting:
Avoid motion parallax effect [57,58]. (LoE A; LoI 1.)
Avoid flickering, that is, visible fading between images of the display [33]. (LoE A; LoI 1.) Main constructors such as Oculus and HTC Vive use a refresh rate of about 90 Hz, suggesting that less than 90 fps may result in a flickering effect. However, headsets such as the Oculus (Facebook, Menlo Park, CA, USA) already provide frame interpolation if the render rate is not high enough. (LoE B: The minimum requirement of 90 Hz is going to be outdated.)
Avoid cutting scenes, since such transitions do not apply in the real world. (LoE C; LoI 3: see locomotion in virtual reality.)
Avoid extreme downward angle, such as looking downward at short distance in front of the virtual feet [59]. (LoE B; LoI 1.)
Try to use stroboscopic vision and overlaying glasses as suggested in [32]. (LoE C: Stated for motion sickness but not for virtual sickness; LoI 3.)
Take breaks out of the VR immersion as motion sickness increases with playing time [59]. (LoE B; LoI 1.)
Introduce a static frame of reference [60,61,62]. (LoE A; LoI 1.)
Dynamically reduce the field of view in response to the visually perceived motion [63]. (LoE B; LoI 2.)
Try movement in zero gravity. (LoE C; LoI 2: this technique may be relevant for application design but has to be studied in depth.) An example is the game “Lone Echo” (Ready at Dawn Studios, 2017), which was nominated at the Game Awards for best VR game. This game takes place in space and reproduces the sensation of floating by disabling gravity and continuously moving the player with an endless drift, which has been suggested to diminish motion sickness.
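The dynamic field-of-view recommendation above can be sketched as a simple mapping from visually perceived motion to the rendered FOV. The function shape and its parameter values are illustrative assumptions of ours, not the exact method of [63]:

```python
def dynamic_fov(angular_speed, fov_max=100.0, fov_min=60.0, speed_cap=90.0):
    """Shrink the field of view as visually perceived motion grows,
    a software counterpart of the vignetting technique in [63].

    angular_speed : perceived rotation speed of the scene (deg/s)
    Returns a FOV in degrees, linearly interpolated between fov_max
    (user at rest) and fov_min (fast motion), clamped at both ends.
    """
    t = min(max(angular_speed / speed_cap, 0.0), 1.0)
    return fov_max - t * (fov_max - fov_min)

print(dynamic_fov(0.0))    # full FOV at rest → 100.0
print(dynamic_fov(45.0))   # half-way → 80.0
print(dynamic_fov(200.0))  # clamped at fov_min → 60.0
```

A renderer would evaluate this every frame from the camera's angular velocity, narrowing the view only while motion is visible and restoring it as soon as the scene is still.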

2.1.4. Unsolved Limitations: Asset Restriction and Field of View

The following limitations apply to VR in particular; their impact on BCI+VR systems appears limited.
  • Asset restriction (LoE A). The use of an HMD does not fit large assets: graphics are restricted in terms of polygon count and texture size. This restriction is due to the graphics engine, the hardware and stereoscopic vision. Stereoscopic vision appears to be the main problem, since a texture has to be drawn twice, once for each eye. Moreover, big assets require more computational power, which is mainly limited by the hardware capability, in particular when using a mobile platform such as the SamsungGear (Samsung, Seoul, South Korea). The graphics engine may also have a great impact on performance when, among others, multithreading or batching are enabled. (LoI 3: The quality of the graphics may be considered secondary when designing an ergonomic interface.)
  • Field of View (FOV) is limited (LoE A; LoI 3). The HTC Vive (HTC, Taoyuan, Taiwan) has the largest FOV among the currently available virtual reality headsets. Its FOV is about 100°, that is, around 80° less than the human FOV. Such restricted FOV limits the feeling of immersion, while a wider FOV causes optical distortion of the image and increases the sensation of motion sickness [48,49].

2.2. Limitations of the Brain–Computer Interface (BCI) Systems

2.2.1. BCI System in General


  • Comfort and ergonomics of the electrodes (LoI 1: The use of electrodes introduces a major discomfort). Traditional EEG caps require a gel or paste to establish contact between the electrodes and the scalp; the alternative, dry electrodes, is often more uncomfortable (Figure 4). This limitation is a major concern for the end user, since an unsightly, messy or painful product has little chance of being successful (LoE C). Research is ongoing to develop EEG caps that are easy to set up, easy to clean and comfortable, with the key requirement of allowing accurate EEG signal recording [14,35,36,64]. Among the commercial products, we can mention the “Mark IV” (OpenBCI, New York, US) and the “Muse” headband (Muse, Toronto, Canada), which are very easy to set up but cover only a small portion of the head, and the “Quick-20” (Cognionics, San Diego, US), a dry EEG headset that can be set up in a few minutes. Most of these systems diminish the setup time by providing an already built-in support structure and dry electrodes. Nevertheless, the claim that dry electrodes are easier to install was challenged in [65]; this study concluded that the setup time was equal or even longer with dry electrodes, since they do not easily adapt to the shape of the head. This observation does not take into account the fact that the user may have to clean their hair after using gel-based electrodes. A common concern for the aforementioned products is that the number of electrodes and/or the quality of the signal is not sufficiently high for P300-based applications ([36,66] have a mixed point of view, whereas [67] is more optimistic). (LoE A: wet electrodes are more accurate, stable and comfortable than dry electrodes; LoE B: dry electrodes are easier to install and remove if the user is experienced and the cap can easily adjust to the shape of the head.)
  • Locomotion with the electrodes. The use of a BCI while moving around is debatable. It has been shown that the recognition of the P300 component is still possible when walking, but the performance of the system is reduced [36]. (LoE B; LoI 2: It is possible to design applications with limited movements, sacrificing the immersion feeling and the VR capabilities.)
  • Tagging. An important technical aspect of P300-based BCIs is that the BCI engine needs to be informed of the exact moments when the stimulations (flashes) are delivered. Traditionally this is obtained by hardware tagging, that is, through a serial or parallel port by which the user interface (UI) sends a tag to the EEG acquisition unit at each flash, which in turn synchronizes the tags with the incoming EEG data. This is an accurate method, allowing, in general, a tagging error within ±2 milliseconds. The alternative is known as software tagging and can be achieved in several ways. Software tagging is best achieved by synchronizing the clocks of the machine on which the UI software runs and of the EEG acquisition device, the lack of which may result in a rather high tagging error [6]. This kind of synchronization problem is well known in the network domain, where multiple servers must be synchronized or when dealing with an array of wireless sensors [68]. If hardware tagging is used, the wire connecting the UI to the EEG acquisition unit may limit the movement of the user. (LoE A; LoI 1: From the perspective of the user, it is important to guarantee the accuracy of the system while minimizing its encumbrance.)
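For software tagging, the clock offset between the UI machine and the EEG acquisition unit can be estimated with an NTP-style round-trip probe. The following is a minimal sketch of the principle under the usual symmetric-delay assumption; it is not the protocol of any specific acquisition system:

```python
def estimate_offset(t_send, t_server, t_recv):
    """NTP-style clock-offset estimate between the UI machine and the
    EEG acquisition unit, assuming a symmetric transmission delay.

    t_send   : UI clock when the probe was sent
    t_server : acquisition-unit clock when the probe was handled
    t_recv   : UI clock when the reply came back
    Returns (offset, round_trip) in the same time unit.
    """
    round_trip = t_recv - t_send
    offset = t_server - (t_send + round_trip / 2.0)
    return offset, round_trip

# The EEG unit's clock is 250 ms ahead; the one-way delay is 10 ms.
offset, rtt = estimate_offset(t_send=1000.0, t_server=1260.0, t_recv=1020.0)
print(offset, rtt)  # → 250.0 20.0

# A flash tagged at UI time t can then be mapped onto the EEG
# timeline as t + offset before epoching the signal.
```

Averaging the offset over several probes, and discarding probes with a large round trip, keeps the residual tagging error small even over wireless links.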


  • In order to diminish the setup time, the EEG cap could be embedded in the VR device (LoE C; LoI 1). For this purpose, since pin-shaped dry electrodes turn out to be uncomfortable, a new generation of dry electrodes based on conductive polymers could be preferred (e.g., [69]) (LoE B; LoI 1). For gaming, this solution is preferable to wet electrodes, as flexible dry electrodes do not require a long setup and cleaning (LoE C; LoI 2: see the previous discussion on dry and wet electrodes). The problem concerning the quality of the signal might be addressed by a shield placed over the electrodes to prevent electromagnetic contamination (prototype by A. Barachant) (LoE C; LoI 1). Another option is the use of miniaturized electrodes (LoE B; LoI 1). In [70], the authors conceived a system made of 13 miniaturized electrodes placed in and around the ear in addition to the traditional sites for P300 recognition (central, parietal and occipital locations). An offline analysis showed that the accuracy of this system was comparable to that obtained in a previous study with state-of-the-art equipment [34]. However, the authors point out the lack of robustness of such a system if the user moves around.
  • The camera and accelerometers could be used to detect the user's movements and remove the corresponding EEG signal from the analysis (LoE C; LoI 3). It has also been suggested to use automatic online artifact rejection such as [71], or to use new features such as the weighted phase lag index when walking [72]. However, we note a lack of real-life studies such as [36] (LoE B: It works in laboratory conditions, but there is a need for out-of-the-lab studies; LoI 1: The removal of movement artifacts, if effective, will allow the user to move freely in VR.).
  • When the UI and the BCI engine run on separate platforms: if the EEG acquisition unit is not mounted on the head, like in [34], prefer software tagging over hardware tagging (LoE B; LoI 1)—Figure 5b.
  • It would be even better to directly embed a wireless EEG acquisition unit on the VR device. With such a system, the tagging problem would be solved for good [34] (LoE B; LoI 1).
  • Embed the EEG acquisition unit as well as the EEG analysis on the EEG cap, making the BCI completely independent of an external computer (Figure 5a) (LoE C; LoI 1). Such a system would also avoid problems related to wireless communication (e.g., data loss, signal perturbation, etc.) (LoE B; LoI 1). A simple application of this recommendation consists in placing the PC on which the BCI engine runs in a backpack [34]. For HMDs linked to a PC, MSI (Zhonghe, Taiwan) has released VR-ready PCs placed in a backpack.

2.2.2. Limitations of P300–Based BCIs

Possibly the most severe limitations are engendered by the P300 paradigm itself. We list here four limitations, the first three being well-known [12] (LoE A; LoI 1). The last one is a suggestion from the authors (LoE C; LoI 1).

Synchronous BCIs


As we have seen, a P300-based BCI is synchronous; thus it is not possible to control a continuous process where constant error correction is required, for example when driving a car. Rather, it is possible to perform a goal selection task, for example choosing the final destination of a vehicle.


Enable goal selection strategies and gradual control strategies [12]. Gradual control means controlling a continuous process in a discrete way by means of a limited set of separate goals. For example, the player may control speed by focusing on items such as SLOW, MODERATE and FAST. (LoE A; LoI 1.)
Use the concept of a cone of guidance [27], inspired by a game described in [73] where the player has to guide a helicopter through floating rings. In the process of approaching a ring, the player is assisted by an invisible cone that improves the player's performance, but this assistance is not necessary to win the game. From a larger perspective, the cone of guidance may refer to any optional computer assistance that helps the user to perform a task while still requiring enough input from the user to finish it. (LoE B; LoI 1.)
Use high-level commands [29]; they drive faster toward the sought result, although they are less intuitive. An example of navigation in a museum using high-level commands is shown in [29]. In this example, the user has to select a point of interest in the museum using three commands: two commands select the point of interest through a succession of binary choices; the last one deletes the last binary choice. The authors compared this method of navigation to navigation using low-level commands (such as turn left/right and go forward). The results of a subjective questionnaire show that high-level navigation is faster and less tiring than low-level navigation, but also less intuitive because of the succession of binary choices. (LoI 1; LoE B: the given reference is about an MI-based BCI; it is not clear to what extent the result applies to P300-based BCIs, since the latter allow a higher number of choices.)
Do not separate stimuli and actions; always incorporate them into the virtual world [12]. A more radical solution is to use P300 BCI control for actions that are normally “synchronous”, such as stopping when a traffic light switches to red [12,74]. (LoE A; LoI 1.)
Limit the use of complex movements such as controlling speed and movement at the same time. (LoE C; LoI 1.)
Design cooperative BCI games (whenever possible) where each player controls one parameter of the game. For example, one player could be responsible only for changing the direction of a moving avatar or vehicle, while another controls its speed (LoE C; LoI 1). In [75], multiuser interaction using a P300-based BCI was studied with the game Brain Invaders [17], showing the feasibility of cooperative BCIs for gaming.
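As an illustration of the gradual-control recommendation above, the following sketch maps discrete P300 selections such as SLOW, MODERATE and FAST onto a continuously varying speed. The item names, target values and smoothing factor are our own illustrative assumptions, not taken from any cited work:

```python
# Gradual-control sketch: each discrete P300 selection nudges a
# continuous speed toward the target associated with the chosen goal.
# All names and the smoothing factor alpha are illustrative assumptions.

TARGET_SPEED = {"SLOW": 30.0, "MODERATE": 60.0, "FAST": 90.0}  # km/h

def update_speed(current: float, selection: str, alpha: float = 0.2) -> float:
    """Move the continuous speed a fraction alpha toward the selected goal."""
    target = TARGET_SPEED[selection]
    return current + alpha * (target - current)

# Successive BCI selections gradually shape the continuous process:
speed = 0.0
for choice in ["FAST", "FAST", "MODERATE"]:
    speed = update_speed(speed, choice)
```

Because each selection only adjusts the speed incrementally, an occasional misclassification is not critical: subsequent correct selections pull the speed back toward the intended goal.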
To show how these recommendations may be applied in practice, Table 1 gives practical examples relating to the design of a car race game. We have chosen this kind of game because, by its nature, it is not the best candidate for introducing BCI control. This gives us the opportunity to show that an adequate design may make BCI control possible even in unfavourable situations.

Visual Fatigue


The flashing in a P300-based BCI is more tiring than a normal visual scene, and the risk of photosensitive irritation should be taken into account. Furthermore, since P300-based BCIs rely on brain responses to stimulation (e.g., the flashing of items on the screen), they continuously draw on the user's cognitive resources.


Ways to reduce this fatigue include:
Incorporation of the flashing items into the game scene [12,74] (LoE A; LoI 1). In [12] it was concluded that stimuli should be natural discrete events occurring at expected locations. Examples are: blinking lights in the sky, advertisements in a city at night, amusement-park and horror scenes (graphical references from video games: Planet Coaster, Until Dawn) or a diving experience (graphical reference from video game: Subnautica). Moreover, during goal selection only the controls specific to the current context should appear. For example, only the navigation commands should be displayed while the user is moving an avatar, and these commands should disappear once the user is no longer controlling the avatar's movement. A game could therefore automatically switch among different control panels depending on the context.
Adopting stimulation that is as untiring as possible. The use of audio stimuli which, coupled with visual stimulation, may lower visual fatigue, was recommended in [76] (LoE B; LoI 2). That study shows that combining auditory and visual stimulation is a good choice for a BCI speller to lower the workload. It also reports that the use of audio stimuli alone leads to worse performance and a higher workload than unimodal visual stimulation; thus the use of audio stimuli alone is not recommended. We should mention a promising study [77], which used as stimuli spoken sounds representing concepts as close as possible to the actions they stand for. Again, audio stimuli should be natural for the gaming environment and should vary according to the type of game (horror game, game with enigmas, infiltration game, etc.). Another solution could be to couple the visual P300 paradigm with other BCI paradigms such as the motion-onset visual evoked potential (mVEP) (LoE B; LoI 1). The mVEP is a type of visual evoked potential (like the SSVEP or P300) allowing more elegant stimuli [78]. In [78], the moving targets had low contrast and luminance, yet they could still evoke prominent mVEPs. The protocol was nearly the same as for the P300, but with moving instead of flashing targets. The usability of this paradigm needs to be studied specifically in the VR context, where targets can move in 3D around the user, who may thus have to turn the head to follow them (LoE C; LoI 3).
Lowering the stimulation time. BCI systems that do not need calibration are definitely preferable [75] (LoE A; LoI 1). Also, when designing a game, BCI control may be activated only in some situations, amounting to a small fraction of the total gaming time (LoE C; LoI 1).
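The context-dependent control panels suggested in the first item above can be sketched as a simple lookup from game context to the set of items allowed to flash. The contexts and command sets below are illustrative assumptions, not from any cited game:

```python
# Context-dependent stimulation panels: only the commands relevant to
# the current game context are presented as P300 stimuli, limiting the
# visual load on the player. Contexts and command sets are assumptions.

PANELS = {
    "navigation": ["FORWARD", "LEFT", "RIGHT", "STOP"],
    "dialogue": ["YES", "NO", "MORE INFO"],
    "inventory": ["USE", "DROP", "CLOSE"],
}

def active_stimuli(context: str) -> list:
    """Return the only items that should flash in the given context."""
    return PANELS.get(context, [])
```

When the player stops controlling the avatar, the game switches context and the navigation items simply stop flashing, as recommended in [12].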

Low Transfer Rate


The low transfer rate of a P300-based BCI refers to the fact that several repetitions of flashes are needed to achieve accurate item selection and that, unless a large number of repetitions is employed, selection errors are unavoidable [12]. This imposes repetitive actions instead of single actions to achieve a goal, as well as the frustration of not being able to issue a command immediately, which matters in critical gaming situations.
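The transfer rate discussed here is commonly quantified with Wolpaw's standard information-transfer-rate formula (not given in this article, but the usual measure in the BCI literature): the bits conveyed per selection among n possible items recognised with accuracy p.

```python
import math

# Wolpaw's information-transfer-rate formula: bits conveyed per
# selection for n items recognised with accuracy p. Multiplying by
# selections per minute gives the bit rate of the interface.

def itr_bits_per_selection(n: int, p: float) -> float:
    if p >= 1.0:
        return math.log2(n)  # perfect accuracy: full log2(n) bits
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
```

For example, a 36-item speller at 90% accuracy conveys roughly 4.2 bits per selection; with only a few selections per minute, the resulting bit rate is far below that of any conventional input device, which motivates the workarounds listed below.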


There are at least eight ways to circumvent this limitation:
Using a priori, user and/or context information to improve item selection (LoE C; LoI 1). To this end we may employ a so-called passive BCI to monitor physiological information about the user and adapt the gameplay accordingly [27]. A passive BCI is a cognitive monitoring technology that can provide valuable information about the user's intention, situational interpretation and emotional state [79]. For example, in the game AlphaWoW [9], the avatar's character changes its behaviour according to the player's relaxation state. Statistics are also relevant for predicting the user's behaviour. The use of natural blinking objects, such as advertisements, may inform the system about the user's preferences and help determine his/her choices. It can also be useful to keep a database of statistics from other users. For example, if 80% of people answer “yes” to a form in the game, the “yes” button could be given a higher weight (visually, or as a weight on the BCI engine output) to facilitate this selection.
Using appropriate stimulation. The recommendations given in the section ‘Visual Fatigue’ also apply here. In addition, the use of spatial frequency in the visual stimuli is known to generate high-frequency oscillations in the EEG that can help detection in P300-based BCIs [80] (LoE C; LoI 3). The shape, colour and timing of the stimuli may also play a role: reference [81] showed that stimuli representing faces lead to better classification (LoE A; LoI 1), while [81,82] suggest that the use of contrasting colours and the modification of flash duration affect the accuracy (LoE B; LoI 1).
Reduce the time needed to trigger an action and make each action non-essential. Increasing the number of flash repetitions leads to higher classification accuracy, but it forces the user to stay focused for a longer time (LoE A; LoI 1). As a consequence, the user's fatigue increases, the task is perceived as more difficult and the application is less responsive. A compromise between accuracy and responsiveness is to keep the number of repetitions low while making the BCI commands non-critical. For example, in a car-driving application, at each repetition of flashes the trajectory may be slightly adjusted in the sought direction; thus, despite occasional errors, in the long run the player will succeed in giving the car the sought trajectory. Two other examples from previous studies are Brain Invaders [17] and Brain Painting [83]. In Brain Invaders an alien is destroyed after each repetition; however, this action is not critical, since the player has eight chances to hit the target and hence finish the level. Brain Painting is a game used by patients suffering from locked-in syndrome [28]. It consists of a P300 speller in which the selection items are special drawing tools. The concept deserves attention because errors are not critical: the painting can always be retouched without starting over.
Dynamic stopping (LoE B; LoI 2). Current P300-based BCIs usually make use of a fixed number of repetitions, forcing the user to keep focusing even when the BCI may already have successfully detected the target. Dynamic stopping consists of determining the optimal number of repetitions required to identify the target; it can thus decrease the time required for selection and provide higher robustness and performance [84,85].
Use feedback (LoE B; LoI 2). Study [27] recommends the use of positive feedback. Nevertheless, feedback is mainly used for motor imagery-based BCIs, while it finds little use in P300-based BCIs. In Brain Invaders [17] there is a binary feedback indicating whether the result is correct or wrong; however, this feedback does not indicate how close the user is to an accurate selection. Also, people playing video games are used to immediate feedback: when driving a virtual car, there is no appreciable delay between the command and its effect on the scene. That is to say, the feedback must be given as soon as possible. In the presentation of an EEG acquisition unit prototype, A. Barachant used a probabilistic feedback that sets the size of each item according to its probability of being the target chosen by the user, updating the feedback after each item is flashed. This idea could be a starting point for designing an appropriate feedback for the P300. Another established way to use feedback is to analyse the error-related potentials produced by the brain after an error feedback is delivered to the subject. This can be used to automatically correct erroneous BCI commands, effectively increasing the consistency and transfer rate of the BCI [86,87,88].
Control non-critical aspects of the game (LoE C; LoI 1). In a race game, for example, speed is a critical aspect and should not be controlled by a BCI. However, a BCI may be used to trigger a “boost effect” that helps the player by temporarily increasing the speed of the vehicle. Such a triggered effect would affect the score but would not be an obstacle to finishing the game. Also, we suggest restricting the use of the BCI to a limited set of aspects of the game (LoE C; LoI 1).
Use a cone of guidance, as already defined in section “Synchronous BCIs” (LoE B; LoI 1).
Define levels of difficulty (LoE C; LoI 1). The above parameters could be exposed as a difficulty level in the game, with the following limitations. First, the expected behaviour must be known by the game. This is the case for Brain Invaders [17], where the player is expected to concentrate on a specific alien, but not for a P300 puzzle game, for example, where the player can place the puzzle pieces in any desired order. Second, lowering the difficulty lowers the impression of control. In general, as suggested in [89], the use of adaptive difficulty is not recommended.
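The dynamic-stopping idea above can be sketched as follows: per-repetition classifier scores are accumulated for each item, and flashing stops as soon as one item's posterior probability exceeds a confidence threshold. The scores, threshold and repetition limit are illustrative assumptions, not a published algorithm:

```python
import math

# Dynamic-stopping sketch: accumulate per-repetition classifier scores
# per item; stop once one item's softmax probability crosses a
# threshold. Threshold and max_reps are illustrative assumptions.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dynamic_stop(repetitions, threshold=0.9, max_reps=8):
    """repetitions: one list of per-item scores per flash repetition.
    Returns (selected_item_index, repetitions_used), or (None, reps_used)
    if the threshold is never reached."""
    n_items = len(repetitions[0])
    cumulative = [0.0] * n_items
    for r, scores in enumerate(repetitions[:max_reps], start=1):
        cumulative = [c + s for c, s in zip(cumulative, scores)]
        probs = softmax(cumulative)
        best = max(range(n_items), key=probs.__getitem__)
        if probs[best] >= threshold:
            return best, r  # confident enough: stop flashing early
    return None, min(len(repetitions), max_reps)
```

When the evidence for one item is strong, the selection terminates after fewer repetitions, directly reducing the selection time that limits the transfer rate.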

Intention to Select


Looking at a stimulus does not mean we want to trigger an action. For example, one can look at a door without having the intention to open it.


Current designs of P300 applications assume that the user is focusing on a target, even when the user is not looking at the screen at all. It has been suggested to use motor imagery [74] or the analysis of alpha rhythms [14] as a supplementary input enabling the user to signal the intention to select (LoE B; LoI 2).
Another option is to define a threshold on the certitude of the P300 classifier, below which the application takes no decision. This is what dynamic stopping (see also the recommendations for mitigating the low transfer rate above) performs by dynamically changing the number of repetitions according to the certitude of the P300 classifier. Study [90] presents a benchmark of dynamic stopping methods (LoE B; LoI 1).
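The certitude threshold described above amounts to a classifier with a reject option: when the best posterior probability is too low, no command is issued at all. The threshold value below is an illustrative assumption:

```python
# Reject-option sketch for "intention to select": if the classifier's
# best posterior stays below a certitude threshold, the application
# issues no command. The 0.8 threshold is an illustrative assumption.

def select_or_abstain(posteriors, threshold=0.8):
    """posteriors: per-item probabilities (summing to ~1).
    Returns the winning item index, or None when certitude is too low."""
    best = max(range(len(posteriors)), key=posteriors.__getitem__)
    return best if posteriors[best] >= threshold else None
```

A near-uniform posterior, as expected when the user merely glances at the scene without attending to any stimulus, then produces no action.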

2.3. Limitations That Are Common to VR and BCI Technology

A major limitation of both VR and BCI hardware is the price. Prices are steadily decreasing and we expect both technologies to become very affordable in the next few years (LoE A; LoI 2). This tendency can be explained by the increasing interest in such technologies, which fosters larger production volumes, allowing the price of a single unit to be lowered. All studies agree that the BCI and VR markets are both growing. To cite some examples, the consulting companies Business Insider (London, UK) and Grand View Research (San Francisco, CA, USA) predict that the VR and BCI markets will reach 60 million and 1.77 billion dollars in 2022, respectively. This can be compared to the BCI market today, estimated at about 807 million dollars, and to the VR market, which was negligible until 2015.

3. Type of Game Recommendations

Reference [27] studied the possible applications of BCI technology depending on the kind of game. In the following, we review the recommendations given by these authors.
Real-time strategy (RTS) games are too complex and need continuous control, thus the P300 does not suit them. The P300 can still be used in RTS games, but restricted to non-critical control aspects. In general, however, RTS games are not particularly adapted to the VR context: only 7% of existing VR games are of this kind (Figure 6a). For these games, the recommendation is to adopt a third-person point of view. For example, the player's avatar may control a map representing the game field, this map being an object in a virtual room.
Role-play games (RPG) are also problematic for P300-based BCIs because of their complexity (LoE A; LoI 1). The general recommendations are the same as for RTS games: the RPG should be turn-based and the BCI should be restricted to minor aspects of the game. An existing example is AlphaWoW [9], where the user's mental state is used to change the avatar's behaviour in the virtual world.
Action games are the most popular type of game employing BCI technology (LoI 1; LoE A: [27]). This is surprising, since action games often feature fast-moving gameplay. For this reason, the use of a BCI with action games is not recommended without specific adaptation.
Sport games meet the same requirements as action games [27]: they often require fast-moving gameplay and continuous control. As a consequence, we do not recommend the use of a BCI for sport games without specific adaptation (LoE A; LoI 1), such as the one given as an example in Table 1. In VR, sport games represent a moderate percentage of the games (9%; Figure 6a).
Puzzle games are very well-suited for P300-based applications (LoE A; LoI 1). They should be turn-based, allowing the users to make simple choices at their own pace. Using popular existing puzzles helps players because they are already familiar with the game's rules. However, the problem is the same as for strategy games and board games in general: it is not very useful to adopt a 3D perspective with a board game (puzzle games represent only 3% of VR games; Figure 6a). We suggest the same workaround as for strategy games, i.e., a third-person point of view (LoE C; LoI 1). In such a scenario, puzzle games may be presented as board games inside a virtual room. Another idea is to design a puzzle in 3D, allowing the player to move the pieces in all directions (and to move inside the puzzle itself).
Adventure games are well-suited for P300-based BCI, if the player is given a set of limited options within a given time interval (LoE A; LoI 1).
Simulation games (for training or educational purposes, for example) are also well-suited for P300-based BCIs, especially in the case of management simulations. Simulation games should feature a slow gameplay, allowing the player to adjust and learn how to control the BCI. In addition, simulation games are not based on “score”; therefore the player can relax and obtain better performance using BCI control (LoE A; LoI 1).
In conclusion, the P300 paradigm suits turn-based strategy games well (board games such as chess, or PC games such as Civilization or Heroes of Might and Magic). Adventure and simulation games appear to be the types of game best adapted to a BCI+VR game (LoE A; LoI 1). As shown in Figure 6b, these two types of game are highly suitable for both VR and BCI technology.

4. Conclusions and Discussion

4.1. Summary

In this article we have presented the limitations of current BCI-enriched virtual environments and recommendations for working around them. The recommendations address several software and hardware problems of currently available systems. We have proposed different ways to resolve or circumvent these limitations; hopefully, this will help and encourage game creators to incorporate BCIs in VR. We have focused on the P300-based BCI since, to date, this BCI paradigm features the best trade-off between usability and transfer rate. An essential recommendation is to use the BCI only for small and non-critical tasks in the game. Concerning software limitations, it is important to use actions naturally integrated into the virtual world. A cooperative game is also a good solution, since it enables multiple actions and enhances social interaction and entertainment. In addition, the use of passive BCIs appears essential to bring a unique perspective to VR technology; in fact, only a BCI may provide information on the user's mental state, whereas for giving commands traditional input devices are largely superior to current BCIs. More promising results can be obtained by combining different stimulations for the BCI, such as coupling a visual P300 with an audio P300, or an mVEP with an SSVEP. In general, a BCI integrates more easily with turn-based games that require high levels of concentration and logical thinking (for example: strategy, artificial life, simulation, puzzle and society games). Among these, simulation and adventure games appear the best choice for VR. However, P300-based BCI technology may also be used in other types of games to control a specific action and increase the level of excitement (as in sport or RPG games). Concerning hardware limitations, an ideal solution would be a VR device with an embedded EEG headset, together with sensor fusion capabilities to precisely track the position of the user and the rotation of the head.
The main recommendation for avoiding user sickness is to avoid unnatural effects, such as lag in the animation, moving the user's viewpoint when the user is not actually moving, or modifying the user's natural parallax. Furthermore, locomotion in a virtual world should imply a motion of the user themselves, which is not always possible given the gesture to be performed (such as swimming, climbing or flying) and the space where the game takes place (the virtual world can be bigger than the real space). In such cases, designers can use teleportation or walking-in-place to help the user move in the virtual world, keeping in mind that scene cuts and translations should be avoided. Motion sickness also varies according to individual characteristics such as age, gender or psychological abilities, but it can be diminished through experience and by taking regular breaks when using VR devices.

4.2. Consideration, Challenges and Perspectives

The recommendations we have listed are numerous, and we believe that a framework is required to maximize their usefulness. As a matter of fact, the global picture is complicated by the heterogeneity of gameplay modes across different types of games. Building such a framework requires a posteriori data and a method for evaluating the impact of the recommendations. As stated in [7], the current methods and paradigms devoted to interaction with games and BCIs based on visual stimulation remain in their infancy. Future investigations in the human-computer interaction (HCI) domain are needed to overcome the limits of BCIs and facilitate their use within virtual worlds. Along these lines, [27] suggests that Fitts' law may be used to compare BCI application designs. Fitts' law is based on the assumption that in any game the objective is to minimize the time required to accomplish a mission, as well as the concentration or effort required by the user. This law is a good way to evaluate designs and elaborate patterns for BCI games in conjunction with VR applications. For instance, in designing BCI technology for the healthcare domain, [28,65] describe several concerns about the daily usage of a BCI by patients suffering from disabilities, including the ergonomics of the electrodes and the functional requirements of the patient. The authors then designed and built a system meeting the patients' requirements before testing it through a standardized satisfaction questionnaire [91]. The recommendations we have developed here appear to be a first and necessary step in the creation of a BCI+VR game destined for “out-of-the-lab” use.
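For reference, Fitts' law in its common Shannon formulation predicts movement time from an index of difficulty log2(D/W + 1), where D is the distance to the target and W its width. The coefficients a and b below are illustrative assumptions that would have to be fitted per input device (mouse, gaze, BCI, ...):

```python
import math

# Fitts' law (Shannon formulation): predicted selection time grows
# linearly with the index of difficulty log2(D/W + 1). Coefficients
# a and b are illustrative, device-specific assumptions.

def index_of_difficulty(distance: float, width: float) -> float:
    """Index of difficulty in bits for a target at `distance`, of size `width`."""
    return math.log2(distance / width + 1)

def predicted_time(distance: float, width: float,
                   a: float = 0.2, b: float = 0.15) -> float:
    """Predicted movement/selection time in seconds."""
    return a + b * index_of_difficulty(distance, width)
```

Comparing the fitted b coefficient across interface designs would give a device-independent measure of how quickly each design lets the player acquire targets, which is the comparison [27] has in mind.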
Finally, an extended discussion is needed concerning ethical concerns and the medical consequences of video game designs [92,93]. Regarding the latter, reference [94] introduces the term VRISE (virtual reality induced symptoms and effects) and reports that VRISE might be serious for a small percentage of people, even if the symptoms seem to be short-lived and minor for the majority. Virtual reality might modify heart rate, induce nausea and increase the level of aggressiveness [95,96]. It is not clear, however, whether these effects persist (more than a few days) or are temporary. Concerning the positive effects of VR-based therapy, after analyzing 50 studies on the subject, [97] concluded that the effectiveness of such therapy remains unproven. We are not aware of any study on the long-term side effects of control-oriented BCIs; however, there exists a long-standing literature demonstrating the potential of BCI technology for neurotherapy (i.e., neurofeedback, see [98]). A natural question thus arises concerning the possible long-term side effects and therapeutic effects of using BCI technology in VR environments.

Author Contributions

G.C. and M.C. conceived and drafted the article; A.A. and C.M. contributed to the draft and made critical revisions. All authors collected the data and approved the final version to be published.


Acknowledgments

We thank Sarah Cattan (independent artist and doctor in neuroscience) for drawing the sketches presented in Figure 1, Figure 2 and Figure 4.

Conflicts of Interest

This study was partially funded by the IHMTEK Company, within the framework of a PhD thesis co-directed by GIPSA-lab and concerning the use of BCIs in VR. The funding sponsor had no role in the design of the study; in the collection, analyses, or interpretation of the data; or in the decision to publish the results. IHMTEK participated in the writing of the article, as G.C. and C.M. are part of the company.


  1. Zyda, M. From visual simulation to virtual reality to games. Computer 2005, 38, 25–32. [Google Scholar] [CrossRef]
  2. Brown, E.; Cairns, P. A Grounded Investigation of Game Immersion. In CHI’04 Extended Abstracts on Human Factors in Computing Systems; ACM: New York, NY, USA, 2004; pp. 1297–1300. [Google Scholar]
  3. Harvey, D. Invisible Site: A Virtual Sho (George Coates Performance Works, San Francisco, California). Variety 1992, 346, 87. [Google Scholar]
  4. Steuer, J. Defining Virtual Reality: Dimensions Determining Telepresence. J. Commun. 1992, 42, 73–93. [Google Scholar] [CrossRef]
  5. Tokman, M.; Davis, L.M.; Lemon, K.N. The WOW factor: Creating value through win-back offers to reacquire lost customers. J. Retail. 2007, 83, 47–64. [Google Scholar] [CrossRef]
  6. Bamford, A. The Wow Factor: Global Research Compendium on the Impact of the Arts in Education; Waxmann: Münster, Germany, 2006; ISBN 978-3-8309-6617-3. [Google Scholar]
  7. Lécuyer, A.; Lotte, F.; Reilly, R.B.; Leeb, R.; Hirose, M.; Slater, M. Brain-Computer Interfaces, Virtual Reality, and Videogames. Computer 2008, 41, 66–72. [Google Scholar] [CrossRef]
  8. Andreev, A.; Barachant, A.; Lotte, F.; Congedo, M. Recreational Applications of OpenViBE: Brain Invaders and Use-the-Force; John Wiley Sons: Hoboken, NJ, USA, 2016; Volume 14, ISBN 978-1-84821-963-2. [Google Scholar]
  9. Van de Laar, B.; Gürkök, H.; Bos, D.P.-O.; Poel, M.; Nijholt, A. Experiencing BCI Control in a Popular Computer Game. IEEE Trans. Comput. Intell. 2013, 5, 176–184. [Google Scholar] [CrossRef]
  10. Mühl, C.; Gürkök, H.; Bos, D.P.-O.; Thurlings, M.E.; Scherffig, L.; Duvinage, M.; Elbakyan, A.A.; Kang, S.; Poel, M.; Heylen, D. Bacteria Hunt. J. Multimodal User Interfaces 2010, 4, 11–25. [Google Scholar] [CrossRef]
  11. Angeloni, C.; Salter, D.; Corbit, V.; Lorence, T.; Yu, Y.C.; Gabel, L.A. P300-based brain-computer interface memory game to improve motivation and performance. In Proceedings of the 2012 38th Annual Northeast Bioengineering Conference (NEBEC), Philadelphia, PA, USA, 16–18 March 2012; pp. 35–36. [Google Scholar]
  12. Kaplan, A.Y.; Shishkin, S.L.; Ganin, I.P.; Basyul, I.A.; Zhigalov, A.Y. Adapting the P300-Based Brain–Computer Interface for Gaming: A Review. IEEE Trans. Comput. Intell. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  13. Pires, G.; Torres, M.; Casaleiro, N.; Nunes, U.; Castelo-Branco, M. Playing Tetris with non-invasive BCI. In Proceedings of the 2011 IEEE 1st International Conference on Serious Games and Applications for Health (SeGAH), Braga, Portugal, 16–18 November 2011; pp. 1–6. [Google Scholar]
  14. Liao, L.-D.; Chen, C.-Y.; Wang, I.-J.; Chen, S.-F.; Li, S.-Y.; Chen, B.-W.; Chang, J.-Y.; Lin, C.-T. Gaming control using a wearable and wireless EEG-based brain-computer interface device with novel dry foam-based sensors. J. Neuroeng. Rehabil. 2012, 9, 5. [Google Scholar] [CrossRef] [PubMed]
  15. Edlinger, G.; Guger, C. Social Environments, Mixed Communication and Goal-Oriented Control Application Using a Brain-Computer Interface. In Universal Access in Human-Computer Interaction. Users Diversity; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2011; pp. 545–554. [Google Scholar]
  16. Gürkök, H. Mind the Sheep! User Experience Evaluation & Brain-Computer Interface Games; University of Twente: Enschede, The Netherlands, 2012. [Google Scholar]
  17. Congedo, M.; Goyat, M.; Tarrin, N.; Ionescu, G.; Varnet, L.; Rivet, B.; Phlypo, R.; Jrad, N.; Acquadro, M.; Jutten, C. “Brain Invaders”: A prototype of an open-source P300-based video game working with the OpenViBE platform. In Proceedings of the 5th International Brain-Computer Interface Conference 2011 (BCI 2011), Graz, Austria, 22–24 September 2011; pp. 280–283. [Google Scholar]
  18. Ganin, I.P.; Shishkin, S.L.; Kaplan, A.Y. A P300-based Brain-Computer Interface with Stimuli on Moving Objects: Four-Session Single-Trial and Triple-Trial Tests with a Game-Like Task Design. PLoS ONE 2013, 8, e77755. [Google Scholar] [CrossRef] [PubMed]
  19. Wolpaw, J.; Wolpaw, E.W. Brain-Computer Interfaces: Principles and Practice; Oxford University Press: Oxford, UK, 2012; ISBN 978-0-19-538885-5. [Google Scholar]
  20. Zhang, Y.; Xu, P.; Liu, T.; Hu, J.; Zhang, R.; Yao, D. Multiple Frequencies Sequential Coding for SSVEP-Based Brain-Computer Interface. PLoS ONE 2012, 7, e29519. [Google Scholar] [CrossRef] [PubMed]
  21. Sepulveda, F. Brain-actuated Control of Robot Navigation. In Advances in Robot Navigation; Alejandra Barrera: Mountain View, CA, USA, 2011; Volume 8, ISBN 978-953-307-346-0. [Google Scholar]
  22. Fisher, R.S.; Harding, G.; Erba, G.; Barkley, G.L.; Wilkins, A. Epilepsy Foundation of America Working Group Photic- and pattern-induced seizures: A review for the Epilepsy Foundation of America Working Group. Epilepsia 2005, 46, 1426–1441. [Google Scholar] [CrossRef] [PubMed]
  23. Guger, C.; Daban, S.; Sellers, E.; Holzner, C.; Krausz, G.; Carabalona, R.; Gramatica, F.; Edlinger, G. How many people are able to control a P300-based brain-computer interface (BCI)? Neurosci. Lett. 2009, 462, 94–98. [Google Scholar] [CrossRef] [PubMed]
  24. Congedo, M. EEG Source Analysis; Université de Grenoble: Grenoble, France, 2013. [Google Scholar]
  25. Barachant, A.; Bonnet, S.; Congedo, M.; Jutten, C. Multiclass brain-computer interface classification by Riemannian geometry. IEEE Trans. Biomed. Eng. 2012, 59, 920–928. [Google Scholar] [CrossRef] [PubMed]
  26. Barachant, A.; Congedo, M. A Plug & Play P300 BCI Using Information Geometry. arXiv, 2014. [Google Scholar]
  27. Marshall, D.; Coyle, D.; Wilson, S.; Callaghan, M. Games, Gameplay, and BCI: The State of the Art. IEEE Trans. Comput. Intell. 2013, 5, 82–99. [Google Scholar] [CrossRef]
  28. Miralles, F.; Vargiu, E.; Dauwalder, S.; Solà, M.; Müller-Putz, G.; Wriessnegger, S.C.; Pinegger, A.; Kübler, A.; Halder, S.; Käthner, I.; et al. Brain Computer Interface on Track to Home. Available online: (accessed on 29 April 2018).
  29. Lotte, F. Les Interfaces Cerveau-Ordinateur: Conception et Utilisation en Réalité Virtuelle. Rev. Sci. Technol. Inf. 2012, 31, 289–310. [Google Scholar] [CrossRef]
  30. Templeman, J.N.; Denbrook, P.S.; Sibert, L.E. Virtual Locomotion: Walking in Place through Virtual Environments. Presence Teleoper. Virtual Environ. 1999, 8, 598–617. [Google Scholar] [CrossRef]
  31. Ohta, Y.; Tamura, H. Mixed Reality: Merging Real and Virtual Worlds, 1st ed.; Springer Publishing Company: Berlin, Germany, 2014; ISBN 978-3-642-87514-4. [Google Scholar]
  32. Reschke, M.F.; Somers, J.T.; Ford, G. Stroboscopic vision as a treatment for motion sickness: Strobe lighting vs. shutter glasses. Aviat. Space Environ. Med. 2006, 77, 2–7. [Google Scholar] [PubMed]
  33. Kolasinski, E.M. Simulator Sickness in Virtual Environments; U.S. Army Research Institute for the Behavioral and Social Sciences: Fort Belvoir, VA, USA, 1995. [Google Scholar]
  34. Vos, M.D.; Kroesen, M.; Emkes, R.; Debener, S. P300 speller BCI with a mobile EEG system: Comparison to a traditional amplifier. J. Neural Eng. 2014, 11, 036008. [Google Scholar] [CrossRef] [PubMed]
  35. Park, J.; Xu, L.; Sridhar, V.; Chi, M.; Cauwenberghs, G. Wireless dry EEG for drowsiness detection. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Honolulu, HI, USA, 17–21 July 2011; pp. 3298–3301. [Google Scholar]
  36. Debener, S.; Minow, F.; Emkes, R.; Gandras, K.; de Vos, M. How about taking a low-cost, small, and wireless EEG for a walk? Psychophysiology 2012, 49, 1617–1621. [Google Scholar] [CrossRef] [PubMed]
  37. Kos, A.; Tomažič, S.; Umek, A. Evaluation of Smartphone Inertial Sensor Performance for Cross-Platform Mobile Applications. Sensors 2016, 16. [Google Scholar] [CrossRef] [PubMed]
  38. Kok, M.; Hol, J.D.; Schön, T.B. Using Inertial Sensors for Position and Orientation Estimation. arXiv, 2017. [Google Scholar]
  39. Boletsis, C. The New Era of Virtual Reality Locomotion: A Systematic Literature Review of Techniques and a Proposed Typology. Multimodal Technol. Interact. 2017, 1, 24. [Google Scholar] [CrossRef]
  40. Bozgeyikli, E.; Raij, A.; Katkoori, S.; Dubey, R. Point & Teleport Locomotion Technique for Virtual Reality. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, Austin, TX, USA, 16–19 October 2016; pp. 205–216. [Google Scholar]
  41. Usoh, M.; Arthur, K.; Whitton, M.C.; Bastos, R.; Steed, A.; Slater, M.; Brooks, F.P., Jr. Walking > Walking-in-Place > Flying, in Virtual Environments. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 8–13 August 1999; pp. 359–364. [Google Scholar]
  42. Milgram, P.; Kishino, F. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. 1994, E77, 1321–1329. [Google Scholar]
  43. Hettinger, L.J.; Riccio, G.E. Visually Induced Motion Sickness in Virtual Environments. Presence Teleoper. Virtual Environ. 1992, 1, 306–310. [Google Scholar] [CrossRef]
  44. Akiduki, H.; Nishiike, S.; Watanabe, H.; Matsuoka, K.; Kubo, T.; Takeda, N. Visual-vestibular conflict induced by virtual reality in humans. Neurosci. Lett. 2003, 340, 197–200. [Google Scholar] [CrossRef]
  45. Reason, J.T.; Brand, J.J. Motion Sickness; Academic Press: Cambridge, MA, USA, 1975; ISBN 978-0-12-584050-7. [Google Scholar]
  46. Johnson, D.M. Introduction to and Review of Simulator Sickness Research; Rotary-Wing Aviation Research Unit; U.S. Army Research Institute for the Behavioral and Social Sciences: Fort Belvoir, VA, USA, 2005. [Google Scholar]
  47. McCauley, M.E.; Sharkey, T.J. Cybersickness: Perception of Self-Motion in Virtual Environments. Presence Teleoper. Virtual Environ. 1992, 1, 311–318. [Google Scholar] [CrossRef]
  48. Lin, J.J.-W.; Duh, H.; Abi-Rached, H.; Parker, D.E.; Furness, T. Effects of Field of View on Presence, Enjoyment, Memory, and Simulator Sickness in a Virtual Environment. In Proceedings of the Virtual Reality Conference, Orlando, FL, USA, 24–28 March 2002. [Google Scholar]
  49. Xiao, R.; Benko, H. Augmenting the Field-of-View of Head-Mounted Displays with Sparse Peripheral Displays. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 1221–1232. [Google Scholar]
  50. Gueugnon, M.; Salesse, R.N.; Coste, A.; Zhao, Z.; Bardy, B.G.; Marin, L. Postural Coordination during Socio-motor Improvisation. Front. Psychol. 2016, 7. [Google Scholar] [CrossRef] [PubMed]
  51. Smart, L.J.; Stoffregen, T.A.; Bardy, B.G. Visually induced motion sickness predicted by postural instability. Hum. Factors 2002, 44, 451–465. [Google Scholar] [CrossRef] [PubMed]
  52. Groen, E.L.; Bos, J.E. Simulator Sickness Depends on Frequency of the Simulator Motion Mismatch: An Observation. Presence 2008, 17, 584–593. [Google Scholar] [CrossRef]
  53. Brooks, J.O.; Goodenough, R.R.; Crisler, M.C.; Klein, N.D.; Alley, R.L.; Koon, B.L.; Logan, W.C.; Ogle, J.H.; Tyrrell, R.A.; Wills, R.F. Simulator sickness during driving simulation studies. Accid. Anal. Prev. 2010, 42, 788–796. [Google Scholar] [CrossRef] [PubMed]
  54. Park, G.; Wade Allen, R.; Fiorentino, D.; Cook, M.L. Simulator Sickness Scores According to Symptom Susceptibility, Age, and Gender for an Older Driver Assessment Study. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, San Francisco, CA, USA, 16–20 October 2006. [Google Scholar]
  55. Kennedy, R.S.; Lilienthal, M.G.; Berbaum, K.S.; Baltzley, D.R.; McCauley, M.E. Simulator sickness in U.S. Navy flight simulators. Aviat. Space Environ. Med. 1989, 60, 10–16. [Google Scholar] [PubMed]
  56. Kennedy, R.S.; Frank, L.H. A Review of Motion Sickness with Special Reference to Simulator Sickness. In Transportation Research Record; Canyon Research Group Inc.: Westlake Village, CA, USA, 1986. [Google Scholar]
  57. Jinjakam, C.; Hamamoto, K. Study on Parallax Affect on Simulator Sickness in One-Screen and Three-Screen Immersive Virtual Environment; Tokai University: Tokyo, Japan, 2011. [Google Scholar]
  58. Jinjakam, C.; Odagiri, Y.; Dejhan, K.; Hamamoto, K. Comparative study of virtual sickness between a single-screen and three-screen from parallax affect. World Acad. Sci. Eng. Technol. 2011, 75, 233–236. [Google Scholar]
  59. Ruddle, R.A. The effect of environment characteristics and user interaction on levels of virtual environment sickness. In Proceedings of the IEEE Virtual Reality, Chicago, IL, USA, 27–31 March 2004; pp. 141–285. [Google Scholar]
  60. Duh, H.; Parker, D.A.; Furness, T. Does a Peripheral Independent Visual Background Reduce Scene-Motion-Induced Balance Disturbance in an Immersive Environment. In Proceedings of the 9th International Conference on Human-Computer Interaction, New Orleans, LA, USA, 5–10 August 2001. [Google Scholar]
  61. Lin, J.J.-W.; Abi-Rached, H.; Kim, D.-H.; Parker, D.E.; Furness, T.A. A “Natural” Independent Visual Background Reduced Simulator Sickness. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Baltimore, MD, USA, 29 September–4 October 2002. [Google Scholar]
  62. Prothero, J.D.; Draper, M.H.; Furness, T.A.; Parker, D.E.; Wells, M.J. The use of an independent visual background to reduce simulator side-effects. Aviat. Space Environ. Med. 1999, 70, 277–283. [Google Scholar] [PubMed]
  63. Fernandes, A.K.; Feiner, S. Combating VR sickness through subtle dynamic field-of-view modification. In Proceedings of the 2016 IEEE Symposium on 3D User Interfaces (3DUI), Greenville, SC, USA, 19–20 March 2016; pp. 201–210. [Google Scholar]
  64. Lopez-Gordo, M.A.; Sanchez-Morillo, D.; Valle, F.P. Dry EEG Electrodes. Sensors 2014, 14, 12847–12870. [Google Scholar] [CrossRef] [PubMed]
  65. Käthner, I.; Halder, S.; Hintermüller, C.; Espinosa, A.; Guger, C.; Miralles, F.; Vargiu, E.; Dauwalder, S.; Rafael-Palou, X.; Solà, M.; et al. A Multifunctional Brain-Computer Interface Intended for Home Use: An Evaluation with Healthy Participants and Potential End Users with Dry and Gel-Based Electrodes. Front. Neurosci. 2017, 11. [Google Scholar] [CrossRef] [PubMed]
  66. Mayaud, L.; Cabanilles, S.; Langhenhove, A.V.; Congedo, M.; Barachant, A.; Pouplin, S.; Filipe, S.; Pétégnief, L.; Rochecouste, O.; Azabou, E.; et al. Brain-computer interface for the communication of acute patients: A feasibility study and a randomized controlled trial comparing performance with healthy participants and a traditional assistive device. Brain Comput. Interfaces 2016, 3, 197–215. [Google Scholar] [CrossRef]
  67. Guger, C.; Krausz, G.; Allison, B.Z.; Edlinger, G. Comparison of Dry and Gel Based Electrodes for P300 Brain–Computer Interfaces. Front. Neurosci. 2012, 6. [Google Scholar] [CrossRef] [PubMed]
  68. Sundararaman, B.; Buy, U.; Kshemkalyani, A.D. Clock synchronization for wireless sensor networks: A survey. Ad Hoc Netw. 2005, 3, 281–323. [Google Scholar] [CrossRef]
  69. Chen, Y.-H.; de Beeck, M.O.; Vanderheyden, L.; Carrette, E.; Mihajlović, V.; Vanstreels, K.; Grundlehner, B.; Gadeyne, S.; Boon, P.; Van Hoof, C. Soft, Comfortable Polymer Dry Electrodes for High Quality ECG and EEG Recording. Sensors 2014, 14, 23758–23780. [Google Scholar] [CrossRef] [PubMed]
  70. Bleichner, M.G.; Lundbeck, M.; Selisky, M.; Minow, F.; Jäger, M.; Emkes, R.; Debener, S.; Vos, M.D. Exploring miniaturized EEG electrodes for brain-computer interfaces. An EEG you do not see? Physiol. Rep. 2015, 3. [Google Scholar] [CrossRef] [PubMed]
  71. Barachant, A.; Andreev, A.; Congedo, M. The Riemannian Potato: An automatic and adaptive artifact detection method for online experiments using Riemannian geometry. In Proceedings of the TOBI Workshop IV, Sion, Switzerland, 2013; pp. 19–20. [Google Scholar]
  72. Lau, T.M.; Gwin, J.T.; McDowell, K.G.; Ferris, D.P. Weighted phase lag index stability as an artifact resistant measure to detect cognitive EEG activity during locomotion. J. Neuroeng. Rehabil. 2012, 9, 47. [Google Scholar] [CrossRef] [PubMed]
  73. Royer, A.S.; Doud, A.J.; Rose, M.L.; He, B. EEG Control of a Virtual Helicopter in 3-Dimensional Space Using Intelligent Control Strategies. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 581–589. [Google Scholar] [CrossRef] [PubMed]
  74. Bayliss, J.D.; Ballard, D.H. A virtual reality testbed for brain-computer interface research. IEEE Trans. Rehabil. Eng. 2000, 8, 188–190. [Google Scholar] [CrossRef] [PubMed]
  75. Korczowski, L.; Barachant, A.; Andreev, A.; Jutten, C.; Congedo, M. Brain Invaders 2: An open source Plug & Play multi-user BCI videogame. In Proceedings of the 6th International Brain-Computer Interface Meeting (BCI Meeting 2016), Pacific Grove, CA, USA, 30 May–3 June 2016. [Google Scholar]
  76. An, X.; Höhne, J.; Ming, D.; Blankertz, B. Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces. PLoS ONE 2014, 9, e111070. [Google Scholar] [CrossRef] [PubMed]
  77. Höhne, J.; Tangermann, M. Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm. PLoS ONE 2014, 9, e98322. [Google Scholar] [CrossRef] [PubMed]
  78. Guo, F.; Hong, B.; Gao, X.; Gao, S. A brain-computer interface using motion-onset visual evoked potential. J. Neural Eng. 2008, 5, 477–485. [Google Scholar] [CrossRef] [PubMed]
  79. Zander, T.; Kothe, C. Towards passive Brain-Computer interfaces: Applying Brain-Computer interface technology to human-machine systems in general. J. Neural Eng. 2011, 8. [Google Scholar] [CrossRef] [PubMed]
  80. Hermes, D.; Miller, K.J.; Wandell, B.A.; Winawer, J. Stimulus Dependence of Gamma Oscillations in Human Visual Cortex. Cereb. Cortex 2015, 25, 2951–2959. [Google Scholar] [CrossRef] [PubMed]
  81. Jin, J.; Allison, B.Z.; Kaufmann, T.; Kübler, A.; Zhang, Y.; Wang, X.; Cichocki, A. The Changing Face of P300 BCIs: A Comparison of Stimulus Changes in a P300 BCI Involving Faces, Emotion, and Movement. PLoS ONE 2012, 7, e49688. [Google Scholar] [CrossRef] [PubMed]
  82. Jin, J.; Allison, B.Z.; Wang, X.; Neuper, C. A combined brain–computer interface based on P300 potentials and motion-onset visual evoked potentials. J. Neurosci. Methods 2012, 205, 265–276. [Google Scholar] [CrossRef] [PubMed]
  83. Münßinger, J.I.; Halder, S.; Kleih, S.C.; Furdea, A.; Raco, V.; Hösle, A.; Kübler, A. Brain Painting: First Evaluation of a New Brain-Computer Interface Application with ALS-Patients and Healthy Volunteers. Front. Neurosci. 2010, 4. [Google Scholar] [CrossRef] [PubMed]
  84. Schreuder, M.; Höhne, J.; Blankertz, B.; Haufe, S.; Dickhaus, T.; Tangermann, M. Optimizing event-related potential based brain-computer interfaces: A systematic evaluation of dynamic stopping methods. J. Neural Eng. 2013, 10, 036025. [Google Scholar] [CrossRef] [PubMed]
  85. Kindermans, P.-J.; Tangermann, M.; Müller, K.-R.; Schrauwen, B. Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller. J. Neural Eng. 2014, 11, 035005. [Google Scholar] [CrossRef] [PubMed]
  86. Ferrez, P.W.; del Millán, J.R. You Are Wrong!—Automatic Detection of Interaction Errors from Brain Waves. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, Edinburgh, UK, 30 July–5 August 2005. [Google Scholar]
  87. Schmidt, N.M.; Blankertz, B.; Treder, M.S. Online detection of error-related potentials boosts the performance of mental typewriters. BMC Neurosci. 2012, 13, 19. [Google Scholar] [CrossRef] [PubMed]
  88. Farquhar, J.; Hill, N.J. Interactions between pre-processing and classification methods for event-related-potential classification: Best-practice guidelines for brain-computer interfacing. Neuroinformatics 2013, 11, 175–192. [Google Scholar] [CrossRef] [PubMed]
  89. Schell, J. The Art of Game Design: A Book of Lenses, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2014; ISBN 978-1-4665-9864-5. [Google Scholar]
  90. Mak, J.N.; Arbel, Y.; Minett, J.W.; McCane, L.M.; Yuksel, B.; Ryan, D.; Thompson, D.; Bianchi, L.; Erdogmus, D. Optimizing the P300-based brain-computer interface: Current status, limitations and future directions. J. Neural Eng. 2011, 8, 025003. [Google Scholar] [CrossRef] [PubMed]
  91. Zickler, C.; Riccio, A.; Leotta, F.; Hillian-Tress, S.; Halder, S.; Holz, E.; Staiger-Sälzer, P.; Hoogerwerf, E.-J.; Desideri, L.; Mattia, D.; et al. A brain-computer interface as input channel for a standard assistive technology software. Clin. Neurosci. 2011, 42, 236–244. [Google Scholar] [CrossRef] [PubMed]
  92. Brey, P. The ethics of representation and action in virtual reality. Ethics Inf. Technol. 1999, 1, 5–14. [Google Scholar] [CrossRef]
  93. Dill, K.E.; Dill, J.C. Video game violence. Aggress. Violent Behav. 1998, 3, 407–428. [Google Scholar] [CrossRef]
  94. Cobb, S.V.G.; Nichols, S.; Ramsey, A.; Wilson, J.R. Virtual Reality-Induced Symptoms and Effects (VRISE). Presence 1999, 8, 169–186. [Google Scholar] [CrossRef]
  95. Calvert, S.L.; Tan, S.-L. Impact of virtual reality on young adults’ physiological arousal and aggressive thoughts: Interaction versus observation. J. Appl. Dev. Psychol. 1994, 15, 125–139. [Google Scholar] [CrossRef]
  96. Hasan, Y.; Bègue, L.; Scharkow, M.; Bushman, B.J. The more you play, the more aggressive you become: A long-term experimental study of cumulative violent video game effects on hostile expectations and aggressive behavior. J. Exp. Soc. Psychol. 2013, 49, 224–227. [Google Scholar] [CrossRef]
  97. Gregg, L.; Tarrier, N. Virtual reality in mental health. Soc. Psychiatry Psychiatr. Epidemiol. 2007, 42, 343–354. [Google Scholar] [CrossRef] [PubMed]
  98. Arns, M.; Batail, J.-M.; Bioulac, S.; Congedo, M.; Daudet, C.; Drapier, D.; Fovet, T.; Jardri, R.; Le-Van-Quyen, M.; Lotte, F.; et al. NExT group Neurofeedback: One of today’s techniques in psychiatry? L’Encephale 2017, 43, 135–145. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The Samsung Gear head-mounted device (HMD) (a) can be used in passive mode (inserting a smartphone without plugging it into the mask through the micro-Universal Serial Bus (USB) port) or active mode (with on-board electronics, mainly the gyroscope, supplied through the micro-USB port). The Google Cardboard (b) is a very simple and economical passive HMD. The Neurable headset (c) combines electroencephalography (EEG) with the HTC Vive (d), an active virtual reality (VR) headset linked to a powerful computer.
Figure 2. Examples of motion platforms: (a) Virtuix omni and (b) VR Motion Simulator.
Figure 3. Representation of the Milgram and Kishino [42] virtuality continuum spanning real and virtual environments (figure rearranged from [42]).
Figure 4. Dry electrodes: (a) ‘g.Sahara’ (Guger Technologies, Graz, Austria) and (b) ‘Flex sensor’ (Cognionics, San Diego, CA, USA). Image of wet electrodes (c) showing how the gel is injected into a ‘g.LADYbird’ (Guger Technologies, Graz, Austria) electrode attached to an elastic cap and (d) ‘Gold cup electrodes’ (OpenBCI, New York, NY, USA), which can be attached to the scalp using a fixating paste.
Figure 5. Examples of a brain–computer interface (BCI) VR system in which the EEG acquisition unit and the BCI engine (Analysis) run on different platforms: (a) the HMD is linked to a PC; (b) the HMD works without a PC. In (a) the PC could be miniaturized and embedded into the mobile HMD-EEG system. In (b) the HMD is in charge of running and displaying the UI, but also of acquiring and tagging the signal (software tagging).
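The software tagging described for the stand-alone configuration of Figure 5b can be sketched in a few lines (all names are hypothetical, not from the original system): the application records the onset of each visual stimulus on the same clock used to timestamp EEG samples, so the BCI engine can later epoch the signal around each flash.

```python
import time

class EventTagger:
    """Minimal software-tagging sketch: pair stimulus events with a
    monotonic clock so EEG samples and flashes share one time base."""

    def __init__(self):
        self.events = []  # list of (timestamp_seconds, stimulus_id)

    def tag(self, stimulus_id):
        # Record the flash onset on the clock shared with the EEG stream.
        self.events.append((time.monotonic(), stimulus_id))

    def epochs(self, eeg_start, sample_rate):
        # Convert each event time to a sample index for later epoching.
        return [(int((t - eeg_start) * sample_rate), sid)
                for t, sid in self.events]
```

Any systematic delay between the UI flash and the tag (rendering latency, clock drift) directly degrades the P300 classification, which is why hardware tagging is generally preferred when a PC is available, as in Figure 5a.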
Figure 6. Benchmark of game types in VR and BCI. (a) The distribution of VR games by type according to the Steam platform (2017). (b) Classification of the different types of game with regard to the recommendations presented above. The color code indicates whether a type of game is suitable for VR or BCI. The suitability for VR or BCI increases from right to left.
Table 1. Examples of practical implementations of the design recommendations given in this section.
| Recommendation | When It Applies | Example with a Car Race Game |
| --- | --- | --- |
| Goal Control | Every time you use a synchronous BCI. | Do not control the movement but set objectives that the car must reach. |
| High-Level Commands | As much as possible, but trying to make them intuitive. | Control the speed of the car through a simple interface (SLOW, MODERATE and FAST). Avoid real-time commands such as “activate clutch, select driving gear from one to six”. |
| Incorporate stimuli in the game | As much as possible. | At the start of the race, incorporate the stimuli in the signal light. The car direction can be set by looking at different billboards on the left or right side of the road. |
| Separate complex actions | When controlling an action that has more than two possibilities, or when each possibility can take too many values. | The user action is to control the trajectory of the car, which depends on speed and direction. Usually these are set simultaneously with a keyboard or joystick, but they have to be set one after the other when using a synchronous BCI. |
| Enumerate all possibilities for an action | When an action can only take a small set of discrete values. | The speed can be slow or fast and the direction can be right or left. These are two tasks to be accomplished by the user. They can be combined into one choice with four possibilities: right-slow, right-fast, left-slow and left-fast. |
| Multiplayer interaction | Whenever the game is multiplayer. | The first player can control the speed and the second the direction. |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Computers EISSN 2073-431X Published by MDPI AG, Basel, Switzerland