Multimodal Technologies and Interaction 2017, 1(2), 6; doi:10.3390/mti1020006

Article
On a First Evaluation of ROMOT—A RObotic 3D MOvie Theatre—For Driving Safety Awareness
Institute of Robotics and Information and Communication Technologies (IRTIC), Universitat de València, València 46980, Spain
* Author to whom correspondence should be addressed.
Academic Editor: Carolina Cruz-Neira
Received: 13 February 2017 / Accepted: 23 March 2017 / Published: 27 March 2017

Abstract

In this paper, we introduce ROMOT, a RObotic 3D-MOvie Theatre, and present a case study related to driving safety. ROMOT is built on a robotic motion platform, includes multimodal devices, and supports audience-film interaction. We show the versatility of the system by means of different types of system setups and generated content, including a first-person movie and others involving the technologies of virtual, augmented, and mixed realities. Finally, we present the results of some preliminary user tests made at the laboratory level, including the system usability scale. These yield satisfactory scores for the usability of the system and the individuals’ satisfaction.
Keywords:
virtual reality; serious games; driving safety awareness; multimodal; interaction

1. Introduction

Driving is a complex activity that requires various skills, such as fast reactions to environmental events or unpredicted risks. Driver training and driving safety awareness in such risky cases, or with special groups of drivers (e.g., racers), can be safely addressed with driving simulators, usually supported by virtual reality environments and motion platforms. Driving simulators are convenient for several reasons: safety and risk reduction; cost reduction; greater trial availability; no harm to the drivers; availability of trainers and students; the possibility of recreating a variety of situations and weather conditions; the possibility of repeating the same tests under the same conditions; the ability to evaluate tests objectively; and the transfer of skills and training [1].
The research community has long reported on the use of driving simulators and driving-oriented virtual reality applications. Although many of them aim at assessing or promoting driving safety, the environmental conditions, targeted users, simulator setups, and case studies vary. For instance, in [2] a study was conducted on fatigue among aging drivers using a fixed-base driving simulator comprising a complete automobile, fully equipped with functional pedals and a dashboard. The developed virtual reality environment, reproducing a drive on a monotonous country road, was projected on a large screen. Another example is shown in [3], where the targeted users are novice drivers with autism spectrum disorder. The tests were performed with a commercial simulator that displays a 210° field of view on a curved screen inside an eight-foot cylinder. The simulator includes a seatbelt, dashboard, steering wheel, turn signal, gas and brake controls, right, left, side, and rear-view mirrors, as well as an adjustable seat. In [4] a study was carried out to predict motor vehicle collisions in young adults using a PC-based driving simulator. In [5] the patterns drivers follow in adjusting their speed in curves are studied by carrying out various tests in a motion-based driving simulator. In [6], the way simulator test conditions affect the severity of simulator sickness symptoms was studied; different simulator setups were considered, one of them including a motion platform.
Most of the aforementioned works make use of advanced 3D graphics and sound, and some also include motion platforms. However, many of them focus mainly on driving skills, not on safety awareness. In any case, the integration of other kinds of multimodal stimuli, which might increase the user’s immersion, is usually not considered, and only a few works have been reported by the research community. In this regard, one of the earliest multimodal immersive systems was the Sensorama, patented in 1962 by Morton L. Heilig [7]. The technology integrated in the Sensorama allowed a single person to see a stereoscopic film enhanced with seat motion, vibration, stereo sound, wind, and aromas, which were triggered during the film. Recent research works dealing with immersive multimodal systems, also involving individual experiences, are reported in [8,9]. On the other hand, the rapid technological advancements of recent years have allowed the development of commercial solutions that integrate a variety of multimodal displays in movie theatres, such as in [10,11,12]; these systems are usually referred to as 4D or 5D cinemas or theatres. Some claim that this technology shifts the cinema experience from “watching the movie to almost living it” [13], also enhancing the cinematic experience while creating a new and contemporary version of storytelling, which can be conceptualized as a “reboot cinema” [14]. Although these systems are not centred on driving simulation, they are of interest because they include multimodal stimuli and cater for a group of users rather than for a single user.
In this paper, we present the outcomes of a usability test performed with ROMOT [15], a RObotized 3D-MOvie Theatre involving multimodal and multiuser experiences, for a case study in driving safety awareness. ROMOT follows the concept of a 3D movie theatre with a robotized motion platform and integrated multimodal devices, also supporting some level of audience-film interaction. ROMOT was initially conceived to be the central attraction of a driving safety awareness exhibition. However, it has been designed as a multi-purpose 3D interactive theatre: it is highly versatile, supporting different types of setups and contexts, including films/animations or even simulations related to a variety of contents, such as learning, entertainment, tourism, or, as shown in this paper, driving safety awareness. Currently, ROMOT supports and integrates various setups to fulfil different needs: first-person movies, mixed reality environments, virtual reality interactive environments, and augmented reality mirror-based scenes. The contents of all of the different setups are based on storytelling and are seen stereoscopically, so they can be broadly referred to as 3D movies, mainly showing virtually generated graphics.
Because ROMOT is a laboratory system built from scratch (both hardware and software), unlike commercial systems it is highly versatile and easily adapted to different kinds of public, purposes, contents, setups, etc. This opens new avenues in research related to, e.g., HCI, robotics, learning, and perception. Additionally, the costs of such a laboratory system are significantly lower than those of similar commercial systems.
Although ROMOT could perfectly well be used as a classical driving simulator by adding vehicle controls (thanks to its versatility), the setups shown here do not represent a skill-oriented driving simulator to study how people perform while driving, but a 3D interactive theatre with simulated content, designed specifically to enhance and improve driving safety awareness. So far, similar systems have not been used for the purpose of driving safety awareness.
This paper focuses on the development and initial user evaluation of ROMOT, and is organized as follows. First, we show the main technical aspects behind the construction of ROMOT and the integrated multimodal devices and interaction capabilities. It is worth mentioning that, unlike other existing commercial solutions, ROMOT features a 180° curved screen to enhance user immersion. Then, we show the different kinds of setups and contents that were created for the case study on driving safety awareness. Finally, we show the first outcomes regarding the usability of the system and the individuals’ satisfaction. To the best of our knowledge, this is the first work reporting audience experiences in such a complete system.

2. Materials and Methods

2.1. Construction of ROMOT

The house (audience) was robotized by means of a 3-DOF motion platform with capacity for 12 people (Figure 1). The seats are distributed in two rows: the first row has five seats and the second one, seven. The motion platform is equipped with three SEW Eurodrive 2.2 kW electric motors, each coupled with a 58.34:1 reduction drive. The parallel design of the robotic manipulator, together with the powerful 880 N·m motor-reduction set, provides a total payload of 1500 kg, enough to withstand and move the 12 people and their seats.
The design of the robotized motion platform allows for two rotational movements (pitch and roll tilt) and one translational displacement along the vertical axis (heave displacement). The motion platform is capable of featuring two pure rotational DOF, one pure translational DOF (the vertical displacement), plus two “simulated” translational DOF by making use of the tilt-coordination technique [16] (using pitch and roll tilt to simulate low-frequency forward and lateral accelerations). Thus, it is capable of working with five DOF, the yaw rotation being the only one completely missing. It is, therefore, a good compromise between performance and cost, since it is considerably cheaper to build than a 6-DOF Stewart motion platform [17], but its performance could be similar for some applications [18]. The motion platform is controlled by self-written software using the MODBUS/TCP protocol. The software includes not only the actuators’ control but also the classical washout algorithm [19], tuned with the method described in [20].
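As a rough illustration of how tilt-coordination feeds into such a motion-cueing pipeline, the following minimal Python sketch low-pass filters a longitudinal acceleration demand and maps its sustained component to a pitch angle. The filter gain and the 10° tilt limit are hypothetical values chosen for illustration; the actual implementation follows the classical washout algorithm of [19], tuned as described in [20].

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def low_pass(prev, x, alpha):
    """One step of a first-order low-pass filter (keeps the sustained part of a signal)."""
    return prev + alpha * (x - prev)

def tilt_coordination_step(a_lon, prev_lp, alpha=0.05, max_tilt=math.radians(10)):
    """Map the low-frequency part of a longitudinal acceleration a_lon (m/s^2)
    to a pitch angle (rad), so that gravity mimics the sustained acceleration.

    The platform tilts until g*sin(theta) matches the filtered acceleration,
    clamped to the mechanical envelope (a hypothetical 10 degrees here).
    Returns (pitch_angle, new_filter_state).
    """
    lp = low_pass(prev_lp, a_lon, alpha)
    theta = math.asin(max(-1.0, min(1.0, lp / G)))
    return max(-max_tilt, min(max_tilt, theta)), lp
```

In the real system, the high-frequency residual of the demand is additionally routed to the translational DOF by the washout filters, and an analogous channel handles roll tilt for lateral accelerations.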
Figure 2 shows the kinematic design of the motion platform. The 12 seats and people are placed on the motion base, which is moved by three powerful rotational motors that actuate over the robot legs. The elements in yellow transmit the rotational motion of the motors to the motion base while ensuring that the robot does not turn around the vertical axis (yaw).
The motion envelope of parallel manipulators is always a complex hyper-volume. Therefore, only the maximum linear/angular displacements for each individual DOF can be shown (see Table 1). Combining different DOF results in a reduction of the amount of reachable linear/angular displacement of each DOF. Nevertheless, this parallel design allows for large payloads, which was one of the key needs for this project, and fast motion [21]. In fact, the robotized motion platform is capable of performing a whole excursion in less than 1 s.
In front of the motion platform, a curved 180° screen is placed, 3 m high (with a 1.4 m high extension to display additional content) and with a radius of 3.4 m. Four projectors display a continuous scene on the screen, generated from two different camera positions to allow stereoscopy. Therefore, in order to properly see the 3D content, users need to wear 3D glasses.
Although some smaller setups place the display infrastructure on the motion platform (so that they move together and inertial cues are correctly correlated with visual cues), the dimensions of ROMOT’s screen strongly recommend keeping the display infrastructure fixed on the ground. Therefore, the visual parallax produced when the motion platform tilts, or is displaced with respect to the screen, needs to be corrected by reshaping the virtual camera properties so that the inertial and visual cues match. This adds complexity to the system, but allows the motion platform to be lighter and to produce higher accelerations, increasing the motion fidelity [22].
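As a sketch of this correction under simplified assumptions (rigid-body motion of the platform; axes chosen as x lateral, y longitudinal, z vertical), the displacement of a viewer’s eyepoint relative to the fixed screen can be computed from the platform pose and then applied, together with a counter-rotation, to the virtual camera. The seat coordinates and rotation order below are illustrative, not the actual values used in ROMOT.

```python
import math

def eyepoint_offset(seat_pos, pitch, roll, heave):
    """Displacement (x, y, z) of an eyepoint caused by the platform pose,
    relative to the fixed screen.

    seat_pos: (x, y, z) of the eyepoint in platform coordinates (m), at rest.
    pitch, roll: platform tilt angles (rad); heave: vertical displacement (m).
    The virtual camera is moved by this offset so inertial and visual cues match.
    """
    x, y, z = seat_pos
    # pitch: rotation about the lateral (x) axis
    y1 = y * math.cos(pitch) - z * math.sin(pitch)
    z1 = y * math.sin(pitch) + z * math.cos(pitch)
    # roll: rotation about the longitudinal (y) axis
    x2 = x * math.cos(roll) + z1 * math.sin(roll)
    z2 = -x * math.sin(roll) + z1 * math.cos(roll)
    # displacement with respect to the rest position, plus heave
    return (x2 - x, y1 - y, z2 - z + heave)
```

A pure heave, for instance, simply shifts the camera vertically, while a pitch tilt moves eyepoints that sit away from the rotation axis both forward/backward and up/down.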

2.2. Multimodal Displays Integrated in ROMOT

In order to enrich the experience of the users and make the filmic scenes more realistic, a set of multimodal displays was added to the robotized platform (Figure 3):
  • An olfactory display. We used the Olorama wireless aromatizer [23]. It features 12 scents arranged in 12 pre-charged channels, which can be chosen and triggered by means of a UDP packet. The device is equipped with a programmable fan that spreads the scent around. Both the intensity of the chosen scent (amount of time the scent valve is open) and the amount of fan time can be programmed.
  • A smoke generator. We used a Quarkpro QF-1200. It is equipped with a DMX interface, so it is possible to control and synchronize the amount of smoke from a computer, by using a DMX-USB interface such as the Enttec Open DMX USB [24].
  • Air and water dispensers. A total of 12 air and 12 water dispensers (one of each per seat). The water and air system was built using an air compressor, a water tank, 12 air electro-valves, 12 water electro-valves, 24 electric relays, and two Arduino Uno boards that allow the relays to be controlled from the PC, opening the electro-valves to spray water or blow air.
  • An electric fan. This fan is controllable by means of a frequency inverter connected to one of the previous Arduino Uno devices.
  • Projectors. A total of four full HD 3D projectors.
  • Glasses. A total of 12 3D glasses (one for each person).
  • Loudspeakers. A 5.0 loudspeaker system to produce surround sound.
  • Tablets. A total of 12 individual tablets (one for each person).
  • Webcam. A stereoscopic webcam to be able to construct an augmented reality mirror-based environment.
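Since all of these actuators are computer-controlled, their triggering can be tied to timestamps in the displayed content. The following Python sketch illustrates the idea; the UDP payload formats, addresses, and cue values are invented for illustration and do not reflect the real device protocols (the smoke generator, for instance, is actually driven through a DMX-USB interface, and the fan through an Arduino-controlled frequency inverter).

```python
import socket

def send_udp(payload: bytes, addr):
    """Send one UDP datagram (used, e.g., to trigger the scent dispenser)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, addr)

# A cue list tying multimodal effects to film timestamps (seconds).
# Payloads and timings are purely illustrative.
CUES = [
    (12.0, b"scent 3 2.5"),   # hypothetical: scent channel 3 for 2.5 s
    (47.5, b"smoke 128"),     # hypothetical: smoke intensity via DMX bridge
    (47.5, b"fan 60"),        # hypothetical: fan at 60% via Arduino bridge
]

def due_cues(prev_t, now_t, cues=CUES):
    """Return the cue payloads whose timestamps fall in (prev_t, now_t]."""
    return [p for (t, p) in cues if prev_t < t <= now_t]
```

A playback loop would call `due_cues(prev_t, now_t)` once per frame and dispatch each returned payload to the corresponding device bridge, keeping the effects synchronized with the film and the motion platform.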
It is important to emphasize that all of the multimodal actuators can be controlled from a computer, so that they can be synchronized with the displayed content and with the motion platform. Therefore, users are able to feel the system’s response through five of their senses:
  • Sight and stereoscopy: users can see a 3D representation of the scenes on the curved screen and through the 3D glasses; they can see additional interactive content on the tablets; they can see the smoke.
  • Hearing: they can hear the sound synchronized with the 3D content.
  • Smell: they can smell essences. For instance, when a car crashes, they can smell the smoke. In fact, they can even feel the smoke around them.
  • Touch: they can feel the touch of air and water on their bodies; they can touch the tablets.
  • Kinaesthetic: they can feel the movement of the 3-DOF platform.
Apart from that, the audience can provide inputs to ROMOT through the provided tablets (one tablet per person). This interaction is integrated in the setup of the “3D-virtual reality interactive environment”, which is explained in the following section. It is worth noting that, instead of 3D glasses and a screen, we could have used head-mounted displays (HMD). However, interacting with tablets could be awkward while wearing an HMD. Furthermore, the intention of ROMOT is not to isolate users but to make them feel committed to learning driving safety topics in a familiar, multi-user environment.

2.3. System Setups

Four different system setups and related stereoscopic content were elaborated, which are described in the following sub-sections. All of them were created using Unity, a powerful game engine that allows the developer to work easily with 3D elements, animations, videos, stereoscopic rendering, and even augmented reality.

2.3.1. First-Person Movie

A set of driving-related videos were recorded using two GoPro cameras to create a 3D movie set in the streets of a city. Most of the videos were filmed by attaching the GoPro cameras to a car’s hood (Figure 4) in order to create a journey with a first-person view and increase the audience’s immersive experience by locating them at the centre of the view, as if they were the protagonists of the journey.
In addition, the film includes audio consisting of ambient sounds and/or a voice-over narration that reinforces the filmed scenes. In some cases, synchronized soft platform movements or effects such as a pleasant smell or a gentle breeze help create the right ambience at each part of the movie and make the experience more enjoyable for the audience, also providing greater immersion in the recreated trip.

2.3.2. Mixed Reality Environment

3D video and 3D virtual content can be mixed creating a mixed reality (MR) movie that helps the audience perceive the virtual content as if it were real, making the transition from a real movie to a virtual situation easier.
In this setup, the created 3D virtual content—a 3D virtual character in the form of a traffic light—interacts with parts of the video by creating the virtual animation in such a way that it is synchronized with the contents of the recorded real scene. The virtual character talks to the audience at certain parts of the movie, giving them good advice related to correct driving in different city environments, such as crossroads. Virtual shadows of the synthetic character are also considered to make the whole scene more real (Figure 5).

2.3.3. Virtual Reality Interactive Environment

In this setup, a 3D model of a city was recreated in order to deal with situations, such as accidents, run-overs, or serious violations of traffic safety norms, that cannot be easily recreated in the real world. Different buildings were created and merged onto a street map of a city. Street furniture, traffic signs, traffic lights, etc. were added too, in order to make the virtual city as detailed as possible. Vehicles and pedestrians were further animated to make every situation as realistic as possible (Figure 6). Each driving and environmental situation was created using a storyboard that contains all of the contents, camera movements, special effects, narrations, etc. In the end, a set of situations was derived that could form part of a movie.
In this case, the aim was not to make the audience just look at the screen and enjoy a movie, but to make them feel each situation, to be part of it, and to react to it. That is why platform movements and all of the other multimodal displays are so important.
When each situation takes place, the audience can feel that they are in the car driving, thanks to the platform movements that simulate the behaviour of a real car (accelerations, decelerations, turns, etc.). In some of the scenes, the recreated movie pauses and asks the audience for their collaboration in two ways: either a question is shown to the audience for them to answer, or they are asked to contribute to driving by accelerating or decelerating at certain points of the trip (Figure 7). For instance, in the first case, the individual tablets vibrate and a question appears, giving the users some time to answer it by selecting one of the possible options. When the time is up, they are informed whether their answer was correct, and the virtual situation resumes, showing the consequences of the right or wrong decision. In the event of crashes, run-overs, rollovers, etc., the audience can feel, in first person, the consequences of having an accident, thanks to the robotic platform movements and the rest of the multimodal feedback, such as smoke, smells, etc.
Each correct answer increases the individual score at each of the tablets. When the deployed situation finishes, the audience can see the final score on the large, curved screen. The people having the highest score are the winners, who are somehow rewarded by the system by receiving a special visit, a 3D virtual character that congratulates them for their safe driving (see the next sub-section).
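The per-tablet scoring logic described above can be sketched as follows (a minimal Python illustration; the 10-point reward per correct answer is an assumed value, not the one used in ROMOT):

```python
def score_answers(answers, correct, points=10):
    """Per-tablet quiz score: each correct answer adds `points`."""
    return sum(points for a, c in zip(answers, correct) if a == c)

def winners(scores):
    """Tablet ids holding the highest score (there may be several winners,
    all of whom receive the virtual character's congratulation)."""
    best = max(scores.values())
    return sorted(t for t, s in scores.items() if s == best)
```

Allowing several winners matters here, since ties are possible among 12 tablets and the rewarding ARM scene (next sub-section) addresses all of them.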

2.3.4. Augmented Reality Mirror-Based Scene

This setup consists of a video-based augmented reality mirror (ARM) [25,26] scene, which is also seen stereoscopically. ARMs can take user immersion a step further, as the audience can actually see a real-time image of themselves and feel part of the created environment. The AR scenario is built from a mixture of a stereoscopic image of the audience, captured in real time by the integrated stereoscopic webcam, blended with a 3D virtual character and a 3D virtual scenario (in this case a 3D model of the platform and the seats). The scene works as a mirror: the audience sees their own image in front of them, correctly aligned as if it were an image reflected by a mirror. The 3D virtual scenario creates a strong integration of the stereoscopic image from the webcam and the 3D character by using occlusion (for example, when the 3D virtual character is walking behind a seat). An example of ROMOT’s ARM is shown in Figure 8, where the 3D virtual character is seen at the bottom left part of the image. In this case, only two users are sitting down (upper seats), as the image was taken in the laboratory.
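Two ingredients of this setup, the mirror alignment of the webcam image and the occlusion-based integration of the virtual character, can be sketched in a few lines of Python (a simplified, per-pixel illustration; the actual rendering is performed on the GPU within Unity):

```python
def mirror_frame(frame):
    """Flip each pixel row horizontally so the audience sees themselves
    as in a mirror, not as the camera sees them."""
    return [row[::-1] for row in frame]

def composite_pixel(cam_px, char_px, char_depth, scene_depth):
    """Draw the virtual character's pixel only where it is not occluded by
    the 3D model of the platform/seats (e.g., when walking behind a seat);
    otherwise the webcam pixel shows through."""
    if char_px is not None and char_depth < scene_depth:
        return char_px
    return cam_px
```

The 3D model of the platform and seats is what supplies `scene_depth` here: it is never drawn as colour, but its depth allows the character to disappear convincingly behind real furniture.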
This ARM environment is used in the final scenes of the aforementioned virtual reality interactive environment (previous sub-section), where the user(s) with the highest score is/are rewarded by a virtual 3D character that walks towards him/her/them. Together with this action, virtual confetti and coloured stars appear in the environment, accompanied by winning music that includes applause.

3. Results

In this section, we present the results of some preliminary user tests made at the laboratory level (Figure 9). A total of 14 people tested the system and participated in its evaluation, which consisted of filling out two questionnaires related to the usability of the system and the individuals’ satisfaction; the system usability scale (SUS) [27] was chosen to measure usability. These kinds of tests are commonly used to derive quantitative evaluations of technological systems regarding usability, and SUS has become a standard. Some recent works using these tests can be found in [28,29,30].
The participants were members of the research staff of the IRTIC institute; seven of them were women and seven were men. Only three of the participants had contributed to the present research work at some of its stages. Though this might not be considered a fully objective audience, it can give us a good first notion for a user-related evaluation of the system.
The results of the SUS questionnaire are listed in Table 2. In questions 1 to 10 the range 0–4 means 0: strongly disagree, 4: strongly agree. The SUS score, however, ranges from 0 to 100, where 100 is the best imaginable result. In the ROMOT evaluation, this score reaches 84.25 points, which can be considered excellent on the scale of scores provided by the questionnaire, taking into account that a minimum score of 68 would be deemed acceptable for a tool [31,32].
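For reference, the standard SUS scoring procedure, adapted to the 0–4 response scale used here, can be expressed as a straightforward Python transcription of Brooke’s scoring rule [27]:

```python
def sus_score(responses):
    """SUS score (0-100) from the ten item responses on a 0-4 scale
    (0: strongly disagree, 4: strongly agree), as in Table 2.

    Odd-numbered items (1, 3, ...) are positively worded and contribute
    their raw value; even-numbered items are negatively worded and
    contribute 4 minus their value. The total (0-40) is scaled by 2.5.
    """
    assert len(responses) == 10
    total = sum(r if i % 2 == 0 else 4 - r
                for i, r in enumerate(responses))
    return total * 2.5
```

The reported 84.25 is the mean of the 14 individual scores computed this way.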
The results of the individuals’ satisfaction questionnaires are given in Table 3. The scores also range from 0 to 4 (0: strongly disagree, 4: strongly agree). As can be seen, the results are quite satisfactory, as eight of the 12 mean scores are over 3 points and none is under 2.5 points. The lowest mean score belongs to question 10, “I didn’t feel sick after using the ROMOT”, an issue that is difficult to tackle, as simulator sickness depends greatly on the individual. On the other hand, the highest mean score belongs to question 12, “I would like to recommend others to use ROMOT”, meaning that, overall, the individuals are satisfied with the system.
In addition to the tests, we asked the participants to give us any other kind of feedback in a section dedicated to personal comments. We collected feedback from three of the participants. Two of them agreed that the system was “very awesome” and that they enjoyed the multimodal experience. The third also noted that the system provided an impressive experience and that the sense of immersion was very high, as he/she could really feel like driving a car through the different scenarios and situations. However, he/she felt slightly sick after a while, due to the combination of the stereoscopic glasses and the motion platform movements. Nevertheless, he/she still recommended the experience to others, as it was “very thrilling”.

4. Discussion and Further Work

As explained in the previous section, the preliminary quantitative evaluation of ROMOT related to usability was very satisfactory. After the tests were performed in the lab environment, our system was sent to a driving safety awareness exhibition in the Middle East. The exhibition consisted of a set of interactive applications, with ROMOT as the central attraction (Figure 10). ROMOT and the rest of the applications were designed and implemented entirely by the IRTIC institute for the same purpose, i.e., to teach and train basic driving safety rules and raise awareness of the importance of driving safely. In this sense, the set of interactive applications can be considered a single learning/training tool.
As further work, we intend to evaluate the learning and training capabilities of the whole exhibition. In order to collect some quantitative indicators, the visitors of the exhibition are asked to fill out an online questionnaire before (ex-ante) and after (ex-post) the visit. Additionally, we are measuring the time each individual spends at each application, the individual performance and scores obtained at each application (if any), and the path each individual follows through the exhibition (e.g., in what order they visit the applications, whether they repeat the experience, etc.). We are able to collect these data because each individual is given a unique code at the ticket entrance, which is required to activate each of the applications.
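A minimal sketch of how such per-visitor indicators could be aggregated from raw activation events follows (Python; the event format, timestamps, and application names are hypothetical and serve only to illustrate the kind of processing involved):

```python
from collections import defaultdict

def summarize_visits(events):
    """Aggregate raw exhibition events into per-visitor indicators.

    events: iterable of (visitor_code, app, t_in, t_out) tuples, where
    visitor_code is the unique ticket identifier used to activate each
    application and t_in/t_out are activation timestamps in seconds.
    Returns (time_spent, order): seconds spent per (code, app) pair, and
    the chronological sequence of applications each visitor activated
    (repeated entries reveal repeated experiences).
    """
    time_spent = defaultdict(float)
    order = defaultdict(list)
    for code, app, t_in, t_out in sorted(events, key=lambda e: e[2]):
        time_spent[(code, app)] += t_out - t_in
        order[code].append(app)
    return dict(time_spent), dict(order)
```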
From a technological standpoint, we also propose testing other possibilities, such as different types of stereoscopic displays and a 6-DOF motion platform, and exploring user interaction systems other than tactile tablets. However, these kinds of improvements could raise the cost of the solution.

5. Conclusions

In this paper, we have presented the construction and first user evaluation of ROMOT, a robotized 3D-movie theatre. The work shown in this paper relates to enhancing audience experiences by integrating multimodal stimuli and making the experience interactive. We also show the versatility of the system by means of the different kinds of generated content.
Both the setups and the film content of ROMOT can be changed for different types of user experiences. Here we have shown different setups for content related to driving safety awareness, though other filmic contents could be used, including some related to learning, training, entertainment, etc. As for the different setups, we have shown a first-person movie and others related to the technologies of virtual, augmented, and mixed realities.
The outcomes regarding the usability of the system and the individuals’ satisfaction are very promising, though we are aware that the system has only been evaluated at the laboratory level. It is also worth mentioning that, although different commercial solutions exist (e.g., 4D/5D cinemas), we have not found complete research works dealing with the construction and audience evaluation of such systems.
As a further work, we intend to evaluate the developed system in a real environment (the exhibition space) with the participation of hundreds of visitors. We would also like to create more film content in order to make use of ROMOT for other purposes.

Author Contributions

Sergio Casas and Inma García-Pereira contributed to the design and software development. Marcos Fernández contributed to the design, hardware development and data gathering. Cristina Portalés contributed to the design, documentation and user evaluation tests. All authors have contributed to the paper writing and reviews.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yrurzum, S.C. Mejoras en la Generación de Claves Gravito-Inerciales en Simuladores de Vehículos no Aéreos; Universitat de València: València, Spain, 2014. [Google Scholar]
  2. Vallières, É.; Ruer, P.; Bergeron, J.; McDuff, P.; Gouin-Vallerand, C.; Ait-Seddik, K.; Mezghani, N. Perceived fatigue among aging drivers: An examination of the impact of age and duration of driving time on a simulator. In Proceedings of the SOCIOINT15- 2nd International Conference on Education, Social Sciences and Humanities, Istanbul, Turkey, 8–10 June 2015; pp. 314–320.
  3. Cox, S.M.; Cox, D.J.; Kofler, M.J.; Moncrief, M.A.; Johnson, R.J.; Lambert, A.E.; Cain, S.A.; Reeve, R.E. Driving simulator performance in novice drivers with autism spectrum disorder: The role of executive functions and basic motor skills. J. Autism Dev. Disord. 2016, 46, 1379–1391. [Google Scholar] [CrossRef] [PubMed]
  4. McManus, B.; Cox, M.K.; Vance, D.E.; Stavrinos, D. Predicting motor vehicle collisions in a driving simulator in young adults using the useful field of view assessment. Traffic Inj. Prev. 2015, 16, 818–823. [Google Scholar] [CrossRef] [PubMed]
  5. Reymond, G.; Kemeny, A.; Droulez, J.; Berthoz, A. Role of lateral acceleration in curve driving: Driver model and experiments on a real vehicle and a driving simulator. Hum. Factors 2001, 43, 483–495. [Google Scholar] [CrossRef] [PubMed]
  6. Dziuda, Ł.; Biernacki, M.P.; Baran, P.M.; Truszczyński, O.E. The effects of simulated fog and motion on simulator sickness in a driving simulator and the duration of after-effects. Appl. Ergon. 2014, 45, 406–412. [Google Scholar] [CrossRef] [PubMed]
  7. Heilig, M.L. Sensorama Simulator. Available online: https://www.google.com/patents/US3050870 (accessed on 13 February 2017).
  8. Ikei, Y.; Okuya, Y.; Shimabukuro, S.; Abe, K.; Amemiya, T.; Hirota, K. To relive a valuable experience of the world at the digital museum. In Human Interface and the Management of Information, Proceedings of the Information and Knowledge in Applications and Services: 16th International Conference, Heraklion, Greece, 22–27 June 2014; Yamamoto, S., Ed.; Springer International Publishing: Cham, Germany, 2014; pp. 501–510. [Google Scholar]
  9. Matsukura, H.; Yoneda, T.; Ishida, H. Smelling screen: Development and evaluation of an olfactory display system for presenting a virtual odor source. IEEE Trans. Vis. Comput. Graph. 2013, 19, 606–615. [Google Scholar] [CrossRef] [PubMed]
  10. CJ 4DPLEX. 4dx. Get into the Action. Available online: http://www.cj4dx.com/about/about.asp (accessed on 13 February 2017).
  11. Express Avenue. Pix 5d Cinema. Available online: http://expressavenue.in/?q=store/pix-5d-cinema (accessed on 13 February 2017).
  12. 5D Cinema Extreme. Fedezze fel Most a Mozi új Dimenzióját! Available online: http://www.5dcinema.hu/ (accessed on 13 February 2017).
  13. Yecies, B. Transnational collaboration of the multisensory kind: Exploiting Korean 4d cinema in china. Media Int. Aust. 2016, 159, 22–31. [Google Scholar] [CrossRef]
  14. Tryon, C. Reboot cinema. Convergence 2013, 19, 432–437. [Google Scholar] [CrossRef]
  15. Casas, S.; Portalés, C.; Vidal-González, M.; García-Pereira, I.; Fernández, M. ROMOT: A robotic 3D-movie theater allowing interaction and multimodal experiences. In Proceedings of the International Congress on Love and Sex with Robots, London, UK, 19–20 December 2016. [Google Scholar]
  16. Groen, E.L.; Bles, W. How to use body tilt for the simulation of linear self motion. J. Vestib. Res. 2004, 14, 375–385. [Google Scholar] [PubMed]
  17. Stewart, D. A platform with six degrees of freedom. Proc. Inst. Mech. Eng. 1965, 180, 371–386. [Google Scholar] [CrossRef]
  18. Casas, S.; Coma, I.; Riera, J.V.; Fernández, M. Motion-cuing algorithms: Characterization of users’ perception. Hum. Factors 2015, 57, 144–162. [Google Scholar] [CrossRef] [PubMed]
  19. Nahon, M.A.; Reid, L.D. Simulator motion-drive algorithms—A designer’s perspective. J. Guid. Control Dyn. 1990, 13, 356–362. [Google Scholar] [CrossRef]
  20. Casas, S.; Coma, I.; Portalés, C.; Fernández, M. Towards a simulation-based tuning of motion cueing algorithms. Simul. Model. Pract. Theory 2016, 67, 137–154. [Google Scholar] [CrossRef]
  21. Küçük, S. Serial and Parallel Robot Manipulators—Kinematics, Dynamics, Control and Optimization; InTech: Vienna, Austria, 2012; p. 468. [Google Scholar]
  22. Sinacori, J.B. The Determination of Some Requirements for a Helicopter Flight Research Simulation Facility; Moffet Field: Mountain View, CA, USA, 1977. [Google Scholar]
  23. Olorama Technology. Olorama. Available online: http://www.olorama.com/en/ (accessed on 13 February 2017).
  24. Enttec. Controls, Lights, Solutions. Available online: http://www.enttec.com/ (accessed on 13 February 2017).
  25. Portalés, C.; Gimeno, J.; Casas, S.; Olanda, R.; Giner, F. Interacting with augmented reality mirrors. In Handbook of Research on Human-Computer Interfaces, Developments, and Applications; Rodrigues, J., Cardoso, P., Monteiro, J., Figueiredo, M., Eds.; IGI-Global: Hershey, PA, USA, 2016; pp. 216–244. [Google Scholar]
  26. Giner Martínez, F.; Portalés Ricart, C. The augmented user: A wearable augmented reality interface. In Proceedings of the International Conference on Virtual Systems and Multimedia, Ghent, Belgium, 3–7 October 2005.
  27. Brooke, J. SUS-A quick and dirty usability scale. Usability Eval. Ind. 1996, 189, 4–7. [Google Scholar]
  28. Díaz, D.; Boj, C.; Portalés, C. Hybridplay: A new technology to foster outdoors physical activity, verbal communication and teamwork. Sensors 2016, 16, 586. [Google Scholar] [CrossRef] [PubMed]
  29. Peruri, A.; Borchert, O.; Cox, K.; Hokanson, G.; Slator, B.M. Using the system usability scale in a classification learning environment. In Proceedings of the 19th Interactive Collaborative Learning Conference, Belfast, UK, 21–23 September 2016.
  30. Kortum, P.T.; Bangor, A. Usability ratings for everyday products measured with the system usability scale. Int. J. Hum. Comput. Interact. 2013, 29, 67–76. [Google Scholar] [CrossRef]
  31. Bangor, A.; Kortum, P.; Miller, J. Determining what individual sus scores mean: Adding an adjective rating scale. J. Usability Stud. 2009, 4, 114–123. [Google Scholar]
  32. Brooke, J. Sus: A retrospective. J. Usability Stud. 2013, 8, 29–40. [Google Scholar]
Figure 1. Panoramic image of the robotized house.
Figure 2. 3-DOF parallel platform.
Figure 3. An image showing some of the air and water dispensers on the back of the first row of seats, facing the audience seated in the second row. Individual tables are also depicted.
Figure 4. Example of the GoPro cameras recording for the first-person movie setup.
Figure 5. Example of the mixed reality setup.
Figure 6. Overall image of the created 3D city with vehicles and pedestrians (top), and recreated poor environmental conditions that make driving difficult (bottom).
Figure 7. Examples of tablet pauses, in which a question is posed and users are asked to choose one option from a list of possible answers (top), or users are asked to operate the car’s controls (bottom).
Figure 8. Audience immersed in the augmented reality mirror-based scenario (in the laboratory environment). One person is visited by the virtual character, who congratulates him on being the winner (note that the image is taken from the real scenario, so the two stereoscopic images are shown).
Figure 9. Image of some of the research staff testing ROMOT.
Figure 10. Image of the exhibition, where ROMOT is the central attraction. ROMOT is inside the central cylinder depicted in the image, and is referred to as a “5D Cinema” for marketing reasons.
Table 1. Motion platform excursions for each individual DOF.
          Heave (m)   Pitch (°)   Roll (°)
Minimum   −0.125      −12.89      −10.83
Maximum   +0.125      +12.89      +10.83
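As an illustration of how excursion limits like those in Table 1 can be enforced in software, the sketch below clamps a commanded 3-DOF pose to the platform’s ranges. This is a hypothetical example: the function and field names (`clamp_pose`, `heave_m`, etc.) are assumptions for illustration, not part of ROMOT’s actual code.

```python
# Hypothetical sketch: saturating a commanded 3-DOF pose (heave, pitch, roll)
# at the motion platform excursions listed in Table 1.

# (min, max) excursion per DOF, taken from Table 1
LIMITS = {
    "heave_m":   (-0.125, 0.125),
    "pitch_deg": (-12.89, 12.89),
    "roll_deg":  (-10.83, 10.83),
}

def clamp_pose(pose):
    """Return a copy of `pose` with each DOF clamped to its excursion limits."""
    return {dof: max(lo, min(hi, pose[dof]))
            for dof, (lo, hi) in LIMITS.items()}

# A command exceeding the pitch limit is saturated at +12.89 degrees:
print(clamp_pose({"heave_m": 0.05, "pitch_deg": 20.0, "roll_deg": -5.0}))
```

In a real motion-cueing pipeline, saturation of this kind is typically applied after washout filtering, so that the platform never receives a command outside its physical workspace.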
Table 2. Results of the SUS questionnaire (mean, standard deviation, minimum, and maximum).
Questions                                                                                      Mean   S.d.   Min   Max
1. I think that I would like to use this system frequently                                     3.00   0.89    1     4
2. I found the system unnecessarily complex                                                    0.70   0.78    0     2
3. I thought the system was easy to use                                                        3.50   0.50    3     4
4. I think that I would need the support of a technical person to be able to use this system   1.30   1.00    0     3
5. I found the various functions in this system were well integrated                           3.60   0.66    2     4
6. I thought there was too much inconsistency in this system                                   0.40   0.49    0     1
7. I would imagine that most people would learn to use this system very quickly                3.60   0.49    3     4
8. I found the system very cumbersome to use                                                   0.60   0.80    0     2
9. I felt very confident using the system                                                      3.30   0.64    2     4
10. I needed to learn a lot of things before I could get going with this system                0.30   0.46    0     1
SUS score: 84.25
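The reported SUS score can be reproduced from the item means in Table 2 using the standard SUS scoring procedure [27]: with responses on a 0–4 scale (as the Min/Max columns indicate), the positively worded odd items contribute their value, the negatively worded even items contribute 4 minus their value, and the sum is multiplied by 2.5. The sketch below applies this to the table’s means; the function name `sus_score` is ours, not the authors’.

```python
# Standard SUS scoring (Brooke, 1996) applied to the mean item
# responses of Table 2, which are given on a 0-4 scale.

# Item means, in order, from Table 2
item_means = [3.00, 0.70, 3.50, 1.30, 3.60, 0.40, 3.60, 0.60, 3.30, 0.30]

def sus_score(responses):
    """Compute the SUS score (0-100) from ten responses on a 0-4 scale."""
    total = 0.0
    for i, r in enumerate(responses):
        # Items 1, 3, 5, 7, 9 (even index) are positively worded;
        # even-numbered items are reverse-scored.
        total += r if i % 2 == 0 else 4 - r
    return round(total * 2.5, 2)

print(sus_score(item_means))  # -> 84.25, matching the score reported in Table 2
```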
Table 3. Results of the individuals’ satisfaction questionnaire (mean, standard deviation, minimum, and maximum).
Questions                                                                               Mean   S.d.   Min   Max
1. Overall, I liked very much using ROMOT                                               3.14   0.74    2     4
2. I find it very easy to engage with the multimodal content                            3.29   0.80    2     4
3. I enjoyed watching the 3D movies                                                     3.29   0.80    1     4
4. The audio was very well integrated with the 3D movies                                3.43   0.73    2     4
5. The smoke was very well integrated in the virtual reality interactive environment    2.71   0.88    1     4
6. The smell was very well integrated in the virtual reality interactive environment    2.93   1.10    0     4
7. The air and water were very well integrated in the virtual reality interactive environment   2.79   1.01    0     4
8. The movement of the platform was very well synchronized with the movies              3.36   0.72    2     4
9. The interaction with the tablet was very intuitive                                   3.07   0.80    2     4
10. I didn’t feel sick after using ROMOT                                                2.64   1.23    0     4
11. I would like to use ROMOT again                                                     3.36   0.61    2     4
12. I would recommend others to use ROMOT                                               3.64   0.48    3     4