Article

Augmented Virtuality Using Touch-Sensitive 3D-Printed Objects

1 Visual Computing Lab, Istituto di Scienza e Tecnologia dell’Informazione—CNR, 56124 Pisa, Italy
2 MOLA (Museum of London Archaeology), London N1 7ED, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(11), 2186; https://doi.org/10.3390/rs13112186
Submission received: 15 March 2021 / Revised: 16 April 2021 / Accepted: 27 April 2021 / Published: 3 June 2021
(This article belongs to the Special Issue 3D Virtual Reconstruction for Cultural Heritage)

Abstract

Virtual reality (VR) technologies have become more affordable and popular in the last five years thanks to hardware and software advancements. A critical issue for these technologies is finding interaction paradigms that are as similar as possible to the real world, bringing physicality into the experience. The literature has shown, through different experiments, that mapping real objects into virtual reality alongside haptic feedback significantly increases the realism of the experience and user engagement, leading to augmented virtuality. In this paper, we present a system to improve engagement in a VR experience using inexpensive, sensorized physical copies of real artefacts made with cheap 3D fabrication technologies. Based on a combination of hardware and software components, the proposed system gives the user the possibility to interact with the physical replica in the virtual environment while seeing the appearance of the original cultural heritage artefact. In this way, we overcome one of the main limitations of mainstream 3D fabrication technologies: the lack of faithful appearance reproduction. Using a consumer device for real-time hand tracking and a custom electronic controller for capacitive touch sensing, the system permits the creation of augmented experiences where users, with their hands, can change the virtual appearance of the real replica object using a set of personalization actions selectable from a physical 3D-printed palette.

1. Introduction

Virtual reality has long been a subject of investigation. Since the first proposed devices (the Telesphere Mask [1], the Sword of Damocles [2]), several aspects of this field have been researched, including hardware, rendering, haptic feedback, multisensory experiences, and collaborative interaction. The release in the last few years of affordable commercial devices (Oculus Rift, HTC Vive) has renewed and accelerated research interest in these technologies. One strand of these investigations is the integration of real-world objects and environments into the VR application itself to increase its realism. Such integration raises several challenges: the visual quality of the virtual representations of real objects, the tracking of the real object for a precise merging of the real and virtual worlds, the simulation and control of the object's behavior in the application in terms of interaction with the virtual environment, and interactivity with the user. In particular, several works show how the simple passive haptic feedback of a real object mapped into the VR experience increases its realism and the engagement of the user [3,4,5]. The emotional impact of real physical objects in relation to their virtual representations plays a significant role in the user experience.
VR technologies have also impacted the cultural heritage (CH) field, especially as tools for virtual tourism or the visualization, reconstruction, and interpretation of historic and prehistoric sites, landscapes, and objects. A knowledge gap exists around investigating how to map the physical reproduction of CH artefacts in VR while also taking advantage of object haptic feedback, thereby turning those objects into interactive tangible interfaces. The use of authentic artefacts themselves for interactive user experiences raises several problems about the integrity of—and care for—the heritage record. Fortunately, 3D printing technologies can help to overcome such problems. However, some practical constraints severely limit the wide adoption of 3D-printed replicas in the context of virtual and physical exhibition visits. The main issue is the quality/price ratio of the physical reproductions. Even high quality, expensive fabrication technologies produce reproductions of a quality that is often perceived as below the threshold of the expected CH standards; for consumer-accessible, cheaper fabrication technology, the results are even worse. Indeed, a high-quality appearance reproduction of an artefact is a complicated and costly process that cannot be used in most CH contexts for practical and economic reasons.
To address these issues, we explore the coupling of consumer VR technologies with the direct manipulation of 3D-printed artefacts to create an interactive tangible user interface in the virtual environment. The main objective is to build a low-cost VR system offering the user the possibility of a mixed virtual/real experience that can be perceived as more accurate and engaging. In this way, when interacting with a low-cost physical (3D-printed) copy whose appearance and finishing are far from the original, the head-mounted display (HMD) provides the user with an enhanced visual experience. The feeling of touch over the replica, the replica's interactivity, and the high-quality visuals of the HMD dramatically enhance the immersion and the emotional impact on the user. Using the reality–virtuality continuum taxonomy proposed in [6], what we propose is an augmented virtuality experience based on interactive and touch-sensitive 3D-printed objects.
In this paper, we present a system to improve user engagement in a VR experience using physical copies of 3D-scanned authentic artefacts. The physical copies are made with low-cost 3D fabrication technologies, which allow the inexpensive production of multiple copies. The proposed VR system is based on a combination of off-the-shelf hardware components and custom electronic circuitry connected by specially developed software libraries. In particular, we use a commercial HMD (HTC Vive) and its extension for the tracking of real objects (Vive Tracker), integrated with a consumer device for real-time hand tracking (Leap Motion) and a custom electronic controller for capacitive touch sensing, to transform the 3D-printed replica into a tangible interactive user interface. The proposed setup gives the user the possibility to interact in a new way with the virtual environment and the physical replica. The approach allows the user to see the original artefact's appearance more faithfully, thanks to the rendering quality of the HMD, overcoming the current limitations of low-cost 3D fabrication technologies. Moreover, with its real-time hand tracking and capacitive touch sensing, the system detects when and where the user touches the replica. This permits the creation of virtual experiences where the user, with their hands, can change the virtual appearance of the object using a set of personalization actions selectable from a physical 3D-printed palette (Figure 1), such as painting over the object surface or attaching additional virtual objects. To summarize, the most important result of our work is the design and implementation of an innovative, touch-enabled 3D-printed interface, using off-the-shelf components and low-cost custom electronics, that is easy to integrate into VR applications. The paper describes the technical design of the system, highlighting its strong points and limitations. To the best of our knowledge, no other device allows physical interaction with accurate 3D-printed replicas in VR while computing when and where the user touches the replica surface. We then present a simple cultural heritage application based on the virtual personalization of 3D-printed replicas. Its goal is to create a feeling of the object being “theirs”, conveying the sense that these artefacts are creative and malleable pieces of material culture. The proposed system offers users new creative experiences in which they interact with physical 3D-printed artefacts while seeing the appearance of these artefacts true to their original form.
The remainder of the paper is structured as follows. Section 2 reviews the state of the art in the research fields related to the proposed system: real-world mapping in VR, touch-sensitive surfaces, and 3D printing for CH. In Section 3, we present the proposed system, describing the hardware setup employed for the physical object tracking, the hand tracking, and the object touch detection, as well as the software components necessary to connect the hardware. In Section 4, we describe the acquisition and processing of the data needed to print the 3D replicas and create their virtual copies, and we show how the implemented system works in practice, describing the experience offered to users. Section 5 discusses our results, presenting user feedback collected before the COVID-19 pandemic and commenting on the system's advantages and limitations. Section 6 concludes the paper, outlining the important findings and key insights.

2. Related Works

Following the general trend of exploiting the most advanced digital technologies to enhance the experience of, and engagement with, cultural heritage [7,8], we pursue the use of emerging 3D printing and virtual reality technologies.
The proposed system is based on the combination of several technologies related to different research fields: the integration of real-world objects and environments in VR, solutions to transform any surface into a touch-sensing interface, and the application of digital fabrication techniques in the CH context. This section presents the state of the art of these research topics.

2.1. Real World Mapping in VR

Researchers in the VR community have realized that to augment the realism of a VR experience, simply enhancing its visual aspect is not enough. A fundamental contribution is given by the sense of touch, which in [9] is described as the sense that contributes most to making things real: the sense that cannot be fooled and is most arousing for humans. Enriching the virtual environment with real-world content enhances the user experience, making it more emotional and engaging [10]. The results of the first user studies in this field [3,11] showed that physically touching a virtual object makes it more real for the user. The use of physical objects and toys tracked in a virtual playground for children was also investigated in [4]. The experiments compared the levels of enjoyment and the speed and accuracy of handling objects with tactile augmentation versus the case without tactile augmentation, that is, just using hands. The results showed that tactile augmentation improves accuracy in handling objects and increases the level of enjoyment. Similar results were also obtained in [5], studying whether using real objects in VR is better than using controllers such as mice or game controllers to manipulate objects shown in VR. The user tests showed that physical objects are more fun to use but not more straightforward, due to tracking system limitations and failures to match the virtual object with the physical one. Recent works have proposed systems to map real objects into VR, such as to catch a physical ball while in a virtual environment [12], to interact with a tangible ball with the feet in real time by only seeing its virtual representation inside an HMD [13], to study the impact of real objects mapped in VR in maintenance training simulations [14], to map and track real objects in VR using RGB cameras and neural networks [15], and to use a virtual polarized light microscope for archaeometric learning [16].
Related to our work is the concept of substitutional reality introduced in [17]. The main idea is to define a class of virtual environments where every physical object surrounding a user is paired, with some degree of discrepancy, to a virtual counterpart. The authors studied how large the mismatch between the virtual and the real object can be before the VR illusion is broken for the user. Following this idea, several works have mapped many differently shaped virtual objects onto one physical object by warping virtual space and exploiting the dominance of the visual system, using a framework called haptic retargeting [18]. The main works are based on hemispherical sparse haptic proxy geometry [19], motorized turntables that rotate the correct haptic device to the right direction [20], robotic arms [21], motorized robots [22], and drones [23,24] that move the proxy to match the virtual geometry wherever the user is touching. In this context, an active research field is mapping the virtual world, and free walking inside that world, to the real world where the user moves. Several works provide solutions for real-world walking while users explore and stay fully immersed inside large virtual environments. Examples include a solution to avoid collisions with objects and people in the real world [25], systems to match a given pair of virtual and physical worlds and to explore large virtual environments from small physical environments [26,27,28,29], and a method to embed a real-time 3D reconstruction of the real world in a virtual one [30]. A complete review of this topic can be found in [31].
Alternative solutions are based on multimodal haptic devices, an orthogonal direction to our method. In this case, the goal is to simulate high-fidelity haptic interaction with virtual objects in virtual reality using devices able to produce multimodal haptic stimuli. Some of the proposed solutions are fingertip haptic devices that simulate different modalities (softness, roughness, temperature, shape, vibration, mass, force) [32,33], active surfaces based on pin arrays [34] and multicell arrays [35], and mid-air devices based on ultrasound [36], air jets [37], and lasers [38]. For a complete survey of these technologies, see [39].
In the context of the aforementioned research, we propose the first system that, using 3D-printed objects as tangible interfaces, can infer when and where the user touches the replica with a precision that allows virtual painting over the surface.

2.2. Touch Sensing

Today, touch sensing is a popular technology thanks to its application in commercial touchscreen devices. Over the years, this technology has been adapted to different applications, such as interactive spaces [40,41], interactive objects [42], body interfaces [40,43,44], and augmented clothing [45] and fabric [46]. Different methods can be used to fabricate a custom touch sensor: crafting with conductive copper and gold leaf [43], silicone casting [47], inkjet printing [48], screen printing [44], and 3D printing [42]. Several technologies have also been proposed to sense touch: optical methods with depth cameras [49], acoustic methods [50], resistive methods [47], electric field sensing [51], impedance profiling [52], time domain reflectometry [53], and electric field tomography [54].
Today, the most used technology for touch sensors is capacitive sensing. Its basic principle is to measure changes in the capacitive coupling between the human body and sensor electrodes made of conductive materials, such as solid metal parts, foils, transparent films, textiles, inks, and paints. See [55] for a comprehensive analysis and review of the research on capacitive sensing.
In the proposed system, we use conductive paint and a low-cost electronic shield to transform the painted surface of the 3D replica into a capacitive sensing surface.

2.3. 3D Printing and Cultural Heritage

The last decade has seen a significant amount of research interest in 3D printing technologies and, more generally, digital fabrication techniques. While the main application domain of 3D printing has been the manufacturing industry, it has proved effective in multiple other contexts, such as CH, where it has been possible to test its flexibility and quality.
In [56], the authors investigated how people interact with 3D digital copies of CH artefacts compared to forms of traditional observation without manipulation. In particular, they tested three different interaction modes: visual examination of the real artefact, immersive visualization of a 3D reconstruction, and interaction with a 3D-printed replica. The first experiment showed that the user's immersive experience and visual experience with original artefacts resulted in similar perceptions of color and weight, while these characteristics are difficult to perceive with 3D prints. The misinterpretation of weight and color might also lead to misinterpretation of the artefact's function and other qualities (e.g., material). The second and third experiments suggested that the traditional museum setting limits engagement with the artefacts, and that users are more engaged by a tactile experience with a replica. These results justify our choice to merge virtual reality technologies with 3D-printed replicas. In this way, the user can visually perceive the main characteristics of the artefact, with a more engaging experience thanks to haptic feedback.
A study on the introduction of 3D-printed replicas inside a museum was conducted by [57]. The authors investigated how museum visitors consider using 3D replicas, how they understand them, and whether these surrogates are welcome within museums. An evaluation was presented, finding that visitors were enthusiastic about interacting with touchable 3D-printed replicas, highlighting potential educational benefits and enjoyment while visiting museums.
An analysis of the critical issues in the production process of the replicas in CH was presented in [58]. A complete overview of fabrication technologies applied in CH contexts, with a discussion of strengths, limitations and costs, can be read in [8].

3. Materials and Methods

The main intent of the proposed system is to allow an end user to interact with and manipulate a low-cost physical copy of a CH artefact in a virtual environment. The user can see the original object's appearance and apply a set of personalization actions to change this appearance, such as painting on its surface or attaching other objects. The three main goals of this research are:
  • to enhance the visual appearance of a low-cost physical copy of an artefact using a VR device, i.e., an HMD, to virtually overlay the faithful appearance of the original object on it;
  • to enhance the user's emotional impact, giving them the possibility to manipulate the physical replica in their hands within the virtual environment, taking advantage of touch feedback;
  • to enhance the user’s immersion and engagement, allowing the virtual personalization of the replica by changing its virtual appearance when touching the surface, using a physical personalization palette.
To reach the above goals, we need to satisfy the following requirements:
  • to compute the position and orientation of the physical object in the virtual environment using a robust tracking solution to guarantee an accurate and precise overlay of the virtual 3D model of the original object over the replica;
  • to show the user’s hands in the virtual environment, tracking the movement of each part of both hands to create virtual models that are as realistic as possible;
  • to detect when and where the user touches the replica to modify the virtual model at the right position and time;
  • to satisfy all the previous requirements in real time to guarantee a high-quality user experience.
To meet all the requirements, we designed a system composed of custom hardware components (Section 3.1) and a software library (Section 3.2). Figure 2 shows a scheme of the hardware and the communication channels used to exchange data with the software library. The hardware component is in charge of tracking the replica in the virtual environment, tracking the user's hands, and detecting the touch events over the replica surface. The software library, distributed between the replica hardware and the PC that runs the experience, gathers all the data produced by the hardware devices, computes the position on the surface when the user touches the replica, and visually returns all this information to the user.

3.1. Hardware Setup

The proposed hardware setup is composed of five parts:
  • the HMD for the visual VR experience and the tracking of the 3D-printed replica in the VR environment—in our case, the HTC Vive [59];
  • the LeapMotion [60], an active hand tracking device;
  • a 3D-printed support to mount the physical replica on the HMD tracking device, the Vive Tracker [61];
  • a physical palette attached to the 3D-printed support to give the user the possibility to select the type of personalization to apply on the surface of the virtual object;
  • an electronic controller to detect when the user touches the replica and the personalization palette.

3.1.1. Physical Object Tracking

The device was designed around the HTC Vive [59], a commercial virtual reality headset. It comprises a head-mounted helmet, two or more input controllers, and two base stations that use an active optical infrared tracking system to compute the position and orientation of the helmet and controllers in the play area with millimeter accuracy [62]. The first requirement of our system is to track the position and orientation of the physical replica in the play area in real time with high accuracy, to create the corresponding virtual object in the right place in the virtual environment. To meet this requirement, we used the Vive Tracker [61], an accessory of the HTC Vive, to track a real object in the play area using the infrared tracking system. We mounted the physical replica and the Vive Tracker together rigidly thanks to a custom 3D-printed support. In this way, the replica position and orientation are described by a simple rigid transformation with respect to the tracker. We take advantage of the tracking information of the tracker, extracted by the HTC Vive driver, to compute the actual replica position in the play area. The driver provides this tracking data with a refresh rate of 250 Hz.
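For clarity, the composition of the tracker pose with the fixed support offset can be sketched as follows; this is a minimal, self-contained illustration with placeholder values, whereas the actual implementation relies on the SteamVR prefab nodes inside Unity (Section 3.2):

```cpp
#include <cstdio>

// Minimal pose math (unit quaternions, right-handed coordinates).
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Hamilton product a * b.
static Quat mul(Quat a, Quat b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Rotate vector v by unit quaternion q: q * (0, v) * q^-1.
static Vec3 rotate(Quat q, Vec3 v) {
    Quat p  = {0, v.x, v.y, v.z};
    Quat qi = {q.w, -q.x, -q.y, -q.z};
    Quat r  = mul(mul(q, p), qi);
    return {r.x, r.y, r.z};
}

struct Pose { Vec3 t; Quat r; };  // position + orientation

// Replica pose in the play area: the tracker pose (from the HTC Vive driver, 250 Hz)
// composed with the fixed offset defined by the 3D-printed support.
static Pose replicaFromTracker(const Pose& tracker, const Pose& offset) {
    return { add(tracker.t, rotate(tracker.r, offset.t)), mul(tracker.r, offset.r) };
}

int main() {
    Pose tracker = {{0.10f, 1.20f, 0.50f}, {1, 0, 0, 0}};   // placeholder tracker pose
    Pose offset  = {{0.00f, 0.08f, 0.00f}, {1, 0, 0, 0}};   // measured once from the support model
    Pose replica = replicaFromTracker(tracker, offset);
    std::printf("replica at (%.2f, %.2f, %.2f)\n", replica.t.x, replica.t.y, replica.t.z);
    return 0;
}
```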
We designed the custom support to mount the replica and the tracker together. We made it with a consumer 3D printer based on fused deposition modeling (FDM) technology using polylactic acid (PLA). The support is built from several 3D-modeled pieces (Figure 3). The central piece is the base (yellow in Figure 3) to which the other elements connect. At the left of the base, there is a palette to mount the touch-sensitive buttons (green in Figure 3), which allow the selection of the personalization actions that can be performed over the replicas. These five buttons are joined with a screw and a bolt, which are also needed for the electrical connections. At the right of the base, we mount the Vive Tracker (black in Figure 3) using a standard camera mount screw (1/4-inch screw nut) and a pin to avoid ambiguity in orientation during assembly. At the bottom, there is a handle to hold the entire device. The handle (light blue in Figure 3) is composed of two pieces to allow mounting, on its back, a sensor for the capacitive touch sensing discussed in Section 3.1.3 (Figure 4e). In front of the handle, a big touch-sensitive button (orange in Figure 3) is needed to solve the calibration issue in integrating the LeapMotion with the HTC Vive (discussed in Section 3.1.2). The base presents a female joint to mount the 3D-printed replica without ambiguity and a hole in the upper part to fasten the replica with a metal screw. The 3D replica presents the corresponding male joint to be fastened to the support (pink in Figure 3). The base is hollowed out to host the electronic hardware needed to detect the touch events. Figure 5 shows the 3D-printed pieces of the support before assembly. Figure 4 shows photos of an assembled device.

3.1.2. Hand Tracking

Hand tracking is fundamental to giving users the correct visual feedback in the virtual environment during their manipulation of and interaction with the physical object. Visualizing the user's hands faithfully, with accurate animation, increases emotional engagement in the experience and makes the interaction more natural. For the hand tracking, we used the specialized sensor device Leap Motion [60], mounted rigidly in front of the HTC Vive helmet (Figure 6a). This sensor allows real-time and accurate tracking of both hands. It returns the position and orientation of all the joints and bones used to model the hand. The refresh rate of the tracking data is 200 Hz, and the tracking error is below 3 mm [63]. Figure 6b shows a stylized model of the hands tracked by the Leap Motion in the VR environment. A first calibration of the Leap Motion space in the play area defined by the HTC Vive is provided directly by the Leap Motion. This calibration allows us to know each hand's position and orientation in the coordinate space defined by the play area. Unfortunately, this automatic calibration has some accuracy issues due to the different accuracy of the two devices (Leap Motion and Vive Tracker) and the mounting of the Leap Motion in front of the helmet, a manual process prone to errors. To overcome these problems, we introduced the calibration button on the handle of the support. At the beginning of the VR experience, we ask the user to touch the center of the button with their index finger. The touchpoint is visually highlighted with a red circle in VR and augmented from the haptic point of view with a hole in the physical button. In this way, the recognition of the point to touch, both in VR and in the real world, is easier. Every time the user touches the calibration button, we compute the translation that moves the tip of the index finger in the virtual world to the center of the button in the world coordinate system. We use this translation vector to adjust the calibration of the Leap Motion space in the HTC Vive tracking area. Even if this procedure is rough, the results are useful, and it is easy for the user to perform.
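The correction itself reduces to a single translation vector, as in the following sketch; the class, member names, and positions are illustrative assumptions, since in the system this logic is part of the Unity-side scripts:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// One-point calibration between the Leap Motion space and the Vive play area:
// a pure translation correction, with no rotation or scale term.
struct LeapCalibration {
    Vec3 correction{0, 0, 0};

    // Called whenever the user touches the calibration button with the index finger.
    // buttonCenter: known center of the physical button (tracker pose + support model).
    // fingertip: index fingertip reported by the Leap Motion, in play-area coordinates.
    void update(Vec3 buttonCenter, Vec3 fingertip) {
        correction = sub(buttonCenter, fingertip);
    }

    // Applied to every hand joint position returned by the Leap Motion afterwards.
    Vec3 apply(Vec3 leapWorldPos) const {
        return add(leapWorldPos, correction);
    }
};

int main() {
    LeapCalibration cal;
    cal.update({0.10f, 0.95f, 0.30f}, {0.11f, 0.96f, 0.28f});   // placeholder positions
    Vec3 corrected = cal.apply({0.20f, 1.00f, 0.25f});
    std::printf("corrected fingertip: (%.2f, %.2f, %.2f)\n",
                corrected.x, corrected.y, corrected.z);
    return 0;
}
```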

3.1.3. Touch Detection

The third requirement of the proposed system is to detect in real time where and when the user touches the replica during the personalization of its virtual appearance. The main intent is to have a device that can detect the right position touched by the user on the virtual object's surface and the right moment to apply the personalization. In recent years, several solutions have been proposed to track the hand with active 3D depth sensors. However, they cannot be used for our task because they require huge computational resources, and the tracking accuracy degrades significantly when the user manipulates an object in their hands. For this purpose, we designed a hybrid hardware–software solution. The touch event on the surface (when the user touches the surface) is detected with a custom electronic controller based on the capacitive touch sensing principle. The position on the surface touched by the user is computed by a software component using the ray casting procedure explained in Section 3.2. The electronic controller is based on a low-cost, open-source, Arduino-like microcontroller with a WiFi connection (NodeMCU [64]), paired with a shield specialized for capacitive touch sensing (MPR121 [65]) with 12 different input channels. Each channel is connected to a different surface that must be touch-sensitive. In particular, we used seven input channels: one for the replica, five for the buttons of the personalization palette, and one for the calibration button. Since the replica and the buttons are printed with plastic material (PLA) using a classical FDM 3D printer, they cannot be used directly as capacitive sensors because PLA is not conductive. To solve this problem, we spray-painted the surface of the objects with a graphite-based coating that provides an electrically conductive layer (black surfaces in Figure 4). The controller is completed with a shield to connect and charge a LiPo battery that powers the device. All the hardware is mounted on a 3D-printed support to keep the controller as compact as possible and to facilitate housing it in the bottom part of the 3D-printed base component (Figure 7).
Moreover, to make the sensing of the MPR121 more robust, we added a copper sensor, connected to the negative pole of the battery, on the back of the handle of the 3D-printed support (Figure 4e). In this way, by holding the support (and thus touching the copper sensor), the user shares a common ground with the MPR121, the local reference of the human body. The role of the ground is critical since it provides a common potential to which all of the objects relevant to the system are electrically coupled. Without this common ground, the capacitive sensing system does not have a shared reference, which in many cases leads to inconsistent behavior. The controller communicates the state changes (touch or release) of the different input touch channels to the software component via WiFi. The refresh rate of the touch state is 100 Hz.
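A simplified firmware sketch along these lines is shown below; it assumes the Adafruit MPR121 library and a plain text protocol over TCP, and the WiFi credentials, port, and message format are placeholders rather than the exact ones used by our controller:

```cpp
// NodeMCU (ESP8266) firmware sketch: poll the MPR121 capacitive channels and
// push per-channel state changes to the connected client over WiFi.
#include <ESP8266WiFi.h>
#include <Wire.h>
#include <Adafruit_MPR121.h>

const char* WIFI_SSID   = "network";   // placeholder
const char* WIFI_PASS   = "password";  // placeholder
const int   NUM_CHANNELS = 7;          // replica + 5 palette buttons + calibration button

Adafruit_MPR121 touchSensor;
WiFiServer server(5000);               // placeholder port
WiFiClient client;
uint16_t lastState = 0;

void setup() {
  Serial.begin(115200);
  Wire.begin();
  if (!touchSensor.begin(0x5A)) {      // default MPR121 I2C address
    Serial.println("MPR121 not found");
    while (true) delay(10);
  }
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(100);
  server.begin();                      // wait for the client application to connect
}

void loop() {
  if (!client || !client.connected()) client = server.available();

  uint16_t state = touchSensor.touched();     // one bit per input channel
  if (client && client.connected() && state != lastState) {
    for (int ch = 0; ch < NUM_CHANNELS; ch++) {
      bool now = state & (1 << ch), before = lastState & (1 << ch);
      if (now != before) {
        client.print(ch);                     // channel ID (0 = replica, 1..5 = palette, 6 = calibration)
        client.print(' ');
        client.println(now ? 1 : 0);          // 1 = touched, 0 = released
      }
    }
  }
  lastState = state;
  delay(10);                                  // ~100 Hz refresh, matching the text
}
```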

3.2. Software Component

The developed software library is in charge of gathering all the information from the hardware devices and returning it to the end user visually. In particular, it receives the information about the position and orientation of the user’s head (the helmet) and of the Vive Trackers, the tracking data of the hands from the Leap Motion, and the touch status on the replica and the personalization palette from the electronic controller described in Section 3.1.3. The head data is used to set the main camera in the virtual experience. The Vive Tracker data are used for rendering the virtual 3D model with the original appearance of the object in the right position in the virtual environment. Since the physical replica and the Vive Tracker are mounted together rigidly, the replica position and orientation are described by a simple rigid transformation with respect to the tracker position data. The touch status is used during the virtual personalization of the replica.
The software was developed inside the real-time game engine Unity [66] with a custom Unity script and its associated node. For the head, the Vive Tracker, and the hands, we use the prefab nodes provided by SteamVR and LeapMotion. For the management of the touch status, we implemented a custom script that manages the WiFi connection with the electronic controller (Section 3.1.3), then analyzes the received touch status and translates this status into a visual action. In particular, for communication with the electronic controller, we use a client–server architecture where the server runs on the NodeMCU controller, and the client runs in the Unity application inside an asynchronous thread. The server waits for an input connection from a client. When this connection is opened (at the startup of the Unity application), the server starts sending messages every time there is a change of the touch status over the replica or the palette buttons. Each message contains the ID of the touched/untouched area and the touch status (touched/released). The client locally stores the touch status. During the rendering process, at each frame, the custom Unity script queries the touch status inside the client thread and the ID of the area concerned. The ID and the status determine the action to perform: to apply the personalization on the replica or to change the personalization action. The action associated with each button of the palette can be changed inside the developed Unity node. In the case of a touch event over the replica surface, the script must compute the surface position where the appearance should change. For this task, the script integrates all the data coming from the different hardware devices. In particular, it casts a set of rays against the 3D model of the physical replica after applying the transformation returned by the Vive Tracker. Each ray starts from the position of the index fingertip returned by the Leap Motion. We use only the index finger because it is the finger most used for interaction with a touch device. Based on some simple observations of how a finger is used to draw over a surface, we identified three main directions along which to cast these rays (Figure 8a). The first direction is defined by the index distal phalanx (the blue line in Figure 8a). With this direction, we better detect the contact point when the user touches the surface with the index direction close to the surface normal vector (Figure 8b). The second direction is defined by the line of view that connects the index fingertip to the user's head, that is, the helmet position and the position of the main camera in the Unity application (the red line in Figure 8a). In this way, detection is more robust when the user touches the surface with the palm of the hand orthogonal to the view direction (Figure 8c). In this case, using only the first direction can produce no hits or hits on the wrong points of the surface. The palm of the hand defines the third direction (the green line in Figure 8a), which makes detection more robust when the user touches the replica on its silhouette with respect to the view direction, that is, at points on the surface whose normal is orthogonal or almost orthogonal to the view direction (Figure 8d). For each of these three directions, we cast ten rays distributed in two cones with an aperture of 10 degrees: the first cone is centered around the main direction and the second around the opposite direction.
Casting rays in the opposite direction makes detection more robust against the calibration error between the LeapMotion and the HTC Vive and against the accuracy error in the hand tracking. Due to these errors, when the user touches the surface of the replica, the position of the index fingertip in VR may not be exactly in contact with the surface of the virtual object, but slightly inside or outside the virtual 3D model. An example of these two cases is shown in Figure 9. In total, for every frame in which the user is performing a personalization action over the replica surface, we cast 30 rays. If during the ray casting more than one ray returns a hit with the virtual model, we select the intersection closest to the last point touched during the same action over the replica or, if it is the first touch, the intersection closest to the fingertip. If the hit distance is too large (more than 1 cm), we discard the hit to be more robust against accidental touches with a finger other than the index. Since the ray casting procedure can be computationally expensive, it could be a bottleneck for the real-time performance required by our system. To avoid this problem, we do not use the original 3D model of the object, but a simplified version with fewer triangles (around 3000 triangles) obtained by quadric edge collapse decimation [67]. This decimation reduces the number of triangles in a controlled way, preserving all the main geometric features while introducing a small approximation error, consistently below 1 mm (Figure 10).
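The following sketch illustrates this touch-localization logic; the cone-sampling scheme and helper names are simplifications, and the mesh intersection is abstracted behind a callback, whereas in the system the rays are cast inside Unity against the decimated replica mesh:

```cpp
// Touch localization sketch: 3 main directions x (cone + opposite cone) x 5 rays = 30 rays.
#include <cmath>
#include <functional>
#include <initializer_list>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3 normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }
static float length(Vec3 v) { return std::sqrt(dot(v, v)); }

struct Hit { bool valid; Vec3 point; };
// The real intersection runs against the ~3000-triangle decimated replica mesh;
// here it is abstracted behind a callback taking a ray origin and direction.
using RayCaster = std::function<Hit(Vec3, Vec3)>;

// Sample `count` unit directions inside a cone of aperture `apertureDeg` around `axis`.
static std::vector<Vec3> coneDirections(Vec3 axis, float apertureDeg, int count) {
    axis = normalize(axis);
    Vec3 helper = std::fabs(axis.x) < 0.9f ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    Vec3 u = normalize(cross(axis, helper));
    Vec3 v = cross(axis, u);
    std::vector<Vec3> dirs;
    const float kPi = 3.14159265f;
    float aperture = apertureDeg * kPi / 180.0f;
    for (int i = 0; i < count; i++) {
        float phi = 2.0f * kPi * i / count;            // spread around the axis
        float theta = aperture * (i + 1) / count;      // spread towards the cone rim
        dirs.push_back(normalize(axis * std::cos(theta) +
                                 (u * std::cos(phi) + v * std::sin(phi)) * std::sin(theta)));
    }
    return dirs;
}

// Cast the 30 rays from the fingertip and keep the hit closest to `reference`
// (the last painted point, or the fingertip itself on the first touch).
// Hits farther than 1 cm from the reference are rejected as accidental touches.
static bool locateTouch(Vec3 fingertip, Vec3 indexDir, Vec3 viewDir, Vec3 palmDir,
                        Vec3 reference, const RayCaster& castRay, Vec3& outPoint) {
    const Vec3 mains[3] = {indexDir, viewDir, palmDir};
    bool found = false;
    float best = 0.01f;                                // 1 cm rejection threshold (meters)
    for (Vec3 m : mains) {
        for (Vec3 axis : {m, m * -1.0f}) {             // cone + opposite cone
            for (Vec3 d : coneDirections(axis, 10.0f, 5)) {
                Hit h = castRay(fingertip, d);
                if (!h.valid) continue;
                float dist = length(h.point - reference);
                if (dist < best) { best = dist; outPoint = h.point; found = true; }
            }
        }
    }
    return found;
}
```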

3.3. 3D Model Processing

The system was tested with the 3D-printed replicas of three objects from the archaeological site of Çatalhöyük in Turkey, dated to the Neolithic period: a female figurine, a horse, and a hand stamp. The 3D model of each object was obtained by 3D reconstruction from photos. For each object, a set of photos from several points of view was acquired using a photo lightbox (88 photos for the female figurine, 153 for the horse, and 203 for the hand stamp). The lightbox helped obtain lighting that was as diffuse as possible, improving the 3D reconstruction quality and the approximation of the surface appearance with a simple texture map. For the 3D reconstruction, we used Agisoft Metashape [68]. Each 3D model has around 200,000 triangles with a texture map of 4096 × 4096 pixels. For each 3D model, a simplified version of about 3000 triangles was obtained by quadric edge collapse with MeshLab [69] for use in the ray casting procedure described in Section 3.2. These simplified versions allow a fast casting of the 30 rays that takes less than 10 ms, preserving the real-time requirement of the application. Figure 10 shows the reconstructed 3D models with their reduced versions and a color map visualization of the error introduced by the decimation procedure. The decimation error is computed as the distance between each vertex of the original model and the surface of the reduced model. This error is low, as demonstrated by the color map and the corresponding error statistics (mean and root mean square).
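The decimation error statistics can be reproduced with a brute-force vertex-to-surface distance computation such as the one sketched below; the closest-point-on-triangle routine is the standard construction, and the flat vertex/index arrays are an assumed data layout for illustration only:

```cpp
#include <cmath>
#include <vector>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 add(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3 mul(V3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(V3 a) { return std::sqrt(dot(a, a)); }

// Closest point on triangle (a, b, c) to point p (standard Voronoi-region test).
static V3 closestOnTriangle(V3 p, V3 a, V3 b, V3 c) {
    V3 ab = sub(b, a), ac = sub(c, a), ap = sub(p, a);
    float d1 = dot(ab, ap), d2 = dot(ac, ap);
    if (d1 <= 0 && d2 <= 0) return a;
    V3 bp = sub(p, b);
    float d3 = dot(ab, bp), d4 = dot(ac, bp);
    if (d3 >= 0 && d4 <= d3) return b;
    float vc = d1 * d4 - d3 * d2;
    if (vc <= 0 && d1 >= 0 && d3 <= 0) return add(a, mul(ab, d1 / (d1 - d3)));
    V3 cp = sub(p, c);
    float d5 = dot(ab, cp), d6 = dot(ac, cp);
    if (d6 >= 0 && d5 <= d6) return c;
    float vb = d5 * d2 - d1 * d6;
    if (vb <= 0 && d2 >= 0 && d6 <= 0) return add(a, mul(ac, d2 / (d2 - d6)));
    float va = d3 * d6 - d5 * d4;
    if (va <= 0 && (d4 - d3) >= 0 && (d5 - d6) >= 0)
        return add(b, mul(sub(c, b), (d4 - d3) / ((d4 - d3) + (d5 - d6))));
    float denom = 1.0f / (va + vb + vc);
    return add(a, add(mul(ab, vb * denom), mul(ac, vc * denom)));
}

// Mean and RMS of the distance from each original vertex to the decimated surface
// (brute force over all triangles; acceptable for a one-off quality check).
static void decimationError(const std::vector<V3>& origVerts,
                            const std::vector<V3>& decVerts,
                            const std::vector<int>& decTris,   // 3 indices per triangle
                            float& mean, float& rms) {
    double sum = 0, sumSq = 0;
    for (const V3& p : origVerts) {
        float best = 1e30f;
        for (size_t t = 0; t + 2 < decTris.size(); t += 3) {
            V3 q = closestOnTriangle(p, decVerts[decTris[t]],
                                        decVerts[decTris[t + 1]],
                                        decVerts[decTris[t + 2]]);
            best = std::fmin(best, len(sub(p, q)));
        }
        sum += best;
        sumSq += double(best) * best;
    }
    mean = float(sum / origVerts.size());
    rms  = std::sqrt(float(sumSq / origVerts.size()));
}
```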

4. Results

Using the system, we designed a virtual experience where the end user can see the original appearance of the physical replica and change this appearance by touching its surface. Figure 11 shows the three devices and their respective VR counterparts. In particular, via the designed personalization palette, the user can paint over the surface of the replica virtually, using the index finger as a brush, or can attach some additional virtual objects over the surface. With the palette buttons, the user can select a color in which to paint (in Figure 11, red, black, and white), can choose an object to attach, or can opt to undo their last action. Note that by using more MPR121 shields and redesigning the palette, we can add more buttons; in particular, simply by using the free channels on the existing MPR121 shield, we can add five more buttons. Moreover, the action associated with each button (the color or the object to attach) can be personalized inside the Unity node. After selecting the action, when the user touches the replica, the system paints on the surface of the virtual object using the selected color, or places the selected object at the point they have touched, also using the normal vector to orient it coherently. The painting is simulated by creating new geometry as a double triangle strip, shown as a half-cylinder over the surface using the normal data, as sketched below. For every point touched on the surface, we add four triangles to the strip.
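As a rough illustration of this construction, the following sketch appends, for each new touch sample, a three-vertex cross-section (left edge, lifted ridge, right edge) and connects it to the previous one with four triangles; the exact vertex layout and parameters are assumptions based on the description above, not the precise geometry generated by the application:

```cpp
#include <cmath>
#include <vector>

struct V3 {
    float x, y, z;
    V3 operator+(V3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    V3 operator-(V3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    V3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
static V3 cross(V3 a, V3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static V3 normalize(V3 v) { float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); return v * (1.0f / l); }

struct StrokeMesh {
    std::vector<V3>  vertices;
    std::vector<int> indices;              // triangle list, 3 indices per triangle
};
struct Section { int left, top, right; };  // vertex indices of one cross-section

// Add a cross-section at surface point p with normal n; dir is the stroke direction.
static Section addSection(StrokeMesh& m, V3 p, V3 n, V3 dir, float width, float height) {
    V3 side = normalize(cross(dir, n));       // across the stroke, tangent to the surface
    int base = (int)m.vertices.size();
    m.vertices.push_back(p - side * width);   // left edge, on the surface
    m.vertices.push_back(p + n * height);     // ridge, lifted along the surface normal
    m.vertices.push_back(p + side * width);   // right edge, on the surface
    return {base, base + 1, base + 2};
}

// Connect two consecutive cross-sections with two quads, i.e., four triangles,
// matching the "four triangles per touched point" described in the text.
static void connectSections(StrokeMesh& m, Section a, Section b) {
    int quads[2][4] = {{a.left, a.top, b.top, b.left},
                       {a.top, a.right, b.right, b.top}};
    for (auto& q : quads) {
        m.indices.insert(m.indices.end(), {q[0], q[1], q[2]});
        m.indices.insert(m.indices.end(), {q[0], q[2], q[3]});
    }
}
```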
The proposed cultural heritage application was developed inside the framework designed in the EU project EMOTIVE [70], which successfully proposed new technologies for engaging museum visitors in a deeper way. In particular, the nature of artefactual finds and the typical form of museum display of these artefacts mean that visitors tend not to have a good sense that these are creative and malleable pieces of material culture. For this reason, the proposed system was designed to be integrated into a wider VR experience whose focus was on personalization and a sense of belonging or ownership over items. Painting on the replica can create a feeling of the object being “theirs”, as opposed to a lifeless thing, disconnected from the distant past.
A first user test of the system was performed on 30 October 2019, during the final demo of the technologies developed for the EU project EMOTIVE [70] at the Hunterian Museum at the University of Glasgow. The system was tested by 16 people, mainly CH experts. Their feedback was collected verbally and via open-ended comment cards, which invited users to reflect on what the experience asked of them (i.e., “The experience made me ⋯”). A small group (four people) had no previous experience with VR HMDs. After a short training session, all participants were able to add some drawings to the object's surface with an adequate level of ability. The same results were found for those participants who were using an HMD for the first time. All users reported the experience as interesting and engaging thanks to the integration of VR and interactive 3D prints. Several expressed their awe at what the system enabled, while others remarked specifically on its kinesthetic possibilities and, in the case of one respondent, its potential to inform their own research. Figure 12 shows a user during the experience. Figure 13 displays some of the visual results obtained through personalization. The drawings are quite rough, but they offer a sense of what can be done with the proposed system in real time. Unfortunately, the COVID-19 pandemic prevented us from conducting a more in-depth and formal user study. The video in the Supplementary Materials shows a personalization session with all three tested objects.

5. Discussion

From our initial test with users, the general impression is that the proposed system offers an engaging experience within VR, thanks to the combined tactile feedback of the physical replica and the visual feedback of the virtual replica offered via the personalization actions. Users reported their interactions as natural and fascinating, especially once they felt they had accustomed themselves to navigating the system. In particular, the tracking and visualization of the user's hands in VR were recognized as enabling degrees of interaction and accuracy during the personalization actions that would otherwise have been impossible to achieve.
Nevertheless, the hand tracking, and the device used for it, were also the main weaknesses of the proposed system. In particular, the simple calibration procedure of the LeapMotion in the space of the play area of the HTC Vive, described in Section 3.1.2, reduces the misalignment between the real hand and its virtual model but does not eliminate it entirely. Some users needed to perform the procedure multiple times during the personalization experience to obtain better multisensory agreement between what they saw in VR and what they sensed by really touching the replica. Even if this calibration procedure is straightforward and fast (the user must touch a point on the calibration button on the handle of the 3D-printed support), a better and more accurate method could certainly improve the sense of immersion and engagement of users. A future research direction might be to investigate a better method for this calibration step or the use of a more accurate hand tracking solution based on deep learning. An alternative approach is to apply more advanced capacitive touch sensing technologies that allow the 3D-printed replica itself to detect the touched position on the surface, without software ray casting.
Another weakness of the system is related to the 3D-printed replicas. We focus our attention mainly on object geometry and its visual appearance, ignoring other characteristics of the object that are important to replicate, such as weight, its distribution through the object, and the tactile feedback of different materials in terms of temperature and roughness. The user might reasonably expect that the replica has a weight close to the original object and that it returns tactile sensations similar to the materials they can see and recognize in the VR. These characteristics are difficult to satisfy with a simple FDM 3D printer using PLA, thus requiring a more in-depth investigation of advanced and expensive 3D printing technologies. An alternative research direction might be to use VR wearable haptic devices with force and haptic feedback to improve these aspects of the experiences.

6. Conclusions

The system described in this paper is a combination of hardware and software components that permits the user to interact with and manipulate a 3D-printed copy of an artefact in a virtual environment, using the physical replica as a tangible user interface and thereby leading to an augmented virtuality experience. The physical replicas are made with a 3D printer based on fused deposition modeling technology. Beyond the technical design of the system, the paper presented a cultural heritage application. Its goal is to engage users in exploring these artefacts as creative and malleable pieces of material culture. Thanks to the VR personalization, the user increases their sense of attachment to the items, engaging more during the cultural site visit. With the proposed system, the end user can see the authentic appearance of the original object virtually projected onto the physical replica thanks to a virtual reality head-mounted display. Then, they can apply a set of personalization actions to change the appearance of the artefact. This act of personalization is critical because the degradation of the original artefacts over time means that most personal or creative touches that may have been applied to the objects in the past (e.g., paint or other decorative additions) have faded or entirely disappeared. The personalization actions thus enable users to experience a more faithful and embodied set of engagements with the objects, turning them from colorless, often seemingly lifeless things into dynamic and malleable testaments to past human existence. Thanks to the integration of different hardware devices (HTC Vive, Leap Motion, and a custom electronic controller), the system can detect the user's touch events over the replica and compute the touched position in the software component. This position is used to modify the virtual appearance of the replica at the right surface point. In addition, the design of a personalization palette allows the user to select the personalization action to perform over the virtual appearance of the object. The system was tested with the replicas of three Neolithic objects from the site of Çatalhöyük in Turkey during a public event with 16 people. From this test, we saw that the system engages the user in the experience thanks to the touch interaction with the physical replica. Future research directions could entail the development of a better calibration procedure for all the tracking hardware used in the system, especially the HTC Vive and LeapMotion, and the improvement of other aspects of the 3D replica, such as the object's weight and material roughness, to return better tactile feedback to the user. Furthermore, robust user testing of the system awaits a post-pandemic world, at which point we hope to investigate in greater detail its specific emotional and cognitive impacts on individuals.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/rs13112186/s1, Video S1: Augmented Virtuality using Touch-Sensitive 3D-Printed Objects.

Author Contributions

Conceptualization, G.P., S.P. and P.C.; methodology, G.P. and P.C.; software, G.P.; validation, G.P.; formal analysis, G.P. and P.C.; investigation, G.P.; resources, G.P.; data curation, G.P. and S.P.; writing—original draft preparation, G.P.; writing—review and editing, G.P., S.P. and P.C.; visualization, G.P.; supervision, S.P. and P.C.; project administration, P.C.; funding acquisition, S.P. and P.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the EU H2020 Programme EMOTIVE: EMOTIve Virtual cultural Experiences through personalized storytelling (grant No. 727188).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Heilig, M.L. Stereoscopic-Television Apparatus for Individual Use. U.S. Patent 2,955,156, 4 October 1960.
  2. Sutherland, I.E. A Head-Mounted Three Dimensional Display. In Proceedings of the Fall Joint Computer Conference (AFIPS ’68), San Francisco, CA, USA, 9–11 December 1968; ACM: New York, NY, USA, 1968; pp. 757–764.
  3. Hoffman, H.G. Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments. In Proceedings of the IEEE 1998 Virtual Reality Annual International Symposium, Atlanta, GA, USA, 14–18 March 1998; pp. 59–63.
  4. Shapira, L.; Amores, J.; Benavides, X. TactileVR: Integrating Physical Toys into Learn and Play Virtual Reality Experiences. In Proceedings of the 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Merida, Mexico, 19–23 September 2016; pp. 100–106.
  5. Yoshimoto, R.; Sasakura, M. Using Real Objects for Interaction in Virtual Reality. In Proceedings of the 2017 21st International Conference Information Visualisation (IV), London, UK, 11–14 July 2017; pp. 440–443.
  6. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329.
  7. Liritzis, I.; Al-Otaibi, F.; Volonakis, P.; Drivaliari, A. Digital technologies and trends in cultural heritage. Mediterr. Archaeol. Archaeom. 2015, 15, 313–332.
  8. Scopigno, R.; Cignoni, P.; Pietroni, N.; Callieri, M.; Dellepiane, M. Digital Fabrication Techniques for Cultural Heritage: A Survey. Comput. Graph. Forum 2017, 36, 6–21.
  9. Gallace, A.; Spence, C. In Touch with the Future: The Sense of Touch from Cognitive Neuroscience to Virtual Reality; OUP: Oxford, UK, 2014.
  10. Hoffman, H.; Groen, J.; Rousseau, S.; Hollander, A.; Winn, W.; Wells, M.; Furness, T., III. Tactile augmentation: Enhancing presence in inclusive VR with tactile feedback from real objects. In Proceedings of the Meeting of the American Psychological Sciences, San Francisco, CA, USA, 29 June–2 July 1996.
  11. Hoffman, H.G.; Hollander, A.; Schroder, K.; Rousseau, S.; Furness, T. Physically touching and tasting virtual objects enhances the realism of virtual experiences. Virtual Real. 1998, 3, 226–234.
  12. Pan, M.K.X.J.; Niemeyer, G. Catching a real ball in virtual reality. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 269–270.
  13. Bozgeyikli, L.; Bozgeyikli, E. Tangiball: Dynamic Embodied Tangible Interaction with a Ball in Virtual Reality. In Proceedings of the Companion Publication of the 2019 on Designing Interactive Systems Conference 2019 Companion (DIS ’19 Companion), San Diego, CA, USA, 24–28 June 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 135–140.
  14. Neges, M.; Adwernat, S.; Abramovici, M. Augmented Virtuality for maintenance training simulation under various stress conditions. Procedia Manuf. 2018, 19, 171–178.
  15. Taylor, C.; Mullany, C.; McNicholas, R.; Cosker, D. VR Props: An End-to-End Pipeline for Transporting Real Objects Into Virtual and Augmented Environments. In Proceedings of the 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Beijing, China, 14–18 October 2019; IEEE Computer Society: Los Alamitos, CA, USA, 2019; pp. 83–92.
  16. Liritzis, I.; Volonakis, P. Cyber-Archaeometry: Novel Research and Learning Subject Overview. Educ. Sci. 2021, 11, 86.
  17. Simeone, A.L.; Velloso, E.; Gellersen, H. Substitutional Reality: Using the Physical Environment to Design Virtual Reality Experiences. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), Seoul, Korea, 18–23 April 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 3307–3316.
  18. Azmandian, M.; Hancock, M.; Benko, H.; Ofek, E.; Wilson, A.D. Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), San Jose, CA, USA, 7–12 May 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1968–1979.
  19. Cheng, L.P.; Ofek, E.; Holz, C.; Benko, H.; Wilson, A.D. Sparse Haptic Proxy: Touch Feedback in Virtual Environments Using a General Passive Prop. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), Denver, CO, USA, 6–11 May 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 3718–3728.
  20. Huang, H.Y.; Ning, C.W.; Wang, P.Y.; Cheng, J.H.; Cheng, L.P. Haptic-Go-Round: A Surrounding Platform for Encounter-Type Haptics in Virtual Reality Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–10.
  21. McNeely, W.A. Robotic graphics: A new approach to force feedback for virtual reality. In Proceedings of the IEEE Virtual Reality Annual International Symposium, Seattle, WA, USA, 18–22 September 1993; pp. 336–341.
  22. Siu, A.F.; Gonzalez, E.J.; Yuan, S.; Ginsberg, J.B.; Follmer, S. ShapeShift: 2D Spatial Manipulation and Self-Actuation of Tabletop Shape Displays for Tangible and Haptic Interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–13.
  23. Hoppe, M.; Knierim, P.; Kosch, T.; Funk, M.; Futami, L.; Schneegass, S.; Henze, N.; Schmidt, A.; Machulla, T. VRHapticDrones: Providing Haptics in Virtual Reality through Quadcopters. In Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia (MUM 2018), Cairo, Egypt, 25–28 November 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 7–18.
  24. Abtahi, P.; Landry, B.; Yang, J.J.; Pavone, M.; Follmer, S.; Landay, J.A. Beyond the Force: Using Quadcopters to Appropriate Objects and the Environment for Haptics in Virtual Reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), Glasgow, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–13.
  25. Yang, J.J.; Holz, C.; Ofek, E.; Wilson, A.D. DreamWalker: Substituting Real-World Walking Experiences with a Virtual Reality. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST ’19), New Orleans, LA, USA, 20–23 October 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1093–1107.
  26. Sun, Q.; Wei, L.Y.; Kaufman, A. Mapping Virtual and Physical Reality. ACM Trans. Graph. 2016, 35.
  27. Dong, Z.C.; Fu, X.M.; Zhang, C.; Wu, K.; Liu, L. Smooth Assembled Mappings for Large-Scale Real Walking. ACM Trans. Graph. 2017, 36.
  28. Sun, Q.; Patney, A.; Wei, L.Y.; Shapira, O.; Lu, J.; Asente, P.; Zhu, S.; Mcguire, M.; Luebke, D.; Kaufman, A. Towards Virtual Reality Infinite Walking: Dynamic Saccadic Redirection. ACM Trans. Graph. 2018, 37.
  29. Langbehn, E.; Steinicke, F.; Lappe, M.; Welch, G.F.; Bruder, G. In the Blink of an Eye: Leveraging Blink-Induced Suppression for Imperceptible Position and Orientation Redirection in Virtual Reality. ACM Trans. Graph. 2018, 37.
  30. Hartmann, J.; Holz, C.; Ofek, E.; Wilson, A.D. RealityCheck: Blending Virtual Environments with Situated Physical Reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), Glasgow, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–12.
  31. Nilsson, N.C.; Serafin, S.; Steinicke, F.; Nordahl, R. Natural Walking in Virtual Reality: A Review. Comput. Entertain. 2018, 16.
  32. Murakami, T.; Person, T.; Fernando, C.L.; Minamizawa, K. Altered Touch: Miniature Haptic Display with Force, Thermal and Tactile Feedback for Augmented Haptics. In Proceedings of the ACM SIGGRAPH 2017 Emerging Technologies (SIGGRAPH ’17), Brisbane, Australia, 17–20 November 2017; Association for Computing Machinery: New York, NY, USA, 2017.
  33. Gabardi, M.; Leonardis, D.; Solazzi, M.; Frisoli, A. Development of a miniaturized thermal module designed for integration in a wearable haptic device. In Proceedings of the 2018 IEEE Haptics Symposium (HAPTICS), San Francisco, CA, USA, 25–28 March 2018; pp. 100–105.
  34. Jang, S.; Kim, L.H.; Tanner, K.; Ishii, H.; Follmer, S. Haptic Edge Display for Mobile Tactile Interaction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), San Jose, CA, USA, 7–12 May 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 3706–3716.
  35. Stanley, A.A.; Gwilliam, J.C.; Okamura, A.M. Haptic jamming: A deformable geometry, variable stiffness tactile display using pneumatics and particle jamming. In Proceedings of the 2013 World Haptics Conference (WHC), Daejeon, Korea, 14–17 April 2013; pp. 25–30.
  36. Carter, T.; Seah, S.A.; Long, B.; Drinkwater, B.; Subramanian, S. UltraHaptics: Multi-Point Mid-Air Haptic Feedback for Touch Surfaces. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST ’13), St. Andrews, UK, 8–11 October 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 505–514.
  37. Sodhi, R.; Poupyrev, I.; Glisson, M.; Israr, A. AIREAL: Interactive Tactile Experiences in Free Air. ACM Trans. Graph. 2013, 32.
  38. Lee, H.; Cha, H.; Park, J.; Choi, S.; Kim, H.S.; Chung, S.C. LaserStroke: Mid-Air Tactile Experiences on Contours Using Indirect Laser Radiation. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST ’16 Adjunct), Tokyo, Japan, 16–19 October 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 73–74.
  39. Wang, D.; Ohnishi, K.; Xu, W. Multimodal Haptic Display for Virtual Reality: A Survey. IEEE Trans. Ind. Electron. 2020, 67, 610–623.
  40. Harrison, C.; Tan, D.; Morris, D. Skinput: Appropriating the Body as an Input Surface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10), Atlanta, GA, USA, 10–15 April 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 453–462.
  41. Zhang, Y.; Yang, C.J.; Hudson, S.E.; Harrison, C.; Sample, A. Wall++: Room-Scale Interactive and Context-Aware Sensing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–15.
  42. Schmitz, M.; Khalilbeigi, M.; Balwierz, M.; Lissermann, R.; Mühlhäuser, M.; Steimle, J. Capricate: A Fabrication Pipeline to Design and 3D Print Capacitive Touch Sensors for Interactive Objects. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (UIST ’15), Charlotte, NC, USA, 8–11 November 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 253–258.
  43. Kao, H.L.C.; Holz, C.; Roseway, A.; Calvo, A.; Schmandt, C. DuoSkin: Rapidly Prototyping on-Skin User Interfaces Using Skin-Friendly Materials. In Proceedings of the 2016 ACM International Symposium on Wearable Computers (ISWC ’16), Heidelberg, Germany, 12–16 September 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 16–23.
  44. Nittala, A.S.; Withana, A.; Pourjafarian, N.; Steimle, J. Multi-Touch Skin: A Thin and Flexible Multi-Touch Sensor for On-Skin Input. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montreal, QC, Canada, 21–26 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–12. [Google Scholar] [CrossRef]
  45. Hagan, M.; Teodorescu, H. Intelligent clothes with a network of painted sensors. In Proceedings of the 2013 E-Health and Bioengineering Conference (EHB), Iasi, Romania, 21–23 November 2013; pp. 1–4. [Google Scholar] [CrossRef]
  46. Poupyrev, I.; Gong, N.W.; Fukuhara, S.; Karagozler, M.E.; Schwesig, C.; Robinson, K.E. Project Jacquard: Interactive Digital Textiles at Scale. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), San Jose, CA, USA, 7–12 May 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 4216–4227. [Google Scholar] [CrossRef] [Green Version]
  47. Weigel, M.; Lu, T.; Bailly, G.; Oulasvirta, A.; Majidi, C.; Steimle, J. ISkin: Flexible, Stretchable and Visually Customizable On-Body Touch Sensors for Mobile Computing. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), Seoul, Korea, 18–23 April 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 2991–3000. [Google Scholar] [CrossRef]
  48. Kawahara, Y.; Hodges, S.; Cook, B.S.; Zhang, C.; Abowd, G.D. Instant Inkjet Circuits: Lab-Based Inkjet Printing to Support Rapid Prototyping of UbiComp Devices. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’13), Zurich, Switzerland, 8–12 September 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 363–372. [Google Scholar] [CrossRef] [Green Version]
  49. Harrison, C.; Benko, H.; Wilson, A.D. OmniTouch: Wearable Multitouch Interaction Everywhere. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ’11), Santa Barbara, CA, USA, 16–19 October 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 441–450. [Google Scholar] [CrossRef]
  50. Harrison, C.; Schwarz, J.; Hudson, S.E. TapSense: Enhancing Finger Interaction on Touch Surfaces. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ’11), Santa Barbara, CA, USA, 16–19 October 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 627–636. [Google Scholar] [CrossRef]
  51. Zimmerman, T.G.; Smith, J.R.; Paradiso, J.A.; Allport, D.; Gershenfeld, N. Applying Electric Field Sensing to Human-Computer Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’95), Denver, CO, USA, 7–11 May 1995; pp. 280–287. [Google Scholar] [CrossRef]
  52. Sato, M.; Poupyrev, I.; Harrison, C. Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12), Austin, TX, USA, 5–10 May 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 483–492. [Google Scholar] [CrossRef]
  53. Wimmer, R.; Baudisch, P. Modular and Deformable Touch-Sensitive Surfaces Based on Time Domain Reflectometry. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ’11), Santa Barbara, CA, USA, 16–19 October 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 517–526. [Google Scholar] [CrossRef] [Green Version]
  54. Zhang, Y.; Laput, G.; Harrison, C. Electrick: Low-Cost Touch Sensing Using Electric Field Tomography. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), Denver, CO, USA, 6–11 May 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1–14. [Google Scholar] [CrossRef]
  55. Grosse-Puppendahl, T.; Holz, C.; Cohn, G.; Wimmer, R.; Bechtold, O.; Hodges, S.; Reynolds, M.S.; Smith, J.R. Finding Common Ground: A Survey of Capacitive Sensing in Human-Computer Interaction. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17), Denver, CO, USA, 6–11 May 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 3293–3315. [Google Scholar] [CrossRef] [Green Version]
  56. Di Franco, P.D.G.; Camporesi, C.; Galeazzi, F.; Kallmann, M. 3D Printing and Immersive Visualization for Improved Perception of Ancient Artifacts. Presence Teleoper. Virtual Environ. 2015, 24, 243–264. [Google Scholar] [CrossRef]
  57. Wilson, P.F.; Stott, J.; Warnett, J.M.; Attridge, A.; Smith, M.P.; Williams, M.A. Evaluation of Touchable 3D-Printed Replicas in Museums. Curator Mus. J. 2017, 60, 445–465. [Google Scholar] [CrossRef] [Green Version]
  58. Balletti, C.; Ballarin, M. An Application of Integrated 3D Technologies for Replicas in Cultural Heritage. ISPRS Int. J. Geo Inf. 2019, 8, 285. [Google Scholar] [CrossRef] [Green Version]
  59. HTC Vive Technical Specification. Available online: https://www.vive.com/eu/product/vive (accessed on 15 March 2021).
  60. Leap Motion Technical Specification. Available online: https://www.ultraleap.com/product/leap-motion-controller/ (accessed on 15 March 2021).
  61. Vive Tracker Technical Specification. Available online: https://dl.vive.com/Tracker/Guideline/HTC_Vive_Tracker(2018)_%20Developer+Guidelines_v1.0.pdf (accessed on 15 March 2021).
  62. Niehorster, D.C.; Li, L.; Lappe, M. The accuracy and precision of position and orientation tracking in the HTC vive virtual reality system for scientific research. i-Perception 2017, 8, 2041669517708205. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Weichert, F.; Bachmann, D.; Rudak, B.; Fisseler, D. Analysis of the accuracy and robustness of the leap motion controller. Sensors 2013, 13, 6380–6393. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. NodeMCU Technical Specification. Available online: https://www.nodemcu.com/index_en.html (accessed on 15 March 2021).
  65. MPR121 Technical Specification. Available online: https://www.nxp.com/docs/en/data-sheet/MPR121.pdf (accessed on 15 March 2021).
  66. Unity. Available online: https://unity.com/ (accessed on 15 March 2021).
  67. Garland, M.; Heckbert, P.S. Surface Simplification Using Quadric Error Metrics. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’97), Los Angeles, CA, USA, 3–8 August 1997; pp. 209–216. [Google Scholar] [CrossRef] [Green Version]
  68. Agisoft Metashape. Available online: https://www.agisoft.com/ (accessed on 15 March 2021).
  69. Cignoni, P.; Callieri, M.; Corsini, M.; Dellepiane, M.; Ganovelli, F.; Ranzuglia, G. MeshLab: An Open-Source Mesh Processing Tool. In Proceedings of the Eurographics Italian Chapter Conference, Salerno, Italy, 2–4 July 2008; Scarano, V., Chiara, R.D., Erra, U., Eds.; The Eurographics Association: Aire-la-Ville, Switzerland, 2008. [Google Scholar] [CrossRef]
  70. Katifori, A.; Roussou, M.; Perry, S.; Cignoni, P.; Malomo, L.; Palma, G.; Drettakis, G.; Vizcay, S. The EMOTIVE project—Emotive virtual cultural experiences through personalized storytelling. In Proceedings of the CEUR Workshop Proceedings, Hotel Plejsy, Slovakia, 21–25 September 2018; Volume 2235. [Google Scholar]
Figure 1. (Left) Photos of the hardware of the system: HTC Vive, Leap Motion, Vive Tracker, and the 3D-printed support holding the replica and hosting the controller for capacitive touch sensing. (Right) Example of a painting session.
Figure 2. Scheme of the hardware and software components of the proposed system, with the channels used to exchange the data (touch, tracking, and visual data).
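To make the data exchange in Figure 2 concrete, the sketch below shows one way the VR application could receive touch events from the capacitive sensing controller. It is a minimal illustration only, assuming (hypothetically) that the controller pushes one UDP datagram per event in an "electrode,state" format; the transport, message layout, port number, and function names are not taken from the system described here.

```python
import socket

# Minimal sketch of the VR-application side of the "touch" channel in Figure 2.
# Assumption (not from the paper): the sensing controller sends one UDP datagram
# per capacitive event, formatted as "<electrode_id>,<0|1>".

UDP_PORT = 5005  # hypothetical port


def listen_for_touch_events(handler, port=UDP_PORT):
    """Block on a UDP socket and forward decoded touch events to `handler`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(64)
        try:
            electrode, state = data.decode("ascii").strip().split(",")
            handler(int(electrode), state == "1")
        except ValueError:
            continue  # ignore malformed packets


if __name__ == "__main__":
    listen_for_touch_events(
        lambda e, touched: print(f"electrode {e}: {'touch' if touched else 'release'}")
    )
```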
Figure 3. 3D models of the 3D-printed support. (a) Exploded view of the different pieces of the support: base (yellow); handle (light blue); Vive Tracker (black); palette buttons (green); calibration button (orange); replica with joint (pink). (b) View of the mounted model.
Figure 4. Photos of a mounted device from different points of view. The black surface is painted with a conductive coating. (a) Front view. (b) Right view. (c) Back view. (d) Left view. (e) Bottom view.
Figure 5. Photos of the 3D-printed pieces of the support. (Left) Base and handle. (Right) Buttons painted with the conductive coating.
Figure 6. (a) Leap Motion device mounted in front of the helmet of the HTC Vive. (b) Stylized model of the hands tracked by the Leap Motion.
Figure 7. Photos of the 3D-printed support that hosts the shields used to detect and communicate touches on the conductive surfaces: (left) the mounted device; (right) the individual pieces.
Figure 8. Ray tracing directions used to identify the point touched by the user. The lines show the three directions, while the cones show the starting points and directions of the rays. The blue line is the direction of the index finger. The red line is the view direction, which connects the index fingertip with the helmet position. The green line is the direction of the palm. The images on the bottom show real examples in which each ray tracing direction gives the best result: (b) index direction; (c) view direction; (d) palm direction.
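The three candidate directions of Figure 8 can be combined roughly as in the following Python/NumPy sketch, which casts all three rays against the simplified 3D model and keeps the intersection closest to the tracked fingertip. The selection rule and all names are illustrative assumptions, not the Unity implementation of the system.

```python
import numpy as np


def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle intersection; returns the hit distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None  # ray parallel to the triangle
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None


def touched_point(fingertip, index_dir, head_pos, palm_dir, vertices, faces):
    """Cast the three candidate rays of Figure 8 and keep the hit closest to the fingertip."""
    view_dir = fingertip - head_pos                  # red: helmet -> fingertip
    rays = [(fingertip, index_dir),                  # blue: index finger direction
            (head_pos, view_dir),                    # red: view direction
            (fingertip, palm_dir)]                   # green: palm direction
    best, best_dist = None, np.inf
    for origin, direction in rays:
        direction = direction / np.linalg.norm(direction)
        for f in faces:
            t = ray_triangle(origin, direction, *vertices[f])
            if t is None:
                continue
            hit = origin + t * direction
            d = np.linalg.norm(hit - fingertip)
            if d < best_dist:
                best, best_dist = hit, d
    return best  # None if no ray intersects the model
```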
Figure 9. Calibration issues due to hand-tracking error. In both cases, the user is touching the object in the real world, but in VR the position of the fingertip is slightly inside (left) or outside (right) the virtual 3D model.
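A simple way to compensate for the mismatch shown in Figure 9 is a per-session offset estimated when the user touches the physical calibration button of the support (Figure 3). The sketch below assumes this strategy purely for illustration; it is not necessarily the calibration procedure used in the system, and the class and method names are hypothetical.

```python
import numpy as np


class FingertipCalibration:
    """Per-session offset correction for the fingertip mismatch shown in Figure 9.

    Assumption (illustrative only): when the capacitive calibration button reports
    a touch, the vector from the tracked fingertip to the button's known position
    on the virtual model is stored and added to subsequent fingertip samples.
    """

    def __init__(self, button_position_vr):
        self.button_position_vr = np.asarray(button_position_vr, dtype=float)
        self.offset = np.zeros(3)

    def on_calibration_touch(self, tracked_fingertip):
        # The user is physically touching the button, so any residual vector
        # is tracking/registration error to compensate.
        self.offset = self.button_position_vr - np.asarray(tracked_fingertip, dtype=float)

    def correct(self, tracked_fingertip):
        # Apply the stored offset to a new tracked fingertip position.
        return np.asarray(tracked_fingertip, dtype=float) + self.offset
```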
Figure 10. Objects used to test the proposed system. For each object, the figure shows the 3D model reconstructed with Agisoft Metashape, the reduced version computed by quadric edge collapse, needed for the ray-tracing procedure described in Section 3.2, and the color mapping of the error introduced by the decimation, with the corresponding mean and root mean square errors. The bar on the side shows the mapping of the colors to the distance between the original 3D model and the simplified version.
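The mean and root mean square errors reported in Figure 10 can be approximated with a short script like the one below, which measures the distance from each vertex of the original model to the nearest vertex of the simplified one. This is a coarse stand-in for the sampled point-to-surface distances typically computed by tools such as MeshLab; the function name and inputs are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree


def decimation_error(original_vertices, simplified_vertices):
    """Approximate mean / RMS geometric error introduced by mesh simplification.

    Note: vertex-to-nearest-vertex distances are used here as a rough proxy for
    the point-to-surface distances behind the values shown in Figure 10.
    """
    tree = cKDTree(np.asarray(simplified_vertices, dtype=float))
    dists, _ = tree.query(np.asarray(original_vertices, dtype=float))
    mean_err = float(dists.mean())
    rms_err = float(np.sqrt(np.mean(dists ** 2)))
    return mean_err, rms_err
```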
Figure 11. Photos of the devices made for the three test objects and renderings of their VR counterparts.
Figure 12. Photos of a user during the VR experience.
Figure 13. Examples of personalization created with the proposed system.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
