Tangible Visualization Table for Intuitive Data Display
- We develop a new tangible visualization table that supports intuitive and realistic visualization of terrain data transferred in real time from a remote server.
- We propose a projection-mapping technique that minimizes distortion of the texture image on the display surface shaped by the linear actuators.
- Our system provides an intuitive and efficient gesture-based interface for controlling the visualization of remote terrain data.
2. Related Work
3. System Architecture
3.1. Hardware Structure
3.2. System Software
4. Projection Mapping
4.1. Calibration of the Kinect with the Projector
4.2. Texture Deformation
5. User Interfaces/Experiences
5.1. Gesture Recognition
- Smooth: Smooth the image to suppress noise before further processing.
- Find Palm: Find the center point and radius of the largest circle that fits inside the palm region of the image (refer to Figure 13a). Here, the MinMaxLoc function of OpenCV is used.
- Clip Image: Remove the part of the image above the wrist; it is not needed for gesture recognition, and discarding it reduces both the amount of calculation and the number of errors in later stages. The center point and radius found in the Find Palm step are used in this calculation.
- Find Contours: Find the outline of the hand using the FindContours function of OpenCV. The ApproxNone option is used so that all of the outline coordinates of the hand are available for gesture recognition.
- Find Convex Hull: Find the convex hull of the contour. This provides the information needed to find the convexity defects in a later step.
- Find Contour Center Position and Radius: Find the radius of the entire hand and its center point. The information found here is used to identify gestures.
- Find Convexity Defects: Using the contour and the convex hull, find the depth points that lie between the fingers (refer to Figure 13a).
- Find Fingers: Find the positions of the fingertips using the information obtained in the Find Convexity Defects step. Each convexity defect contains the coordinates of the depth point between two fingers and the endpoint of a fingertip. Candidate fingertips can then be filtered using anatomical features: fingers other than the thumb cannot be spread more than 90 degrees apart, and the length of a finger is greater than the radius of the palm.
- Find Finger Direction: For each fingertip found in the Find Fingers step, locate its index in the array storing the contour, then take the contour points a fixed interval before and after that index. Subtracting the midpoint of these two points from the fingertip point yields a vector representing the direction of the finger.
- Find Thumb: Even though it is not used in this system, the thumb is detected to distinguish the right hand from the left. Its main features are that it is significantly shorter than the other fingers, that the gap between the thumb and the index finger is the largest, and that the length from the tip of the thumb to its depth point differs considerably from the length from the tip of the index finger to its depth point (refer to Figure 13a).
- Find Gesture: Specify the gesture types based on the data found so far. The gestures used in this paper are shown in Table 3.
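The Find Finger Direction and Find Gesture steps above can be sketched in plain Python. This is a minimal illustration of the decision logic, not the authors' implementation; the function names, the neighbor interval `k`, and the feature arguments are our assumptions.

```python
def finger_direction(contour, tip_idx, k):
    """Direction of a finger: the fingertip point minus the midpoint of the
    contour points k indices before and after it (cf. Find Finger Direction)."""
    n = len(contour)
    before = contour[(tip_idx - k) % n]
    after = contour[(tip_idx + k) % n]
    mid = ((before[0] + after[0]) / 2, (before[1] + after[1]) / 2)
    tip = contour[tip_idx]
    return (tip[0] - mid[0], tip[1] - mid[1])

def classify_gesture(num_fingers, thumb_present, num_contours):
    """Map the extracted hand features to the gesture labels of Table 3."""
    if num_fingers >= 4:
        return "open palm"   # cancel opening of the menu
    if num_fingers == 3 and num_contours == 2:
        return "ok sign"     # transform the displayed data
    if num_fingers == 1 and not thumb_present:
        return "pointing"    # select a menu item / request data
    if num_fingers <= 1:
        return "close palm"  # open the menu
    return "unknown"
```

In the real system the contour, fingertip indices, and finger count come from the OpenCV pipeline described above; here they are passed in directly so the logic can be tested in isolation.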
5.2. UI/UX Design
5.2.1. Menu Open Interface
5.2.2. Menu Selection Interface
5.2.3. Data Request
5.2.4. Data Manipulation
6. Experimental Results
- Question 1: Is it intuitive to use the gesture-based interfaces?
- Question 2: Is the visualization table more intuitive and effective than a 2D screen for understanding terrain data?
- Question 3: Do you see any distortions of the projection mapping?
- All interfaces are easy to learn and use even for non-experts.
- There is a small delay between user interaction and the response of the display surface.
- Using more actuators would enhance the expressive power of the display.
- A larger work envelope would be desirable.
Conflicts of Interest
- Boring, E.; Pang, A. Directional flow visualization of vector fields. In Proceedings of the 7th Conference on Visualization '96, San Francisco, CA, USA, 28–29 October 1996; pp. 389–392. [Google Scholar]
- Cabral, B.; Leedom, L.C. Imaging vector fields using line integral convolution. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '93), Anaheim, CA, USA, 2–6 August 1993; pp. 263–270. [Google Scholar]
- Lee, D.H.; Kang, M.K.; Yun, T.S. Development of a tangible interface using multi-touch display on an irregular surface. J. Korea Ind. Inf. Syst. Soc. 2011, 16, 65–72. [Google Scholar] [CrossRef]
- Leithinger, D.; Follmer, S.; Olwal, A.; Ishii, H. Shape displays: Spatial interaction with dynamic physical form. IEEE Comput. Graph. Appl. 2015, 35, 5–11. [Google Scholar] [CrossRef] [PubMed]
- Leithinger, D.; Lakatos, D.; De Vincenzi, A.; Blackshaw, M.; Ishii, H. Direct and gestural interaction with Relief: A 2.5D shape display. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011; pp. 541–548. [Google Scholar]
- Levoy, M. Efficient ray tracing of volume data. ACM Trans. Graph. (TOG) 1990, 9, 245–261. [Google Scholar] [CrossRef]
- Poupyrev, I.; Nashida, T.; Okabe, M. Actuation and tangible user interfaces: The vaucanson duck, robots, and shape displays. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, Baton Rouge, LA, USA, 15–17 February 2007; pp. 205–212. [Google Scholar]
- Lorensen, W.E.; Cline, H.E. Marching cubes: A high resolution 3D surface construction algorithm. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’87), Anaheim, CA, USA, 27–31 July 1987; pp. 163–169. [Google Scholar]
- Reed, S.; Kreylos, O.; Hsi, S.; Kellogg, L.; Schladow, G.; Yikilmaz, M.; Segale, H.; Silverman, J.; Yalowitz, S.; Sato, E. Shaping watersheds exhibit: An interactive, augmented reality sandbox for advancing earth science education. In Proceedings of the AGU Fall Meeting Abstracts, San Francisco, CA, USA, 15–19 December 2014. [Google Scholar]
- Wang, C.; Yu, H.; Ma, K.L. Importance-driven time-varying data visualization. IEEE Trans. Vis. Comput. Graph. 2008, 14, 1547–1554. [Google Scholar]
- Woodring, J.; Wang, C.; Shen, H.W. High dimensional direct rendering of time-varying volumetric data. In Proceedings of the 14th IEEE Visualization 2003 (VIS’03), Seattle, WA, USA, 19–24 October 2003; pp. 417–424. [Google Scholar]
- Arduino. Arduino IDE. Available online: https://www.arduino.cc/ (accessed on 10 December 2017).
- Unity3d. Unity3d Game Engine. Available online: https://unity3d.com/ (accessed on 10 December 2017).
- OpenCV. OpenCV Library. Available online: http://opencv.org/ (accessed on 10 December 2017).
- Leithinger, D.; Ishii, H. Relief: A scalable actuated shape display. In Proceedings of the Fourth International Conference on Tangible, Embedded, and Embodied Interaction, Cambridge, MA, USA, 25–27 January 2010; pp. 221–222. [Google Scholar]
- Piper, B.; Ratti, C.; Ishii, H. Illuminating clay: A 3D tangible interface for landscape analysis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Minneapolis, MN, USA, 20–25 April 2002; pp. 355–362. [Google Scholar]
- Hilliges, O.; Izadi, S.; Wilson, A.D.; Hodges, S.; Garcia-Mendoza, A.; Butz, A. Interactions in the air: Adding further depth to interactive tabletops. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, Victoria, BC, Canada, 4–7 October 2009; pp. 139–148. [Google Scholar]
- Follmer, S.; Leithinger, D.; Olwal, A.; Hogge, A.; Ishii, H. inFORM: Dynamic physical affordances and constraints through shape and object actuation. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, St. Andrews, UK, 8–11 October 2013; pp. 417–426. [Google Scholar]
- Yeo, H.S.; Lee, B.G.; Lim, H. Hand tracking and gesture recognition system for human-computer interaction using low-cost hardware. Multimed. Tools Appl. 2015, 74, 2687–2715. [Google Scholar] [CrossRef]
- Mine, M.R.; van Baar, J.; Grundhofer, A.; Rose, D.; Yang, B. Projection-based augmented reality in Disney Theme Parks. Computer 2012, 45, 32–40. [Google Scholar] [CrossRef]
- Herrera, D.; Kannala, J.; Heikkilä, J. Accurate and practical calibration of a depth and color camera pair. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Seville, Spain, 29–31 August 2011; pp. 437–445. [Google Scholar]
- Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980. [Google Scholar] [CrossRef]
- Velizhev, A. GML C++ Camera Calibration Toolbox. Available online: http://graphics.cs.msu.ru/en/node/909 (accessed on 10 December 2017).
- Shah, S.; Aggarwal, J. Intrinsic parameter calibration procedure for a (high-distortion) fish-eye lens camera with distortion model and accuracy estimation. Pattern Recognit. 1996, 29, 1775–1788. [Google Scholar] [CrossRef]
- Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes in C; Cambridge University Press: Cambridge, UK, 1992. [Google Scholar]
- Malvar, S.; He, L.W.; Cutler, R. High-quality linear interpolation for demosaicing of bayer-patterned color images. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; pp. 485–488. [Google Scholar]
- Pavlovic, V.I.; Sharma, R.; Huang, T.S. Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 677–695. [Google Scholar] [CrossRef]
- Bretzner, L.; Laptev, I.; Lindeberg, T. Hand gesture recognition using multi-scale colour features, hierarchical models and particle filtering. In Proceedings of the 5th IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, 21 May 2002; pp. 423–428. [Google Scholar]
- Bradski, G.; Kaehler, A. Computer Vision with the OpenCV Library; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2008. [Google Scholar]
| Field | Description |
|---|---|
| System ID | Distinguishes instructions for the LEDs from those for the actuators |
| Command ID | Instruction codes agreed between the Arduino and the PC |
| State | State of the Arduino and the PC |
| Data Length | Length of the transferred data |
| Data | The actual transferred data |
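Given the packet fields listed above, a hypothetical encoding can be sketched in Python. The byte widths are our assumptions (the paper does not specify them): one byte each for System ID, Command ID, State, and Data Length, followed by the payload.

```python
import struct

def pack_packet(system_id, command_id, state, data):
    """Serialize one PC<->Arduino packet: four 1-byte header fields
    (assumed widths) followed by the raw payload bytes."""
    if len(data) > 255:
        raise ValueError("payload too long for a 1-byte length field")
    return struct.pack("BBBB", system_id, command_id, state, len(data)) + data

def unpack_packet(packet):
    """Parse a packet produced by pack_packet back into its fields."""
    system_id, command_id, state, length = struct.unpack_from("BBBB", packet)
    return system_id, command_id, state, packet[4:4 + length]
```

A fixed-width header with an explicit length field keeps parsing on the Arduino side trivial: read four bytes, then read exactly `Data Length` more.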
| Intrinsic Parameters | Calculated Values |
|---|---|
| Distortion coefficient | (0.014551, −0.003946) |
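Since two distortion coefficients are reported, they plausibly correspond to the standard two-term radial model, where a normalized point (x, y) at radius r maps to x(1 + k1·r² + k2·r⁴). The following sketch applies that model under this assumption; it is our reading, not stated explicitly in the table.

```python
# Two-term radial distortion applied to normalized image coordinates (x, y).
# Assumes the reported pair is (k1, k2) of the standard radial model.
K1, K2 = 0.014551, -0.003946

def distort(x, y, k1=K1, k2=K2):
    r2 = x * x + y * y                      # squared radius from the center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # 1 + k1*r^2 + k2*r^4
    return x * scale, y * scale
```

With coefficients this small, points near the image center are almost unaffected, which is consistent with the low mapping distortion reported for the calibrated projector.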
| Gesture Shapes | # of Fingers | Feature | Meaning |
|---|---|---|---|
| Open Palm | 4–5 | The distance between the two reference points is long | A gesture to cancel opening of the menu |
| Close Palm | 0–1 | The distance between the two reference points is close | A gesture to open the menu |
| Ok Sign | 3 | The number of contours is 2 | A gesture to transform data displayed on the system |
| Pointing | 1 | No thumb | A gesture to select the menu or request data from the server |
| Component | Specification |
|---|---|
| CPU | Intel i7-4790 @ 3.6 GHz |
| GPU | NVIDIA GeForce GTX 770 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kim, J.; Lee, C.; Yoon, S.-H.; Park, S. Tangible Visualization Table for Intuitive Data Display. Symmetry 2017, 9, 316. https://doi.org/10.3390/sym9120316