Search Results (6)

Search Parameters:
Keywords = distant object manipulation

16 pages, 888 KB  
Article
The ‘Conceptual Distance Effect’ in the Causal Effects Under Experimental Manipulation Between Attitude and Stereotype
by Yang Yang, Xue Bai, Jiejie Liao, Yujie Chen and Lei Mo
Behav. Sci. 2026, 16(2), 287; https://doi.org/10.3390/bs16020287 - 17 Feb 2026
Viewed by 428
Abstract
Understanding how attitudes and stereotypes influence each other is central to social cognition, yet prior findings have been inconsistent, with some indicating strong connections and others suggesting separation. To help explain these discrepancies, we introduce the construct of conceptual distance, defined as the evaluative proximity between attitude objects and stereotypical trait dimensions (e.g., warmth, morality, competence). Across four experiments, we first measured conceptual distance using a forced-choice task that estimated how closely each trait dimension aligns with positive or negative valence. We then tested whether the strength of causal effects between attitudes and stereotypes corresponds to these distances. Attitudes or stereotypes were manipulated using evaluative conditioning (EC), and their effects were measured through either explicit self-report ratings or Implicit Association Tests (IATs). Results consistently showed stronger causal effects for stereotype dimensions that were evaluatively closer to attitudes (warmth and morality) than for more distant ones (competence). These findings offer initial evidence for a correspondence between conceptual distance and the strength of experimentally induced influence. The study contributes to theories of causal cognition and social representation, and offers implications for designing interventions that aim to reduce stereotype-based bias and promote more flexible social inferences.

21 pages, 2610 KB  
Article
GazeRayHand: Combining Gaze Ray and Hand Interaction for Distant Object Manipulation
by Sei Kang, Jaejoon Jeong, Soo-Hyung Kim, Hyung-Jeong Yang, Gun A. Lee and Seungwon Kim
Appl. Sci. 2025, 15(13), 7065; https://doi.org/10.3390/app15137065 - 23 Jun 2025
Cited by 1 | Viewed by 1612
Abstract
In this paper, we introduce novel techniques for distant object manipulation, named GazeRayHand and GazeRayHand2. The two techniques translate and rotate objects by using the gaze ray as a reference line and the hand position relative to the gaze ray. GazeRayHand2 additionally supports gaze control for quick, long-distance object translation. We evaluated these techniques against two other recent techniques: Gaze&Pinch and a modified Gaze Beam Guided. In a user study, GazeRayHand and GazeRayHand2 not only performed comparably to Gaze&Pinch, which is known for high performance, but also showed notable benefits: both significantly reduced unnecessary translation compared to the other techniques and were rated highly by users for intuitiveness and ease of use. In contrast, Gaze&Pinch required two-handed interaction for rotation, causing inconvenience, and the modified Gaze Beam Guided performed worst among the four techniques because it forced unnecessary interaction on users.
(This article belongs to the Special Issue Emerging Technologies in Innovative Human–Computer Interactions)
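The abstract describes the core mapping — the gaze ray acts as a reference line and the hand's position relative to that ray drives the object — but gives no implementation. A minimal sketch of one plausible reading, where the hand's perpendicular offset from the gaze ray is scaled into an object displacement (the function name, the gain, and the exact mapping are assumptions, not the paper's method):

```python
import numpy as np

def translate_along_gaze_ray(obj_pos, eye_pos, gaze_dir, hand_pos, gain=2.0):
    """Illustrative GazeRayHand-style translation: decompose the
    eye-to-hand vector into components along and perpendicular to the
    gaze ray, and move the object by the scaled perpendicular offset."""
    d = gaze_dir / np.linalg.norm(gaze_dir)   # unit gaze direction
    to_hand = hand_pos - eye_pos
    along = np.dot(to_hand, d) * d            # component along the ray
    offset = to_hand - along                  # perpendicular offset from the ray
    return obj_pos + gain * offset            # scaled object displacement
```

With the gain above, a 10 cm hand offset from the ray moves the distant object 20 cm in the same direction; the real technique presumably also handles rotation, which this sketch omits.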

28 pages, 5168 KB  
Article
GazeHand2: A Gaze-Driven Virtual Hand Interface with Improved Gaze Depth Control for Distant Object Interaction
by Jaejoon Jeong, Soo-Hyung Kim, Hyung-Jeong Yang, Gun Lee and Seungwon Kim
Electronics 2025, 14(13), 2530; https://doi.org/10.3390/electronics14132530 - 22 Jun 2025
Viewed by 2368
Abstract
Research on Virtual Reality (VR) interfaces for distant object interaction has been carried out to improve user experience. Since hand-only interfaces and gaze-only interfaces have limitations such as physical fatigue or restricted usage, VR interaction interfaces using both gaze and hand input have been proposed. However, current gaze + hand interfaces still have restrictions such as difficulty in translating along the gaze ray direction, less realistic interaction methods, or limited rotation support. This study aims to design a new distant object interaction technique that supports hand-based interaction with a high degree of freedom in immersive VR. We developed GazeHand2, a hand-based object interaction technique featuring a new depth control that enables free object manipulation in VR. Building on the strengths of the original GazeHand, GazeHand2 controls the rate of change of the gaze depth using the relative position of the hand, allowing users to translate an object to any position. To validate our design, we conducted a user study on object manipulation comparing GazeHand2 with other gaze + hand interfaces (Gaze+Pinch and ImplicitGaze). Results showed that, compared to the other conditions, GazeHand2 reduced hand movements by 39.3% to 54.3% and head movements by 27.8% to 47.1% in the 3 m and 5 m tasks. It also significantly improved overall user-experience ratings (0.69 to 1.12 pt higher than Gaze+Pinch and 1.18 to 1.62 pt higher than ImplicitGaze). Furthermore, over half of the participants preferred GazeHand2 because it supports convenient, efficient object translation and realistic hand-based object manipulation. We conclude that GazeHand2 supports simple and effective distant object interaction with reduced physical fatigue and a better user experience than the other interfaces in immersive VR, and we suggest design directions to improve interaction accuracy and user convenience in future work.
(This article belongs to the Section Computer Science & Engineering)
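The key idea the abstract names — controlling the *rate of change* of gaze depth from the hand's relative position, rather than mapping position to depth directly — can be sketched as a simple rate controller. Everything concrete here (the gain, the dead zone, the forward-axis offset convention) is an assumption for illustration, not the published design:

```python
def update_gaze_depth(depth, hand_z_offset, dt, rate_gain=1.5, dead_zone=0.02):
    """Rate-based gaze-depth control: pushing the hand forward or backward
    relative to a rest pose changes the gaze depth at a proportional rate,
    with a small dead zone so a resting hand holds the current depth."""
    if abs(hand_z_offset) < dead_zone:
        return depth                      # inside dead zone: hold depth
    velocity = rate_gain * hand_z_offset  # depth velocity proportional to offset
    return max(0.0, depth + velocity * dt)
```

Rate control like this is what makes long-distance translation practical: the user holds a small offset and the depth keeps integrating, instead of the hand's limited reach bounding the reachable depth.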

21 pages, 3610 KB  
Article
Comparing and Contrasting Near-Field, Object Space, and a Novel Hybrid Interaction Technique for Distant Object Manipulation in VR
by Wei-An Hsieh, Hsin-Yi Chien, David Brickler, Sabarish V. Babu and Jung-Hong Chuang
Virtual Worlds 2024, 3(1), 94-114; https://doi.org/10.3390/virtualworlds3010005 - 21 Feb 2024
Cited by 3 | Viewed by 2451
Abstract
In this contribution, we propose a hybrid interaction technique that integrates near-field and object-space interaction techniques for manipulating objects at a distance in virtual reality (VR). The objective of the hybrid technique was to seamlessly leverage the strengths of both the near-field and object-space manipulation techniques. We employed the bimanual near-field metaphor with scaled replica (BMSR) as our near-field interaction technique, which enabled multilevel degrees-of-freedom (DoF) separation transformations, such as 1~3DoF translation, 1~3DoF uniform and anchored scaling, 1DoF and 3DoF rotation, and 6DoF simultaneous translation and rotation, with the enhanced depth perception and fine motor control provided by near-field manipulation. The object-space interaction technique we utilized was the classic Scaled HOMER, which is known to be effective and appropriate for coarse transformations in distant object manipulation. In a repeated-measures within-subjects evaluation, we empirically compared the three interaction techniques for accuracy, efficiency, and economy of movement in pick-and-place, docking, and tunneling tasks in VR. Our findings revealed that the near-field BMSR technique outperformed the object-space Scaled HOMER technique in accuracy and economy of movement, but participants performed more slowly overall with BMSR. Additionally, participants preferred the hybrid interaction technique, as it allowed them to switch seamlessly between the constituent BMSR and Scaled HOMER techniques depending on the level of accuracy, precision, and efficiency required.
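For readers unfamiliar with the object-space baseline, a minimal sketch of the distance-ratio scaling commonly described for HOMER-style techniques may help: at selection the ratio of object distance to hand distance is recorded, and subsequent hand displacements are scaled by that ratio. This is an illustration of the general family, not the exact Scaled HOMER variant evaluated in the paper:

```python
import numpy as np

class HomerStyleManipulator:
    """Sketch of HOMER-style object-space manipulation (details assumed):
    small hand motions produce large motions of a distant object, scaled
    by the object-to-hand distance ratio captured at selection time."""

    def select(self, eye, hand, obj):
        # Record scale factor and reference poses at the moment of selection.
        self.scale = np.linalg.norm(obj - eye) / np.linalg.norm(hand - eye)
        self.hand0, self.obj0 = hand.copy(), obj.copy()

    def manipulate(self, hand):
        # Scale the hand's displacement since selection into object space.
        return self.obj0 + self.scale * (hand - self.hand0)
```

The scaling is what makes the technique fast but coarse at a distance, which is consistent with the paper's finding that the near-field BMSR technique won on accuracy and economy of movement while HOMER-style manipulation suited coarse transformations.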

17 pages, 2775 KB  
Article
Mixed-Reality-Enhanced Human–Robot Interaction with an Imitation-Based Mapping Approach for Intuitive Teleoperation of a Robotic Arm-Hand System
by Yun-Peng Su, Xiao-Qi Chen, Tony Zhou, Christopher Pretty and Geoffrey Chase
Appl. Sci. 2022, 12(9), 4740; https://doi.org/10.3390/app12094740 - 8 May 2022
Cited by 37 | Viewed by 13161
Abstract
This paper presents an integrated mapping of motion and visualization scheme based on a Mixed Reality (MR) subspace approach for the intuitive and immersive telemanipulation of robotic arm-hand systems, and validates and compares the effectiveness of different control-feedback methods for the teleoperation system. The robotic arm-hand system consists of a 6 Degrees-of-Freedom (DOF) industrial manipulator and a low-cost 2-finger gripper, which can be manipulated in a natural manner by novice users physically distant from the working site. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time 3D visual feedback from the robot working site. Imitation-based velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, enabling spatial velocity-based control of the robot Tool Center Point (TCP). The user control space and robot working space are overlaid through the MR subspace, so the local user and a digital twin of the remote robot share the same environment. The MR-based motion and visualization mapping scheme for telerobotics is compared to conventional 2D Baseline and MR tele-control paradigms over two tabletop object manipulation experiments, and a user survey of 24 participants was conducted to demonstrate the effectiveness and performance enhancements enabled by the proposed system. The MR-subspace-integrated 3D scheme reduced aggregate task completion time by 48% compared to the 2D Baseline module and by 29% compared to the MR SpaceMouse module, while perceived workload decreased by 32% and 22% relative to the 2D Baseline and MR SpaceMouse approaches, respectively.
(This article belongs to the Topic Virtual Reality, Digital Twins, the Metaverse)
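Velocity-centric mapping, as named in the abstract, commands the robot's TCP with a velocity derived from the operator's hand velocity rather than teleporting the TCP to the hand's position. A rough sketch of that idea — the gain, speed limit, and finite-difference velocity estimate are assumptions, not the paper's implementation:

```python
import numpy as np

def tcp_velocity_command(hand_pos, prev_hand_pos, dt, gain=0.8, v_max=0.25):
    """Sketch of velocity-centric motion mapping: the operator's hand
    velocity in the control space is scaled into a commanded Tool Center
    Point (TCP) velocity, clamped to a safety speed limit."""
    v_hand = (hand_pos - prev_hand_pos) / dt   # finite-difference hand velocity
    v_cmd = gain * v_hand                      # scale into robot working space
    speed = np.linalg.norm(v_cmd)
    if speed > v_max:                          # clamp to the speed limit
        v_cmd *= v_max / speed
    return v_cmd
```

A velocity-based mapping decouples the operator's workspace from the robot's, which is what lets the MR subspace overlay the two spaces without requiring them to be the same size.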

18 pages, 942 KB  
Article
Sustainable Multi-Modal Sensing by a Single Sensor Utilizing the Passivity of an Elastic Actuator
by Takashi Takuma, Ken Takamine and Tatsuya Masuda
Actuators 2014, 3(2), 66-83; https://doi.org/10.3390/act3020066 - 12 May 2014
Cited by 2 | Viewed by 8521
Abstract
When a robot equipped with compliant joints driven by elastic actuators contacts an object and its joints are deformed, multi-modal information, including the magnitude and direction of the applied force and the deformation of the joint, is used to enhance the performance of the robot in tasks such as dexterous manipulation. In conventional approaches, the sensors used to obtain this multi-modal information are attached at the point of contact where the force is applied and at the joint. However, this approach is not sustainable for daily use in robots, i.e., not durable or robust, because the sensors can be damaged by excessive force and worn down by repeated contacts. Further, multiple types of sensors are required to measure such physical values, which adds to the complexity of the robot's device system. In our approach, a single type of sensor is used, located at a point distant from the contact point and the joint, and the information is obtained indirectly by measuring physical parameters that are influenced by the applied force and the joint deformation. In this study, we employ the McKibben pneumatic actuator, whose inner pressure changes passively when a force is applied to it. We derive the relationships between this information and the pressures of a two-degrees-of-freedom (2-DoF) joint mechanism driven by four pneumatic actuators. Experimental results show that the multi-modal information can be obtained from the set of pressures measured before and after the force is applied. Further, we apply our principle to obtain the stiffness values of contacting objects, which can subsequently be categorized using the aforementioned relationships.
(This article belongs to the Special Issue Soft Actuators)
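The inversion the abstract describes — recovering force and joint deformation from the pressure changes of the four actuators — can be illustrated with a linear calibration model solved by least squares. The matrix below is entirely hypothetical: the paper derives the actual physical relationships for its 2-DoF mechanism, which need not be linear:

```python
import numpy as np

# Hypothetical calibration matrix C mapping the state vector
# [force_x, force_y, joint_deflection] to the pressure changes of the
# four actuators; in practice it would be fitted from experiments.
C = np.array([[ 1.0,  0.2, 0.5],
              [-1.0,  0.2, 0.5],
              [ 0.2,  1.0, 0.5],
              [ 0.2, -1.0, 0.5]])

def estimate_state(p_before, p_after):
    """Recover force components and joint deflection from the pressure
    deltas of the four pneumatic actuators via least squares."""
    dp = np.asarray(p_after) - np.asarray(p_before)  # before/after pressure change
    x, *_ = np.linalg.lstsq(C, dp, rcond=None)       # solve C @ x ≈ dp
    return x  # [force_x, force_y, deflection] in illustrative units
```

Four pressure readings for three unknowns make the system overdetermined, so the least-squares fit also gives some robustness to sensor noise — one motivation for using all four actuators rather than a minimal subset.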
