Article

A Waist-Mounted Interface for Mobile Viewpoint-Height Transformation Affecting Spatial Perception

1 Empowerment Informatics Program, University of Tsukuba, Tsukuba 3058573, Japan
2 Department of Cybernics Medicine, Institute of Medicine, University of Tsukuba, Tsukuba 3058575, Japan
3 Institute of Systems and Information Engineering, University of Tsukuba, Tsukuba 3058573, Japan
* Author to whom correspondence should be addressed.
Sensors 2026, 26(2), 372; https://doi.org/10.3390/s26020372
Submission received: 30 November 2025 / Revised: 22 December 2025 / Accepted: 26 December 2025 / Published: 6 January 2026
(This article belongs to the Special Issue Sensors and Wearables for AR/VR Applications)

Abstract

Visual information shapes spatial perception and body representation in human augmentation. However, the perceptual consequences of viewpoint-height changes produced by sensor–display geometry are not well understood. To address this gap, we developed an interface that maps a waist-mounted stereo fisheye camera to an eye-level viewpoint on a head-mounted display in real time. Geometric and timing calibration kept latency low enough to preserve a sense of agency and enable stable untethered walking. In a within-subject study comparing head- and waist-level viewpoints, participants approached adjustable gaps, rated passability confidence (1–7), and attempted passage when confident. We also recorded walking speed and assessed post-task body representation using a questionnaire. High gaps were judged passable and low gaps were not, irrespective of viewpoint. At the middle gap, confidence decreased with a head-level viewpoint and increased with a waist-level viewpoint, and walking speed decreased when a waist-level viewpoint was combined with a chest-height gap, consistent with added caution near the decision boundary. Body image reports most often indicated a lowered head position relative to the torso, consistent with visually driven rescaling rather than morphological change. These findings show that a waist-mounted interface for mobile viewpoint-height transformation can reliably shift spatial perception.

1. Introduction

Human augmentation aims to extend or support human perception and action in everyday contexts, and it is most effective when devices couple tightly to the user’s body and environment. This perspective follows broader accounts of embodiment and self-organization, where sensorimotor couplings can reorganize perception and action [1], and recent surveys outline how augmentations—ranging from wearables to extended reality (XR) interfaces—shift what actions feel possible [2]. Scoping reviews on the design for wearability further emphasize that attachment stability, comfort, and unobtrusive integration into everyday routines are critical but still under-specified targets for wearable devices [3]. Concrete domains such as exoskeletons, orthoses, and advanced prostheses make these links explicit by showing how design choices alter control policies and embodiment reports [4,5].
A large body of work shows that multisensory consistency can shift the felt boundaries of the body: the rubber hand illusion and full-body variants demonstrate that synchronized visuotactile cues induce ownership and alter perceived self-location [6,7,8]. Psychometric and theoretical accounts organize embodiment into ownership, agency, and self-location [9,10]. Beyond these classics, interactive and added limb systems reveal plasticity in body representation and control, including homuncular flexibility and changes in neural/body maps with extra robotic hands and arms [11,12,13,14,15], and prosthetic feedback has shown that illusory movement can improve control and modulate embodiment [16,17].
In addition to multisensory ownership, embodiment can also be shaped by purely visual scaling cues that relate the body to the environment—most notably eye height, which provides a strong metric for judging size and distance and thereby links the body to the world. Manipulating apparent body size or viewpoint height shifts perceived scale and size judgments in otherwise unchanged environments [18,19], and child-like or height-transforming embodiments in virtual reality (VR) can modulate object size perception, social evaluation, and affect [20,21,22]. For example, in immersive VR, conflicts between visually specified and posturally specified eye height produce predictable biases in egocentric distance estimates [23], and simulated eye-height manipulations can rescale perceived object size [24]. Similarly, body-size illusions causally rescale perceived size and distance in otherwise unchanged scenes [18]. Avatar and perspective choices also influence body-part and self-localization and interpersonal behavior [25,26,27], with third-person viewpoints yielding distinct perceptual and attitudinal effects from first-person settings [28,29].
Moving from avatars to the real world, several systems have altered viewpoint height during natural locomotion. BigRobot elevated the first-person view to evoke a “giant” experience, whereas CHILDHOOD lowered it to a child-like vantage; both studies documented changes in subjective experience and interpersonal distance [30,31]. Related telexistence-style platforms combine first-person video with externalized or collaborative viewpoints for copresent tasks [32,33], and head-worn multisensory augmentation can further tune spatial awareness [34]. Low, predictable latency remains critical to preserve ownership and agency [35].
Ecological accounts make a specific prediction about scaling: passability judgments depend on the eye height ratio, so shifting eye height shifts the judged critical barrier height while the ratio remains roughly constant [36]. Locomotion work likewise shows that strategies for negotiating narrow gaps scale with body dimensions (e.g., shoulder width), including the onset of shoulder rotation near gaps [37,38]. Gap-affordance judgments in immersive virtual environments have also been examined across development (children, teens, and adults), highlighting age-related differences in conservative decision criteria under risk [39]. These ideas suggest that changing viewpoint height alone could update the decision criteria for gap passability, even when physical body size and kinematics remain unchanged.
Building on this background and our prior prototype exploring viewpoint transformation during walking [40], we target a visual route to augmentation—manipulating viewpoint height with a waist-mounted stereo camera to test whether scaling cues alone can update gap passability judgments, walking speed, and body image reports in real-world tasks.
Beyond basic science, altering perceived viewpoint height may support training and accessibility. Lowering the vantage can let caregivers and parents experience child-level spatial scale, while wheelchair-level viewpoints can help clinicians and designers assess how easily spaces can be passed through and communicate body–environment scaling. These considerations motivate testing how viewpoint height alone reshapes gap passability judgments, body image reports, and walking speed during real-world walking (Figure 1).

1.1. Research Questions

We articulate three research questions concerning passability judgments, body representation, and walking speeds:
RQ1:
To what extent does the perceived viewpoint height influence judgments about the passability of gaps at different heights?
RQ2:
In what ways does the perceived viewpoint height shape subjective body representation and spatial-awareness reports?
RQ3:
How does the perceived viewpoint height relate to walking speed near gaps of different heights?

1.2. Contributions

We present a novel wearable sensing and display interface for augmented reality (AR) simulation. Specifically, we propose a waist-mounted interface for mobile viewpoint-height transformation that supports untethered, natural walking while remapping a waist-mounted fisheye stereo view to a smartphone-shelled HMD with low latency. Stability is achieved by projecting the fisheye image onto a hemispherical mesh in Three.js, regularizing peripheral distortion, and maintaining a coherent virtual height view without desktop-class rendering. Even with standard (non-high-end) resolution and field of view (FOV), a simple interface with computational demands low enough to be handled by a portable computer altered spatial perception during real walking, indicating that high-grade optics and full-room tracking are not required.

1.3. Terminology

We use “body image” for conscious, reportable representations of one’s body size and appearance; “body schema” for action-oriented, largely implicit sensorimotor representations used for walking or moving; and “body representation” as an umbrella term covering both. To avoid ambiguity, we reserve body image for self-reports and body schema for behavioral indices derived from action.

2. Methods

Prior work achieved viewpoint transformation with pan–tilt mechanisms [40], but such systems suffer from control delay, backlash, overshoot, jitter, cable load, and fragility, limiting prolonged use during walking. We therefore set three design goals and developed a system, as shown in Figure 2:
  • No moving parts: stable and robust operation during prolonged walking.
  • End-to-end delay <150 ms: preserves body ownership and presence [35].
  • Backpack integration: onboard computing, power, and sensors for rapid setup and mobility.

2.1. Visual Presentation

A waist-mounted stereo rig with two USB 3.1 fisheye cameras captures left/right views, and a six-inch display in smartphone-style VR goggles (VRG-S01BK, ELECOM) presents the images. Onboard processing runs on an NVIDIA Jetson Orin Nano Super housed in a backpack with a battery and wiring. To present the live streams in a web browser, we launch a lightweight local Python HTTP server on the Jetson to serve the browser-based rendering application; the browser (Google Chrome) acquires the stereo camera streams via getUserMedia and uses them as VideoTexture inputs. Stereo frames are rendered in real time using the Three.js 3D library (r128): each eye stream is applied as a video texture and mapped to the interior of a hemispherical surface.
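As a concrete illustration of this acquisition step, the following minimal sketch wraps one fisheye stream in a Three.js VideoTexture with the texture settings described in the next paragraph. It assumes the standard getUserMedia API in Chrome and that Three.js r128 is loaded globally (as in the supplementary sample.html); the deviceId-based camera selection and the exact constraint values are assumptions of this sketch, not the published implementation.

```javascript
// Minimal sketch: acquire one fisheye camera stream and wrap it in a
// Three.js VideoTexture. Device selection by deviceId is illustrative;
// the actual IDs depend on how the two USB cameras enumerate in Chrome.
async function createEyeTexture(deviceId) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: deviceId }, width: 800, height: 600, frameRate: 60 },
    audio: false
  });
  const video = document.createElement('video');
  video.srcObject = stream;
  video.muted = true;
  video.playsInline = true;
  await video.play();

  const texture = new THREE.VideoTexture(video);
  texture.generateMipmaps = false;          // mipmaps disabled to reduce latency
  texture.minFilter = THREE.LinearFilter;   // linear filtering, as described below
  texture.magFilter = THREE.LinearFilter;
  return texture;
}
```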
Two PerspectiveCameras are used to render the left/right views to separate viewports in a side-by-side (SBS) format. The renderer uses a vertical field of view of 60° with near/far planes of 0.1/100, and each eye stream is presented on an inward-facing hemispherical mesh implemented with SphereGeometry (radius $R_{\mathrm{mesh}} = 30$ in scene units, 64 × 64 segments) with a fixed pre-rotation to align the optical axis (defined as the +Z direction in Three.js in the pre-rotated coordinate frame). To reduce latency, we disable anti-aliasing, fix the pixel ratio to 1.0, and use a VideoTexture with mipmaps disabled and linear filtering. Because stereo disparity is already contained in the captured left/right fisheye images, the two virtual cameras share the same pose and are used solely for SBS viewport rendering.
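A minimal Three.js sketch of this SBS rendering, using the parameters stated above (60° vertical FOV, 0.1/100 near/far planes, radius-30 hemisphere with 64 × 64 segments, anti-aliasing disabled, pixel ratio fixed to 1.0), might look as follows. The fixed pre-rotation and the fisheye UV warp are assumed to be applied separately (see the warp sketch below), and the scene organization shown here is an assumption rather than the published implementation.

```javascript
// Assumes Three.js r128 is loaded globally.
const renderer = new THREE.WebGLRenderer({ antialias: false });
renderer.setPixelRatio(1.0);                     // fixed pixel ratio to reduce latency
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setScissorTest(true);
document.body.appendChild(renderer.domElement);

const aspect = (window.innerWidth / 2) / window.innerHeight;
const camL = new THREE.PerspectiveCamera(60, aspect, 0.1, 100);   // left-eye camera
const camR = new THREE.PerspectiveCamera(60, aspect, 0.1, 100);   // right-eye camera

function makeEyeScene(texture) {
  // Inward-facing hemisphere textured with one fisheye stream.
  const scene = new THREE.Scene();
  const geo = new THREE.SphereGeometry(30, 64, 64, 0, Math.PI * 2, 0, Math.PI / 2);
  const mat = new THREE.MeshBasicMaterial({ map: texture, side: THREE.BackSide });
  scene.add(new THREE.Mesh(geo, mat));
  return scene;
}

function renderSBS(sceneL, sceneR) {
  const w = window.innerWidth / 2, h = window.innerHeight;
  renderer.setViewport(0, 0, w, h);  renderer.setScissor(0, 0, w, h);
  renderer.render(sceneL, camL);     // left eye on the left half of the display
  renderer.setViewport(w, 0, w, h);  renderer.setScissor(w, 0, w, h);
  renderer.render(sceneR, camR);     // right eye on the right half of the display
}

// Usage (textures from the acquisition sketch above):
// const sceneL = makeEyeScene(texL), sceneR = makeEyeScene(texR);
// renderer.setAnimationLoop(() => renderSBS(sceneL, sceneR));
```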
For the fisheye-to-hemisphere warp, we adopt an equidistant approximation ($r \propto \theta$). Let $(\theta, \phi)$ denote the polar and azimuth angles of a vertex on the unit hemisphere, and let $(c_x, c_y)$ be the center of the circular fisheye image with usable radius $r_{\max}$ (all in pixel units). In practice, we compute $(\theta, \phi)$ from the unit direction $\mathbf{d} = (x, y, z)$ expressed in the pre-rotated mesh coordinate frame (i.e., after applying the same fixed pre-rotation used for the hemispherical mesh) as $\theta = \arccos(z)$ and $\phi = \operatorname{atan2}(y, x)$ (angles in radians). We compute the corresponding fisheye sampling coordinate as
$$\rho = k\,\theta, \qquad k = \frac{r_{\max}}{\theta_{\max}}, \qquad u = c_x + \rho\cos\phi, \qquad v = c_y + \rho\sin\phi,$$
where $\theta_{\max} = \pi/2$ for a hemisphere. The fisheye texture is sampled at $(u, v)$ (normalized to $(u/W, v/H)$ in the implementation) and mapped onto the hemispherical surface; pixels outside the usable circle appear black in our camera output, yielding the same effect as masking.
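For illustration, this warp can be realized by overwriting the UV attribute of the hemispherical SphereGeometry. The sketch below mirrors the equations above; it assumes vertex directions are already expressed in the pre-rotated frame (+Z along the optical axis) and that the calibration values cx, cy, rMax and the image size W × H are supplied by the caller.

```javascript
// Minimal sketch of the equidistant fisheye-to-hemisphere warp (ρ = kθ).
function applyFisheyeUVs(geometry, cx, cy, rMax, W, H) {
  const pos = geometry.attributes.position;
  const uv = geometry.attributes.uv;
  const thetaMax = Math.PI / 2;          // hemisphere
  const k = rMax / thetaMax;             // equidistant scale factor

  for (let i = 0; i < pos.count; i++) {
    // Unit direction of the vertex (the mesh is a sphere of radius R_mesh).
    const len = Math.hypot(pos.getX(i), pos.getY(i), pos.getZ(i));
    const x = pos.getX(i) / len, y = pos.getY(i) / len, z = pos.getZ(i) / len;

    const theta = Math.acos(Math.min(1, Math.max(-1, z)));   // polar angle from +Z
    const phi = Math.atan2(y, x);                             // azimuth
    const rho = k * theta;                                    // radial distance in pixels

    const u = cx + rho * Math.cos(phi);
    const v = cy + rho * Math.sin(phi);
    uv.setXY(i, u / W, v / H);            // normalized texture coordinates
  }
  uv.needsUpdate = true;
}

// Usage with placeholder calibration values (illustrative only):
// applyFisheyeUVs(hemisphereGeometry, 400, 300, 290, 800, 600);
```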
Because fisheye projection models mainly diverge in the periphery, perceptual inconsistencies are expected to be smaller near the optical axis. We use the same fixed warp across all conditions; thus, any residual distortion remains constant within-subject and is unlikely to confound the viewpoint-height transformation.

2.2. Head Rotation

Two nine-axis inertial measurement units (IMUs; LPMS-B2) are attached to the head-mounted display (HMD) and the waist. Posture data are streamed over Robot Operating System 2 (ROS 2) and received in the browser via roslibjs. To preserve natural head scanning while suppressing whole-body turns during locomotion, we use the head IMU roll and pitch directly and correct only yaw by subtracting the waist yaw. This decoupling stabilizes the rendered heading and helps attribute downstream perceptual effects primarily to viewpoint height rather than to body rotation. This rotation compensation was enabled only in the Waist condition, where the camera was mounted on the waist and thus did not physically follow head turns. In the Head condition, because the camera was mounted on the HMD, head turns were inherently reflected in the captured video; therefore, no additional IMU-based camera rotation was applied.
Let $(\alpha_h(t), \beta_h(t), \psi_h(t))$ denote the head IMU roll, pitch, and yaw, and let $\psi_w(t)$ denote the waist IMU yaw. We remove whole-body turns by using only the relative yaw
$$\Delta\psi(t) = \mathrm{wrap}\!\left(\psi_h(t) - \psi_w(t)\right),$$
where $\mathrm{wrap}(\cdot)$ maps angles to $(-\pi, \pi]$.
In our implementation, a ROS node publishes a geometry_msgs/Vector3 message on /processed containing the angles (in radians) used by the Three.js camera: $m_x(t) = \Delta\psi(t)$ (yaw), $m_y(t) = \alpha_h(t)$ (roll), and $m_z(t) = \beta_h(t)$ (pitch). Following the Three.js XYZ Euler convention, we set
$$\alpha(t) = m_y(t), \qquad \beta(t) = m_z(t), \qquad \gamma(t) = m_x(t),$$
and update the virtual camera orientation as
$$R_{\mathrm{cam}}(t) = R_x\!\left(\alpha(t)\right)\, R_y\!\left(\beta(t)\right)\, R_z\!\left(\gamma(t)\right),$$
which corresponds to the browser-side call rotation.set($m_y$, $m_z$, $m_x$, "XYZ"). The same rotation is applied to both left and right virtual cameras.
In the implementation, head and waist IMU orientations were streamed via ROS 2 to the browser. We computed the yaw difference Δ ψ and applied the same head-only rotation to both the left and right virtual cameras, ensuring that whole-body turns (captured by the waist IMU) did not rotate the rendered viewpoint while head scanning was preserved.
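A browser-side sketch of this update, reusing the camL/camR cameras from the rendering sketch in Section 2.1, is shown below. The /processed topic name, the geometry_msgs/Vector3 layout, and the rotation.set($m_y$, $m_z$, $m_x$, "XYZ") call follow the text; the rosbridge websocket address and the inline wrap helper (which in the actual system runs in the ROS 2 node before publishing) are assumptions of this sketch.

```javascript
// Assumes roslibjs is loaded and a rosbridge websocket server is reachable.
const ros = new ROSLIB.Ros({ url: 'ws://localhost:9090' });   // address is an assumption
const processed = new ROSLIB.Topic({
  ros: ros,
  name: '/processed',                       // topic name as described in the text
  messageType: 'geometry_msgs/Vector3'      // x = Δψ (yaw), y = roll, z = pitch [rad]
});

// Illustrative wrap(·) mapping an angle to (-π, π]; in the actual system this
// step is performed in the ROS 2 node before publishing, not in the browser.
function wrapAngle(a) {
  let x = (a + Math.PI) % (2 * Math.PI);
  if (x <= 0) x += 2 * Math.PI;
  return x - Math.PI;
}

processed.subscribe((msg) => {
  // rotation.set(m_y, m_z, m_x, "XYZ"): roll about X, pitch about Y, relative yaw about Z.
  camL.rotation.set(msg.y, msg.z, msg.x, 'XYZ');
  camR.rotation.set(msg.y, msg.z, msg.x, 'XYZ');
});
```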

2.3. Timing and Latency Measurement

All subsystems (capture, rendering, IMU I/O, and fusion) ran concurrently during latency measurements, so reported delays reflect the full pipeline rather than video projection alone.

3. User Study

This study used a within-subject 2 × 3 design (viewpoint height: head vs. waist; gap height: high/middle/low) to test the perceptual hypotheses stated in the Introduction. The primary dependent variables were (i) passability judgment, assessed by a 7-point confidence rating and the decision to attempt passage; (ii) locomotor behavior during passage attempts, quantified by the approach walking speed immediately before the gap; and (iii) post-condition subjective responses related to body image and spatial awareness.

3.1. Participants and Ethics

Nine healthy Japanese adults in their 20s (seven men, two women) participated. Eligibility required standing height within two standard deviations of the national average, normal ambulation, and no self-reported visual or balance impairments when using an HMD. All participants were naïve to the study purpose and provided written informed consent; they received a small monetary compensation. The protocol was approved by the Internal Ethics Review Board of the Institute of Systems and Information Engineering, University of Tsukuba (Approval No. 2024R933, approved on 14 November 2024). Participants were recruited from 6 March 2025 to 9 March 2025. The study adhered to the Declaration of Helsinki. No identifiable information was collected; potentially identifying features in images were removed or obscured.

3.2. Experimental Procedure

Before data collection, participants familiarized themselves with the system for several minutes. At each trial, we recorded (i) the confidence rating and pass/no-pass decision, (ii) whether a passage attempt was performed, and (iii) the approach walking speed immediately before the gap during passage attempts. Each trial began 3.5 m from an adjustable gap, as shown in Figure 3. Participants viewed the gap through the live feed and rated their confidence in passing on a seven-point scale (1 = no confidence, 7 = definitely passable). If the rating was ≥4, participants attempted to pass through the gap at a comfortable, self-selected speed; otherwise, the trial ended with a verbal response only. During passage, walking speed was measured over a 1–2 m segment immediately before the gap using an OptiTrack motion capture system, with markers on the backpack. System timestamps and motion capture time were logged on the same host for synchronization. Participants were instructed to avoid posture changes (e.g., squatting or sitting) and to stop if they felt unsafe. Viewpoint height and gap height were manipulated in a 2 × 3 within-subject design. The camera was positioned at either the participant’s eye level (head viewpoint) or waist level (waist viewpoint). In the waist viewpoint, IMUs were used to render head-only rotations in the displayed view. For yaw, we separated head rotation from whole-body turning by applying only the relative yaw (head minus waist). For the remaining axes (pitch and roll), head rotations were applied directly. In the head viewpoint, because the camera was mounted on the HMD, head rotations were already embedded in the incoming video stream; therefore, no additional IMU-based rotation was applied.
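For reference, the approach-speed measure reduces to dividing the path length of the pre-gap segment by its duration. The sketch below only makes this arithmetic explicit; the actual processing was performed offline (Section 3.3), and the sample format used here is an assumption.

```javascript
// Illustrative computation of approach speed over the pre-gap segment from
// timestamped marker positions. samples: time-ordered { t, x, y } records
// (seconds, meters), restricted to the 1–2 m window before the gap.
function approachSpeed(samples) {
  let pathLength = 0;
  for (let i = 1; i < samples.length; i++) {
    pathLength += Math.hypot(samples[i].x - samples[i - 1].x,
                             samples[i].y - samples[i - 1].y);
  }
  const duration = samples[samples.length - 1].t - samples[0].t;
  return pathLength / duration;   // mean approach speed in m/s
}
```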
The adjustable gap was formed by a white horizontal bar, whose height from the floor was set to one of three levels based on each participant’s standing height: high (above the participant’s standing height, a height allowing the participant to consistently pass under the bar without collision), middle (approximately chest height), and low (below the waist). The gap width (distance between the two stands) was approximately 1.5 m, and the bar height was adjustable in 30 cm increments; therefore, the realized bar heights were discretized to the nearest achievable level. Each participant experienced all six combinations of viewpoint (head, waist) and gap height (high, middle, low) in a pseudo-randomized order with a fixed constraint at the middle gap: head–middle was always performed before waist–middle.
After completing six conditions, participants provided free-text feedback on comfort and spatial awareness and completed a body image questionnaire (Figure 4), selecting the option that best matched their experience. For the body image questionnaire, participants chose one schematic from six options: (a) usual body, indicating no perceived change in dimensions; (b) grounded look, in which overall dimensions are preserved but the body feels lower or more anchored to the ground; (c) head near the waist, depicting a lowered head position relative to the torso (repositioning only, segment lengths unchanged); (d) isotropic (proportional) shrinkage, where all linear dimensions are scaled by the same factor; (e) uniform vertical shrinkage, where the overall body height decreases while the relative proportions between the upper and lower body are preserved, and the torso thickness in the sagittal plane remains unchanged; and (f) disproportionate leg shortening, in which the legs appear shorter relative to the upper body compared with (e). To minimize ambiguity, (d) preserves the original aspect ratio, and (f) shortens only the leg segment.

3.3. Measurements and Analysis

We analyzed three outcomes: (1) passability confidence ratings, (2) body image questionnaire choices, and (3) walking speed (only in trials with attempted passage). Given n = 9 , we emphasize descriptive statistics and within-subject comparisons. Confidence ratings were compared between viewpoint conditions using the Wilcoxon signed-rank test across all gap heights. Walking speed was compared between viewpoint conditions using paired-sample t-tests (two-tailed). The significance level was set at α = 0.05 . All analyses were performed in Python 3.10.4 (pandas 2.2.3, numpy 1.26.4). Some scripts are provided in the Supplementary Materials; the remaining data and scripts are available from the corresponding author on reasonable request.

3.4. Statistical Notes

All statistical tests were two-tailed with α = 0.05 . We report effect sizes alongside p-values for the primary contrast of interest (head–middle vs. waist–middle), while other contrasts are reported with p-values to keep the main text concise.
For passability confidence ratings, we used Wilcoxon signed-rank tests for paired contrasts at each gap height. We emphasize the viewpoint-height effect at the middle gap height. For the head–middle vs. waist–middle comparison, we additionally report an effect size as $r = Z/\sqrt{N}$. For interpretability, we also report a standardized mean difference (Cohen’s $d_z$) and a post hoc power estimate for the primary paired contrast, treating the 7-point ratings as quasi-interval; these are reported as descriptive indices only.
For walking speed, we treated speed as a continuous variable and compared it exploratorily between conditions using paired t-tests (two-tailed; N = 7 pairs for speed analyses). For paired t-tests, the effect size is reported as Cohen’s $d_z$, computed as the mean of the paired differences divided by the standard deviation of the paired differences.
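For transparency, the effect-size definitions above reduce to simple arithmetic, restated in the sketch below. The analyses themselves were run in Python (Section 3.3); JavaScript is used here only to keep a single language across the sketches in this article, so the snippet is purely illustrative.

```javascript
// Illustrative restatement of the effect-size definitions used in this study.
function cohensDz(a, b) {                        // a, b: paired samples of equal length
  const d = a.map((v, i) => v - b[i]);           // paired differences
  const mean = d.reduce((s, v) => s + v, 0) / d.length;
  const sd = Math.sqrt(d.reduce((s, v) => s + (v - mean) ** 2, 0) / (d.length - 1));
  return mean / sd;                              // d_z = mean(diff) / sd(diff)
}

function wilcoxonEffectSizeR(z, n) {
  return Math.abs(z) / Math.sqrt(n);             // r = |Z| / sqrt(N)
}

// Example: for the primary contrast reported in Section 4.2.1,
// wilcoxonEffectSizeR(2.5205, 9) ≈ 0.84.
```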

4. Results

4.1. System Latency

Prior to the user study, we quantified the end-to-end video delay for multiple resolution/frame-rate settings with all subsystems active (including IMU processing), thus reflecting computational load beyond raw video projection. Delay was measured by imaging a digital stopwatch displayed on a PC monitor with the system’s stereo camera, while a third camera simultaneously captured both the source monitor and the HMD screen; latency (ms) was computed from the difference between the two time readouts. For each setting, six frames were sampled at ∼4 s intervals starting 1 min after system startup.
Across the tested conditions (1920 × 1080 @ 30 fps, 1280 × 720 @ 60 fps, 1024 × 768 @ 30 fps, 640 × 480 @ 120 fps, 800 × 600 @ 60 fps, 1280 × 1024 @ 30 fps, 320 × 240 @ 120 fps), the 800 × 600 @ 60 fps mode consistently maintained latency below 150 ms with low variance, whereas higher-resolution settings (e.g., 1280 × 1024, 1920 × 1080) exhibited larger means and variability, with several samples exceeding 150 ms. Based on this profile, 800 × 600 @ 60 fps was selected for the user study; at this operating point, all recorded samples remained <150 ms with low variance (Figure 5).

4.2. User Study

4.2.1. Passability Judgments

We first summarize the key pattern observed in passability judgments (Figure 6). The largest viewpoint-dependent change appeared for the borderline (middle) gap, where confidence was lower in head–middle than in waist–middle. Meanwhile, confidence was consistently high at the high gap and consistently low at the low gap across participants, regardless of viewpoint height.
Participants rated their confidence in passing through gaps on a 7-point scale under a 2 (viewpoint height: head vs. waist) × 3 (gap height: high/middle/low) design (Figure 6). Given n = 9, Wilcoxon signed-rank tests were used for paired contrasts on the confidence ratings. Head–middle vs. waist–middle differed significantly (normal approximation z = 2.5205; p = 0.0129; r = 0.84). For reference, when treating the 7-point ratings as quasi-interval, the standardized mean difference was large ($d_z$ = 1.38); the corresponding post hoc power estimate for this contrast was 1 − β = 0.907.
Within the same viewpoint (head), head–high exceeded head–middle and head–low (p = 0.0136 and p = 0.008, respectively), whereas head–middle vs. head–low was not significant (p = 0.52). With the viewpoint at the waist, waist–high exceeded waist–low and waist–middle exceeded waist–low (p = 0.007 and p = 0.008), and waist–high exceeded waist–middle (p = 0.02). Holding gap height constant, head–high vs. waist–high and head–low vs. waist–low were not significant (both p = 0.52), consistent with the observation that viewpoint-dependent changes were most evident at the middle gap. Eight of nine participants showed higher judgment values in waist–middle than in head–middle (Figure 7).

4.2.2. Body Image and Spatial Awareness Responses

After completing all conditions, eight of nine participants selected a body image deviating from the baseline figure, with option (c) (head perceived near waist level) most frequent (Figure 4 and Figure 8). Free-text responses commonly included “felt smaller,” “head felt lower,” and “spatial scale changed.” Some participants also reported initial apprehension immediately after donning the device, citing the narrow field of view and unusual perspective. These responses are consistent with the shift in conscious body image and spatial awareness that accompanies viewpoint-height manipulation. Table 1 summarizes, for the waist–middle condition, how many participants endorsed each combination of passability judgment and body image option. One participant who selected the usual body image (a) reported a low passability judgment, whereas all other participants reported passability judgments of 4 or higher.

4.2.3. Walking Speed

Walking speed (measured over the final 1–2 m before the gap) was analyzed only for trials where passability confidence ≥ 4 (Figure 9). Because one participant’s recordings were missing and another did not attempt the pass-through (passability confidence < 4), the walking speed analysis included 7 participants. Within-subject paired two-tailed t-tests showed that speed in head–high exceeded waist–middle (t(6) = 2.8755, p = 0.0379, 1 − β = 0.601, Cohen’s $d_z$ = 1.00), whereas head–high vs. waist–high (t(6) = 1.3086, p = 0.3112, 1 − β = 0.154, Cohen’s $d_z$ = 0.416) and waist–middle vs. waist–high (t(6) = 2.0635, p = 0.0995, 1 − β = 0.378, Cohen’s $d_z$ = 0.74) were not significant. Descriptively, “high” gaps elicited slightly faster approaches, whereas waist–middle exhibited greater variability. The apparent difference between head–high and waist–middle should be interpreted cautiously, given the reduced sample size and the fact that speed was only observed for attempted passages (confidence ≥ 4), which censors low-confidence trials (especially at low gap heights).

5. Discussion

5.1. Answers to Research Questions

The key effect was confined to the middle (chest-level) gap, where passability confidence increased under the lowered viewpoint (waist–middle > head–middle).
Regarding the research questions, our discussion is summarized as follows:
  • RQ1 (passability judgment): Viewpoint-height transformation affected passability judgments primarily at the chest-level (middle) gap. Lowering the viewpoint tended to increase judged passability, whereas clearly high or low gaps were largely insensitive to viewpoint.
  • RQ2 (body representation): Self-reports were more consistent with a recalibration of body–environment scaling (altered perceived eye height) than with an explicit change in body morphology.
  • RQ3 (walking speed): Changing viewpoint alone did not systematically alter walking speed at high gaps. In contrast, slowing emerged when approaching the middle gap under the lowered viewpoint, suggesting a dissociation between the passability judgment and cautious execution during approach.

5.2. System Implications

This prototype shows that natural, untethered walking with free mobility can be achieved with a lightweight, PC-driven pipeline: a waist-mounted fisheye stereo rig feeds a small on-body PC, which renders a virtual height view on a smartphone-shelled HMD (phone form factor shell with an embedded display, PC-driven). We stabilize the image by mapping fisheye frames onto a hemispherical mesh in Three.js. This warp regularizes peripheral distortion and preserves a coherent scene at modest computational cost, which in our indoor walking tests produced predictable behavior and stable perception without cloud offloading or desktop-class graphics. The design is intentionally pragmatic: resolution and field of view are limited, yet the assembly remained light, portable, and robust enough to support the within-subject experiment reported here.
Practically, two implications follow. First, eliminating the large desktop computer simplifies setup and improves repeatability during don/doff and corridor length walking. Video is rendered locally by the on-body PC to the embedded display in the smartphone-shelled HMD; a local server on the same PC hosts the browser-based warp, and no external network is used for video delivery. Second, hemispherical mapping provides sufficient stabilization for this use case: a lightweight browser-based warp preserved geometric plausibility without SLAM or heavy rendering.
Most importantly, even with standard resolution and field of view, the waist-mounted interface altered space-related judgments during real walking, suggesting that high-end optics or full-room tracking are not prerequisites for training and accessibility-oriented deployments. The present study foregrounds this perceptual impact under free mobility; as a practical next step toward deployment, we will examine longer sessions and varied lighting conditions.

5.3. User Study

As shown in Figure 6, the high gap height was associated with high passability judgment and the low gap height with low judgment, regardless of viewpoint height. For example, at head–high and waist–high, all participants reported confidence ≥ 4, whereas at head–low and waist–low, all reported 1. By contrast, at the middle (chest-level) gap, passability confidence depended on viewpoint height: it was higher in waist–middle and lower in head–middle. Thus, the middle gap height constitutes a borderline region (i.e., near the decision boundary) for passability. In everyday life, a lowered viewpoint typically co-occurs with a change in posture (e.g., squatting or sitting); in our manipulation, however, the perceived viewpoint was lowered while participants remained standing, making the experience unusual compared to everyday posture changes. Even so, pass-through judgments appeared anchored primarily to the currently seen viewpoint rather than to a preexisting body representation (i.e., an internal estimate of body size and eye height independent of the current visual input), suggesting that small changes in perceived scale have their greatest impact around this boundary, especially for the middle gap height.
Walking speed results, while underpowered for firm conclusions (speed analyses n = 7), are consistent with a dissociation between a static decision made at the starting point and walking behavior during approach. When only the viewpoint changed and the gap was high (head–high vs. waist–high), speed did not differ, implying that viewpoint change in itself did not slow locomotion. By contrast, under waist–middle, participants judged the gap passable yet approached more slowly than in head–high.
As summarized in Table 1, post-task body image selections bifurcated: several participants chose depictions of a smaller-seeming body (b or d; grounded or proportional shrinkage), and others chose a head-near-waist depiction (c). Selections of b/d were often accompanied by higher passability judgments, consistent with a visual scaling interpretation tied to the current viewpoint. Even so, we interpret this pattern as being more consistent with reliance on the currently seen viewpoint (visual scaling cues) than with a fixed, viewpoint-independent internal estimate. Importantly, choosing c did not imply low passability; at waist–middle, confidence remained high. Options implying leg-length change or other segment-specific alterations (e,f) were not endorsed, and one participant selected the usual body representation (a) with lower confidence at middle. Taken together, these patterns indicate a dissociation: the smaller self-depictions (b/d) align with the representation that guided pass-through judgments from the current viewpoint, whereas the head-near-waist depiction (c) can coexist with high confidence yet implies the upper torso traveling closer to the gap’s top edge. These selections index subjective internal representations and do not imply morphological change or guarantee objective pass-through feasibility.
The underlying mechanism remains uncertain, and the following explanation is speculative. One possible explanation for the selective slowdown at waist–middle is increased caution near the borderline region: participants may have reweighted visual scale cues near the decision boundary, so caution emerged during approach even when the initial judgment was “passable.” This mechanistic account should be tested in future work by counterbalancing order and sampling multiple near-threshold gap heights to estimate psychometric functions.

5.4. Relation to Prior Work

Prior work shows that manipulating apparent body size or eye height shifts perceived scale and decision criteria in physically unchanged environments [18,19]. Ecological accounts formalize this as an eye-height ratio for passability judgments: when eye height is altered, the judged critical barrier height shifts while the ratio remains approximately constant [36]. Related work has also shown that manipulations of visual eye height shift perceived affordances, such as the judged critical aperture of a doorway-like opening [41]. Body-worn telepresence systems have also examined how camera placement and height affect remote observers’ experience for remote viewing [42]. Extending these ideas, we evaluate a first-person, freely mobile implementation and observe viewpoint-height effects concentrated at chest-level (middle) gaps, where small-scale changes most strongly impact the decision boundary.
Locomotion studies indicate that strategies for negotiating gaps scale with body dimensions (e.g., shoulder width) [37]. In our data, viewpoint-height manipulation shifted confidence only near the boundary and produced selective slowing at waist–middle, while speed at high gaps was unchanged. Limited endorsement of proportional shrinkage and the absence of leg-length changes, together with frequent head-near-waist selections, are consistent with a visual scaling account in which the current viewpoint repositions the decision boundary; our data do not provide evidence for morphological reparameterization. Related augmented reality (AR) and virtual reality (VR) studies have reported experiential or social distance shifts under raised or lowered viewpoints [25,26,27,30,31]; here, we extend such experiential shifts to concrete passability judgments during natural locomotion with head–body contingencies preserved, and we show that cautious approach emerges specifically where the decision boundary is repositioned by the viewpoint change.

5.5. Applications to Training and Accessibility

Our results align with a broader trend in which wearable devices are used to shape physical activity and everyday behavior. Umbrella reviews report that activity trackers and similar wearables can increase physical activity and reduce sedentary behavior in adults [43]. Low-cost VR-based learning systems have also been explored for procedural training across diverse age groups [44]. Our findings indicate two practical use cases:
  • Child-height perspective training. Lowering the viewpoint allows caregivers and parents to experience child-level spatial scale. High shelves and objects on desks become harder to reach, and adults may appear larger and more imposing, informing room layout and day-to-day safety practices.
  • Wheelchair-level perspective. Presenting a seated viewpoint helps clinicians and designers judge how easily doors and corridors can be passed through and communicate body–environment scaling during posture transitions. This includes evaluating standing-capable mobility solutions (e.g., Qolo [45]), where perceived viewpoint-height changes as users switch between seated and upright modes.
For targeted evaluations, we recommend measuring changes in gap passability judgment, walking speed, and route choice in realistic mock-ups, as well as decision-making; when posture transitions are involved, include pre/post-comparisons across seated and upright modes.

5.6. Limitations

Several limitations should be noted. First, the sample size ( n = 9 ) limits generalizability and reduces the robustness of inferential testing. Participants were young, healthy adults from a single context, with a male-skewed sample and limited balance in gender, age, and cultural diversity. Future studies should recruit larger and more diverse cohorts and analyze the data with mixed-effects models to better account for subject-level variability. In addition, because head–middle always preceded waist–middle, sequence effects cannot be fully excluded; counterbalancing the order (e.g., Latin-square blocking) would strengthen causal interpretation.
Second, our walking-speed analyses were conditioned on attempted passages (confidence ≥ 4), which censors low-confidence judgments and may bias the behavioral sample. Moreover, the analyzable sample for speed was reduced (n = 7), so null effects may reflect limited statistical power. A more comprehensive design would include multiple near-threshold gap heights and estimate psychometric functions for passability criteria rather than relying on discretized middle/low settings that were not calibrated to a specific threshold.
Third, we transformed viewpoint height while holding other visual parameters constant. Improvements in optics and sensors may change the subjective experience (e.g., comfort) and potentially the strength of the observed effects. Body representation was assessed primarily via conscious self-reports; adding implicit indices (e.g., shoulder rotation at gaps) would help test whether body representation changes occurred. Furthermore, the schematic body image questionnaire is study-specific and has not been psychometrically validated, so the reliability with which participants distinguish the categories remains uncertain. Finally, we recorded only a few latency samples per mode, so short-term jitter may be underrepresented. In future work, we will sample latency more densely and report standardized effect sizes with confidence intervals, while controlling for multiple comparisons.

6. Conclusions

In this study, we developed a waist-mounted interface for mobile viewpoint-height transformation by combining a waist-mounted fisheye camera with a smartphone HMD case. The primary contribution is demonstrating that this lightweight, on-body configuration can reliably shift spatial passability judgments during real walking, without relying on high-end optics or external tracking. Lowering the viewpoint to waist height affected decision-making, and in a subset of conditions, approach behavior (walking speed) also changed. In particular, a waist-level viewpoint increased passability judgment, most clearly for the borderline (middle) gap height.
Body image reports changed but did not align with these judgments and rarely endorsed proportional shrinkage or leg-length change, pointing to an update of visually driven decision criteria tied to viewpoint height rather than morphological change. Behavioral and subjective measures were not always concordant. These results support the idea that viewpoint height serves as a criterion for determining the scale relationship between the body and the environment in real-world judgment tasks, and they inform training scenarios (e.g., caregivers/parents experiencing the child’s viewpoint) and accessibility-oriented design.
This study is exploratory and has important limitations, including a small sample size, incomplete counterbalancing with potential sequence effects (e.g., fixed ordering at the middle gap), and task-specific constraints (e.g., walking speed analyses only for attempted passages). As next steps, we will increase the sample size and fully counterbalance conditions to confirm robustness and assess generalizability across tasks and settings.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s26020372/s1, Table S1: Raw data (Excel spreadsheet, .xlsx); File S1: sample.html (minimal implementation for the fisheye SBS rendering).

Author Contributions

Conceptualization, J.A. and K.S.; methodology, J.A.; software, J.A.; validation, J.A. and K.S.; formal analysis, J.A.; investigation, J.A.; resources, J.A.; data curation, J.A. and K.S.; writing—original draft preparation, J.A.; writing—review and editing, H.K. and K.S.; visualization, J.A.; supervision, K.S.; project administration, K.S.; funding acquisition, K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Japan Science and Technology Agency (JST) for the Core Research for the Evolutional Science and Technology (CREST) research project on Social Signaling (JPMJCR19A2), and the JST-SPRING (JPMJSP2124) of the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) of Japan.

Institutional Review Board Statement

The study was approved by the Internal Ethics Review Board of the Institute of Systems and Information Engineering, University of Tsukuba (2024R933, 14 November 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The minimal browser-based rendering code is provided in the Supplementary Materials. The remaining data and analysis scripts are available from the corresponding author upon reasonable request and are not publicly available due to privacy and ethical restrictions.

Acknowledgments

Thanks to the staff and students at the Artificial Intelligence Laboratory at the University of Tsukuba for their support in preparing for and conducting the experiment.

Conflicts of Interest

The authors declare no conflicts of interest. The funding organization had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
XR      Extended Reality
VR      Virtual Reality
AR      Augmented Reality
HMD     Head-Mounted Display
IMU     Inertial Measurement Unit
ROS 2   Robot Operating System 2
SBS     Side-by-Side (stereo presentation)
FOV     Field of View
ms      Milliseconds
fps     Frames Per Second

References

  1. Pfeifer, R.; Lungarella, M.; Iida, F. Self-organization, embodiment, and biologically inspired robotics. Science 2007, 318, 1088–1093. [Google Scholar] [CrossRef]
  2. Raisamo, R.; Rakkolainen, I.; Majaranta, P.; Salminen, K.; Rantala, J.; Farooq, A. Human augmentation: Past, present and future. Int. J. Hum.-Comput. Stud. 2019, 131, 131–143. [Google Scholar] [CrossRef]
  3. Seo, Y.W.; La Marca, V.; Tandon, A.; Chiao, J.-C.; Drummond, C.K. Exploring the Design for Wearability of Wearable Devices: A Scoping Review. Computers 2024, 13, 326. [Google Scholar] [CrossRef]
  4. Herr, H. Exoskeletons and orthoses: Classification, design challenges and future directions. J. Neuroeng. Rehabil 2009, 6, 21. [Google Scholar] [CrossRef]
  5. Marasco, P.D.; Kim, K.; Colgate, J.E.; Peshkin, M.A.; Kuiken, T.A. Robotic touch shifts perception of embodiment to a prosthesis in targeted reinnervation amputees. Brain 2011, 134, 747–758. [Google Scholar] [CrossRef] [PubMed]
  6. Botvinick, M.; Cohen, J. Rubber hands ’feel’ touch that eyes see. Nature 1998, 391, 756. [Google Scholar] [CrossRef] [PubMed]
  7. Ehrsson, H.H.; Holmes, N.P.; Passingham, R.E. Touching a Rubber Hand: Feeling of Body Ownership Is Associated with Activity in Multisensory Brain Areas. J. Neurosci. 2005, 25, 10564–10573. [Google Scholar] [CrossRef]
  8. Petkova, V.I.; Ehrsson, H.H. If I Were You: Perceptual Illusion of Body Swapping. PLoS ONE 2008, 3, e3832. [Google Scholar] [CrossRef] [PubMed]
  9. Longo, M.R.; Schuur, F.; Kammers, M.P.; Tsakiris, M.; Haggard, P. What is embodiment? A psychometric approach. Cognition 2008, 107, 978–988. [Google Scholar] [CrossRef]
  10. de Vignemont, F. Embodiment, ownership and disownership. Conscious. Cogn. 2011, 20, 82–93. [Google Scholar] [CrossRef]
  11. Won, A.S.; Bailenson, J.; Lee, J.; Lanier, J. Homuncular Flexibility in Virtual Reality. J. Comp.-Med. Commun. 2015, 20, 241–259. [Google Scholar] [CrossRef]
  12. Kieliba, P.; Clode, D.; Maimon-Mor, R.O.; Makin, T.R. Robotic hand augmentation drives changes in neural body representation. Sci. Robot. 2021, 6, eabd7935. [Google Scholar] [CrossRef] [PubMed]
  13. Dominijanni, G.; Pinheiro, D.L.; Pollina, L.; Orset, B.; Gini, M.; Anselmino, E.; Pierella, C.; Olivier, J.; Shokur, S.; Micera, S. Human motor augmentation with an extra robotic arm without functional interference. Sci. Robot. 2023, 8, eadh1438. [Google Scholar] [CrossRef] [PubMed]
  14. Saraiji, M.Y.; Sasaki, T.; Kunze, K.; Minamizawa, K.; Inami, M. MetaArms: Body Remapping Using Feet-Controlled Artificial Arms. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST ’18), Berlin, Germany, 14 October 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 65–74. [Google Scholar] [CrossRef]
  15. Guterstam, A.; Petkova, V.I.; Ehrsson, H.H. The Illusion of Owning a Third Arm. PLoS ONE 2011, 6, e17208. [Google Scholar] [CrossRef]
  16. Marasco, P.D.; Hebert, J.S.; Sensinger, J.W.; Shell, C.E.; Schofield, J.S.; Thumser, Z.C.; Nataraj, R.; Beckler, D.T.; Dawson, M.R.; Blustein, D.H.; et al. Illusory movement perception improves motor control for prosthetic hands. Sci. Transl. Med. 2018, 10, eaao6990. [Google Scholar] [CrossRef]
  17. Giummarra, M.J.; Georgiou-Karistianis, N.; Nicholls, M.E.; Gibson, S.J.; Bradshaw, J.L. The phantom in the mirror: A modified rubber-hand illusion in amputees and normals. Perception 2010, 39, 103–118. [Google Scholar] [CrossRef]
  18. van der Hoort, B.; Guterstam, A.; Ehrsson, H.H. Being Barbie: The Size of One’s Own Body Determines the Perceived Size of the World. PLoS ONE 2011, 6, e20195. [Google Scholar] [CrossRef]
  19. Stefanucci, J.K.; Geuss, M.N. Big people, little world: The body influences size perception. Perception 2009, 38, 1782–1795. [Google Scholar] [CrossRef] [PubMed]
  20. Banakou, D.; Groten, R.; Slater, M. Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes. Proc. Natl. Acad. Sci. USA 2013, 110, 12846–12851. [Google Scholar] [CrossRef]
  21. Freeman, D.; Evans, N.; Lister, R.; Antley, A.; Dunn, G.; Slater, M. Height, social comparison, and paranoia: An immersive virtual reality experimental study. Psychiatry Res. 2014, 218, 348–352. [Google Scholar] [CrossRef]
  22. Tajadura-Jiménez, A.; Banakou, D.; Bianchi-Berthouze, N.; Slater, M. Embodiment in a Child-Like Talking Virtual Body Influences Object Size Perception, Self-Identification, and Subsequent Real Speaking. Sci. Rep. 2017, 7, 9637. [Google Scholar] [CrossRef]
  23. Leyrer, M.; Linkenauger, S.A.; Bülthoff, H.H.; Mohler, B.J. The Importance of Postural Cues for Determining Eye Height in Immersive Virtual Reality. PLoS ONE 2015, 10, e0127000. [Google Scholar] [CrossRef]
  24. Ghasemi, F.; Harris, L.R.; Jörges, B. Simulated eye height impacts size perception differently depending on real-world posture. Sci. Rep. 2023, 13, 20075. [Google Scholar] [CrossRef]
  25. van der Veer, A.; Alsmith, A.; Longo, M.; Wong, H.Y.; Diers, D.; Bues, M.; Giron, A.P.; Mohler, B.J. The Influence of the Viewpoint in a Self-Avatar on Body Part and Self-Localization. In Proceedings of the ACM Symposium on Applied Perception (SAP), Barcelona, Spain, 19–20 September 2019; Article 3. Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
  26. Yee, N.; Bailenson, J. The Proteus Effect: The Effect of Transformed Self-Representation on Behavior. Hum. Commun. Res. 2007, 33, 271–290. [Google Scholar] [CrossRef]
  27. Banakou, D.; Hanumanthu, P.D.; Slater, M. Virtual embodiment of White people in a Black virtual body leads to a sustained reduction in implicit racial bias. Front. Hum. Neurosci. 2016, 10, 601. [Google Scholar] [CrossRef]
  28. Salamin, P.; Thalmann, D.; Vexo, F. The benefits of third-person perspective in virtual and augmented reality? In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST ’06), Limassol, Cyprus, 1–3 November 2006; Association for Computing Machinery: New York, NY, USA, 2006; pp. 27–30. [Google Scholar] [CrossRef]
  29. Choi, Y.; Leshner, G.; Choi, J. Third-Person Effects of Idealized Body Image in Magazine Advertisements. Am. Behav. Sci. 2008, 52, 147–164. [Google Scholar] [CrossRef]
  30. Iwata, H.; Kimura, Y.; Takatori, H.; Enzaki, Y. Big Robot Mk.1A. In Proceedings of the ACM SIGGRAPH 2016 Emerging Technologies (SIGGRAPH ’16), Anaheim, CA, USA, 24–28 July 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1–2. [Google Scholar] [CrossRef]
  31. Nishida, J.; Matsuda, S.; Oki, M.; Takatori, H.; Sato, K.; Suzuki, K. Egocentric Smaller-person Experience through a Change in Visual Perspective. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), Glasgow, Scotland, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–12. [Google Scholar] [CrossRef]
  32. Kasahara, S.; Rekimoto, J. JackIn: Integrating first-person view with out-of-body vision generation for human-human augmentation. In Proceedings of the 5th Augmented Human International Conference (AH ’14), Kobe, Japan, 7–8 March 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 1–8. [Google Scholar] [CrossRef]
  33. Iskandar, A.; Al-Sada, M.; Miyake, T.; Saraiji, Y.; Halabi, O.; Nakajima, T. Piton: Investigating the Controllability of a Wearable Telexistence Robot. Sensors 2022, 22, 8574. [Google Scholar] [CrossRef] [PubMed]
  34. Chu, S.Y.; Cheng, Y.T.; Lin, S.C.; Huang, Y.W.; Chen, Y.; Chen, M.Y. MotionRing: Creating Illusory Tactile Motion around the Head using 360-degree Vibrotactile Headbands. In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (UIST), Virtual, 10–14 October 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 724–731. [Google Scholar] [CrossRef]
  35. Waltemate, T.; Senna, I.; Hülsmann, F.; Rohde, M.; Kopp, S.; Ernst, M.; Botsch, M. The Impact of Latency on Perceptual Judgments and Motor Performance in Closed-Loop Interaction in Virtual Reality. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, Munich, Germany, 2–4 November 2016. [Google Scholar] [CrossRef]
  36. Marcilly, R.; Luyat, M. The role of eye height in judgment of an affordance of passage under a barrier. Curr. Psychol. Lett. Behav. Brain Cogn. 2008, 24, 12–24. [Google Scholar] [CrossRef]
  37. Warren, W.H., Jr.; Whang, S. Visual guidance of walking through apertures: Body-scaled information for affordances. J. Exp. Psychol. Hum. Percept. Perform. 1987, 13, 371–383. [Google Scholar] [CrossRef] [PubMed]
  38. Higuchi, T.; Seya, Y.; Imanaka, K. Rule for Scaling Shoulder Rotation Angles while Walking through Apertures. PLoS ONE 2012, 7, e48123. [Google Scholar] [CrossRef]
  39. Creem-Regehr, S.H.; Gill, D.M.; Pointon, G.D.; Bodenheimer, B.; Stefanucci, J.K. Mind the Gap: Gap Affordance Judgments of Children, Teens, and Adults in an Immersive Virtual Environment. Front. Robot. AI 2019, 6, 96. [Google Scholar] [CrossRef] [PubMed]
  40. Aoki, J.; Hassan, M.; Suzuki, K. Transformation of spatial perception in gate passage by using a wearable interface with changeable vertical viewpoint. In Proceedings of the 8th International Conference on Virtual and Augmented Reality Simulations (ICVARS ’24), Melbourne, Australia, 14–16 March 2024; Association for Computing Machinery: New York, NY, USA, 2024; pp. 97–103. [Google Scholar] [CrossRef]
  41. Bourrelly, A.; McIntyre, J.; Morio, C.; Despretz, P.; Luyat, M. Perception of Affordance during Short-Term Exposure to Weightlessness in Parabolic Flight. PLoS ONE 2016, 11, e0153598. [Google Scholar] [CrossRef] [PubMed]
  42. Pfeil, K.P.; Wisniewski, P.J.; LaViola, J.J. An analysis of user perception regarding body-worn 360° camera placements and heights for telepresence. In Proceedings of the SAP 2019: ACM Conference on Applied Perception, Barcelona, Spain, 19–20 September 2019; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
  43. Longhini, J.; Marzaro, C.; Bargeri, S.; Palese, A.; Dell’Isola, A.; Turolla, A.; Pillastrini, P.; Battista, S.; Castellini, G.; Cook, C.; et al. Wearable Devices to Improve Physical Activity and Reduce Sedentary Behaviour: An Umbrella Review. Sport. Med. Open 2024, 10, 9. [Google Scholar] [CrossRef]
  44. Nuanmeesri, S. The Affordable Virtual Learning Technology of Sea Salt Farming across Multigenerational Users through Improving Fitts’ Law. Sustainability 2024, 16, 7864. [Google Scholar] [CrossRef]
  45. Eguchi, Y.; Kadone, H.; Suzuki, K. Standing Mobility Device With Passive Lower Limb Exoskeleton for Upright Locomotion. IEEE/ASME Trans. Mechatronics 2018, 23, 1608–1618. [Google Scholar] [CrossRef]
Figure 1. Overview of passing through the gap with viewpoint transformation.
Figure 2. System Overview.
Figure 3. Making a judgment call on passing through a gap.
Figure 4. Body image questionnaire schematics: (a) usual; (b) grounded; (c) head-near-waist; (d) isotropic shrinkage; (e) vertical shrinkage; (f) leg shortening.
Figure 5. End-to-end latency across resolution/frame-rate settings with all subsystems active; 800 × 600 @ 60 fps remained <150 ms in all samples. Box plots show the median (center line).
Figure 6. Judgment of passability across conditions (* p < 0.05, ** p < 0.01).
Figure 7. Judgment of passability in head–middle and waist–middle conditions (* p < 0.05).
Figure 8. Selected body image: (a) usual; (b) grounded; (c) head-near-waist; (d) isotropic shrinkage; (e) vertical shrinkage; (f) leg shortening.
Figure 9. Walking speed during pass-through attempts (* p < 0.05).
Table 1. Counts by passability judgment (rows) and body image (columns).

Passability Judgment    a    b    c    d    e    f
1                       0    0    0    0    0    0
2                       1    0    0    0    0    0
3                       0    0    0    0    0    0
4                       0    1    2    1    0    0
5                       0    0    0    0    0    0
6                       0    0    1    1    0    0
7                       0    1    1    0    0    0
