1. Introduction
Mobility is broadly defined as the ability to move oneself within community environments. It is integral to active aging and intimately linked to health status and quality of life [1,2]. Human movement conditions, such as stroke, osteoarthritis, and Parkinson’s disease, affect approximately one in four people globally by impairing joint function [3]. This can significantly impact physical and mental health by lowering independence and restricting employment and community involvement [4]. Older adults with mobility conditions are particularly susceptible to injuries, especially within the home environment, where most unintentional falls and accidents occur [5]. The ability to measure and model human movement impairment is thought to be critical in preventative care, such as exercise therapy and rehabilitation.
Traditional approaches to assessing mobility impairment require repeated visits to a clinic, which is time-consuming and costly. Furthermore, standard mobility assessments such as clinical gait analysis are typically performed over a short time span in a controlled clinical setting. These conditions bear little resemblance to the real-world environments that people interact with. The way a person moves in their own physical living space over time, interacting with objects such as doors, stairs, and furniture, is a vital determinant of their mobility and behavior [6]. The environment in which a person lives may facilitate active or sedentary behavior, present or mitigate risk of falls and injury, and ultimately affect a person’s psychological state and quality of life [7]. Measurement of motion in the home setting has the potential to capture movement variability between living spaces, as well as rare events that may not otherwise be observed in the laboratory setting [8]. Yet, 3D motion analysis of the whole body throughout the home setting is rarely performed, and the way in which people interact with the internal living environment remains poorly understood.
Human mobility has previously been quantified using markerless motion analysis cameras. However, occlusion of body movement, constraints on capture volume, and privacy concerns have limited their use beyond the laboratory setting [9,10]. In contrast, wireless, wearable devices can achieve unobtrusive joint motion analysis without restriction in capture volume. This includes optical fiber–based wearable sensors, which detect mechanical deformation and strain by measuring changes in transmitted or reflected light intensity or wavelength. These devices have shown high sensitivity to joint motion and muscle activity [11]. Smart photonic wearable sensors can also measure biomechanical signals, such as micro-strains, through resonance-wavelength shifts [12]. Among wearable technologies, inertial measurement units (IMUs) have been most widely used to directly quantify the relative angular motion of segments and joints [13]; however, none of these systems can delineate the absolute position of the body. Real-time indoor spatial tracking of human locomotion has been achieved using geo-location data from ultra-wideband (UWB) tracking systems [14,15], but this technology has yet to be employed for human mobility measurement in the home setting [16,17].
The aims of this exploratory pilot study were twofold: first, to develop a digital twin of a fully furnished apartment that integrates real-time joint motion and spatial location tracking of subjects into a 3D digital reconstruction of the entire internal living environment; and second, to use this model to classify and evaluate joint motion during a diverse range of activities of daily living performed by healthy adults throughout the home setting. It was hypothesized that both lower limb joint motion during walking and upper limb joint motion during activities of daily living would vary significantly by room throughout the home setting, even for the same task. The findings of this study will be useful for mobility monitoring, for planning home-based rehabilitation protocols, and for developing fall prevention strategies, for example, in aged care facilities.
4. Discussion
The objective of this study was to develop a digital twin model for the measurement, modeling, and visualization of human mobility in the home setting, and to use this to classify tasks and measure joint motion during activities of daily living that were performed over continuous data collection periods. In support of the study hypotheses, upper and lower limb joint motion was dependent on the home’s internal environment and how participants chose to interact with it, and this varied markedly between rooms. For example, significant differences were observed in gait speed and lower limb joint kinematics while walking in different rooms, as well as in upper limb joint angles for the same reaching tasks performed in different rooms.
In the present study, walking speed in the home setting for healthy adults averaged 0.9 m/s, while overground walking speed in the laboratory setting has been recorded between 1.3 m/s and 1.4 m/s [24,25,26]. This lower gait speed is likely due to the limited straight, unobstructed walking spaces and the presence of furniture (obstacles), which require visual awareness and may increase demands on attentional resources and cognition. This finding aligns with that of Roth et al., who found that gait in the home setting is generally slower than that in controlled outdoor or laboratory environments [27]. We also observed that walking speed and lower limb joint motion varied within the home setting, with the lowest walking speeds in the bathroom (0.79 ± 0.13 m/s) and the highest in the living room (1.00 ± 0.20 m/s). Correspondingly, maximum ankle plantarflexion in the bathroom was 5.9° lower than that in the living room. These findings, which highlight marked variability of joint motion within the home setting, underscore the value of continuous remote monitoring for evaluating mobility over extended periods and may ultimately be useful in the early detection of sedentary behavior or in assessing fall risk. The observed differences in joint motion between rooms suggest that interior design elements, including doorway width, furniture spacing, and fixture height, can influence movement strategies. For instance, confined spaces such as bathrooms induce more cautious gait patterns in older adults, characterized by slower walking speeds and reduced ankle range of motion [28], a strategy that may ultimately reduce the risk of falling [28,29]. Future studies using our framework ought to explore the influence of obstacles, walking surfaces, and dual-task conditions on walking patterns in the home setting.
The present study showed that habitual upper limb movements in the home setting produced a wide range of shoulder elevation, from 37.8° (food chopping in the kitchen) to 124.2° (hair combing in the bathroom). In contrast, the humeral plane of elevation was generally invariant to the task, and the humerus remained close to the sagittal plane for most activities. The sagittal plane may be mechanically advantageous in upper limb elevation due to the recruitment of prime movers such as the anterior deltoid and the clavicular sub-region of the pectoralis major in forward elevation. These findings indicate the clinical relevance of flexion in executing forward lifting and pushing tasks, suggesting the sagittal plane as a key target movement for shoulder muscle strengthening in rehabilitation and exercise-based therapy.
Different activities of daily living that shared similar movement features or poses resulted in higher classification errors. For instance, sagittal plane joint angles were not significantly different between lying down while using a phone and lying down alone, resulting in a 21.3% mislabeling rate between these activities. This mislabeling occurred because the machine learning model was unable to distinguish key features of phone use, which depended heavily on small hand and wrist movements. This finding is similar to that of Leutheuser et al., who reported confusion of up to 46% between indoor ergometer cycling at two resistance levels using a hierarchical machine learning classifier, owing to indistinguishable joint motion patterns [30]. The use of dimensionality reduction techniques such as Uniform Manifold Approximation and Projection (UMAP) prior to model training could further enhance classification accuracy. These techniques reduce the feature space to the most relevant dimensions and improve the performance of KNN classifiers, which are more effective with lower-dimensional features [31]. Nonetheless, our 10-fold subject-independent cross-validation accuracy of 82.3% across 19 activities of daily living was comparable to the 85.8% classification accuracy achieved by Leutheuser et al. for thirteen repetitive tasks [30]. Our classification accuracy was also robust to variants of tasks; for example, the ‘reaching’ class included opening a small, floor-mounted bar fridge, picking up a book from a high shelf, and grasping a cup from a table. The choice of KNN over SVM was based on its marginally higher classification accuracy. Future studies ought to explore multi-modal data analytics and deep neural network approaches that consolidate data from different sources for more robust feature detection and task classification. This might include the use of heart rate and blood oxygen saturation monitors, skin conductance measurements, and eye tracking.
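By way of illustration, the following minimal sketch shows how UMAP-based dimensionality reduction could precede a KNN classifier under subject-independent cross-validation. This is not the study's pipeline: it assumes the scikit-learn and umap-learn libraries, and the feature matrix, activity labels, and subject assignments are synthetic placeholders rather than the IMU-derived features used here.

```python
# Minimal sketch: UMAP dimensionality reduction before KNN activity
# classification, evaluated with subject-independent cross-validation.
# Data below are synthetic placeholders, not the study's dataset.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GroupKFold, cross_val_score
import umap

rng = np.random.default_rng(0)
X = rng.normal(size=(1900, 60))           # placeholder joint-motion features per window
y = rng.integers(0, 19, size=1900)        # 19 activity-of-daily-living classes
subjects = np.repeat(np.arange(10), 190)  # 10 participants, one group each

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("umap", umap.UMAP(n_components=10, n_neighbors=15, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])

# GroupKFold keeps each subject's windows out of the training folds,
# mirroring the 10-fold subject-independent validation described above.
scores = cross_val_score(pipeline, X, y,
                         cv=GroupKFold(n_splits=10), groups=subjects)
print(f"Subject-independent accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Grouping folds by subject, rather than by random windows, prevents a participant's idiosyncratic movement style from appearing in both training and validation data, which would otherwise inflate the reported accuracy.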
The digital twin presented in this study is an extension of recent research exploring smart-home digital twins for human biomechanics. A previous modeling framework was used to create a virtual home environment containing furniture that individuals in the home could interact with [32]. Other studies have introduced digital twins of human body segments for continuous measurement of joint kinematics and kinetics using wearable sensors [33,34]. Our digital twin performed reliably in both task recognition and joint motion measurement throughout the home environment. The mean classification accuracy of our framework (82.3%) exceeded that of a recently proposed framework that reconstructed a virtual apartment and classified human activities using synthetic ambient-sensor data, such as binary motion, contact, and light sensors (81.6%). Moreover, we observed no significant differences between the predicted and measured maximum shoulder, hip, knee, and ankle flexion angles across diverse daily tasks. This result is comparable to the findings of Zhou et al., who developed an IMU-driven digital twin for human motion capture and reported errors below 6° in predicted knee flexion angles during static poses and walking [34].
A limitation of this study arose from the process of supervised learning, in which a finite number of activity classes was predefined prior to model training [35]. This presents a challenge for future applications, where individuals may engage in diverse activities, undertake new tasks, and adapt or evolve movement patterns in response to changes in their living environment. Future research ought to explore open-set recognition frameworks, in which a model can detect activities that were not part of its training [36], as well as continual learning approaches that incorporate new activity classes over time; a simple example of the former is sketched below. The dataset employed in the present study was based on joint motion data obtained from 12 IMUs, a quantity impractical for daily-life applications. Reducing the number of sensors required for activity recognition ought to be a focus of future research [37]. Finally, this was an exploratory pilot study of ten healthy participants, and the findings may not represent the broader population, including older adults or individuals with neuromuscular conditions. Nonetheless, the dataset comprised continuous and diverse activities of daily living performed over extended periods in a realistic setting, providing a sufficient basis for model-based activity recognition and mobility assessment.
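As a concrete illustration of open-set recognition, the following sketch extends a KNN classifier to flag observations that lie far from all training data as unknown activities. This is not the method used in this study: the data, feature dimensions, and distance-based rejection rule are hypothetical, and more principled open-set approaches exist [36].

```python
# Illustrative sketch (not the study's method): a simple open-set extension
# of a KNN activity classifier that rejects windows lying far from all
# training data. Features, labels, and class count here are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 10))      # placeholder joint-motion features
y_train = rng.integers(0, 19, size=500)   # 19 known activity classes

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# Calibrate a rejection threshold from the training data: the 95th percentile
# of each sample's mean distance to its 5 nearest neighbors (self excluded).
d_train, _ = knn.kneighbors(X_train, n_neighbors=6)
threshold = np.percentile(d_train[:, 1:].mean(axis=1), 95)

def classify_open_set(X_new):
    """Predict activity labels; -1 flags a likely unseen (novel) activity."""
    d_new, _ = knn.kneighbors(X_new)
    labels = knn.predict(X_new)
    labels[d_new.mean(axis=1) > threshold] = -1  # too far from training data
    return labels

# Samples far from the training distribution are flagged rather than being
# forced into one of the predefined activity classes.
print(classify_open_set(rng.normal(loc=8.0, size=(3, 10))))
```

In practice, the rejection threshold would be calibrated on held-out validation data, and flagged windows could be queued for labeling so that a continual learning procedure can incorporate them as new activity classes over time.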