Article

AI-Driven Arm Movement Estimation for Sustainable Wearable Systems in Industry 4.0

by Emanuel Muntean 1, Monica Leba 2,* and Andreea Cristina Ionica 3
1 Doctoral School, University of Petroșani, 332006 Petrosani, Romania
2 System Control and Computer Engineering Department, University of Petroșani, 332006 Petrosani, Romania
3 Management and Industrial Engineering Department, University of Petroșani, 332006 Petrosani, Romania
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(14), 6372; https://doi.org/10.3390/su17146372
Submission received: 24 May 2025 / Revised: 28 June 2025 / Accepted: 10 July 2025 / Published: 11 July 2025
(This article belongs to the Special Issue Sustainable Engineering Trends and Challenges Toward Industry 4.0)

Abstract

In an era defined by rapid technological advancements, the intersection of artificial intelligence and industrial innovation has garnered significant attention from both academic and industry stakeholders. The emergence of Industry 4.0, characterized by the integration of cyber–physical systems, the Internet of Things, and smart manufacturing, demands the evolution of operational methodologies to ensure processes’ sustainability. One area of focus is the development of wearable systems that utilize artificial intelligence for the estimation of arm movements, which can enhance the ergonomics and efficiency of labor-intensive tasks. This study proposes a Random Forest-based regression model to estimate upper arm kinematics using only shoulder orientation data, reducing the need for multiple sensors and thereby lowering hardware complexity and energy demands. The model was trained on biomechanical data collected via a minimal three-IMU wearable configuration and demonstrated high predictive performance across all motion axes, achieving R2 > 0.99 and low RMSE scores on training (1.14, 0.71, and 0.73), test (3.37, 1.97, and 2.04), and unseen datasets (2.77, 0.78, and 0.63). Statistical analysis confirmed strong biomechanical coupling between shoulder and upper arm motion, justifying the feasibility of a simplified sensor approach. The findings highlight the relevance of our method for sustainable wearable technology design and its potential applications in rehabilitation robotics, industrial exoskeletons, and human–robot collaboration systems.

1. Introduction

The emergence of smart wearables and human-centric automation is reshaping the industrial landscape under the transformative paradigm of Industry 4.0. As manufacturing systems become increasingly interconnected and intelligent, there is a growing emphasis on integrating the human worker into these digital ecosystems through technologies that augment physical performance, monitor biomechanical behavior, and enhance safety. Wearable systems, particularly those that track human motion in real time, play a central role in facilitating seamless human–robot collaboration, adaptive interfaces, and ergonomic workplace optimization. In this context, motion estimation technologies have become critical, offering insight into operator posture and behavior, which is essential for automation that is both intelligent and intuitively responsive to human activity.
However, achieving accurate and real-time motion tracking often requires complex sensor configurations involving multiple inertial measurement units (IMUs) placed along various limb segments. While such setups can yield precise kinematic data, they introduce challenges in terms of hardware redundancy, energy consumption, user comfort, and system maintenance. This trade-off between sensor fidelity and system simplicity creates a challenge, especially in scenarios requiring long-term deployment, lightweight construction, and minimal energy footprints—characteristics increasingly aligned with sustainable engineering principles.
From a sustainability perspective, the need to reduce electronic waste, optimize battery life, and simplify wearable system architecture has become an urgent priority. The development of sensor-efficient models that rely on fewer inputs without compromising prediction accuracy directly contributes to reducing environmental impact. Such approaches also support ergonomic design by decreasing the number of components worn by the user, thereby improving comfort and usability. Moreover, these methods hold significant promise for applications in occupational health, rehabilitation robotics, and assistive exoskeletons, where minimal hardware burden is important to user acceptance and operational feasibility.
In this context, the present study aims to develop and validate an AI-based approach for estimating upper arm orientation using only shoulder kinematic data. Specifically, a Random Forest Regression model is employed to learn the nonlinear mapping between shoulder and arm motion across three degrees of freedom. By demonstrating high prediction accuracy with a reduced sensor setup, this work contributes to the development of lightweight, sustainable, and intelligent wearable systems for Industry 4.0 environments. The proposed solution offers a practical pathway toward more eco-efficient, user-centric, and scalable human–machine interaction technologies.
After the Introduction section, the remainder of the paper is organized as follows: Section 2 reviews the current literature on AI-driven motion estimation and sustainable wearable systems in the context of Industry 4.0. Section 3 describes the materials and methods, detailing the experimental setup, data collection process, and model development. Section 4 presents the results of the model performance evaluation, including training, testing, and validation. Section 5 discusses the implications, advantages, limitations, and future directions of the proposed approach. Section 6 concludes the paper by summarizing the key findings and their significance for sustainable human–machine collaboration.

2. Literature Review

The exploration of AI-driven arm movement estimation within the context of sustainable wearable systems highlights significant advancements in the integration of artificial intelligence and industrial applications. A central theme is the enhancement of efficiency and accuracy in motion tracking, which several studies underscore as essential for optimizing operational workflows in Industry 4.0.
The investigation into AI-driven arm movement estimation within the scope of sustainable wearable systems in Industry 4.0 has yielded insights with significant implications for both academia and industry. The integration of artificial intelligence into arm movement estimation has been shown to enhance operational efficiency, optimize productivity, and point toward a much-needed evolution in labor ergonomics. Findings from various studies [1,2] reinforce the notion that advances in machine learning algorithms markedly improve the precision of motion tracking. Such improvements benefit individual performance and can also lead to systemic enhancements in workforce safety and satisfaction [3], marking an advancement in industrial operations. The synergy between AI and sustainable practices in Industry 4.0 is a promising avenue for future exploration. As technology continues to evolve, the heightened focus on sustainability within the workforce landscape emphasizes the need for innovations that align with ecological and economic imperatives [4]. Moreover, ref. [5] accentuates the necessity of integrating user-centric design into these wearable systems, thereby fostering greater acceptance and adherence among workers, which is essential for successful deployment. Nonetheless, it is important to acknowledge the limitations of the existing body of literature. A significant gap remains concerning comprehensive methodologies that consider the broader environmental impacts and resource implications of implementing AI technologies in industrial contexts [6]. The lack of extensive research addressing scalability issues and workforce dynamics further underscores an urgent need for deeper understanding [7]. This is particularly relevant given the diverse organizational structures and cultural settings found within the industry, which can significantly affect the effectiveness of these innovations [8].
Future research should prioritize addressing these gaps by developing more holistic frameworks that encompass sustainability metrics in AI models and investigating the socio-economic dimensions of adopting such technologies [9]. Moreover, interdisciplinary collaboration among AI researchers, ergonomists, and industrial engineers is vital to crafting adaptive solutions that are technologically advanced and socially responsible [10]. For instance, establishing standardized practices for the implementation of energy-efficient algorithms may provide pathways for scaling up these technologies in a multitude of industrial settings, as suggested in [11].
The literature consistently emphasizes this multidimensional approach, showcasing a clear trajectory towards the development of interconnected, sustainable solutions that are integral to the future of industrial practices [12,13,14]. As this field continues to evolve, ongoing research underscores the potential to revolutionize manufacturing processes through the application of AI in wearable technologies [15,16]. As the array of potential applications unfolds, this holistic understanding will ensure that advancements in technology foster positive change across labor-intensive sectors while promoting the well-being of workers and the environment alike [17,18,19].
Despite significant advancements in AI-driven arm movement estimation and wearable technologies [20], several research gaps persist that hinder their sustainable deployment within Industry 4.0. First, while many studies emphasize sensor fusion to improve accuracy, there is limited focus on minimizing sensor count to enhance sustainability and ergonomics—an aspect critical for long-term, real-world applications. Second, existing models often prioritize predictive accuracy without adequately considering energy efficiency or computational cost, resulting in solutions that may be powerful but not ecologically or operationally sustainable. Third, although deep learning methods show high accuracy, their complexity limits real-time implementation, especially in resource-constrained environments where lightweight, interpretable models are preferable. Additionally, generalization across diverse and dynamic movement patterns remains underexplored, with most models tested on narrow, predefined motions. Lastly, while user-centered design is gaining attention, its integration with environmental sustainability principles is still in its early stages. Our research addresses these gaps by introducing a resource-efficient, AI-based solution that accurately estimates arm kinematics from shoulder movements, offering a sustainable alternative to multi-sensor approaches and enabling broader applicability in real-time, human–machine interaction scenarios.
These technical gaps are further compounded by broader organizational challenges associated with implementing Industry 4.0 technologies. As Nardo et al. [21] note, integrating AI, IoT, and big data into industrial frameworks often faces barriers such as workforce upskilling and aligning innovations with existing processes. Moreover, the transition toward Industry 5.0 introduces new demands for sustainability and human-centric design, which require maintenance and monitoring systems to meet both operational and environmental objectives [22]. As Davim [23] emphasizes, achieving sustainable manufacturing entails balancing economic performance with ecological responsibility, underscoring the need for intelligent, adaptable, and resource-efficient solutions. These insights reinforce the relevance of AI-driven wearable technologies that address motion estimation challenges and also support sustainable deployment within evolving industrial ecosystems.
Our research is at the intersection of Industry 4.0 and sustainable engineering, contributing to the development of intelligent, resource-efficient wearable systems that support human–machine collaboration and the transition toward Industry 5.0. By leveraging AI to estimate upper arm movements based solely on shoulder kinematics, our approach reduces reliance on redundant sensors, thereby minimizing energy consumption, hardware complexity, and electronic waste—key objectives in sustainable product design. This model-driven estimation method enhances the ergonomic integration of wearable technologies in industrial settings and aligns with the principles of green digital transformation [24] by optimizing data acquisition and processing needs. Practical applications of our solution include exoskeletons for industrial workers, where fewer sensors mean lighter and more comfortable systems, rehabilitation robotics for physical therapy with reduced hardware constraints, and smart garments or motion monitoring tools used in occupational safety and preventive health. Our work supports the vision of Industry 4.0 by enabling adaptive, human-centric, and sustainable technologies that enhance productivity without compromising environmental goals.

3. Materials and Methods

Figure 1 illustrates the adopted workflow employed in the present research.
The study employed an experimental research design focused on developing an AI-based model to estimate upper arm orientation using shoulder kinematics data. Data were collected from five healthy, right-handed male participants aged 25–35, using a custom wearable system integrating three inertial measurement units (IMUs) positioned on the upper back (trunk), shoulder, and upper arm. Participants performed standardized repetitive arm movements, including flexion–extension, anterior–posterior swing, and circular motion, while orientation angles along the X, Y, and Z axes were recorded. The trunk IMU served as the global reference frame, and only shoulder and upper arm data were retained for analysis. Data preprocessing involved discarding static initial segments and selecting relevant variables, followed by statistical assessment using Pearson correlation to validate the relationship between input (shoulder) and output (arm) variables. Data distributions were evaluated using histograms, boxplots, and the Kolmogorov–Smirnov test, confirming non-normality and justifying the use of non-parametric models. A Random Forest Regression model was selected for its robustness to nonlinearities and was optimized via grid search across key hyperparameters. Model performance was validated using Root Mean Square Error (RMSE) and R2 metrics across training, test, and unseen datasets, confirming high predictive accuracy and generalizability.

3.1. Experimental Setup

The objective of the experiment was to collect orientation angle data from the shoulder and upper arm during specific movements of the right arm, in order to investigate the correlation between these kinematic variables and to support the development of a predictive model.
Five healthy right-handed male volunteers between the ages of 25 and 35 years participated in the study. None of the participants had a history of neuromuscular disorders affecting the upper limbs. Prior to participation, each subject provided informed consent in accordance with ethical research practices.
To acquire the orientation data, a custom-designed wearable system was developed. This system integrated three inertial measurement units (IMUs) positioned to capture the relevant motion: one sensor was placed on the spine at the center of the upper back, providing trunk orientation data; the second was positioned on the shoulder, at the midpoint between the base of the neck and the glenohumeral joint; and the third was mounted on the upper arm, halfway between the shoulder joint and the elbow (see Figure 2 for schematic).
To ensure consistent and interpretable orientation measurements, the IMU placed on the upper back (aligned with the spinal column) served as the global reference frame. All orientation data from the shoulder and upper arm were expressed relative to this trunk-mounted sensor. This configuration compensates for minor posture shifts and enables consistent analysis across different participants and movement types. The coordinate system for each IMU followed standard anatomical conventions: the X-axis represents flexion–extension in the sagittal plane, the Y-axis corresponds to abduction–adduction in the frontal plane, and the Z-axis denotes internal–external rotation in the transverse plane. By aligning the sensor axes to anatomical planes and using the trunk as a reference, the system provides a stable and reproducible basis for motion tracking and model training.
During data acquisition, participants were instructed to assume a standardized initial posture with the right arm fully extended in the horizontal plane. From this initial position, three types of repetitive arm movements were performed, each targeting different axes of rotation:
  • Vertical flexion–extension (X-axis rotation)—upward and downward arm motion in the sagittal plane (Figure 3).
  • Anterior–posterior swing (Z-axis rotation)—forward and backward movement in the horizontal plane.
  • Circular motion (combined X-Z axis rotation)—describing a continuous circular trajectory.
Data were recorded continuously for each motion type using the wearable IMU system. Initial preprocessing involved discarding the first 3 s of each recording session, corresponding to the static initial posture and not contributing to dynamic motion analysis. Each recording session yielded a comma-separated value (CSV) file containing nine columns. These represent the orientation angles (in degrees) along the X, Y, and Z axes for each of the three anatomical segments: trunk, shoulder, and arm, respectively. The orientation of the trunk served as a reference frame, and was used primarily for calibration and normalization. Consequently, only the last six columns (shoulder and arm orientation data) were retained for training the predictive model in the subsequent phases of analysis.
This experimental configuration ensured consistency across participants, minimized sensor noise due to mounting variations, and provided a high-fidelity dataset suitable for investigating the dynamic relationship between shoulder and arm kinematics.
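The preprocessing step described above can be sketched in Python. The column order and the sampling rate are illustrative assumptions; the paper specifies only that each recording is a nine-column CSV (trunk, shoulder, and arm X/Y/Z angles) and that the first 3 s of static posture are discarded.

```python
import numpy as np

def preprocess(path, sample_rate_hz=100):
    """Load one IMU recording, drop the static first 3 s, and keep only
    the shoulder and arm columns (trunk columns serve as reference only).

    Column layout (assumed): Trunk_XYZ, Shoulder_XYZ, Arm_XYZ.
    The sampling rate is an assumption, not stated in the paper.
    """
    data = np.genfromtxt(path, delimiter=",")   # 9 columns of angles (degrees)
    data = data[3 * sample_rate_hz:]            # discard static initial 3 s
    return data[:, 3:]                          # retain last 6 columns
```

A recording of N samples thus yields an (N − 3·rate) × 6 array of shoulder inputs and arm outputs for model training.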

3.2. Data Analysis

The data analysis process was designed to ensure the relevance, suitability, and interpretability of the predictive model by validating the relationship between input and output variables and selecting the most appropriate regression technique.

3.2.1. Pearson Correlation Analysis

To statistically justify the relationship between the orientation angles of the shoulder (input) and those of the upper arm (output), Pearson correlation coefficients were computed. This analysis quantifies the degree of linear dependence between each input–output variable pair and supports the hypothesis that shoulder and arm movements are synchronously coupled.
The Pearson correlation coefficient r between two variables X and Y is defined as Equation (1):
r = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^2} \cdot \sqrt{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}},
The strong positive correlations (e.g., r > 0.7) observed—particularly between Shoulder_X and Biceps_X—confirm that as the arm moves, the shoulder exhibits proportional, synchronized movement. This supports the use of a direct mapping model between the input and output orientation data.
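The coefficient in Equation (1) can be computed directly; a minimal NumPy sketch:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient r between two angle series,
    computed exactly as in Equation (1)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()          # center both series
    return float((xc * yc).sum()
                 / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))
```

Applied to each shoulder/arm axis pair, values of r above 0.7 indicate the strong linear coupling that justifies the direct mapping model.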

3.2.2. Data Distribution Assessment

Before model selection, we analyzed the distributions of the input and output variables to assess normality. This assessment determines whether parametric models (e.g., linear regression) are appropriate or whether non-parametric models should be used.
Two approaches were employed:
  • Visual inspection using histograms and boxplots.
  • Statistical testing using the Kolmogorov–Smirnov (K–S) test. Given that the dataset includes over 60,000 samples, the K–S test is appropriate for assessing normality: it evaluates the null hypothesis that a sample comes from a normal distribution by comparing the empirical cumulative distribution function (ECDF) of the sample with the cumulative distribution function (CDF) of the normal distribution.
A p-value below 0.05 indicates that the null hypothesis of normality is rejected. Based on the test results, all the input and output variables exhibited non-normal distributions, supporting the use of more flexible regression models.
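This check can be sketched with SciPy. Standardizing the sample with its own mean and standard deviation before comparing against the standard normal CDF is an assumption for illustration; the paper does not state how the reference normal was parameterized.

```python
import numpy as np
from scipy import stats

def ks_normality(sample, alpha=0.05):
    """K-S test of a sample against a normal distribution fitted to it.
    Returns (statistic, p_value, is_normal); p < alpha rejects normality."""
    sample = np.asarray(sample, float)
    z = (sample - sample.mean()) / sample.std(ddof=1)  # standardize
    stat, p = stats.kstest(z, "norm")                  # compare ECDF vs. CDF
    return stat, p, p >= alpha
```

For a bimodal angle distribution such as Shoulder_X (dense around 0° and 40–50°), this test rejects normality decisively, motivating a non-parametric regressor.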

3.2.3. Predictive Model Choice

Given the experimental setup and the proportional, immediate relationship observed between shoulder and arm movement, the problem was modeled as a simple multivariate regression task. As the data samples were independent and temporally unlinked, time-series models such as Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks were not required.
Instead, given the non-normal variable distributions, we selected Random Forest Regression, a robust, non-parametric method capable of modeling nonlinear relationships without requiring distributional assumptions, whereas Multiple Linear Regression relies on linearity and normally distributed residuals.

3.2.4. Random Forest Parameter Tuning

When employing Random Forests, several hyperparameters were tuned to optimize performance:
  • NumTrees (number of decision trees in the ensemble);
  • MinLeafSize (minimum number of observations per tree leaf);
  • MaxNumSplits (maximum number of splits allowed in a tree);
  • PredictorSelection (split criterion, e.g., ‘curvature’ or ‘interaction-curvature’).
A grid search approach was used to identify the optimal combination of hyperparameters by minimizing validation error RMSE and maximizing R2. Cross-validation was applied to ensure generalization and avoid overfitting.
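The hyperparameter names above suggest a MATLAB implementation; a rough scikit-learn analogue of the grid search, run here on synthetic stand-in data (the data, grid values, and parameter mapping are illustrative assumptions), might look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for shoulder (input) and one biceps axis (output);
# the real training data come from the IMU recordings described above.
rng = np.random.default_rng(42)
X = rng.uniform(-60.0, 60.0, size=(300, 3))            # Shoulder_X/Y/Z angles
y = 0.97 * X[:, 0] + 5.0 * np.sin(np.radians(X[:, 2]))  # coupled arm angle

param_grid = {
    "n_estimators": [50, 200],      # ~ NumTrees
    "min_samples_leaf": [1, 5],     # ~ MinLeafSize
    "max_depth": [None, 10],        # rough analogue of MaxNumSplits
}
search = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid,
    scoring="neg_root_mean_squared_error",  # minimize RMSE
    cv=5,                                   # cross-validation folds
)
search.fit(X, y)
best_model = search.best_estimator_
```

Note that scikit-learn exposes no direct equivalent of the 'curvature' predictor-selection criterion, so `max_depth` stands in as a complexity control here.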

3.2.5. Model Validation Metrics

To evaluate the trained model’s accuracy, two standard regression performance metrics were used:
Root Mean Square Error is shown in Equation (2):
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2},
Coefficient of Determination is shown in Equation (3):
R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2},
These metrics quantify the model’s predictive accuracy and its ability to explain the variability of the target data. An RMSE close to 0 and an R2 value close to 1 indicate a well-performing regression model.
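Both metrics can be implemented in a few lines of NumPy, mirroring Equations (2) and (3):

```python
import numpy as np

def rmse(y, y_hat):
    """Root Mean Square Error, Equation (2)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def r2(y, y_hat):
    """Coefficient of determination, Equation (3)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
    return float(1.0 - ss_res / ss_tot)
```

A perfect predictor yields rmse = 0 and r2 = 1; a predictor no better than the mean of the targets yields r2 = 0.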

4. Results

4.1. Dataset

4.1.1. Data Distributions

To better understand the statistical properties of the dataset and inform the choice of predictive model, the distribution of each feature was analyzed using histograms. Figure 4 presents the histograms for the orientation angles along the X, Y, and Z axes for both the shoulder and the upper arm (biceps).
Although the experimental setup primarily focused on arm movements around the X and Z axes—namely vertical flexion–extension, anterior–posterior swing, and circular motion—orientation changes along the Y axis at both the shoulder and biceps levels were still included in the prediction model. This inclusion is justified by the presence of secondary or compensatory Y-axis rotations that naturally occur during complex upper limb movements, even when such motions are not directly targeted. In biomechanical terms, movements such as arm elevation or circular swings often involve subtle internal or external shoulder rotation, which manifests as Y-axis orientation changes. These secondary movements, while not dominant, provide valuable contextual information about joint dynamics and spatial posture, helping the model to better interpret the overall kinematic pattern. By incorporating the Y-axis data, the model is able to capture a more holistic and accurate representation of joint behavior, thereby improving the predictive accuracy for the primary movement components. This approach is consistent with biomechanical modeling principles, where multi-axis dependencies are critical for representing natural human motion.
Several key patterns are observed. The Shoulder_X and Biceps_X distributions display a highly non-normal, bimodal structure, with dense concentrations around 0° and 40–50°, suggesting discrete or oscillatory motion patterns during arm elevation. Both Shoulder_Y and Biceps_Y are left-skewed, indicating a prevalence of negative angle values, reflecting dominant movement in one direction (e.g., arm adduction or retraction).
In contrast, Shoulder_Z and Biceps_Z show broader and more uniform distributions, with multiple local modes and wider variance, indicative of more continuous rotational movement in the transverse plane. These findings support the necessity of applying non-parametric regression models, such as Random Forests, due to the evident non-normality and multimodal nature of several variables. These visual insights were confirmed statistically using the Kolmogorov–Smirnov test.
To further evaluate the statistical characteristics of the dataset and identify potential outliers, boxplots were generated for each of the six orientation features corresponding to the shoulder and biceps (Figure 5).
The Shoulder_X and Biceps_X variables show relatively symmetrical interquartile ranges centered around zero, but with extended whiskers and multiple extreme values, confirming the bimodal and wide-spread structure observed in the histograms. These features are indicative of complex motion dynamics during flexion-extension movements.
In contrast, the Shoulder_Y and Biceps_Y boxplots are notably left-skewed, with medians shifted toward the lower end of the distribution and a concentration of values below zero. This supports the earlier observation of dominant adduction-like motion and suggests directional bias in horizontal shoulder and arm rotation.
For the Shoulder_Z and Biceps_Z variables, the boxplots reflect a broad interquartile range with the presence of long whiskers and no significant outliers. This dispersion is consistent with circular or multidirectional movements captured on the Z-axis and confirms a wide range of arm orientation angles in the transverse plane.
The boxplot analysis corroborates the non-normal and often asymmetric distribution of the input and output features. These characteristics, combined with the absence of significant noise or anomalous outliers, support the selection of robust regression models such as Random Forests, which do not rely on strict normality assumptions and are capable of capturing complex nonlinear patterns in biomechanical datasets.

4.1.2. Correlation Analysis

To investigate the linear dependencies between input (shoulder orientation) and output (biceps orientation) variables, Pearson correlation coefficients were calculated and visualized in the form of a heatmap (Figure 6). The analysis reveals several significant relationships that justify the use of direct input–output regression modeling.
The strongest positive correlation is observed between Shoulder_X and Biceps_X (r = 0.97), indicating a nearly linear and synchronous relationship along the X-axis, likely corresponding to vertical arm movements (e.g., flexion and extension). Similarly, Shoulder_Y and Biceps_Y show a strong positive correlation (r = 0.71), reinforcing the pattern of coordinated rotation in the frontal plane.
A high positive correlation is also evident between Shoulder_Z and Biceps_Z (r = 0.91), reflecting the strong kinematic coupling during axial rotations or circular movements. In contrast, the negative correlations, such as between Shoulder_Y and Biceps_Z (r = −0.55) and between Shoulder_Z and Biceps_Y (r = −0.34), suggest compensatory or opposing movements along orthogonal axes during complex gestures.
The correlation matrix supports the hypothesis that the shoulder joint significantly influences arm movement across all three axes. These findings validate the structure of the predictive model, where shoulder orientation can be reliably used as input for estimating arm pose in real-time applications.

4.2. Random Forest Model Optimization

To optimize the Random Forest regressor for predicting the three biceps orientation angles (X, Y, Z), a grid search was conducted using 80% of the dataset for training. The model was evaluated across a range of two key hyperparameters: the number of trees in the ensemble (from 50 to 500) and the random state (seeds used to control reproducibility) set to values {0, 1, 21, 42, 77}, while keeping MinLeafSize = 1 (for maximum flexibility), MaxNumSplits = 50 (for medium complexity), and PredictorSelection = ‘curvature’ (for nonlinear relationships).
Figure 7 illustrates the RMSE surface plots for each output component as a function of the number of trees and random seed value. The surfaces demonstrate consistent behavior across all three components: as the number of trees increases, the RMSE gradually decreases, reaching a plateau around 400–500 trees, beyond which the improvement is marginal, reflecting the diminishing returns of added ensemble complexity.
  • For Biceps_X, the RMSE decreased from ~1.19 to 1.14 as the number of trees increased, with minimal sensitivity to the random state beyond 100 trees.
  • For Biceps_Y, the RMSE improved from ~0.75 to 0.71, demonstrating slightly higher sensitivity to both parameters, though stabilization occurs beyond 300 trees.
  • For Biceps_Z, the RMSE declined from ~0.78 to 0.735, again showing convergence and robustness for tree counts above 300.
Despite some variation due to the random initialization, the influence of the RandomState parameter was minimal once the ensemble reached a sufficient size, confirming the model’s stability and generalizability. This robustness is further supported by the very high coefficient of determination (R2 > 0.99) recorded across all configurations, indicating a near-perfect fit between the predicted and actual biceps orientation angles.
The tuning process validated the Random Forest regressor as a reliable and high-performing model for this application. A final configuration using 500 trees and RandomState = 42 was selected for further testing due to its consistently low RMSE and reproducibility.
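A sketch of training and evaluating this final configuration (500 trees, random state 42) in scikit-learn, using synthetic stand-in data; the coupling matrix and noise level below are illustrative assumptions, not the study's measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 6-column shoulder/biceps dataset: a linear
# coupling (loosely echoing the reported correlations) plus sensor noise.
rng = np.random.default_rng(0)
shoulder = rng.uniform(-60.0, 60.0, size=(2000, 3))
coupling = np.array([[0.97, 0.10, 0.00],
                     [0.00, 0.71, -0.30],
                     [0.00, -0.20, 0.91]])
biceps = shoulder @ coupling + rng.normal(0.0, 1.0, (2000, 3))

# 80/20 split, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    shoulder, biceps, test_size=0.2, random_state=42)

# Final configuration: 500 trees, fixed seed for reproducibility.
model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

pred = model.predict(X_test)
test_rmse = float(np.sqrt(mean_squared_error(y_test, pred)))
test_r2 = float(r2_score(y_test, pred))
```

`RandomForestRegressor` handles the three output axes jointly, so one fitted model predicts Biceps_X, Biceps_Y, and Biceps_Z together.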

4.2.1. Random Forest Model Evaluation on Test Data

After hyperparameter tuning on the 80% training subset, the Random Forest model’s generalization ability was assessed using the remaining 20% of the dataset. The model was evaluated under the same range of parameters: number of trees varying from 50 to 500, and five RandomState values (0, 1, 21, 42, 77) to ensure robustness across different initialization seeds.
Figure 8 displays the resulting RMSE surfaces for each predicted output component (Biceps_X, Biceps_Y, and Biceps_Z) on the test set.
  • For Biceps_X, RMSE values ranged between 3.28 and 3.37. The error surface shows a shallow valley, with the lowest RMSE obtained at around 200–300 trees and minimal variance across seeds, suggesting a stable model response. Performance appears to plateau beyond 400 trees.
  • For Biceps_Y, RMSE values varied between 1.93 and 1.97, with slightly more sensitivity to random state at lower tree counts. However, the lowest errors were again observed with larger ensembles (≥300 trees), confirming the benefit of deeper forests for this axis of motion.
  • For Biceps_Z, the model achieved RMSE scores between 2.005 and 2.04, and like the other components, showed consistent improvement with increased model complexity. The surface plot indicates an optimal region near 500 trees with random state values between 21 and 42.
Across all output components, the error surfaces confirm that increasing the number of trees improves prediction quality, with diminishing gains beyond 300–400 trees. The model shows excellent generalization capacity, and no overfitting symptoms were observed, as the RMSE values on the test data remained close to those from the training data.
Furthermore, all test configurations yielded coefficients of determination R2 greater than 0.97, confirming that the Random Forest model captures the underlying nonlinear mapping between shoulder and arm orientation with high accuracy on previously unseen data.
These results validate the suitability of the chosen model and parameter range for deployment in real-time biomechanical prediction tasks.
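The 80/20 hold-out protocol used above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the split call and array names are placeholders, and per-axis R² is computed with `r2_score` for each output column.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def evaluate_holdout(X, y, n_trees=500, seed=42):
    """Fit on 80% of the data; report per-axis RMSE and R2 on the held-out 20%."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    model = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(((y_te - pred) ** 2).mean(axis=0))  # one value per axis
    r2 = np.array([r2_score(y_te[:, i], pred[:, i]) for i in range(y.shape[1])])
    return rmse, r2
```

Comparing the returned test RMSE against the training RMSE, as done in the text, is a quick overfitting check: close values indicate good generalization.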

4.2.2. Final Model Testing on New Data

To assess the real-world performance of the Random Forest model trained with the optimal configuration (500 trees, RandomState = 42), we conducted a final evaluation on a separate dataset segment withheld from training. This segment contains arm movement trajectories performed by the same participants but involving motion patterns that did not appear in the training phase.
Figure 9 illustrates the actual vs. predicted curves for each of the three biceps orientation components (X, Y, and Z). The results demonstrate the model’s ability to accurately follow the dynamic evolution of joint angles across varied movement patterns.
  • For Biceps_X, the predicted trajectory closely follows the ground truth sinusoidal motion, with a coefficient of determination R2 = 0.9968 and an RMSE of 2.7721. Small deviations occur near peak transitions, reflecting areas of higher signal variability.
  • For Biceps_Y, the model achieved R2 = 0.9902 and an RMSE = 0.7804, capturing subtle angular modulations with high fidelity and maintaining good agreement across all motion phases.
  • The Biceps_Z component yielded the best performance with R2 = 0.9973 and RMSE = 0.6338, demonstrating near-perfect overlap between predicted and actual values, especially during complex circular or rotational patterns.
The lower RMSE observed on the new data (2.7721 for Biceps_X, versus 3.28 to 3.37 on the test set) is attributed to the nature of the movements. While the training and test data covered a wide range of ample, deliberately varied imposed movement types, the unseen data consisted of more natural, real-life motion patterns that were less complex and exhibited lower signal variability, leading to improved prediction accuracy during final validation.
These results confirm the model’s ability to generalize well beyond the training data and maintain high accuracy in predicting biomechanical behavior. The strong alignment of the predicted signals with the actual measurements across all three rotational axes demonstrates the robustness and precision of the proposed Random Forest approach in estimating arm kinematics based on shoulder orientation input.
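The actual-versus-predicted comparison in Figure 9 can be reproduced with a short plotting routine of the following form. This is a sketch under assumptions: the function name, axis labels, and output file are illustrative, and the figure is rendered headlessly with matplotlib's Agg backend.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering, suitable for batch reports
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score

def plot_actual_vs_predicted(y_true, y_pred,
                             names=("Biceps_X", "Biceps_Y", "Biceps_Z"),
                             out="actual_vs_predicted.png"):
    """One panel per axis, overlaying measured and predicted angle curves."""
    fig, axs = plt.subplots(len(names), 1, figsize=(8, 9), sharex=True)
    for i, (ax, name) in enumerate(zip(axs, names)):
        rmse = float(np.sqrt(((y_true[:, i] - y_pred[:, i]) ** 2).mean()))
        r2 = r2_score(y_true[:, i], y_pred[:, i])
        ax.plot(y_true[:, i], label="actual")
        ax.plot(y_pred[:, i], "--", label="predicted")
        ax.set_title(f"{name}  (R2 = {r2:.4f}, RMSE = {rmse:.4f})")
        ax.set_ylabel("angle (deg)")
        ax.legend(loc="upper right")
    axs[-1].set_xlabel("frame")
    fig.tight_layout()
    fig.savefig(out)
    plt.close(fig)
    return out
```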

5. Discussion

Table 1 presents a comparative analysis of the proposed model’s performance against existing approaches reported in the literature.
The comparative analysis highlights several key advantages of the proposed model over existing approaches in the literature:
  • It requires significantly fewer sensors, utilizing only two IMUs (shoulder and arm) to achieve high prediction accuracy. This contrasts with methods such as COANN or CNN-LSTM, which rely on multiple input signals or complex deep learning architectures, thereby increasing system complexity, cost, and discomfort for the user.
  • The approach is entirely based on real, experimentally collected data, avoiding dependence on synthetic or reinforcement learning-generated datasets, such as those used in FusionNet or CNN-LSTM with DRL, which may introduce simulation bias or limit real-world generalizability.
  • The Random Forest model offers a low computational footprint and high energy efficiency, enabling real-time deployment on resource-constrained embedded platforms without GPU acceleration, unlike many deep learning counterparts.
  • The model achieves robust performance without explicit calibration procedures or fusion filters such as the Madgwick algorithm, as it learns directly from the natural kinematic coupling between the shoulder and upper arm.
  • The model demonstrates competitive accuracy across all motion axes, with RMSE values of 2.77°, 0.78°, and 0.63° for the X, Y, and Z directions, respectively, and R2 values exceeding 0.99.
These results confirm that the proposed solution performs on par with or better than several state-of-the-art methods while maintaining simplicity, interpretability, and practical suitability for sustainable wearable applications.

5.1. Implications for Sustainable Wearable Design

The proposed AI-based estimation framework demonstrates how accurate biomechanical predictions can be achieved using a minimal sensor configuration, thereby contributing significantly to sustainable wearable design. By requiring only three IMUs in the design phase, reduced to two IMUs in deployment, our system cuts hardware redundancy, material usage, and electronic waste, in alignment with sustainable engineering objectives for Industry 4.0 [24]. This design philosophy is consistent with the direction suggested by Jamwal et al. [31], who emphasize the need for intelligent, energy-efficient technologies in sustainable manufacturing ecosystems. Unlike many sensor-fusion systems that increase complexity and power consumption, our approach highlights how targeted optimization can meet both functional and environmental goals. Furthermore, using Random Forests instead of energy-intensive deep learning models substantially lowers computational overhead, supporting deployment in low-power embedded systems [32].

5.2. Relevance to Industry 4.0 Applications

In the context of smart factories and human–robot collaboration, this work presents a practical and sustainable solution that can be integrated into lightweight exoskeletons, real-time ergonomic assessment tools, and assistive robotic systems. The ability to estimate arm movement from shoulder orientation alone enables more ergonomic, low-intrusion wearable devices—a key requirement in physically intensive industrial environments. These outcomes support the transition toward human-centric automation emphasized in Industry 5.0, which builds upon the cyber–physical principles of Industry 4.0 by prioritizing worker well-being and system resilience [32]. Additionally, Klein et al. [33] stressed the importance of model interpretability and real-time performance in AI-based ergonomic systems, both of which are effectively achieved through our Random Forest-based approach.

5.3. Advantages of AI-Based Modeling

The use of a Random Forest Regression model offers compelling advantages over more complex time-series-based models such as LSTM or RNNs. Given the non-normal, multimodal distributions observed in the dataset, Random Forests proved well-suited for modeling nonlinear, frame-wise relationships without temporal dependency [34]. Prior studies [35,36] demonstrated the effectiveness of LSTM networks for gait and locomotion prediction, but these models often require extensive training datasets and GPU-based execution, which are not ideal for wearable deployment. In contrast, our approach delivered R2 values above 0.99 with minimal training data and without deep architectures, thus maintaining model interpretability, robustness to noise, and suitability for edge devices [37]. This simplicity makes the system ideal for applications like rehabilitation robotics, where safety, adaptability, and energy efficiency are fundamental [38].
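The frame-wise property discussed above—each shoulder sample maps to a biceps estimate independently, with no history buffer—can be illustrated with a minimal inference helper. This is a sketch, not the authors' code: `model` is assumed to be a fitted multi-output `RandomForestRegressor`, and the function name is hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def predict_frame(model, shoulder_xyz):
    """Map one shoulder orientation sample to a biceps (X, Y, Z) estimate.

    Unlike an LSTM, no window of past frames is needed: each frame is
    an independent input, which simplifies buffering on edge devices.
    """
    frame = np.asarray(shoulder_xyz, dtype=float).reshape(1, -1)  # shape (1, 3)
    return model.predict(frame)[0]                                # shape (3,)
```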

5.4. Limitations

Despite the strong performance of our system, several limitations must be acknowledged. The dataset was collected from only five healthy adult males, which limits the generalizability of the results to broader populations, including diverse genders, age groups, and individuals with impaired mobility. Similar studies have shown that biomechanical prediction systems can behave differently depending on anthropometric variability, requiring subject-specific calibration or adaptation [33]. In addition, while the model’s performance was validated across new motion patterns, further evaluation under real-world industrial conditions—such as repetitive tasks, load handling, or fatigue—would provide a more comprehensive understanding of its robustness and operational viability.

5.5. Future Work

To extend the practical applicability of this work, future research will focus on deploying the trained model into real-time embedded platforms, such as microcontroller-based exoskeleton controllers or edge AI systems. This will require energy profiling and potential code optimization for embedded inference, in line with Green AI principles [24]. Another promising direction is personalized model tuning, where the system could adapt its prediction behavior to individual users using minimal calibration data—supporting inclusive design and usability across heterogeneous populations. Additionally, expanding the system’s scope to incorporate bilateral or full-body kinematics may offer comprehensive solutions for posture monitoring, exoskeleton coordination, and predictive ergonomics in advanced Industry 4.0 environments.
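A first step toward the embedded deployment and energy profiling outlined above is to persist the trained forest and measure its per-frame inference latency on the target hardware. The sketch below uses joblib, the standard persistence route for scikit-learn models; the file name, sample data, and function name are illustrative assumptions.

```python
import time
import joblib
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def export_and_time(model, sample, path="arm_model.joblib", n_runs=100):
    """Persist the forest, reload it, and return mean per-frame latency in ms."""
    joblib.dump(model, path, compress=3)   # compressed on-disk artifact
    loaded = joblib.load(path)
    frame = np.asarray(sample, dtype=float).reshape(1, -1)
    start = time.perf_counter()
    for _ in range(n_runs):
        loaded.predict(frame)
    return (time.perf_counter() - start) / n_runs * 1e3
```

The on-disk size of the artifact and the measured latency together give a rough proxy for the memory and compute budget a microcontroller-class target would need.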

6. Conclusions

This study presented a sustainable, AI-driven approach to estimating upper arm kinematics using only shoulder orientation data, significantly reducing sensor requirements without compromising predictive performance. By implementing a Random Forest Regression model, we successfully captured the nonlinear relationship between shoulder and arm movements across three rotational axes, achieving high accuracy (R2 > 0.99) and low RMSE values on both test and unseen datasets. The system’s ability to perform robustly with a minimal hardware configuration validates its potential for integration into lightweight, low-power wearable devices.
The proposed framework offers multiple advantages aligned with sustainable engineering principles, including reduced electronic waste, lower energy consumption, and enhanced user comfort. These attributes are critical in the design of next-generation wearable technologies for Industry 4.0, where human–machine collaboration, real-time responsiveness, and eco-efficiency are increasingly prioritized. Furthermore, the model’s suitability for edge deployment and adaptability to various motion patterns positions it as a strong candidate for applications in rehabilitation robotics, industrial exosuits, and ergonomic assessment systems.
The implications of this study highlight the potential for deploying the proposed Random Forest-based estimation model in real-world industrial and rehabilitation scenarios. By achieving accurate upper arm motion estimation with a reduced sensor configuration and low computational overhead, this approach supports the development of ergonomic, lightweight exoskeletons and wearable assistive systems designed for long-term use in smart manufacturing environments. Furthermore, its applicability extends to occupational health monitoring, where early detection of improper posture or repetitive strain can enhance worker safety. In rehabilitation contexts, the model could facilitate personalized, sensor-efficient tracking solutions suitable for both clinical and home-based therapy. These applications align with the goals of the transition toward Industry 5.0 by promoting human-centric, adaptive, and sustainable technologies that bridge physical and digital systems.
The relevance of this work relates to the sustainable innovation in smart wearables that is essential for the ethical and efficient evolution of industrial systems. Our approach contributes to a broader shift toward resource-conscious, human-centric automation, supporting both environmental goals and technological advancement in the digital manufacturing era.

Author Contributions

Conceptualization, M.L. and A.C.I.; methodology, A.C.I.; software, E.M.; validation, M.L., E.M. and A.C.I.; formal analysis, M.L.; investigation, E.M.; resources, E.M.; data curation, E.M.; writing—original draft preparation, E.M.; writing—review and editing, M.L. and A.C.I.; visualization, M.L. and A.C.I.; supervision, M.L.; project administration, A.C.I.; funding acquisition, E.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the policies of University of Petroșani regarding exemption from ethical approval for low-risk and non-invasive technical research involving adult volunteers.

Informed Consent Statement

Written informed consent has been obtained from the subjects to publish this paper.

Data Availability Statement

Acknowledgments

During the preparation of this manuscript, the authors used Microsoft 365 Copilot for the purposes of language editing, drafting assistance and references formatting. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fodor, K. Regulatory Science Challenges: Encouraging Innovation Through an Adaptive Regulatory System. Acta Pharm. Hung. 2021, 91, 106–107. [Google Scholar] [CrossRef]
  2. Krishna, D.R.; Devinandana, D.; Jayalakshmi, V.; Sabu, M.M.; Shiny, B. Survey on AI Driven Gym Assistant with Pose Recognition. Int. J. Adv. Eng. Manag. 2024, 6, 314–320. [Google Scholar] [CrossRef]
  3. Güzel, D.; Girgin, M.; Özkılıç, M.A.; Serin, A.; Güler, E.; Yılmaz, A.; Algüner, A.E.; Gürbüz, İ.; Ülger, Ö.; Yildiz, Ş.K. Design of a Telerehabilitation System Including Electromyography Driven Dynamic Arm Support for Duchenne Muscular Dystrophy Patients. In Proceedings of the 2024 32nd Signal Processing and Communications Applications Conference (SIU), Mersin, Turkey, 15–18 May 2024; Available online: https://www.semanticscholar.org/paper/1e8cbae1b25999b931310811a3011e4e465c3f10 (accessed on 24 May 2025).
  4. Spoo, S.; Garcia, F.; Braun, B.; Cabri, J.; Grimm, B. Towards Routine, Low-Cost Clinical Movement Analysis to Assess Shoulder Function Using Computer Vision and Consumer Cameras: A Scoping Review. Orthop. Proc. 2024, 106b, 105. [Google Scholar] [CrossRef]
  5. Dimitropoulos, K.; Daras, P.; Manitsaris, S.; Leymarie, F.; Calinon, S. Editorial: Artificial Intelligence and Human Movement in Industries and Creation. Front. Robot. AI 2021, 8, 712521. [Google Scholar] [CrossRef] [PubMed]
  6. Ciccarelli, M.; Papetti, A.; Germani, M. Exploring How New Industrial Paradigms Affect the Workforce: A Literature Review of Operator 4.0. J. Manuf. Syst. 2023, 70, 464–483. [Google Scholar] [CrossRef]
  7. Gładysz, B.; Tran, T.-A.; Romero, D.; van Erp, T.; Abonyi, J.; Ruppert, T. Current Development on the Operator 4.0 and Transition Towards the Operator 5.0: A Systematic Literature Review in Light of Industry 5.0. J. Manuf. Syst. 2023, 70, 160–185. [Google Scholar] [CrossRef]
  8. Keshvarparast, A.; Battini, D.; Battaïa, O.; Pirayesh, A. Collaborative Robots in Manufacturing and Assembly Systems: Literature Review and Future Research Agenda. J. Intell. Manuf. 2023, 35, 2065–2118. [Google Scholar] [CrossRef]
  9. Lorenzini, M.; Lagomarsino, M.; Fortini, L.; Gholami, S.; Ajoudani, A. Ergonomic Human-Robot Collaboration in Industry: A Review. Front. Robot. AI 2023, 9, 813907. [Google Scholar] [CrossRef]
  10. Abadade, Y.; Temouden, A.; Bamoumen, H.; Benamar, N.; Chtouki, Y.; Hafid, A. A Comprehensive Survey on TinyML. IEEE Access 2023, 11, 96892–96922. [Google Scholar] [CrossRef]
  11. Li, S.; Zheng, P.; Liu, S.; Wang, Z.; Wang, X.V.; Zheng, L.; Wang, L. Proactive Human–Robot Collaboration: Mutual-Cognitive, Predictable, and Self-Organising Perspectives. Robot. Comput.-Integr. Manuf. 2022, 81, 102510. [Google Scholar] [CrossRef]
  12. Baduge, S.K.; Thilakarathna, P.S.M.; Perera, J.S.; Arashpour, M.; Sharafi, P.; Teodosio, B.; Shringi, A.; Mendis, P. Artificial Intelligence and Smart Vision for Building and Construction 4.0: Machine and Deep Learning Methods and Applications. Autom. Constr. 2022, 141, 104440. [Google Scholar] [CrossRef]
  13. Tang, R.; De Donato, L.; Bešinović, N.; Flammini, F.; Goverde, R.M.P.; Lin, Z.; Liu, R.; Tang, T.; Vittorini, V.; Wang, Z. A Literature Review of Artificial Intelligence Applications in Railway Systems. Transp. Res. Part C Emerg. Technol. 2022, 140, 103679. [Google Scholar] [CrossRef]
  14. Al Kuwaiti, A.; Nazer, K.; Alreedy, A.H.; AlShehri, S.D.; Almuhanna, A.; Subbarayalu, A.V.; Al Muhanna, D.; Al Muhanna, F.A. A Review of the Role of Artificial Intelligence in Healthcare. J. Pers. Med. 2023, 13, 951. [Google Scholar] [CrossRef] [PubMed]
  15. Singh, M.; Srivastava, R.; Fuenmayor, E.; Kuts, V.; Qiao, Y.; Murray, N.; Devine, D.M. Applications of Digital Twin across Industries: A Review. Appl. Sci. 2022, 12, 5727. [Google Scholar] [CrossRef]
  16. Park, S.; Kim, Y.-G. A Metaverse: Taxonomy, Components, Applications, and Open Challenges. IEEE Access 2022, 10, 4209–4251. [Google Scholar] [CrossRef]
  17. Heng, W.; Solomon, S.A.; Gao, W. Flexible Electronics and Devices as Human–Machine Interfaces for Medical Robotics. Adv. Mater. 2021, 34, e2107902. [Google Scholar] [CrossRef]
  18. Luo, Y.; Abidian, M.R.; Ahn, J.-H.; Akinwande, D.; Andrews, A.M.; Antonietti, M.; Bao, Z.; Berggren, M.; Berkey, C.A.; Bettinger, C.J. Technology Roadmap for Flexible Sensors. ACS Nano 2023, 17, 5211–5295. [Google Scholar] [CrossRef]
  19. Mardoyo, E.; Lubis, M.; Bhaskoro, S.B. Evaluasi Virtual Reality Menggunakan Technology Acceptance Model (TAM) Terkait Dunia Metaverse. J. Sist. Cerdas 2022, 5, 182–194. [Google Scholar] [CrossRef]
  20. Risteiu, M.; Leba, M.; Arad, A. Exoskeleton for Improving Quality of Life for Low Mobility Persons. Qual.-Access Success 2019, 20 (Suppl. S1), 341–346. [Google Scholar]
  21. Nardo, M.; Madonna, M.; Addonizio, P.; Gallab, M. A Mapping Analysis of Maintenance in Industry 4.0. J. Appl. Res. Technol. 2021, 19, 653–675. [Google Scholar] [CrossRef]
  22. Davim, J. Sustainable and Intelligent Manufacturing: Perceptions in Line with 2030 Agenda of Sustainable Development. Bioresources 2023, 19, 4–5. [Google Scholar] [CrossRef]
  23. Davim, J. Perceptions of Industry 5.0: Sustainability Perspective. Bioresources 2024, 20, 15–16. [Google Scholar] [CrossRef]
  24. Tabbakh, A.; Al Amin, L.; Islam, M.; Mahmud, G.M.I.; Chowdhury, I.K.; Mukta, M.S.H. Towards Sustainable AI: A Comprehensive Framework for Green AI. Discov. Sustain. 2024, 5, 408. [Google Scholar] [CrossRef]
  25. Auepanwiriyakul, C.; Waibel, S.; Songa, J.; Bentley, P.; Faisal, A.A. Accuracy and Acceptability of Wearable Motion Tracking for Inpatient Monitoring Using Smartwatches. Sensors 2020, 20, 7313. [Google Scholar] [CrossRef]
  26. Shin, S.; Li, Z.; Halilaj, E. Markerless Motion Tracking with Noisy Video and IMU Data. IEEE Trans. Biomed. Eng. 2023, 70, 3082–3092. [Google Scholar] [CrossRef]
  27. Rahman, M.M.; Gan, K.B.; Aziz, N.A.A.; Huong, A.; You, H.W. Upper Limb Joint Angle Estimation Using Wearable IMUs and Personalized Calibration Algorithm. Mathematics 2023, 11, 970. [Google Scholar] [CrossRef]
  28. Bao, T.; Zaidi, S.A.R.; Xie, S.; Yang, P.; Zhang, Z.-Q. A CNN-LSTM Hybrid Model for Wrist Kinematics Estimation Using Surface Electromyography. IEEE Trans. Instrum. Meas. 2021, 70, 1–9. [Google Scholar] [CrossRef]
  29. Zong, S.; Li, W.; Sun, D.; Jia, Z.; Yue, Z. Shoulder–Elbow Joint Angle Prediction Using COANN with Multi-Source Information Integration. Appl. Sci. 2025, 15, 5671. [Google Scholar] [CrossRef]
  30. Ahmed, M.H.; Kutsuzawa, K.; Hayashibe, M. Transhumeral Arm Reaching Motion Prediction through Deep Reinforcement Learning-Based Synthetic Motion Cloning. Biomimetics 2023, 8, 367. [Google Scholar] [CrossRef]
  31. Jamwal, A.; Agrawal, R.; Sharma, M.; Giallanza, A. Industry 4.0 Technologies for Manufacturing Sustainability: A Systematic Review and Future Research Directions. Appl. Sci. 2021, 11, 5725. [Google Scholar] [CrossRef]
  32. van Wynsberghe, A. Sustainable AI: AI for Sustainability and the Sustainability of AI. AI Ethics 2021, 1, 213–218. [Google Scholar] [CrossRef]
  33. Klein, L.C.; Chellal, A.A.; Grilo, V.; Gonçalves, J.; Pacheco, M.F.; Fernandes, F.P.; Monteiro, F.C.; Lima, J. Assessing the Reliability of AI-Based Angle Detection for Shoulder and Elbow Rehabilitation. In Optimization, Learning Algorithms and Applications; Springer: Cham, Switzerland, 2023; pp. 3–18. Available online: https://link.springer.com/chapter/10.1007/978-3-031-53036-4_1 (accessed on 24 May 2025).
  34. Ribeiro, P.M.S.; Matos, A.C.; Santos, P.H.; Cardoso, J.S. Machine Learning Improvements to Human Motion Tracking with IMUs. Sensors 2020, 20, 6383. [Google Scholar] [CrossRef] [PubMed]
  35. Zaroug, A.; Garofolini, A.; Lai, D.T.H.; Mudie, K.; Begg, R. Prediction of Gait Trajectories Based on the Long Short Term Memory Neural Networks. PLoS ONE 2021, 16, e0255597. [Google Scholar] [CrossRef] [PubMed]
  36. Sherratt, F.; Plummer, A.; Iravani, P. Understanding LSTM Network Behaviour of IMU-Based Locomotion Mode Recognition for Applications in Prostheses and Wearables. Sensors 2021, 21, 1264. [Google Scholar] [CrossRef]
  37. Coser, O.; Tamantini, C.; Soda, P.; Zollo, L. AI-Based Methodologies for Exoskeleton-Assisted Rehabilitation of the Lower Limb: A Review. Front. Robot. AI 2024, 11, 1341580. [Google Scholar] [CrossRef]
  38. Kim, J.-W.; Choi, J.-Y.; Ha, E.-J.; Choi, J.-H. Human Pose Estimation Using MediaPipe Pose and Optimization Method Based on a Humanoid Model. Appl. Sci. 2023, 13, 2700. [Google Scholar] [CrossRef]
Figure 1. Research workflow.
Figure 2. IMUs capturing device block diagram.
Figure 3. IMUs capturing device real system.
Figure 4. Histograms.
Figure 5. Boxplots of bicep and shoulder orientation angles.
Figure 6. Pearson correlation coefficients.
Figure 7. RMSE for training data.
Figure 8. RMSE for the test dataset.
Figure 9. Actual vs. predicted output for new data using optimal Random Forest model.
Table 1. Comparative analysis.
| Method Type Used | RMSE | R2 | Main Advantage | Main Limitation |
|---|---|---|---|---|
| IMU-based comparison (Apple Watch, Xsens) [25] | Apple Watch 3: 0.18 rad/s; Xsens: 1.66 m/s² | Apple Watch 3: 1.00; Xsens: 0.78 | Low-cost, high acceptability, and strong accuracy for angular motion | Less accurate than lab-grade sensors; limited for high-precision clinical needs |
| Deep learning: FusionNet, VideoNet, IMUNet [26] | FusionNet: 4.5°; VideoNet: 5.3°; IMUNet: 5.6° | FusionNet > 0.90; VideoNet > 0.85; IMUNet > 0.76 | Robust to sensor misplacement and noise; suitable for clinical use without calibration | Still relies on synthetic training data; accuracy dependent on noise level |
| Madgwick filter-based fusion with calibration algorithm [27] | 3.05° | 0.99 | Accurate angle estimation even under acceleration; low-cost system | Limited to elbow angles on rigid body; no deep learning/generalization |
| CNN-LSTM hybrid model [28] | 3.39° | 0.93 | Combines spatial and temporal feature extraction; superior to ML baselines in both intra- and inter-session tests | Requires high computational resources; deep model complexity |
| COANN (ANN optimized with Cheetah Optimization Algorithm) [29] | Elbow: 0.003701°; Shoulder: 0.003591° | Elbow: 0.9998; Shoulder: 0.9978 | High accuracy via signal fusion; avoids local optima in training | Requires multiple signal sources; optimization method may overfit |
| CNN-LSTM trained with real + synthetic DRL-generated data [30] | 4.03° | Elbow flexion: 1.00; Elbow pronation: 0.98 | Augments limited real data; improves generalization for ANN | Relies on simulation quality; DRL training is computationally expensive |
| Our model—Random Forest Regression | X: 2.7721°; Y: 0.7804°; Z: 0.6338° | X: 0.9968; Y: 0.9902; Z: 0.9973 | Resource-efficient, real-data-driven model requiring only two IMUs | Limited demographic diversity and task generalizability of the training data |
