Automated Detection of Improper Sitting Postures in Computer Users Based on Motion Capture Sensors

Abstract: Prolonged computer-related work can be linked to musculoskeletal disorders (MSD) in the upper limbs and improper posture. In this regard, we report on developing resources supporting improper posture studies based on motion capture sensors. These resources were used to create a baseline detector for the automated detection of improper sitting postures, which was next used to evaluate the applicability of Hjorth's parameters (Activity, Mobility and Complexity) on the specific classification task. Specifically, based on accelerometer data, we computed Hjorth's time-domain parameters, which we stacked as feature vectors and fed to a binary classifier (kNN, decision tree, linear SVM and Gaussian SVM). The experimental evaluation in a setup involving two different keyboard types (standard and ergonomic) validated the practical worth of the proposed sitting posture detection method, and we reported an average classification accuracy of up to 98.4%. We deem that this research contributes toward creating an automated system for improper posture monitoring for people working on a computer for prolonged periods.


Introduction
Computer-related activity is becoming a significant factor in the private and professional lives of people around the world. In 2021, the average share of people who used computers and the internet for work-related tasks in the European Union was 58% of the workforce, with this percentage varying between 37% and 85% across the member countries and an average total increase of 14% over the period 2012-2021 [1]. It has been documented that prolonged computer use might be linked to a number of health problems, such as eyesight deterioration, weight gain and musculoskeletal disorders [2]. The development of musculoskeletal problems can be additionally highlighted, as it affects 47% of computer users [3], with problems manifesting most commonly as headaches, neck and shoulder problems and back pain. The most common causes of these health issues are incorrect and non-ergonomic work posture and environment, which can lead to neck-shoulder disorders [4][5][6] and discomfort of the upper limbs [7,8]. These conditions often result from static muscle load and awkward wrist position during keyboard operation [9].
Automated monitoring of the work environment and sitting postures during prolonged computer work is an essential component of efforts for musculoskeletal health safeguarding. Studies aimed at assessing workstation usage data [10] show that recordings of workstation behavioral patterns contain sufficient differences, allowing for the distinction of activities that lead to pain and fatigue or help avoid them. Based on these, different approaches for activity monitoring and assistance devices have been proposed. Among those are systems based on image processing techniques, which assess the correctness of the work posture [11][12][13], and smart Internet of Things (IoT)-based office chairs, which classify the sitting posture of their users.
In the current study, we developed a baseline detector of improper sitting postures based only on data from motion capture sensors. We used this baseline detector to evaluate the applicability of Hjorth's parameters [32] (Activity, Mobility and Complexity) as feature vectors providing a compact representation of multisensory accelerometer data. An experimental evaluation of the applicability of these features, when combined with some well-known and widely used classifiers in setups involving two different keyboards (standard and ergonomic), was shown to provide accurate detection of improper sitting postures. The reported high recognition accuracy validates the applicability of the proposed method and the practical worth of the developed resources. We deem the current research contributes toward creating an automated system for preventing musculoskeletal disorders (MSD) during prolonged periods of computer work. The novel aspects reported in the current study are mainly the developed resources, which we made publicly available, the validation of Hjorth's parameters as features for accelerometer data and the demonstration of accurate automated detection of improper posture with simple baseline classification methods.
Here, we emphasize that Hjorth's parameters were initially proposed for electroencephalographic (EEG) studies and were later also found helpful for the compact representation of peripheral physiological signals [33,34]. To the authors' knowledge, Hjorth's parameters have not previously been studied for accelerometer data to this end.
In Section 2, "Materials and methods", we present a detailed description of the methodology, experimental protocol and dataset used in the experimental evaluation. Section 3, "Data processing and evaluation", provides information about the algorithms and approaches used to parameterize and classify the motion capture data. In Section 4, we report experimental results for recognizing three sitting postures. Finally, in Section 5, we discuss the experimental results and their implications, and in Section 6, concluding remarks and an outline of future research are presented.

Materials and Methods
The current research focused on posture recognition based on motion capture system data. The overall workflow of data processing in the proposed method for automated posture detection is presented in Figure 1.

As shown in Figure 1, we rely on a purposely developed motion capture dataset consisting of multisensory measurements corresponding to proper and improper sitting postures when different keyboards are used (a detailed description of the dataset is provided in Section 2.1). Next, the data are subject to preprocessing, mainly consisting of average value removal for each sensor channel and removal of artefacts. Hjorth's parameters (Activity, Mobility and Complexity) are then computed from the preprocessed data. The feature vectors, composed of the concatenation of Hjorth's parameters computed for each sensor channel, are then fed to the classifier to make the final decision.

Dataset
A dataset of motion capture data, images, videos and models was created in an experimental setup that used the Perception Neuron motion-capture system and the Axis Neuron software tool. Specifically, in the current research, we consider three main recording scenarios:
• SK: regular working posture when using a standard keyboard;
• EK: regular working posture when using an ergonomic keyboard;
• EKC: correct working posture when using an ergonomic keyboard.
In the recording protocol, a regular posture is defined as the sitting posture that participants take naturally while working on a computer. For recording a proper work posture, we used a special sitting pillow, which required the sitting person to keep a balanced and proper working posture. A medical specialist in the field of MSD rehabilitation instructed each participant on how to keep a proper posture when sitting on the pillow. The recording contexts (with/without the pillow, standard/ergonomic keyboard) were recorded with short breaks within a 45 min session.
The data recording protocol started with the placement and calibration of motion capture sensors, after which the participants were asked to type a pre-selected text on a computer with a standard 89-button keyboard. Each recording session included three sub-sessions: standard keyboard and usual posture (SK), ergonomic keyboard and usual posture (EK) and ergonomic keyboard and correct posture (EKC). The SK and EK recording sessions had an average length of 10 min, while the EKC sessions had an average length of 3 min. The recording of the participants' physical activity was performed using the Perception Neuron motion-capture system. The system included 59 sensors, of which 21 were placed on the body and 38 were placed on the fingers of both hands. In total, ten persons were recorded (the Motion-Capture DataSet in support of MSD research (MCD-MSD) is publicly available at the ErgoResearch project website, URL: http://isr.tuvarna.bg/ergo/index.php/resursi, accessed on 30 May 2022). Specifically, we used only the motion capture data in the current study.

Initial Processing and Data Transformation
The motion capture recordings were preprocessed to facilitate the subsequent feature extraction process. The preprocessing was needed to segment the recordings of each of the three scenarios, validate the tags introduced during data collection sessions and discard episodes with artefacts associated with setup changes during the breaks. For that purpose, we used the functionality available in the Axis Neuron tool. Subsequently, we extracted only the data channels relevant to our study. Specifically, the following parametric descriptions of the motion capture data were exported: bone posture quaternion in bone coordinates; displacement and speed in ground coordinates; initial acceleration; gyro data in module coordinates. As a result, the raw data used in our study contained, for each motion capture sensor positioned on the body, a total of 16 parameters calculated from the recorded data. The data files were exported in CSV format.
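As an illustration of the per-sensor channel layout described above (a minimal numpy sketch; the parameter ordering and the flat array layout are our assumptions, not the actual Axis Neuron export format), the 16 parameters per sensor can be organized as follows:

```python
import numpy as np

# Assumed fixed ordering of the 16 parameters per sensor listed in the text:
# quaternion (4) + displacement (3) + speed (3) + acceleration (3) + gyro (3)
PARAMS = ["Qs", "Qx", "Qy", "Qz",   # bone posture quaternion (bone coordinates)
          "Xx", "Xy", "Xz",         # displacement (ground coordinates)
          "Vx", "Vy", "Vz",         # speed (ground coordinates)
          "Ax", "Ay", "Az",         # acceleration
          "Mx", "My", "Mz"]         # gyro data (module coordinates)

N_SENSORS = 59  # 21 body sensors + 38 finger sensors

def reshape_recording(flat):
    """Reshape a flat (n_frames, 59 * 16) recording into
    (n_frames, n_sensors, n_params) for per-channel processing."""
    flat = np.asarray(flat)
    return flat.reshape(flat.shape[0], N_SENSORS, len(PARAMS))
```

This three-dimensional view makes the later per-channel preprocessing and feature extraction straightforward to express.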

Mean Value Removal
In order to eliminate the average value of each sensor channel, we used mean computation with analysis of variance (ANOVA), as the sensor data roughly fitted a Gaussian distribution. The preprocessed data were separated into two subsets. The first subset contained data from the 14 sensors positioned on the participants' arms, heads and bodies in the experiments. The second contained data from the 38 sensors positioned on the fingers of the participants. The one-way ANOVA approach was used to calculate the mean values, performed separately for each of the three classes (EK, EKC and SK). The preprocessed data were further used for feature extraction.
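The per-class mean removal described above can be sketched as follows (a minimal numpy illustration under the assumption of one row per sample and one column per sensor channel; the study's actual pipeline uses the one-way ANOVA machinery to obtain the means):

```python
import numpy as np

def remove_class_mean(data, labels):
    """Subtract, for every sensor channel, the mean value computed
    separately for each class (EK, EKC, SK).

    data:   (n_samples, n_channels) sensor measurements
    labels: (n_samples,) class label per sample
    """
    data = np.asarray(data, dtype=float)
    labels = np.asarray(labels)
    out = data.copy()
    for c in np.unique(labels):
        mask = labels == c
        out[mask] -= data[mask].mean(axis=0)  # per-channel class mean
    return out
```

After this step, every channel has zero mean within each class, so the subsequent features describe variation rather than static offsets.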

Feature Extraction
The preprocessed data were used to calculate Hjorth's parameters [32]: Activity, Mobility and Complexity. Hjorth's parameters were initially designed for EEG signal analysis; however, given that they provide information about the spectral characteristics of signals and have found application in a broader range of fields, we postulated that they would be an elegant way to describe the accelerometer data collected here. We decided to evaluate the applicability of Hjorth's parameters on the sitting posture recognition task, as they are easy to compute in the time domain and have a straightforward, intuitive meaning.
Precisely, a high or low value of Activity corresponds to the existence of high- or low-frequency components in the analyzed signal:

Activity = σ²,

where σ is the standard deviation of the examined signal. The parameter Mobility represents the average frequency of the signal and corresponds to the standard deviation of the signal in the frequency domain:

Mobility = σ_d / σ,

where σ_d is the standard deviation of the first temporal derivative of the examined signal. The parameter Complexity is a measure of how closely the curve shape of the signal resembles a pure sine wave:

Complexity = (σ_dd / σ_d) / (σ_d / σ),

where σ_dd is the standard deviation of the second temporal derivative of the examined signal. These parameters were extracted from segments of 1 s for each sensor channel. Hjorth's parameters for a segment for all channels were concatenated to form the feature vector.
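The computation above can be sketched directly in the time domain (a minimal numpy sketch; window and channel layout are assumptions for illustration):

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth's Activity, Mobility and Complexity for a 1-D signal
    segment (e.g., a 1 s window of one sensor channel)."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                               # first temporal derivative
    ddx = np.diff(dx)                             # second temporal derivative
    activity = np.var(x)                          # sigma^2
    mobility = np.sqrt(np.var(dx) / np.var(x))    # sigma_d / sigma
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def segment_feature_vector(segment):
    """Concatenate the three Hjorth parameters of every channel of a
    (n_samples, n_channels) segment into one feature vector."""
    return np.concatenate([hjorth_parameters(segment[:, ch])
                           for ch in range(segment.shape[1])])
```

Note that for a pure sine wave Complexity is close to 1, which matches its interpretation as a measure of similarity to a sine wave.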
We formed three datasets. The first dataset contained features calculated only from data acquired from the eight sensors positioned on the arms of the participants. The second dataset contained features calculated from data from the six sensors positioned on the back and head of the participants. The third dataset contained features from the 38 sensors positioned on the fingers of the participants' left and right hands. Specifically, in the third case, where data from the finger sensors were used, the World coordinate displacement (Xx, Xy, Xz) and Velocity (Vx, Vy, Vz) channels were omitted, because their preprocessed values are zero: the displacement of the finger positions is very small due to the nature of the performed activity (typing on a keyboard). As a result of the omission, Hjorth's parameters for the finger sensors are calculated from 10 parameters (Qs, Qx, Qy, Qz, Ax, Ay, Az, Mx, My, Mz) per sensor.
In addition, for each of the three created datasets, seven subsets of feature data were formed, which are as follows:
• Activity-only dataset;
• Mobility-only dataset;
• Complexity-only dataset;
• Activity- and Mobility-only dataset;
• Activity- and Complexity-only dataset;
• Mobility- and Complexity-only dataset;
• Activity, Mobility and Complexity dataset.
The total length of the feature vector for the different cases can be calculated as the number of sensors multiplied by the number of parameters calculated from each sensor and by the number of Hjorth features in the subset. For the full Activity, Mobility and Complexity subset, this gives 8 × 16 × 3 = 384 features for the arm dataset, 6 × 16 × 3 = 288 features for the back-and-head dataset and 38 × 10 × 3 = 1140 features for the finger dataset. In some cases, no change in the position of a sensor occurs for over a second during the recording period. This causes the recording of a single, constant value for the entire period, which makes it impossible to correctly calculate the features. In these scenarios, the result for the previous (or following) second is used for the second during which a calculation cannot be performed.
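The length calculation and the constant-window fallback described above can be sketched as follows (a minimal numpy sketch; the per-window validity flags are an assumed representation of "the calculation could not be performed"):

```python
import numpy as np

def feature_vector_length(n_sensors, n_params, n_hjorth):
    # sensors x parameters per sensor x Hjorth features in the subset
    return n_sensors * n_params * n_hjorth

def fill_constant_windows(features, valid):
    """For 1 s windows where a constant signal made the computation
    impossible, reuse the previous valid window's features (or, at the
    start of a recording, the next valid window's features)."""
    features = np.array(features, dtype=float)
    valid = np.asarray(valid, dtype=bool)
    first_valid = int(np.argmax(valid))   # index of the first valid window
    last = None
    for i in range(len(features)):
        if valid[i]:
            last = i
        else:
            features[i] = features[last if last is not None else first_valid]
    return features
```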

Feature Ranking
Feature ranking with the ReliefF algorithm was used to evaluate the relevance of each feature. ReliefF evaluates the worth of a given attribute by repeatedly sampling an instance and comparing the attribute's value for its nearest instances from both the same and different classes [35]. The subsets containing the combination of all three features (Activity, Mobility and Complexity) for each of the three classes (EK, EKC, SK) were used in the ranking. The feature vectors were analyzed using the WEKA machine-learning toolbox [36]. For the ranking of the features, 10-fold cross-validation was performed: the datasets were randomly separated, and different data sections were used to train and evaluate each validation iteration. After the process is repeated 10 times, a final result is obtained. This approach avoids overfitting and selection bias and provides a more generalized picture of model behavior.
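The principle behind the ranking can be illustrated with a simplified Relief sketch (single nearest hit/miss and Manhattan distance; the full ReliefF used in the study, as implemented in WEKA, averages over k neighbors and handles multiple classes, so this is only an illustration of the idea, not the study's implementation):

```python
import numpy as np

def relief_weights(X, y, n_iter=200, seed=0):
    """Simplified Relief feature weighting: for each sampled instance,
    compare feature values with its nearest hit (same class) and its
    nearest miss (different class). Higher weight = more relevant."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0                     # avoid division by zero
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        d = np.abs(X - X[i]).sum(axis=1)      # Manhattan distances
        d[i] = np.inf                         # exclude the instance itself
        same = y == y[i]
        hit = np.where(same)[0][np.argmin(d[same])]
        miss = np.where(~same)[0][np.argmin(d[~same])]
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter
```

A feature that separates the classes well differs from the nearest miss but not from the nearest hit, so it accumulates a high weight.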

Classification
For the evaluation of the applicability of the feature subsets defined in Section 3.1, we experimented with four well-known and well-understood classification algorithms: kNN [37], decision tree [38] and SVM [39] with linear and Gaussian kernels. The classification experiments were conducted using the MATLAB Machine Learning toolbox, where each classifier was fed with the same feature vectors. In all experiments, we followed the 10-fold cross-validation approach with automated classifier parameter optimization as follows:
• Decision tree: maximum number of splits and split criterion (Gini's diversity index, Twoing rule, maximum deviance reduction).
• kNN: number of neighbors (range 1-1000), distance metric (Euclidean, city block, Chebyshev, cubic, Mahalanobis, cosine, correlation, Spearman, Hamming, Jaccard), distance weight (equal, inverse, squared inverse) and data standardization.
• SVM (for both linear and Gaussian kernels): box constraint level and data standardization. The multiclass method chosen for both kernel types is "One-vs-All".
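The evaluation protocol can be sketched as follows (a minimal numpy illustration of 10-fold cross-validation with a fixed-k Euclidean kNN; the study's experiments used the MATLAB toolbox with automated hyperparameter optimization, which this sketch deliberately omits):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Euclidean kNN with majority vote; labels must be nonnegative ints."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    nn = np.argsort(d2, axis=1)[:, :k]
    return np.array([np.bincount(y_train[row]).argmax() for row in nn])

def cross_val_accuracy(X, y, n_folds=10, k=3, seed=0):
    """10-fold cross-validation: randomly split the data, train on nine
    folds, evaluate on the held-out fold and average the accuracies."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    accs = []
    for fold in np.array_split(order, n_folds):
        train = np.setdiff1d(order, fold)
        pred = knn_predict(X[train], y[train], X[fold], k)
        accs.append(float((pred == y[fold]).mean()))
    return float(np.mean(accs))
```

Because every sample is held out exactly once, the averaged accuracy is less sensitive to a lucky or unlucky single split.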

Average Mean Value Visualization and Analysis
Each participant's data were split into two subsets: body and finger sensors. This operation was performed for each of the 10 participants in the datasets, and the results were averaged. This eliminates person-specific occurrences and allows for a general overview of the observed differences. The average mean values for each class are then subtracted, and three sets of results are obtained: (a) difference between the EKC and EK setups; (b) difference between the EKC and SK setups; (c) difference between the EK and SK setups. This operation is performed for both the body and finger setups, and the results are presented in Figures 2-7.
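The averaging and per-class subtraction described above can be sketched as follows (a minimal numpy sketch; the flat sample/label/participant array layout is an assumption for illustration):

```python
import numpy as np

def class_mean_differences(data, labels, participants):
    """Per participant, compute the mean value of every sensor channel
    for each class; average these means across participants; then
    subtract them pairwise to obtain per-channel difference profiles
    (e.g., EKC-EK, EKC-SK and EK-SK)."""
    data = np.asarray(data, dtype=float)
    labels = np.asarray(labels)
    participants = np.asarray(participants)
    classes = list(np.unique(labels))
    means = {}
    for c in classes:
        per_person = [data[(labels == c) & (participants == p)].mean(axis=0)
                      for p in np.unique(participants)]
        means[c] = np.mean(per_person, axis=0)  # average over participants
    return {(a, b): means[a] - means[b]
            for i, a in enumerate(classes) for b in classes[i + 1:]}
```

Averaging per participant first, before subtracting, is what suppresses person-specific offsets in the resulting difference profiles.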

In these figures, it can be seen that a variation between the calculated mean values of the signals exists for the different cases. These variations demonstrate specific characteristics for the cases where different keyboards are used and for those where the posture is different but the keyboard is unchanged. As expected, the slightest difference in the sensor data is between a common and a correct working position when typing with an ergonomic keyboard. This means that a change in work habits is enough for a positive change in the working position; in some situations, even changing the keyboard could be a sufficient step toward improving the overall health condition before costly improvements in the work environment or the more difficult changes in behavior. However, in this case, the highest difference occurs on the sensors positioned along the spine regarding their acceleration. The differences in the position of body parts when using ergonomic and ordinary keyboards are significant, with substantial variation in the positions of the left and right hands. The differences become even more pronounced when a proper work posture is taken when using the ergonomic keyboard: the change can be measured in the displacement in the Y-axis along all of the sensors, with the highest difference observed in the sensor placed at the right forearm of the participants.
The change caused by the proper posture also manifests as a higher acceleration measured at the left and right forearms of the participants. In Figures 5-7, the data from the finger sensors presented as S22 to S40 denote sensors placed on the participants' left hand, while S41 to S59 are sensors positioned on the participants' right hand. In the case where the differences between a regular and a proper working pose when using an ergonomic keyboard are examined, there is a change in the position of the fingers, specifically the pinkie of the left hand, along the Z-axis. Additionally, there is a visible change in the acceleration and position of the thumbs of the left and right hands. The differences between the use of an ordinary keyboard and an ergonomic keyboard, regardless of the posture, are visible, although it can be seen that the amplitudes of change in the case of EKC and SK are higher than in the case of EK and SK. The differences in both cases are well pronounced along the sensors placed on the left hand of the participants.

Posture and Activity Classification
The averaged classification accuracy for each feature is presented in Tables 1-3, while the graphic representation of the results is provided in Figures 8-10. In Table 1 and Figure 8, the results obtained from the classification experiments using data obtained from sensors placed on participants' arms are presented. The highest mean classification accuracy of 98.4% was obtained using SVM with linear kernel and the combination of Activity, Mobility and Complexity features. The lowest observed result is 71.5%, obtained using the decision tree classifier and the Complexity features alone. In general, using Mobility and Complexity as sole features leads to lower classification results than in cases where the Activity feature is also used. This effect is amplified in cases where decision tree and kNN classifiers are used: those setups tend to demonstrate lower mean classification accuracies and increased variation in the results of individual participants. The standard deviation for the experiments varies in the range of 1.1% to 6.7% depending on the used features and classifiers.
The observed relations between the extracted features and used classifiers are even more pronounced in the cases where data from the back and head are used. Averaged classification accuracies of 95.8-96.0% are observed when the Activity feature is used as the sole feature or combined with Complexity and Mobility, and the decision tree classifier is applied. Due to the standard deviation of ~3%, a single optimal approach cannot be highlighted. As pointed out, the limitations of the Mobility and Complexity features for this case are highlighted by the obtained mean classification results of under 83% and a standard deviation of 7-9% for those cases.
In Table 3 and Figure 11, we present the averaged classification accuracy for the experiments using data obtained from the sensors positioned on the fingers. Again, the highest accuracy is obtained in cases where the Activity feature is used, with no single optimal combination of features. All of the previously discussed relations are observed here too. Although the mean classification accuracy is above 90%, the setup where data from the finger sensors are used demonstrates the lowest results of all the examined setups concerning the maximum achieved accuracy.

Feature Ranking Results
In addition, feature ranking was performed. The feature subsets containing a combination of all three features were used. As in the classification task, three sensor data sets were examined: back, arms and fingers. After the ranking, the features ranked in the first quartile of the best-performing features for the different participants were compared. The results of the occurrence evaluation for the different sets are presented in Figures 11-13, with the highest value for the occurrence frequency being 9. The difference in the span of the presented results is due to the differences in the length of the feature vectors constituting the three feature datasets. A detailed list of the features with the highest occurrence rate for each sensor group is provided in Appendix A.
Figure 11. The occurrence rate of features ranked in the first quartile after the feature ranking process for data obtained from finger sensors.

In addition, for each classification approach, features calculated from different representations of the raw signals demonstrate higher relevance. When data from the sensors positioned on the fingers are used, the coordinate module's posture data parameter Qs demonstrates the highest relevance for Activity, Mobility and Complexity. The Qs parameter also demonstrates relevance when data from sensors positioned on the arms are used. In this case, however, the acceleration of the hands and arms and their general movement also play a significant role in the classification process. In the last case, where data from sensors positioned along the head and spine of participants are used, the most relevant features are those concerning the movement and displacement of the body.

Discussion
The experimental evaluations show that the classification of computer work postures and activities can be achieved using motion capture sensors positioned in different locations on the body. Although the changes in the position caused by taking a proper sitting posture are different for each separate individual, there are areas of the body where universal and identical effects are observed. One such location is the arms, with the position and movement of the arms being a clear indicator of not only the body posture but also the type of the performed activity and the used hardware. Examination and averaging of participant measurements show that the position of the right forearm is subject to a greater level of change than the spine when a proper sitting posture is assumed. These effects are also observed in the classification stage of the experiments, where the highest mean classification accuracy is achieved when features are extracted from data obtained from sensors positioned along the arms of the participants.
Another essential element of the classification process is the relation between the classification accuracy achieved by using different groups of features and their relevance to the observed states. As previously noted, significantly higher classification accuracies (in most cases between 5 and 10%) are achieved by using Activity as a sole feature than in the experiments where Mobility and Complexity features are used. This contrasts with the feature-relevance evaluation, in which Activity features show lower relevance than Mobility and Complexity features. This demonstrates that the Activity features are more strongly linked to the observed states than the other examined features. As Activity indicates the presence of high- or low-frequency components in the signals, this suggests that one of the most defining differences across all parameters obtained from the raw motion sensor data is the intensity of the measured changes. This observation indicates that the change in work posture and the use of an ergonomic keyboard lead to a pronounced difference in the intensity with which different activities are performed. In addition, the lower results for Mobility and Complexity show that the setup change does not significantly alter the users' workflow. The low results for the Mobility parameter indicate no significant change in the frequency variance (in the current case, intensity) with which the task is carried out. Similar conclusions can be drawn for the Complexity parameter, which measures the degree to which the recorded signals resemble a sine wave, indicating that the tasks are performed with a relatively similar periodicity. The high standard deviation between the results observed for Mobility and Complexity can also be attributed to these relations between the participants' activities and the calculated features.
During the recording sessions, the participants' work patterns varied with the degree of prior experience with using ergonomic keyboards.
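The three parameters discussed above follow Hjorth's standard time-domain definitions: Activity is the signal variance, Mobility is the square root of the ratio of the variance of the first derivative to that of the signal, and Complexity is the Mobility of the derivative divided by the Mobility of the signal (equal to 1 for a pure sine wave). A minimal sketch of how they can be computed from a raw 1-D sensor channel, using finite differences as derivatives, is shown below; this is an illustrative implementation, not the authors' released code.

```python
import numpy as np

def hjorth_parameters(signal):
    """Hjorth's Activity, Mobility and Complexity for a 1-D signal.

    Derivatives are approximated by first differences, which is a common
    discrete-time convention for Hjorth's parameters.
    """
    signal = np.asarray(signal, dtype=float)
    d1 = np.diff(signal)            # first derivative (finite difference)
    d2 = np.diff(signal, n=2)       # second derivative

    var0 = np.var(signal)
    var1 = np.var(d1)
    var2 = np.var(d2)

    activity = var0                                  # signal power/variance
    mobility = np.sqrt(var1 / var0)                  # mean-frequency proxy
    complexity = np.sqrt(var2 / var1) / mobility     # bandwidth proxy
    return activity, mobility, complexity

# Usage: a pure sine wave has Activity = amplitude^2 / 2 and Complexity ~ 1,
# since its derivative is a sinusoid of the same frequency.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
activity, mobility, complexity = hjorth_parameters(np.sin(2 * np.pi * 5 * t))
```

In the paper's pipeline, such per-channel triples are stacked into feature vectors and fed to the binary classifiers.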
All of the observed results align well with previous studies in the field, which indicate that the use of ergonomic keyboards and a proper work posture does not affect the quality of the work process while reducing strain and exhaustion during work. Using parameters that capture the intensity of activity during work therefore provides the best approach for modeling and detecting work-related activities and correct work posture.
Future research will benefit from the knowledge accumulated on the applicability of Hjorth's parameters toward creating an improper posture detector, which will be an essential component in a camera-less body position monitoring system. Novel developments in smart textiles research (COST Action CA17107 - CONTEXT: European Network to connect research and innovation efforts on advanced Smart Textiles, URL: https://www.context-cost.eu/, accessed on 30 May 2022) and recent commercial innovations will soon lead to smart clothes with integrated accelerometer sensors. Such clothes could unobtrusively provide the sensor data needed for working posture assessment. Notifications and reminders could help users correct their body posture during prolonged hours of computer work, decreasing the risk of musculoskeletal disorders without the need for specialized sitting pillows.

Conclusions
In this article, we reported on the development of a publicly available dataset containing images, videos, multi-sensor accelerometer data and motion capture models of ten participants, which can support future research on improper posture detection. We used the motion capture data to demonstrate the applicability of Hjorth's parameters as features for the automated classification of computer-related work postures. In this regard, we reported that Hjorth's time-domain features (Activity, Mobility and Complexity) can distinguish improper from proper sitting postures, with an average classification accuracy of 98.4%. Three experimental datasets containing data from smaller subsets of sensors positioned on the fingers, arms and back were tested, and for each group, the classification accuracy was above 92%.