Dynamic Path Planning Based on 3D Cloud Recognition for an Assistive Bathing Robot

Abstract: Assistive bathing robots have become a focus of attention in elder care due to advantages such as a humanoid working approach. However, the abilities of dynamic recognition and path planning are key to realizing these advantages. This paper proposes a novel approach to recognize and track the moving human back and to plan paths on it via a 3D point cloud. Firstly, the geometric features of the human back are recognized through coarse-to-fine alignment. The Intrinsic Shape Signature (ISS) algorithm, combined with the Fast Point Feature Histogram (FPFH) and the Sample Consensus Initial Alignment (SAC-IA) algorithm, is adopted for coarse alignment, and the Iterative Closest Point (ICP) algorithm is applied for fine alignment to improve recognition accuracy. Then, the dynamic transformation matrix between consecutive recognitions is deduced from the spatial motion between two adjacent recognized back point clouds, and the path can be planned on the tracked human back. Finally, a set of experiments is conducted to verify the proposed algorithm. The results show that the running time is reduced by 66.18% and 96.29% compared with two other common algorithms, respectively.


Introduction
The increasingly serious aging of society [1] has drawn attention from all walks of life to products that assist the elderly and the disabled. In recent years, various types of robots have been developed for elderly care, such as bath robots [2], moxibustion robots [3], and massage robots [4]. Safety and comfort during operation are crucial for these robots, since they must use their robotic arms to imitate human hands working on the user's back. Dynamic path planning for bathing is a challenging task because the robot must localize the back and perform dynamic tracking and path planning while the human moves.
The majority of available studies on back recognition are static. Chen H et al. [5] recognized the back by pasting a large number of artificial markers on it, which is simple and rapid but limited in application scenarios. K. C. Jones et al. [6] designed a massage robot that recognizes the back from directly inputted coordinates of the user's shoulder and waist points. However, it cannot adapt to individual differences among users with different body sizes. Meanwhile, previously inputted coordinates no longer correspond to the user's body parts once the user moves during the massage. In practical situations, users inevitably change their sitting posture due to breathing or other factors. Thus, the robot needs to quickly acquire three-dimensional information about the body surface to recognize the back region in different postures.

The main contributions of this paper are as follows:

1. The human body point cloud is rapidly acquired from the scene information collected by the depth camera, which solves the problem of large amounts of collected scene data and redundant point clouds.

2. The back region is recognized using the geometric features of the human body point cloud, which contains no RGB information or evident texture. An effective segmentation method of the back region is proposed for users with different body types in different postures during movement.

3. A point cloud coarse-to-fine alignment algorithm that incorporates a spatial motion transformation matrix is proposed to achieve human back tracking.

4. A method for acquiring bathing paths is provided, and dynamic path planning is realized by combining the outcomes of back tracking. This resolves the issue of the robot being unable to alter the bathing path in time when the user moves involuntarily during bathing.

5. The proposed algorithm is compared with the 3Dcs-ICP algorithm and the standard coarse-fine alignment algorithm in back tracking experiments, and its comprehensive performance is illustrated in terms of evaluation metrics such as recognition speed and accuracy.
The remainder of the paper is organized as follows: Sections 2 and 3 introduce the principles of the proposed algorithms. Section 4 presents the experimental platform and results. Finally, the conclusion is drawn in Section 5.

Dynamic Tracking Algorithm
Users usually involuntarily adjust their postures due to breathing or other factors during the bathing process. The original bathing paths should be adjusted accordingly with the change in back posture, thus improving user comfort. Meanwhile, the assistive bathing robot is oriented toward the semi-disabled elderly, and the whole bathing process is performed on a chair. Therefore, the chair can be regarded as the fixed coordinate system O_C−X_C Y_C Z_C shown in Figure 1 and used to calculate the transformation matrix of a point on the human back point cloud during movement.
This paper proposes a dynamic tracking algorithm to solve the problem of involuntary random motion of the human body while the assistive bathing robot is working; the spatial motion transformation matrix of the human back is obtained from the registration results of two adjacent frames of point clouds. The specific process is shown in Figure 2. Firstly, the number of points is reduced by VoxelGrid downsampling; secondly, an approximate rotation-translation matrix between the two frames is computed by coarse alignment after extracting the key points; finally, the exact matrix is obtained by iterative fine alignment.


Recognition of the Human Back
As shown in Figure 3a, the point cloud collected by the depth camera, which has no RGB information, includes a large amount of redundant data such as walls and floors. It is necessary to preprocess the point cloud to improve algorithm efficiency, which is divided into three parts as illustrated in Figure 3b; the resulting human body point cloud is shown in Figure 3c. Firstly, most wall points are removed by a passthrough filter. The point cloud P_2, which contains the human body region and the seat region, can be obtained as

P_2 = { p_i ∈ P_1 | x_1 ≤ x_i ≤ x_2, z_1 ≤ z_i ≤ z_2 },

where P_1 = { p_i | p_i ∈ R^3, i = 1, 2, ..., n } denotes the scene point cloud captured by the camera, p_i denotes any point in the point cloud, and x_1, x_2 and z_1, z_2 are the threshold values in the x- and z-directions, respectively. Secondly, statistical filtering of the point cloud is required to minimize the effect of outliers. The point cloud P_3 can be obtained by removing the outliers whose mean near-neighbor distance exceeds the mean by more than α times the standard deviation:

P_3 = { p_i ∈ P_2 | d_i ≤ μ + α σ },

where d_i denotes the mean distance from p_i to its nearest-neighbor points, and μ and σ denote the mean distance and standard deviation, respectively. Finally, the overlap between the seat point cloud and the point cloud P_3 is deleted to obtain the human body point cloud P_body shown in Figure 3c. The point cloud P_body is processed by the geometric feature-based back segmentation method [21] in Figure 3d to obtain the human back point cloud P_back shown in Figure 3e.
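The two filtering steps above can be sketched in pure Python. This is an illustrative sketch only: the paper's implementation uses PCL's passthrough and statistical filters in C++, and the function names here are hypothetical.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def passthrough(points, x1, x2, z1, z2):
    """Keep points whose x and z coordinates fall inside the thresholds (P1 -> P2)."""
    return [p for p in points if x1 <= p[0] <= x2 and z1 <= p[2] <= z2]

def statistical_filter(points, alpha=1.0):
    """Remove points whose nearest-neighbor distance exceeds mu + alpha*sigma (P2 -> P3)."""
    if len(points) < 2:
        return list(points)
    # d_i: distance from each point to its nearest neighbor (brute force for clarity)
    d = []
    for i, p in enumerate(points):
        d.append(min(dist(p, q) for j, q in enumerate(points) if j != i))
    mu = sum(d) / len(d)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in d) / len(d))
    return [p for p, di in zip(points, d) if di <= mu + alpha * sigma]
```

A k-d tree would replace the brute-force neighbor search in practice; the O(n²) loop is kept here only to make the definition of d_i explicit.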

Coarse-to-Fine Alignment and Tracking Algorithm
In order to shorten the alignment time and improve the interaction efficiency between the robotic arm and the human body, voxel downsampling is chosen to filter the point cloud of the human back. The coordinates of the center of gravity within a voxel can be calculated as

X_C = (1/m) Σ x_i,  Y_C = (1/m) Σ y_i,  Z_C = (1/m) Σ z_i,

where X_C, Y_C and Z_C denote the coordinates of the center of gravity within a voxel, x_i, y_i and z_i denote the coordinates of each point in the voxel, m denotes the number of points in the voxel, and the voxel edge length is 10 mm. A local coordinate system is established at point p_i on the point cloud of the human back, and a spherical region with radius r is constructed. Then, the weight w_ij of each point p_j in the region with respect to point p_i is calculated from the Euclidean distance:

w_ij = 1 / ||p_i − p_j||.

The covariance matrix cov(p_i) between point p_i and all points in its r-neighborhood is calculated as

cov(p_i) = Σ_{||p_j − p_i|| < r} w_ij (p_j − p_i)(p_j − p_i)^T / Σ_{||p_j − p_i|| < r} w_ij.

The eigenvalues λ_i^1, λ_i^2 and λ_i^3 of the covariance matrix are then obtained, and the set of points that meets the following condition is selected as the key points of the back region:

λ_i^2 / λ_i^3 ≤ δ_1  and  λ_i^1 / λ_i^2 ≤ δ_2,

where λ_i^1 ≤ λ_i^2 ≤ λ_i^3, and δ_1 and δ_2 are parameter thresholds ranging from 0 to 1.
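The voxel-grid center-of-gravity computation above can be sketched as follows. This is a standalone illustration, not the paper's PCL VoxelGrid implementation; the function name and the use of tuples as voxel keys are assumptions.

```python
from collections import defaultdict

def voxel_downsample(points, leaf=10.0):
    """Replace all points falling in one cubic voxel (edge length `leaf`, in mm)
    by their center of gravity, as in the equation above."""
    voxels = defaultdict(list)
    for x, y, z in points:
        # Integer voxel index along each axis identifies the cell
        key = (int(x // leaf), int(y // leaf), int(z // leaf))
        voxels[key].append((x, y, z))
    out = []
    for pts in voxels.values():
        m = len(pts)
        out.append((sum(p[0] for p in pts) / m,
                    sum(p[1] for p in pts) / m,
                    sum(p[2] for p in pts) / m))
    return out
```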
Electronics 2024, 13, 1170

After voxel filtering, the key points are extracted and coarsely aligned, with the source point cloud being the cloud prior to the movement and the target point cloud being the cloud after the movement of the human body. First, the FPFH of the key points extracted from the two frames is computed as

FPFH(p_i) = SPFH(p_i) + (1/k) Σ_{s=1}^{k} (1/w_s) · SPFH(p_s),

where p_s denotes a neighboring point of p_i, k denotes the number of neighbors of p_i, and w_s denotes the distance weight between p_i and p_s. Next, n sample points are randomly selected in the source point cloud such that the distance between any two of them is greater than a minimum distance threshold; their correspondences in the target point cloud are found by FPFH similarity, and a rigid transformation matrix is estimated from these correspondences. Based on this matrix, the distance error function, which measures the alignment between the transformed and target point clouds, can be calculated as

H = Σ_i h(e_i),  h(e_i) = (1/2) e_i²  if e_i ≤ t_e,  (1/2) t_e (2 e_i − t_e)  otherwise,

where t_e denotes the pre-set distance threshold and e_i is the distance difference after transforming the i-th set of corresponding points. The procedure is repeated until the maximum number of iterations is reached. The minimum of the error function among all candidate transformations determines the optimal transformation, and the final transformation matrix (R, t) is output.
After coarse alignment, the two frames are matched to obtain an approximate transformation matrix. To improve alignment precision, the two point clouds are further aligned using the ICP algorithm. The error function E(R, t) can be calculated as

E(R, t) = (1/a) Σ_{i=1}^{a} || p_ai − (R p_bi + t) ||²,

where p_bi denotes the point in the source point cloud that corresponds to the point p_ai in the target point cloud, and a denotes the number of corresponding point pairs. Finally, the optimal transformation matrix is produced when the iteration reaches the minimum of this error function. Any point s_i in the human back point cloud before motion can then be transformed by the alignment transformation matrix (R, t) to obtain the point s_i′ after motion:

s_i′ = R s_i + t,
where s_i denotes any point in the human back point cloud S_0 = { s_i | s_i ∈ R^3, i = 1, 2, ..., n } before motion, R denotes a 3 × 3 rotation matrix, and t denotes a 3 × 1 translation vector.
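Applying the rigid transform s′ = R s + t to a point or a whole cloud can be sketched in a few lines of pure Python (for illustration; a real implementation would use matrix libraries, and the function names here are hypothetical):

```python
def transform_point(R, t, s):
    """Apply s' = R*s + t for a 3x3 rotation matrix R (row-major nested lists)
    and a translation vector t, both over a 3D point s."""
    return tuple(sum(R[r][c] * s[c] for c in range(3)) + t[r] for r in range(3))

def transform_cloud(R, t, cloud):
    """Transform every point of a cloud with the same (R, t)."""
    return [transform_point(R, t, s) for s in cloud]
```

For example, a 90° rotation about the z-axis followed by a unit translation along x maps (1, 0, 0) to (1, 1, 0).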

Dynamic Bathing Path Planning
The user's sitting-posture adjustments have good time continuity during the bathing process. The depth camera estimates the position of the user's back surface in consecutive frames by continuously acquiring point clouds. Thus, the initial bathing trajectory can be modified into a dynamic bathing path by computing the transformation matrix of the body movement via the point cloud alignment algorithm. The flowchart is shown in Figure 4.
First, the position points of the bathing path are calculated by dividing the back region. Second, the point cloud slicing method is used to obtain the bathing path points, which are fitted with a polynomial to obtain the preset bathing path. Finally, the preset path is combined with the positional transformation result of back tracking to obtain the dynamic path of the bathing process.

Human Back Region Division
The position of the spine line is calculated by extracting feature points of the human back, such as the shoulder points and lateral hip points, to divide the left and right regions of the back. Then, the waist line is calculated in conjunction with human body characteristics to divide the upper and lower back.
As shown in the red region of Figure 5a (200 mm above y_hip), after traversing the x-coordinates of all points in this region, the minimum and maximum points are taken as the left hip point and the right hip point, respectively; their x-coordinates are x_l_hip and x_r_hip, marked by the blue dots in Figure 5a. The blue line in Figure 5a is the shoulder line, and the red points are the left and right shoulder points, with x-coordinates x_l_sh and x_r_sh, respectively.
A convex hull is formed using the four feature points mentioned above to divide the back region through the coordinates of the baseline y_hip and the shoulder line y_sh, as illustrated by the blue region of Figure 5b. The waist is the thinnest region of the human back; therefore, a fine-grained segmentation is performed in the band 200-400 mm above y_hip, as shown by the red frame in Figure 5b. The y-coordinate corresponding to the segment with the smallest length is taken as y_w and noted as the waist line, shown by the green line in Figure 5b. The spine line x_centre is calculated by averaging the x-coordinates of the left and right shoulder points and the left and right lateral hip points:

x_centre = (x_l_sh + x_r_sh + x_l_hip + x_r_hip) / 4.

The back is then divided into left and right parts and upper and lower parts according to the spine line and waist line, respectively, where the green region in Figure 5c indicates the left back part and the red region the right back part; the green region in Figure 5d indicates the upper back part and the red region the lower back part.

The Bathing Path Generation Algorithm
We propose a bathing path generation algorithm based on 3D point cloud data to obtain the robot's path points while bathing. The traditional cross-section approach to trajectory solution uses the intersection line between a cross-sectional plane and the point cloud data. However, the acquired point cloud of the human back is discrete, so this intersection line cannot be obtained precisely. In this paper, we propose an improvement: micro-space cutting plane clusters are created in the point cloud, and their intersection lines with the point cloud are obtained from planar projection points.
In order to find the line of intersection of the cutting plane F with the human back, it is necessary to generate planes F_1 and F_2 on each side of F, separated from it by ζ/2, as shown in Figure 6a. Then, all points between planes F_1 and F_2 are projected onto plane F to obtain the intersection line of F with the back, as shown in Figure 6b, where the blue points are data points in the sliced region between F_1 and F_2. The value of the distance parameter ζ between the two side planes should be set according to the point cloud density.

The path points obtained by projection onto the plane are discrete; their coordinates can be expressed as a curve function by curve fitting. A polynomial fitting method is chosen due to the gentle surface of the human back:

f(x) = Σ_{j=0}^{m} a_j x^j,

where m is the polynomial order and a_j are the polynomial coefficients. The fitting error is evaluated using the least squares method; the error evaluation function can be expressed as

E = Σ_i ( y_i − f(x_i) )²,

where y_i denotes the actual value and f(x_i) denotes the fitted value. When the error evaluation function is minimized, the coefficients of the corresponding fitting function can be obtained.
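The slicing and least-squares polynomial fit described above can be sketched in pure Python. This is an illustrative implementation under stated assumptions (slab selection by |y − y_m| ≤ ζ/2, fitting z against x via the normal equations); the function names are hypothetical and the paper's implementation may differ.

```python
def slice_points(points, y_m, zeta):
    """Keep points within zeta/2 of the cutting plane y = y_m and project them onto it."""
    return [(x, y_m, z) for x, y, z in points if abs(y - y_m) <= zeta / 2.0]

def polyfit(xs, zs, degree):
    """Least-squares polynomial coefficients (lowest order first),
    solved from the normal equations by Gaussian elimination."""
    n = degree + 1
    # Build A^T A and A^T z for the Vandermonde system
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    atz = [sum(z * x ** i for x, z in zip(xs, zs)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atz[col], atz[piv] = atz[piv], atz[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            atz[r] -= f * atz[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (atz[r] - sum(ata[r][c] * coef[c] for c in range(r + 1, n))) / ata[r][r]
    return coef
```

For low polynomial orders and the narrow x-range of a back slice, the normal-equation approach is adequate; a production system would use a numerically stabler solver.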
Since the bathing path cross-sections can be split into transverse and longitudinal, a bathing path can be built by combining various segments of simple transverse and longitudinal paths, and a functional relationship between the x- and y-coordinates and time t can be established. The slices are made at y = y_m for transverse segments and x = x_m for longitudinal segments, and the projection points are parallel to the XOZ and YOZ planes to generate the fitted functions f(x) and f(y), respectively. A complete path consisting of these segments can then be obtained, and the spatial position of the robot-assisted bathing at t = t_m can be calculated as (x(t_m), y(t_m), z(t_m)) by Equation (18).

Dynamic Path Planning Algorithm
After starting the bathing program, the depth camera captures the scene point cloud, which is preprocessed and passed to back recognition to obtain the human back point cloud data. If the point cloud is captured for the first time, the bathing path S_path is planned according to the bathing mode selected by the user, and the robotic arm starts to wash the user's back along S_path. Subsequently, the back point cloud P_m captured at t_m is aligned and tracked with respect to P_{m−1}, and the user's positional transformation matrix T_{m−1,m} can be calculated from

P_m = T_{m−1,m} · P_{m−1},

where P_0, P_1, ..., P_n are the human back point clouds captured from t_0 to t_n. Therefore, the coordinate point M(x(t_m), y(t_m), z(t_m)) on the robot's preset path S_path becomes M′(x′(t_m), y′(t_m), z′(t_m)) after the back movement at t = t_m:

M′ = T · M,
where t_0 ≤ t_m ≤ t_n and T = Π_{i=1}^{m} T_{i−1,i}. Finally, the updated path points are converted to spatial positions in the robot coordinate system and sent to the robot motion control program. Then, the assistive bathing robot cleans the user's back by executing the real-time path while the depth camera repeats the above steps until bathing is completed.
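The accumulation of the per-frame transforms T_{i−1,i} into T and its application to a preset path point can be sketched as follows. This is an illustrative pure-Python version with hypothetical function names; each transform is represented as a pair (R, t) of a 3×3 rotation and a translation vector.

```python
def mat_vec(R, v):
    """3x3 matrix times 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def compose(T1, T2):
    """Compose two rigid transforms (R, t): apply T1 first, then T2."""
    R1, t1 = T1
    R2, t2 = T2
    R = [[sum(R2[i][k] * R1[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    t = [mat_vec(R2, t1)[i] + t2[i] for i in range(3)]
    return R, t

def track_path_point(point, frame_transforms):
    """Update a preset path point through T_{0,1}, ..., T_{m-1,m}
    (the cumulative product T of the per-frame transforms)."""
    R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    t = [0.0, 0.0, 0.0]
    for Tm in frame_transforms:
        R, t = compose((R, t), Tm)
    return [mat_vec(R, point)[i] + t[i] for i in range(3)]
```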
In conclusion, a total of three paths are designed, as shown in Figure 7.

Experiment
The experimental platform for simulated bathing built in this paper, shown in Figure 8, consists of a depth camera, a camera mount, a seat, a user, and a computer. The Intel RealSense D455 camera has a vertical field of view of 58° and a horizontal field of view of 87°.


Back Recognition and Tracking
This paper used the Intel RealSense D455 camera to capture point clouds of the user's continuous movements in five different motion postures, i.e., sitting up, tilting, twisting, arching, and arm swinging, to verify the algorithm's effectiveness in dynamic recognition and tracking of the human back. The experimental results are shown in Figure 9, where the black point cloud represents the preprocessed body region and the blue point cloud represents the recognized back region.
The four motion postures in Figure 9b-e were aligned by the two comparison algorithms and this paper's algorithm, and the alignment effect and running time were compared, as shown in Figure 10 and Table 1, respectively. The green point cloud is the human back before the motion, the red point cloud is the human back after the motion, and the blue point cloud is the aligned result. The experimental results show that this paper's algorithm and Algorithm 2 have higher alignment accuracy and better robustness than Algorithm 1. This paper's algorithm reduces the alignment time by 66.18% and 96.29% compared to the two other algorithms, respectively.
In this paper, the root mean square error (RMSE) is used to evaluate the alignment accuracy of dynamic human back tracking:

RMSE = sqrt( (1/q) Σ_{i=1}^{q} (X_i − X̂_i)² ),

where q denotes the number of points, and X_i and X̂_i denote the Euclidean distance between corresponding points after alignment and its truth value, respectively. In addition, the RMSE in the x-, y-, and z-directions is denoted by RMSE_x, RMSE_y and RMSE_z, respectively. The smaller the result, the better the alignment. The four errors RMSE, RMSE_x, RMSE_y and RMSE_z were calculated separately for the four back tracking operations to assess alignment accuracy; the results are shown in Table 2. The arm-swinging posture has the smallest RMSE, RMSE_x, RMSE_y and RMSE_z of the four postural transformations, which results from the smallest range of back motion in this posture. In addition, the mean values of RMSE, RMSE_x, RMSE_y and RMSE_z for the four postures were 6.26 mm, 2.89 mm, 2.36 mm and 4.65 mm, respectively; the maximum root mean square errors in the x-, y-, and z-directions were 4.97 mm, 3.07 mm and 6.51 mm, respectively. The diameter of the robot's bathing brush head was 75 mm, and the bristle length was 12 mm. The alignment errors were therefore within the tolerance of the brush head, which ensures that it fits the human skin while the body moves. In summary, the proposed dynamic tracking algorithm can satisfy the robot bathing task.
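The overall and per-axis RMSE between an aligned cloud and its ground truth can be computed as below. This is a generic sketch (the function name is hypothetical), not the paper's evaluation code.

```python
import math

def rmse_axes(aligned, truth):
    """Return (overall RMSE, [RMSE_x, RMSE_y, RMSE_z]) between two
    corresponding point lists of equal length."""
    q = len(aligned)
    # Per-axis root mean square error
    per_axis = [math.sqrt(sum((a[k] - t[k]) ** 2 for a, t in zip(aligned, truth)) / q)
                for k in range(3)]
    # Overall RMSE over the 3D point-to-point distances
    total = math.sqrt(sum(sum((a[k] - t[k]) ** 2 for k in range(3))
                          for a, t in zip(aligned, truth)) / q)
    return total, per_axis
```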

Dynamic Path Generation
To verify generalizability, the back bathing path planning algorithm was carried out on two experimenters: the first subject was 160 cm tall and weighed 50 kg, and the second was 175 cm tall and weighed 70 kg, as shown in Figure 11. As shown in Figure 11, the algorithm in this paper can effectively recognize the human back features and segment the back region to obtain the preset bathing path. The preset bathing paths obtained in this process were used to provide path template point clouds for dynamic back path planning.
Furthermore, the experimenter performed continuous motions on the seat, as shown in Figure 12a-f, to simulate the real situation of a user during bathing. Firstly, the computer processed the captured point cloud information to obtain the preset path, as shown in Figure 12a. Secondly, the experimenter continuously changed posture, and the computer interface displayed the preprocessed point clouds and the online-adjusted bow-shaped preset paths, as shown in Figure 12b-f.
Finally, the real-time paths from different viewpoints are shown in Figure 13; the black point clouds are the bow-shaped preset path, and the red point clouds are the dynamic path obtained by coupling with the back tracking results. The dynamic paths deviate significantly from the preset path in the upper back, which moves the most, while the lower back shows less deviation due to less motion. The experimental results show that this paper's algorithm can obtain smooth and continuous dynamic paths.

Conclusions
This paper proposes a dynamic tracking and path planning method for the human back, which is divided into three parts. In the first part, the human body is captured through point cloud preprocessing, and the back region is recognized based on its geometric features. In the second part, dynamic tracking of the human back is realized by extracting the back key points and obtaining the transformation matrix through coarse-fine alignment of the back point clouds before and after movement. In the third part, the acquisition and planning of the robot's bathing path are investigated: the point cloud paths are fitted with a polynomial function, the preset bathing path is obtained by establishing a link with time, and the dynamic path is generated by coupling with the posture transformation matrix. Finally, the experimental platform was built, and back tracking experiments were conducted in four different postures. The running time of the proposed algorithm was reduced by 66.18% and 96.29% compared with the other two algorithms, and the average root mean square errors of the target region in the x-, y-, and z-directions were 2.64 mm, 2.61 mm and 5.17 mm, respectively. Meanwhile, the method can adjust the bathing path online according to the user's posture changes.

Figure 1.
Figure 1. Schematic diagram of human posture changes during bathing.


Figure 3.
Figure 3. Flowchart of the back recognition method. (a) is the scene point cloud, (b) is the preprocessing process, (c) is the body point cloud, (d) is the shoulder line position, and (e) is the human back point cloud.


Figure 5.
Figure 5. Back region division diagram: (a) extraction of key points, (b) waist recognition, (c) left and right division, and (d) upper and lower division.




Figure 6.
Figure 6. Schematic of point cloud path generation. (a) Point cloud slicing. (b) Point cloud projection.

Figure 7.
Figure 7. Schematic of the three preset bathing paths: (a) is the right-left path, (b) is the up-down path, and (c) is the bow-shaped path.
The camera is mounted at a horizontal distance of 858.7 mm from the human back and a vertical height of 476.0 mm above the seat surface. The algorithms were run on the Windows 10 operating system with an Intel(R) Core(TM) i5-10500 CPU, under Visual Studio 2019 (C++) with the PCL 1.8.1 library.


Figure 11.
Figure 11. Preset path processing for (a-e) subject 1 and (f-j) subject 2. (a,f) Human body point cloud. (b,g) Initial recognition of the back. (c,h) Back region obtained from key points. (d,i) Back segmentation. (e,j) Three different point cloud paths and their normal vectors.



Figure 12.
Figure 12. Simulating postural changes during bathing. (a) Processing point cloud information. (b) Sitting-up position. (c) Tilting position. (d) Twisting position. (e) Arching position. (f) Arm-swinging position.

Figure 13.
Figure 13. Real-time path of the bathing process.


Table 1.
Running time comparison.

Motion Posture | This Paper's Algorithm (s) | 3Dcs-ICP Algorithm (s) | Standard Coarse-Fine Alignment Algorithm (s)
Tilting | 1.538 | 4.355 | 40.210


Table 2.
RMSE analysis of point cloud alignment.