Exploration of Indoor Barrier-Free Plane Intelligent Lofting System Combining BIM and Multi-Sensors

Lofting is an essential part of construction projects, and high-quality lofting is the basis of efficient construction. However, the most common current lofting method, which uses a total station in a multi-person cooperative way, consumes much manpower and time. With the rapid development of remote sensing and robot technology, using robots instead of manpower can effectively solve this problem, but few scholars have studied it. How to effectively combine remote sensing and robots with lofting is a challenging problem. In this paper, we propose an intelligent lofting system for indoor barrier-free plane environments and design a high-flexibility, low-cost autonomous mobile robot platform based on a single-chip microcomputer, a Micro-Electro-Mechanical-Systems Inertial Measurement Unit (MEMS-IMU), wheel encoders, and a magnetometer. The robot also combines a Building Information Modeling (BIM) laser lofting instrument and WIFI communication technology to obtain its own position. To ensure localization accuracy, the kinematics model of the Mecanum-wheel robot is built, and an Extended Kalman Filter (EKF) is used to fuse the multi-sensor data. The final experimental results show that this system can significantly improve lofting efficiency and reduce manpower.


Introduction
The process of construction mainly includes construction preparation, project construction, and project acceptance. Lofting is involved in both project construction and project acceptance. It is the process of interpreting construction plans and marking the locations of points or axes. Lofting is performed to ensure that a project is built according to engineering design plans [1,2], and it is an essential, basic link in building construction that runs through the whole construction project.
The quality of a project's lofting depends on its efficiency and cost. In the early stages, limited by the available lofting tools and methods, several operators had to invest a great deal of energy and time in lofting and calibration at the construction site. In the past decade, with the rapid development of the surveying and mapping industry, the appearance of the total station has greatly improved lofting efficiency. As an instrument integrating the functions of measuring horizontal angle, vertical angle, distance, and elevation difference, it can be applied in almost all surveying fields and is easy to operate [3,4]. However, both the harsh environment of many construction sites, such as tunnels, bridges, and high-rise buildings, and atrocious weather have a non-negligible influence on the operators and the construction period. In recent years, robot technology has been continuously developed and widely used in agriculture and forestry monitoring [5][6][7], industrial measurement, and other fields.

The paper is organized as follows. Section 2 summarizes the related work on robot localization and lofting environmental adaptability. The design of the robot platform is shown in Section 3. A brief introduction of the BIM laser lofting instrument and the acquisition of location information are given in Section 4. Section 5 presents multi-sensor data fusion for attitude estimation. The theory of robot motion control is described in Section 6. Section 7 shows the experimental results, which are discussed in Section 8. Finally, the conclusion is given in Section 9.

Related Work
There are many operating steps in construction lofting, such as setting up instruments, aligning, leveling, locating, and marking, among which accurate locating is a critical step. Therefore, for the robot to reach the control points accurately in an unfamiliar environment, it is essential to obtain the robot's state.
Many scholars have carried out research on the localization of robots in unfamiliar environments. The related work mainly concerns precision requirements, application environments, the selection and combination of sensors, and so on. Multiple sensors are usually used in this research to estimate the state of ground or flying robots. The GPS-RTK positioning method is mature and has high positioning accuracy. Jiang [16] successfully used GPS technology for the layout of a bridge construction horizontal control network and for bridge-axis lofting, and achieved sufficient lofting accuracy. However, this method is commonly used for outdoor navigation or positioning and is not suitable for indoor environments. Simultaneous localization and mapping (SLAM) is currently the main method for robot localization in indoor environments. It can map the environment in real time so that robots can work in it for a long time, and many indoor robots have adopted this technology [17,18]. However, this method usually needs to compare against a dynamic map to reduce errors, and since different construction sites are completely different, SLAM cannot be applied well to lofting. Ultra-wideband (UWB) is also a common method for indoor localization, but its accuracy is insufficient [19,20]. The IMU, as one of the most commonly used sensors for robot state estimation, can be seen in almost all current robot localization research. However, due to the large accumulative errors caused by wheel slip, sensor drift, and other factors [21] (pp. 483-484), it can hardly be used alone in high-precision state estimation. Additional sensors, such as Lidar, cameras, or magnetometers, must be used for position or attitude correction.
Lidar/IMU [22][23][24], camera/IMU [25][26][27], Lidar/camera/IMU [28][29][30], and other multi-sensor combinations can estimate the robot's state through feature-matching algorithms. However, these methods still have problems such as insufficient real-time performance and limited environmental adaptability. The featureless or repetitive features of the lofting environment also make these methods inapplicable. Although they cannot be fully applied to lofting conditions, the IMU remains very efficient for estimating the robot state.

In [31], the authors proposed a method of fusing data from two GPS-RTK units and an IMU to estimate a vehicle's three-dimensional attitude, position, and velocity. However, this method cannot solve the robot localization problem in indoor or signal-occluded environments. Liu [32] describes a robot localization method based on laser SLAM, but high-precision laser radar is often costly. To address this, visual SLAM was proposed. It can match depth data using a feature-matching algorithm [33], but its accuracy needs to be corrected by loop detection, which wastes a lot of time, and a dim environment also affects localization. In [34], the authors used the EKF algorithm to fuse multi-sensor data including laser SLAM, visual SLAM, an IMU, a magnetometer, and other sensors to locate the robot, and verified through many experiments that the method can be applied to indoor and outdoor environments. These multi-sensor fusion methods have achieved good results starting from an unknown location in an unknown environment. However, in lofting, the robot usually starts from a known point: the first party provides at least two known points (or a known direction and a known point) and the points to be lofted. Therefore, the points to be lofted can be reached simply by solving the relative localization relationship between points, and the above-mentioned methods only increase computation and cost. In [35], the author achieved state estimation by analyzing the kinematic model of the robot and corrected wheel-slippage error to a certain extent. However, this approach, which used an IMU and wheel encoders, requires very accurate motion estimation and cannot fundamentally solve the problem of accumulative error. The lack of external sensors for position correction also produces large accumulative errors after long-time or long-distance operation.
Ferreira [36] described a method of integrating sensors inside buildings with BIM to determine the location of people in buildings. Although this concerns the localization of people, it shows that BIM has guiding significance for indoor localization to some degree. Considering the existing technologies, the application scenarios, and the goal of improving lofting efficiency, this paper focuses on the localization and navigation of ground robots in indoor barrier-free plane environments by combining a BIM laser lofting instrument, a BIM model, wheel encoders, a MEMS-IMU, and a magnetometer. The final experiment shows that the results meet the lofting accuracy requirements of most construction projects.

General Design
The intelligent lofting system mainly includes three parts: a ground robot, a BIM laser lofting instrument, and a user terminal. The ground robot moves on the construction site autonomously, communicates with the BIM laser lofting instrument through a WIFI module, and processes the data sent by the lofting instrument and the multi-sensors. The BIM laser lofting instrument obtains the plane position of the robot in real time and sends it to the robot and the user terminal through the WIFI module. The user terminal performs monitoring, manual analysis, and data recording of the construction site according to the feedback information. Figure 2 shows the general structure of the intelligent lofting system. This section mainly introduces the hardware design of the robot.

Robot Platform Design
The flexibility and robustness of the ground robot are the basic guarantees of high-precision localization, so the size of the robot should not be too large. In this paper, the Mecanum Wheel Chassis of the Home of Balancing Trolley was selected as the chassis of the ground robot, and secondary development was carried out on it. The robot adopted an aluminum alloy chassis with a total weight of about 3 kg and an overall size of 200 × 250 × 70 mm. We deployed 7 W DC motors to realize four-wheel drive and used a 12 V lithium battery with a capacity of 3500 mAh as the power supply. We also used an STM32F103-series processor as the control system of the robot and connected the motors and drivers through the CAN bus. The maximum speed was 1.2 m/s, the maximum payload was 6 kg, and the maximum runtime reached 5 h. In addition, each wheel was matched with an MG513 geared motor with an encoder, which can resolve a rotation of 0.23°.
The BIM laser lofting instrument obtains the position of a prism on the construction site by locking onto it with a laser beam. Therefore, the coordinates of the robot on the construction site can be obtained by combining the prism with the robot. To combine them effectively, the ground robot adopted a double-layer structure. The STM32F103-series processor, MEMS-IMU, and magnetometer were installed on the first layer, and a 360° prism and a laser orientation device were fixed on the second layer by a customized mount. This design ensures that the prism can receive laser beams from all angles without beam occlusion. Figure 3 shows the overall design of the robot.



Mecanum Wheel
Traditional ground robots usually use rubber wheels as driving wheels, which are low-cost, high load-bearing, and durable. However, they are difficult to rotate while moving, which is not conducive to reaching points with high precision. To solve this problem, the Mecanum wheel, with its 360° self-rotation characteristic, was selected. The Mecanum wheel was invented by the Swedish engineer Bengt Ilon [37], and it can translate and rotate in any direction in the plane through the interaction among a plurality of wheels. This section introduces the mechanical analysis of a single Mecanum wheel and the feasibility of self-rotation. Figure 4a is the structural diagram of the Mecanum wheel, which mainly comprises a hub A and wheel rollers B. A is the main support of the whole wheel, and B is a drum mounted on the hub. The angle between A and B is usually ±45°. Figure 4b shows the force model of the bottom of the wheel. When the motor drives the wheel to rotate clockwise, a friction force F perpendicular to the centerline of the roller is generated. F can be decomposed into a lateral force F_x and a longitudinal force F_y, as shown by the dotted lines. It can be seen that, unlike a conventional rubber wheel, the Mecanum wheel generates an extra component force F_x perpendicular to the forward direction.

According to the above characteristics of the Mecanum wheel, the robot can self-rotate 360° through the cooperation of multiple wheels. In our system, there were four Mecanum wheels, each equipped with a separate motor drive. As shown in Figure 5a, to verify the feasibility of self-rotation, take the clockwise rotation of the robot as an example. When wheels 1, 2, 3, and 4 rotate clockwise at the same velocity, they generate friction forces F_1, F_2, F_3, and F_4 perpendicular to the centerlines of the wheel rollers, and each force is decomposed into a lateral and a longitudinal component. Due to the interaction between the forces, the force model can be simplified to Figure 5b, in which the angle between the hub axis and the roller axis is 45°, so F_xi = F_yi (i = 1, 2, 3, 4). Taking F_x1 + F_x2 and F_x3 + F_x4 as an example, the two parallel resultant forces are equal in magnitude and opposite in direction, so the generated force couple produces a pure rotation effect and makes the robot rotate. The cooperation of the four Mecanum wheels provides the robot with the 3 degrees of freedom necessary for omnidirectional motion in the horizontal plane.
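The couple argument above can be checked numerically. The sketch below (Python; the half-length a, half-width b, and roller friction magnitude F are illustrative values, not the paper's) sums the decomposed 45° friction forces of the four wheels under a pure-rotation command: the net force vanishes while the net torque does not.

```python
import numpy as np

# Hypothetical geometry: wheel contact points at the four chassis corners,
# x lateral and y longitudinal, with half-width b and half-length a.
a, b, F = 0.125, 0.100, 1.0
c = F / np.sqrt(2)                 # F_xi = F_yi = F / sqrt(2) at 45 deg

# Positions and decomposed forces for wheels FL, FR, RL, RR when the wheel
# speeds command a pure rotation (lateral pairs point in opposite directions).
pos    = np.array([[-b,  a], [ b,  a], [-b, -a], [ b, -a]])
forces = np.array([[-c, -c], [-c,  c], [ c, -c], [ c,  c]])

net_force = forces.sum(axis=0)     # vanishes: the forces cancel pairwise
torque = np.sum(pos[:, 0] * forces[:, 1] - pos[:, 1] * forces[:, 0])
# torque = 4 * c * (a + b) != 0 -> a pure force couple, i.e., self-rotation
```

Because the net force is zero but the moment is not, the chassis spins in place, which is exactly the behavior used when the robot fine-positions itself over a point.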


Inverse Kinematic Analysis
The velocity control of the robot can ensure that it operates stably and efficiently. Figure 6 shows the inverse kinematics analysis of the Mecanum wheel robot. The robot was equipped with four wheels symmetrically, wherein the angle α = 45 • in our system. When the robot is placed on the ground horizontally, the position and the angle corresponding to the center point o r of the robot are taken as state variables. The vehicle coordinate system o r is established based on point o r , which is also the placement point of the prism.

At this time, the state of the robot in the vehicle coordinate system o_r is P_r = [x_r y_r ψ_r]^T. Denoting the wheel radius by r, the half-length and half-width of the chassis by a and b, the lateral and longitudinal velocity components in the vehicle coordinate system by V_x^(r) (positive to the right) and V_y^(r) (positive forward), and the angular velocity by ω_r, the kinematic formula of each wheel of the X-configuration chassis is expressed as:

ω_1 = (V_y^(r) − V_x^(r) − (a + b)ω_r)/r
ω_2 = (V_y^(r) + V_x^(r) + (a + b)ω_r)/r
ω_3 = (V_y^(r) + V_x^(r) − (a + b)ω_r)/r
ω_4 = (V_y^(r) − V_x^(r) + (a + b)ω_r)/r    (2)

And the velocity of the robot in the navigation coordinate system o_n is:

V^(n) = R(ψ_r) V^(r)    (3)

where V^(n) contains the horizontal velocity component, vertical velocity component, and angular velocity of the robot in o_n, and R(ψ_r) is the transformation matrix from o_r to o_n. The velocity of each wheel in the navigation coordinate system can then be obtained from Equations (2) and (3). Although the velocity of the ideal motion model of the robot can be calculated in this way, the wheel velocities lag behind due to friction and other factors during actual running, and direct velocity commands are unable to keep the vehicle stable. Therefore, this paper uses the wheel encoders and PID control to set the velocities:

v_set(t) = k_p e(t) + k_i ∫ e(t) dt + k_d de(t)/dt    (5)

where e(t) is the difference between the commanded wheel velocity and the real wheel velocity obtained from the wheel-encoder feedback, k_p is the proportional coefficient, k_i is the integral coefficient, and k_d is the differential coefficient.
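This control chain can be sketched compactly. The snippet below (Python) uses the standard X-configuration Mecanum inverse kinematics with illustrative chassis dimensions (a, b, r are assumptions, not the paper's values), and a PID class mirroring the velocity law above.

```python
import numpy as np

a, b, r = 0.125, 0.100, 0.03   # half-length, half-width, wheel radius (m)

def wheel_speeds(vx, vy, wz):
    """Wheel angular velocities [FL, FR, RL, RR] (rad/s) from the body
    velocity: vx lateral (right +), vy longitudinal (forward +), wz yaw rate."""
    k = a + b
    return np.array([
        (vy - vx - k * wz) / r,   # front-left
        (vy + vx + k * wz) / r,   # front-right
        (vy + vx - k * wz) / r,   # rear-left
        (vy - vx + k * wz) / r,   # rear-right
    ])

class PID:
    """PID on the wheel-velocity error, fed by wheel-encoder feedback."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0
    def update(self, target, measured, dt):
        e = target - measured
        self.integral += e * dt
        d = (e - self.prev_err) / dt
        self.prev_err = e
        return self.kp * e + self.ki * self.integral + self.kd * d
```

A pure forward command gives four equal wheel speeds, and a pure yaw command gives an antisymmetric left/right pattern, consistent with the self-rotation analysis of the previous section.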

BIM Laser Lofting Instrument
In this paper, we use a BIM laser lofting instrument and a 360° prism to obtain a high-precision position of the robot. The BIM laser lofting instrument is a new generation of lofting instrument that combines BIM with a laser dynamic beam orientation system. The instrument is easy to operate and can be leveled automatically or manually with one key after startup. "What we see is what we get" on the construction site is realized through a highly visible guiding light, and each point is lofted independently without cumulative error. Users can use a Personal Digital Assistant (PDA) to monitor the position of the prism in real time and move it to the points to be lofted according to the instructions, with a maximum lofting distance of up to 100 m and a measuring accuracy of ±3 mm (distance)/5″ (angle). We selected the Topcon LN-100 BIM laser lofting instrument for our experiment. Figure 7a shows the instrument, and Figure 7b the 360° prism used with it. Table 2 shows the main parameters of the Topcon LN-100.

The BIM mentioned above refers to a digital model containing the geometric information, state information, and professional attributes of a building construction. In the process of lofting, the model mainly provides an independent coordinate system, the known points, and the positions of the points to be lofted. Users can intuitively observe the position relationship between the prism and the points to be lofted through the model. Figure 8 is the BIM of a teaching building at Tongji University built with Autodesk Revit, where Figure 8a is the overall appearance of the building model and Figure 8b is the first floor of the building.
This instrument provides four station layout methods, which are applicable to almost all lofting construction environments: the Resection method, the Reference Axis (Base Point and Reference Axis) Measurement method, the Backsight Point (Known Point) Measurement method, and the Backsight Point (Reference Axis on the Base Point) Measurement method. Figure 9 is a schematic diagram of the four station setting methods, in which red points represent known positions and blue triangles represent arbitrary instrument positions:

1. Resection: the instrument is set up arbitrarily and measures two or more known points to establish a coordinate system;
2. Reference Axis (Base Point and Reference Axis) Measurement: the instrument is set up arbitrarily and measures the base point (0,0) and a point on the reference axis (x axis or y axis) to establish a coordinate system;
3. Backsight Point (Known Point) Measurement;
4. Backsight Point (Reference Axis on the Base Point) Measurement.
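The common thread in all four methods is solving the rigid transform between the instrument frame and the site coordinate system from known points. As a sketch of the two-known-point case (a hypothetical helper for illustration; the LN-100 performs this internally), the rotation follows from the bearing difference of the segment between the two points, and the translation from either point:

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def solve_transform(p1_i, p2_i, p1_s, p2_s):
    """Rigid 2-D transform (R, t) with site = R @ inst + t, solved from two
    points known in both the instrument frame (_i) and the site frame (_s)."""
    d_i = np.asarray(p2_i) - np.asarray(p1_i)
    d_s = np.asarray(p2_s) - np.asarray(p1_s)
    R = rot(np.arctan2(d_s[1], d_s[0]) - np.arctan2(d_i[1], d_i[0]))
    t = np.asarray(p1_s) - R @ np.asarray(p1_i)
    return R, t
```

Once (R, t) is solved during station setup, every subsequent prism measurement can be mapped into the site coordinate system of the BIM model.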

In our experiment, the 360° prism was fixed at the geometric center of the robot. The robot and the Topcon LN-100 are connected via the WIFI communication module and the Transmission Control Protocol (TCP). The position (x, y) of the prism received by the WIFI module is sent to the processor in real time; the data transmission frequency is 20 Hz and the maximum communication distance can reach 100 m.

Multi-Sensors Fusion Algorithm
The attitude of the ground robot is usually estimated from the outputs of multiple sensors. We use a MEMS-IMU and a magnetometer to obtain the attitude of the ground robot. The magnetometer and the accelerometer in the MEMS-IMU output one group of robot attitude angles, and the gyroscope in the MEMS-IMU outputs another group. However, due to the defects and accumulative errors of any single sensor, neither group of estimates can be used alone as the robot's attitude for a long time. Therefore, the EKF algorithm is adopted to fuse the two groups of data for an optimal estimate of the robot attitude. This part introduces the EKF algorithm for fusing the multi-sensor data.

Sensors Angle Output
Common attitude sensors cannot directly output angle data. For example, the gyroscope outputs the angular velocity of the robot, which must be converted into an angle through integration. This part briefly analyzes the conversion processes of the accelerometer, magnetometer, and gyroscope:

Accelerometer
The accelerometer obtains the attitude of the object by collecting the components of the gravitational acceleration on each axis of the object. The navigation coordinate system in this paper is set as the east (x), north (y), up (z) coordinate system. As shown in Figure 10, a calibrated accelerometer T_a is obliquely placed in space; its rotation process is R(·) = X(·)Y(·)Z(·), and it is assumed that all rotations are in the positive direction of the attitude angles. It finally outputs the projection components of gravity on the three axes of the carrier coordinate system o_c:

[a_x^(c) a_y^(c) a_z^(c)]^T = R(·)[0 0 g]^T = [−g sin θ, g sin ϕ cos θ, g cos ϕ cos θ]^T    (6)

where g is the gravitational acceleration, taking g = 9.8 m/s², R(·) is the rotation matrix, a_x^(c), a_y^(c), a_z^(c) are the components of the gravitational acceleration on the three axes of the carrier coordinate system o_c, and ϕ, θ, and ψ are the Roll angle, Pitch angle, and Yaw angle of the accelerometer T_a in o_c.

According to Equation (6), the angles converted from the data collected by the accelerometer are:

θ = arcsin(−a_x^(c)/g), ϕ = arctan(a_y^(c)/a_z^(c))    (7)
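Equation (7) translates directly into code. The snippet below (Python; the function name is ours) recovers Pitch and Roll from the body-frame gravity components, and the round trip through the gravity projection confirms the sign convention:

```python
import numpy as np

g = 9.8  # gravitational acceleration, m/s^2

def pitch_roll_from_accel(ax, ay, az):
    """Pitch theta and Roll phi from body-frame gravity components,
    following the projection a = R(.) [0, 0, g]^T used above."""
    theta = np.arcsin(np.clip(-ax / g, -1.0, 1.0))  # Pitch
    phi = np.arctan2(ay, az)                        # Roll
    return theta, phi
```

Note that arctan2 is used instead of a plain arctan so that Roll is recovered in the correct quadrant even when a_z^(c) is negative.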


Magnetometer
The magnetometer obtains the magnetic field intensity components on each axis of the object, from which the Yaw angle ψ is calculated. Its conversion principle is similar to that of the accelerometer, so it is not illustrated with a figure here. First, without considering the magnetic declination, assume that a calibrated magnetometer T_m is placed flat on the ground. Then T_m outputs the components of the earth's magnetic field H on the three axes x_c, y_c, z_c of the carrier coordinate system o_c, and the Yaw angle ψ can be calculated as:

ψ = arctan(H_x^(c)/H_y^(c))    (8)

However, during actual running the robot cannot maintain perfectly planar motion. Unevenness of the road surface and gaps in the ground cause the robot to generate an instantaneous Pitch or Roll angle, and under the influence of Pitch and Roll, Equation (8) is no longer valid. Therefore, without considering the magnetic declination, assume that the magnetometer T_m is placed obliquely in space with rotation process R(·) = X(·)Y(·)Z(·), all rotations in the positive direction of the attitude angles. The projected components are then:

H_x^(c) = cos θ sin ψ H_y − sin θ H_z
H_y^(c) = (sin ϕ sin θ sin ψ + cos ϕ cos ψ)H_y + sin ϕ cos θ H_z
H_z^(c) = (cos ϕ sin θ sin ψ + sin ϕ cos ψ)H_y + cos ϕ cos θ H_z    (9)

where the Pitch angle θ and Roll angle ϕ are known from Equation (7). Solving Equations (7) and (9) simultaneously gives the tilt-compensated Yaw angle ψ. The Yaw angle ψ obtained by the magnetometer and the Pitch angle θ and Roll angle ϕ obtained by the accelerometer are fused with the gyroscope data as one group of attitude angles.
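One standard way to carry out this tilt compensation in code (a sketch under our assumed X·Y·Z rotation convention, not necessarily the only valid form) is to de-rotate the measured field by the Roll and Pitch from Equation (7) and then take the arctangent of the horizontal components in the leveled frame:

```python
import numpy as np

def Rx(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def Rz(y):
    c, s = np.cos(y), np.sin(y)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def yaw_from_mag(h_body, roll, pitch):
    """Tilt-compensated Yaw: undo Roll and Pitch on the measured field,
    then read the heading from the leveled horizontal components
    (magnetic declination ignored, as in the text)."""
    h_level = Ry(pitch).T @ Rx(roll).T @ h_body
    return np.arctan2(h_level[0], h_level[1])
```

With roll = pitch = 0 this degenerates to Equation (8); otherwise it removes the instantaneous tilt caused by uneven ground before the heading is computed.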

Gyroscope
The gyroscope collects angular velocity ω i (i = x, y, z) of each axis when the object rotates and further gets the attitude of the object through integration with time.
As shown in Figure 11, take the Pitch angle as an example. Assuming that the gyroscope's Pitch angle at time t_0 is θ_0, at time t_1 is θ_1, and its angular velocity during this period is a function ω_y(t), then:

θ_1 = θ_0 + ∫_{t_0}^{t_1} ω_y(t) dt (11)

where ω_y(t) is the relationship between the angular velocity on the axis y and time t. The calculation of the Roll angle ϕ and the Yaw angle ψ is the same as above, namely:

ϕ_1 = ϕ_0 + ∫_{t_0}^{t_1} ω_x(t) dt, ψ_1 = ψ_0 + ∫_{t_0}^{t_1} ω_z(t) dt (12)

The Pitch angle θ, Roll angle ϕ, and Yaw angle ψ obtained by the gyroscope are fused with accelerometer and magnetometer data as another group of attitudes.
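In practice the integral in Equation (11) is evaluated over discrete gyroscope samples; a sketch using the trapezoidal rule over uniformly sampled rates (not the firmware implementation) is:

```python
def integrate_gyro(theta0, rates, dt):
    """Integrate sampled angular velocity (rad/s) into an angle (rad).

    Discrete approximation of Equation (11): theta1 = theta0 plus the
    integral of omega_y(t) dt, here via the trapezoidal rule over
    uniformly spaced samples taken dt seconds apart."""
    theta = theta0
    for w0, w1 in zip(rates, rates[1:]):
        theta += 0.5 * (w0 + w1) * dt
    return theta

# A constant 0.1 rad/s rate held for 1 s (11 samples, dt = 0.1 s)
# accumulates 0.1 rad of Pitch:
theta = integrate_gyro(0.0, [0.1] * 11, 0.1)
```

The same routine applies to the Roll and Yaw axes; gyroscope bias makes this integral drift over time, which is why the fusion with accelerometer and magnetometer data below is needed.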

Data Fusion Algorithm
The Kalman filter is currently one of the most efficient algorithms for fusing multi-sensor data when estimating the state of a robot. The algorithm continuously optimizes the state of the system by inputting observation data and outputs the optimal estimation of the system state. However, it is generally applicable only to linear systems and is therefore not directly applicable to the robot system in this paper. Considering the nonlinear characteristics of the system and the required efficiency of the fusion algorithm, we adopted the EKF algorithm for multi-sensor data fusion of the nonlinear system.

Extended Kalman Filter
In this study, the sensors were directly fixed to the robot and have been pre-calibrated. The state equation of the system is expressed as:

x_k = f(x_{k−1}, u_{k−1}) + w_{k−1} (13)

where, regardless of the input variables of the system, u_{k−1} = 0; x_k is the state variable; T is the update time; (1/2)γ(w_ik) is the quaternion differential equation expression used to propagate the attitude over T; and w_{k−1} is the process noise, which follows a Gaussian distribution with a mean value of 0, namely w_{k−1} ~ N(0, Q).
The state variable is selected as:

x_k = [q_ω, q_x, q_y, q_z, w_xk, w_yk, w_zk]^T (14)

where q = [q_ω, q_x, q_y, q_z]^T is a quaternion in the world coordinate system representing the robot's attitude, and w_ik (i = x, y, z) represents the gyroscope drift deviation on axis i at time k.
γ(w_ik) follows the standard quaternion kinematics driven by the bias-corrected rates ω̃_i = ω_ik − w_ik:

γ(w_ik) = [  0    −ω̃_x  −ω̃_y  −ω̃_z ]
          [ ω̃_x    0     ω̃_z  −ω̃_y ] · q (15)
          [ ω̃_y  −ω̃_z    0     ω̃_x ]
          [ ω̃_z   ω̃_y  −ω̃_x    0  ]

where ω_ik (i = x, y, z) represents the angular velocity on axis i at time k, which can be got from the gyroscope. The observation equation of the system is expressed as:

z_k = h(x_k) + v_k (16)

where z_k is the observation variable, the accelerometer and magnetometer data are used as the observation, and v_k is the measurement noise, v_k ~ N(0, R).
From Equations (13) and (16), it can be seen that the functions f(·) and h(·) are nonlinear and need to be linearized. EKF gets the state transition matrices F_k and H_k by solving the partial derivatives of the nonlinear functions, that is, the Jacobian matrices:

F_k = ∂f/∂x |_{x = x̂_{k−1}}, H_k = ∂h/∂x |_{x = x̂⁻_k} (17)

where x̂_k is the estimation of x_k, δ_1 is the partial derivative of f(·) with respect to ω_ik, δ_2 is the partial derivative of h_a (the accelerometer observation model) with respect to q, and δ_3 is the partial derivative of h_m (the magnetometer observation model) with respect to q, whose expression is:

δ_3 = [  H_y q_z + H_z q_y    H_y q_y − H_z q_z    H_y q_x + H_z q_ω    H_y q_ω − H_z q_x ]
      [  H_y q_ω − H_z q_x   −H_y q_x − H_z q_ω    H_y q_y − H_z q_z   −H_y q_z − H_z q_y ] (21)
      [ −H_y q_x − H_z q_ω   −H_y q_ω + H_z q_x    H_y q_z + H_z q_y    H_y q_y − H_z q_z ]

Finally, the state update equation is obtained:

x̂_k = x̂⁻_k + K_k (z_k − h(x̂⁻_k)) (22)

where x̂⁻_k is the prior estimation of x_k and K_k is the Kalman gain; the expressions of x̂⁻_k and K_k are as follows:

x̂⁻_k = f(x̂_{k−1}, u_{k−1}), K_k = P⁻_k H_k^T (H_k P⁻_k H_k^T + R)^{−1} (23, 24)

In Equation (24), P⁻_k is the error covariance matrix of the prior estimation at time k; the expression is as follows:

P⁻_k = F_k P_{k−1} F_k^T + Q (25)

where P_{k−1} is the error covariance matrix of the posterior estimation at time k − 1; its update equation is:

P_k = (I − K_k H_k) P⁻_k (26)

The update of the state variables is obtained through Equation (22), and the final attitude is output through the conversion relationship between quaternion and Euler angles. The Yaw angle ψ is taken as the main attitude basis in the lofting. The attitude solution diagram is shown in Figure 12.
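The predict/update cycle described above can be sketched generically as follows. This is a sketch: the models `f`/`h`, their Jacobians `F`/`H`, and the state layout are placeholders, not the paper's exact quaternion-plus-bias model.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an Extended Kalman Filter.

    f/h are the nonlinear state and observation models, F/H return
    their Jacobians at the current estimate, and Q/R are the process
    and measurement noise covariances."""
    # Predict: propagate the state estimate and its error covariance
    x_prior = f(x)
    Fk = F(x)
    P_prior = Fk @ P @ Fk.T + Q
    # Update: fuse the observation z (accelerometer/magnetometer data)
    Hk = H(x_prior)
    S = Hk @ P_prior @ Hk.T + R            # innovation covariance
    K = P_prior @ Hk.T @ np.linalg.inv(S)  # Kalman gain
    x_post = x_prior + K @ (z - h(x_prior))
    P_post = (np.eye(len(x)) - K @ Hk) @ P_prior
    return x_post, P_post

def identity(v):
    return v

def unit_jacobian(v):
    return np.eye(len(v))

# One-dimensional sanity check with identity models, unit covariances:
x1, P1 = ekf_step(np.array([0.0]), np.eye(1), np.array([1.0]),
                  identity, unit_jacobian, identity, unit_jacobian,
                  np.zeros((1, 1)), np.eye(1))
# The prior 0 and the measurement 1 carry equal weight, so x1[0] is 0.5
```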

Motion Control
The robot can get its own actual position and attitude (x, y, ψ) in an independent coordinate system through the BIM laser lofting instrument and fusion algorithm. However, how to reach the points to be lofted efficiently is still a key problem. Considering the time-saving purpose of lofting, we adopt a simple robot motion control theory based on the omnidirectional self-rotation characteristic of the Mecanum wheel.
As shown in Figure 13, the robot is at point 1 and needs to reach point 2 to be lofted. At this time, the position and attitude of the robot at point 1 is (x_1, y_1, ψ_1), and the position of point 2 is (x_2, y_2), which is provided by the first party. To complete the lofting of point 2, the robot needs to rotate the angle β around its geometric center and move forward L. Both β and L can be solved from the known data. The solving formulas are as follows:

η = arctan((y_2 − y_1) / (x_2 − x_1))
β = η − ψ_1
L = √((x_2 − x_1)² + (y_2 − y_1)²) (27)

where (x_1, y_1) is the position of the robot at point 1, ψ_1 is the attitude of the robot at point 1, (x_2, y_2) is the position of point 2, η is the angle between the line through point 1 and point 2 and the positive direction of the x-axis, β is the required rotation angle of the robot, and L is the distance between point 1 and point 2. However, it is almost impossible for the robot to accurately rotate the angle β or move forward exactly L. When the robot deviates from the expected trajectory, it corrects in real time according to its current position and attitude (x_now, y_now, ψ_now). The methods of solving β_now and L_now are similar to Equation (27), namely:

η_now = arctan((y_2 − y_now) / (x_2 − x_now))
β_now = η_now − ψ_now
L_now = √((x_2 − x_now)² + (y_2 − y_now)²) (28)

where (x_now, y_now) is the real-time position of the robot, ψ_now is the real-time attitude of the robot, η_now is the angle between the line connecting the robot's real-time position and point 2 and the positive direction of the x-axis, β_now is the required real-time rotation angle of the robot, and L_now is the distance between the robot's real-time position and point 2.
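The geometry of Equation (27) can be sketched as follows (the wrap of the rotation angle into (−π, π] is an implementation detail not stated in the text, added here so the robot always takes the shorter turn):

```python
import math

def rotation_and_distance(x1, y1, psi1, x2, y2):
    """Rotation angle beta and forward distance L from the robot pose
    (x1, y1, psi1) to the lofting point (x2, y2), per Equation (27)."""
    # Bearing of point 2 from point 1, measured from the positive x-axis
    eta = math.atan2(y2 - y1, x2 - x1)
    # Required rotation, wrapped into (-pi, pi] for the shorter turn
    beta = math.atan2(math.sin(eta - psi1), math.cos(eta - psi1))
    # Straight-line distance to the point to be lofted
    L = math.hypot(x2 - x1, y2 - y1)
    return beta, L

# Facing along +x at the origin, point (1, 1) is a 45-degree left turn:
beta, L = rotation_and_distance(0.0, 0.0, 0.0, 1.0, 1.0)
```

The real-time correction of Equation (28) reuses the same function with (x_now, y_now, ψ_now) in place of the pose at point 1.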

Experiment and Results
This part mainly introduces the power distribution of the robot and verifies the feasibility of the system; the localization accuracy of the robot is also verified through practical experiments.

Introduction of Power Distribution
The robot platform introduced in this paper was mainly equipped with wheel encoder, MEMS-IMU, and magnetometer sensors, and is used in combination with a BIM laser lofting instrument.
The localization accuracy of the robot is crucial for lofting. Table 3 shows the positions of ten points to be lofted and the actual localization accuracy. As can be seen from Table 3, the average localization error in the experiment is 7.84 mm, the RMSE is 1.427 mm, and the maximum error is only 10.6 mm, which meets most lofting requirements. To compare the lofting time between our method and the traditional method, we used a total station to stake out the same ten points. For the 10 points, the robot takes 7.43 min, and the total station takes 19.6 min. It can be seen that the robot lofting method proposed in this paper improves efficiency by about 163.8% compared with the traditional lofting method and reduces manpower.
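The reported efficiency gain follows directly from the two measured times; a simple check using the figures quoted in the text:

```python
# Reproduce the efficiency comparison quoted above: the robot stakes
# out the ten points in 7.43 min versus 19.6 min for the total station.
robot_min = 7.43
station_min = 19.6
improvement_pct = (station_min - robot_min) / robot_min * 100.0
# improvement_pct is about 163.8, i.e., roughly a 2.6x speed-up
```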

Discussion
The intelligent lofting system proposed in this article has practical application value for many projects, such as indoor decoration and indoor lofting, and can improve the efficiency of lofting under the premise of sufficiently high accuracy. Table 4 lists the accuracy control standards for indoor lofting in GB 50210 and GB 50242 of China. From Tables 3 and 4, it can be seen that our method meets the standards of most indoor lofting projects. As for the maximum localization error, we think it is due to the error of the BIM laser lofting instrument itself and robot manufacturing error. The sensors' resolution and the environment of the experiment site also have an impact on accuracy. Among them, the lofting environment is one of the main factors affecting accuracy. Rugged ground has a serious impact on the balance of the Mecanum wheels, which will also affect our subsequent outdoor lofting research. Improving the localization algorithm and the robot motion control theory to further improve lofting accuracy is also one of the future research directions of this paper.
As for the issue of lofting time, whether for the method proposed in this article or the traditional method, we record the ground marking time as 10 s. This time is realistic, although the marking time of the robot may be shorter in the future. The leveling time of the instrument cannot be ignored, because fast leveling is also one of the advantages of the BIM laser lofting instrument.

Conclusions
We propose an intelligent lofting system for the indoor barrier-free plane environment combined with a BIM laser lofting instrument. This method has certain application value for indoor decoration, indoor point layout, indoor equipment pre-placement, etc. In this paper, the design of the robot platform, velocity control, robot localization, and motion control theory are described. From the final experimental results, it can be seen that the method has achieved the expected goal. For the most important localization problem, we propose a method of combining internal and external multi-sensor data to estimate the state of the robot. Compared with the traditional method, the accuracy of this method meets the requirements of most indoor lofting, the efficiency is higher, and the robot platform runs stably.
The proposal of this method has certain significance for some large-area, boring and repetitive indoor lofting work, such as tile position lofting in large office buildings, indoor decoration, factory equipment pre-placement, etc. Compared with the traditional total station lofting method, our method