Article

Window Shape Estimation for Glass Façade-Cleaning Robot

Takuma Nemoto, Shunsuke Nansai, Shohei Iizuka, Masami Iwase and Hiroshi Itoh
1 Department of Robotics and Mechatronics, School of Science and Technology for Future Life, Tokyo Denki University, 5 Senju Asahi-cho, Adachi-ku, Tokyo 120-8551, Japan
2 Department of Advanced Machinery Engineering, School of Engineering, Tokyo Denki University, 5 Senju Asahi-cho, Adachi-ku, Tokyo 120-8551, Japan
3 Robotics and Mechatronics, Graduate School of Science and Technology for Future Life, Tokyo Denki University, 5 Senju Asahi-cho, Adachi-ku, Tokyo 120-8551, Japan
* Author to whom correspondence should be addressed.
Machines 2023, 11(2), 175; https://doi.org/10.3390/machines11020175
Submission received: 27 December 2022 / Revised: 22 January 2023 / Accepted: 24 January 2023 / Published: 27 January 2023
(This article belongs to the Special Issue Modeling, Sensor Fusion and Control Techniques in Applied Robotics)

Abstract

This paper presents an approach to estimating a window shape for increasing the adaptability of glass façade-cleaning robots to different buildings. For this approach, a window scanning robot equipped with a 2D laser range scanner installed perpendicularly to the window surface is developed as a testbed, and a window shape estimation method is proposed that consists of the robot's pose estimation with an extended Kalman filter (EKF) and a loop closure based on the estimated pose. The effectiveness of the proposed approach is demonstrated through an experiment carried out on a window placed on a floor. The experimental results show that the window scanning robot can acquire a window shape while moving on the window surface and that the proposed approach is effective in increasing the accuracy of the window shape estimation.

1. Introduction

The growing number of skyscrapers built with cutting-edge construction technologies and processes demands the involvement of robots in their maintenance. Such modern skyscrapers often have glass façades, which are maintained and cleaned by workers using gondolas and tethers at great heights. The maintenance and cleaning of skyscrapers' glass façades by human labor thus carry the risk of serious accidents. A gondola that swung out of control in strong wind at the Shanghai World Financial Center [1] and a gondola left suspended at a height of 240 m at the World Trade Center in New York City [2] are examples of such accidents. The application of robots to the maintenance and cleaning of skyscrapers' glass façades can minimize the risk of such accidents.
Many research works on façade-cleaning robots have been reported [3,4]. Façade-cleaning robots are categorized into two broad types based on their movement mechanism: robots utilizing equipment installed on buildings, such as cranes, winches, gondolas, and guide rails, and those that need no such equipment.
Concerning robots utilizing equipment installed on buildings, Elkmann et al. developed an automatic façade-cleaning robot, SIRIUSc, for the Fraunhofer headquarters building in Munich, Germany [5,6,7]. SIRIUSc can move on and clean a building façade using two pairs of linear modules, called the advanced sliding frame mechanism, and a rooftop gantry. S. Lee et al. suggested a built-in guide-type multi-robot concept and proposed a motion planning algorithm for façade glass cleaning with the robots [8]. Moon and Shin et al. developed a building façade maintenance robot (BFMR) based on built-in guide rails and its cleaning tool [9,10,11]. The BFMR consists of horizontal and vertical units and moves horizontally and vertically along the built-in guide rails. While moving horizontally, the BFMR cleans a façade with the cleaning tool, which sprays and suctions water. Y. S. Lee et al. proposed an integrated control system for a built-in guided robot, which is divided into three stages: the preparation stage, cleaning stage, and return stage [12]. C. Lee et al. suggested a three-modular obstacle-climbing robot for façade cleaning, which is composed of a main platform, three modular climbing units, and a winch mechanism set on the top of a building [13]. The robot can clean a building façade with a window-cleaning system installed on the middle module, overcoming obstacles with the climbing units and the winch mechanism. Yoo et al. introduced an unmanned façade-cleaning robot mounted on a gondola, which consists of a two-degrees-of-freedom (DOF) robotic manipulator and a cleaning device [14]. The performance of the robot was tested on the 63 Building in the Republic of Korea. For the gondola-type cleaning robot, Hong et al. designed a cleaning module applying a passive linkage suspension mechanism and tri-star wheels to overcome step-shaped obstacles [15], and Park et al. designed a 3-DOF manipulator for a cleaning module to compensate for the horizontal disturbance of a gondola [16]. Furthermore, Chae et al. proposed an improved design of the gondola-type cleaning robot, which includes a modularized robot design, a passive obstacle-overcoming mechanism with tri-star wheels and a compliant manipulator, and a position sensing device for the measurement and compensation of the lateral disturbance [17]. Although the performance of the aforementioned robots was demonstrated through experiments, their movement mechanisms require equipment installed on buildings, which limits the buildings on which they can operate.
For robots that need no installed equipment, Zhu, Sun, and Tso developed a climbing robot for glass-wall cleaning and presented motion planning and visual sensing methods for it [18,19,20]. The robot can adhere to a window with suction cups and move with a translational mechanism, with which the motion planning and visual sensing enable the robot to track a desired path and measure its position and orientation relative to a window frame as well as the locations of dirty spots. Zhang et al. proposed a series of autonomous pneumatic climbing robots named Sky Cleaner for glass-wall cleaning [21]. One of the climbing robots, Sky Cleaner 3, was built for the glass walls of the Shanghai Science and Technology Museum [22,23,24,25]. Sky Cleaner 3 can adhere to glass walls with vacuum suckers and move with cylinders, for which the authors designed an intelligent control system based on a programmable logic controller (PLC) and proposed a method of the segment and variable bang-bang controller. Furthermore, the authors proposed three nonlinear control strategies, the fuzzy PID, segmental proportional control, and segmental variable bang-bang controller, for Sky Cleaner 1, 2, and 3 [26]. Zhang et al. also developed a climbing robot for cleaning the spherical surface of the National Grand Theater in China and designed an intelligent control system based on the CAN bus [27]. Seo and T. Kim et al. designed a wall-climbing robotic platform, ROPE RIDE, for cleaning the walls of buildings, together with its cleaning unit [28,29]. ROPE RIDE is built on a rope ascender-based locomotion mechanism combined with triangular tracks to climb up walls and overcome obstacles and two propeller thrusters to maintain contact with walls. For ROPE RIDE, the authors presented a position-based adaptive impedance control (PAIC) to maintain a constant contact force between the cleaning unit and a wall [30]. Tun et al. developed a glass façade-cleaning robot, vSlider, which has passive suction cups driven by self-locking lead screws to adhere to a glass façade [31]. The robot can perform façade cleaning with this mechanism while reducing power consumption. Vega-Heredia et al. presented a modular façade-cleaning robot called Mantis and a method of multi-sensor orientation tracking for the robot [32,33]. Mantis can overcome window frames, detecting them with an inductive sensor. Chae et al. designed a novel rope-driven wall-cleaning robot, Edelstro-M2 [34]. Edelstro-M2 can move vertically and horizontally with a dual rope-climbing mechanism and parallel kinematics and can be operated by simply fixing two ropes to roof anchors. Robots that need no installed equipment can be applied to more buildings than those utilizing installed equipment. However, such robots tend to be used to clean the façades of buildings with a conventional appearance.
To improve the adaptability of façade-cleaning robots to façades of various types of building architecture, we have proposed a concept of nested reconfigurable robots for façade cleaning. The concept is aimed at achieving autonomous façade cleaning according to window shapes by employing multiple modular multilegged robots capable of reconfiguring their morphology based on window shapes and letting the robots cooperate. Based on this concept, Nansai et al. suggested two types of glass façade-cleaning robots [35,36]. One is a modular robot assembled from linear actuators, and the other is a modular biped robot. For the modular biped robot, they proposed a foot location algorithm for glass façade cleaning [37]. In these previous works, however, how the robots obtain environmental information and their own states was not considered.
To work autonomously in unknown environments, façade-cleaning robots must perceive their surrounding environment and their own states, a task given little or no consideration in the related works and in our previous ones; addressing this task therefore contributes to increasing their adaptability to façades of various types of building architecture. The task is similar to simultaneous localization and mapping (SLAM) for autonomous driving [38,39]. In SLAM for autonomous driving, the position of a car and a map of its surrounding environment are estimated by observing the surroundings with external sensors, such as light detection and ranging (LiDAR) sensors and cameras, and performing feature matching on the observed data. In exploration for autonomous façade cleaning, however, it is difficult to observe the environment and carry out feature matching in the same way, because the window frames a façade-cleaning robot must observe rise little or not at all from the window surface and offer few features. Hence, we need to devise a method of obtaining environmental information, especially window shapes, and robot states that suits façade cleaning, a situation in which observing the environment and performing feature matching are difficult.
In this paper, based on the concept of nested reconfigurable robots, we discuss a method for façade-cleaning robots to estimate a window shape as an approach to obtaining environmental information on a glass façade of a building in order to increase the adaptability of façade-cleaning robots to façades of various types of building architecture. To this end, we assume the following situations, focusing on the window shape estimation.
  • A glass façade-cleaning robot moves on a window surface with a rectangular frame.
  • The robot needs to estimate the window shape it is on with its own external sensor.
According to the assumptions, we require the robot to estimate not only a window shape but also its location on the window surface.
To achieve the window shape estimation, we develop a window scanning robot having a 2D laser range scanner installed perpendicularly to a window surface and an estimation method based on the robot's pose. The window scanning robot can obtain its odometry data and measure the relative distance between the robot and a window frame while moving on the window surface. The robot's pose is estimated by applying the extended Kalman filter (EKF) [40,41] to the odometry data; the EKF is adopted because the model of the window scanning robot is simple. Based on the estimated pose, the pose graph of the robot is constructed, and the shape of the window the robot is on is formed by arranging the points obtained by the 2D laser range scanner according to the pose graph and the relative distances between the robot and the window frame. To improve the accuracy of the window shape, a loop closure [42] is performed when the robot returns to the start position, i.e., when the loop of the pose graph is closed. The effectiveness of the proposed method is verified through an experiment in which the window scanning robot scans the frame of a window placed on the ground.
This paper is organized as follows. Section 2 reviews the related works concerning window shape estimation. Section 3 presents the concept of nested reconfigurable robots for façade cleaning. Section 4 introduces the developed window scanning robot. Section 5 describes the window shape estimation method, which consists of the pose estimation with a robot model and the EKF and the loop closure based on the robot's pose, including the loop detection and pose adjustment. Section 6 presents the experiment and its results, demonstrating the effectiveness of the proposed approach. Section 7 finally presents the concluding remarks and future work.

2. Related Work

This paper focuses on estimating a window shape to improve the adaptability of façade-cleaning robots. Research on window shape estimation has been conducted from the perspective of façade cleaning and from other perspectives.
In terms of façade cleaning, D. Y. Kim et al. proposed two approaches to detecting windows with a gondola-type robot equipped with a visual camera [43]. The authors utilized connected-component labeling and a histogram in each approach to extract a window from façade images. Furthermore, the authors improved the histogram-based approach to detect a tilted window [44].
The other perspective is the reconstruction of building models. Pu and Vosselman described an approach to extract windows from point clouds acquired by terrestrial laser scanning to reconstruct building models for virtual tourism, urban planning, and navigation systems [45]. To extract windows, the approach groups laser points in planar segments and detects walls, doors, and extrusions, applying feature constraints. Then, windows are detected through two strategies, depending on whether a window is covered with curtains or not. Pu and Vosselman also presented an automatic method for the reconstruction of building façade models from terrestrial laser scanning data, including window extraction [46]. The method provides polyhedron building models, utilizing knowledge about the features’ sizes, positions, orientations, and topology to recognize features in a point cloud. Wang et al. presented an approach to window and façade detection with LiDAR data collected from a moving vehicle [47]. The proposed method combines bottom-up and top-down strategies to extract façade planes, and windows are detected by performing potential window point detection and window localization. Zolanvari et al. introduced a slicing method to quickly detect free-form openings and the overall boundaries from building façades with LiDAR point clouds [48]. In the method, each façade is roughly segmented by a RANSAC-based algorithm and sliced horizontally or vertically. In the slicing step, windows are detected, and then window boundaries are created.
Although the aforementioned works achieved the acquisition of window shapes by observing windows from the outside with external sensors, our approach estimates window shapes by observing window frames with an external sensor mounted on a façade-cleaning robot on the window surface. This choice is motivated by the fact that observing windows from the outside requires additional equipment, and the limitations on installing such equipment on high-rise buildings reduce the adaptability of façade-cleaning robots. Our approach makes the following contributions:
(1) A testbed for observing a window frame, called a window scanning robot, is presented: The window scanning robot, which has a 2D laser range scanner installed perpendicularly to a window surface, is developed on the basis of a concept of nested reconfigurable robots for façade cleaning, detailed in the next section. This setup allows robots to observe window frames with little or no rise from the window surface they work on and to perform cleaning and exploration tasks independently. The window scanning robot offers a way to acquire environmental data on the glass façade of a building for façade cleaning.
(2) A method for façade-cleaning robots to estimate a window shape is proposed: The window shape estimation is achieved by arranging points obtained by an external sensor and performing a loop closure based on the robot's pose estimated by the EKF. This design is adopted because the environment on a window has too few features to incorporate feature matching into the pose estimation, as is done in SLAM [38,39]. To the authors' knowledge, no method has been presented for estimating the shape of the window a robot is standing on.
(3) The validity of the window scanning robot and the window shape estimation method is demonstrated: Focusing on the effectiveness of the ideas of window scanning and window shape estimation, we experiment with the developed window scanning robot on a window placed on the ground. The experimental results show that the robot can acquire the window shape by scanning the window frame and that the proposed method is effective for estimating the shape of the window the robot works on.

3. Concept of Nested Reconfigurable Robots for Façade Cleaning

This paper discusses a method for window shape estimation based on a concept of nested reconfigurable robots for façade cleaning. The concept aims to develop robots that can be applied to autonomous façade cleaning on buildings with various types of architecture, such as rounded glass surfaces and spherical surfaces, shown in [4,27], thereby improving the adaptability of façade-cleaning robots. On such buildings, façade-cleaning robots work on flat glass panels connected by frames (called windows in this paper), and, especially when the frames rise from the window surfaces, the robots need to clean each window according to its frame shape. The concept achieves autonomous façade cleaning according to window shapes by employing multiple modular multilegged robots capable of reconfiguring their morphology based on window shapes, which is executed by transforming their own modules and/or connecting with each other, and by letting the robots cooperate (Figure 1). In the concept, the modular multilegged reconfigurable robots carry out tasks for façade cleaning, such as cleaning glass surfaces, exploring windows, and moving between windows by overcoming the frames on each window.
To achieve autonomous façade cleaning with the modular multilegged reconfigurable robot team, the window shape estimation is performed on each window by scanning the window frames with one cleaning robot having an external sensor or with a dedicated window scanning robot in the team. In the window shape estimation, the robot scanning a frame first searches for a part of the frame of the window it works on with an external sensor, turning at an arbitrary initial point on the window it entered (Figure 2a). Once the robot detects a part of the window frame, it approaches the frame and starts moving along it, measuring the relative distance between the robot and the frame and the robot's pose (Figure 2b). After traveling one full circuit along the frame, the robot estimates the window shape based on the measured data (Figure 2c). Performing this procedure on each window yields the shapes of all the windows in a façade.

4. Window Scanning Robot

We consider that a façade-cleaning robot estimates the shape of the window it is on with data acquired by its own external sensor. In this paper, we focus on demonstrating the effectiveness of a window shape estimation method. Hence, we develop a robot that obtains window data with an external sensor, omitting a façade-climbing mechanism, and validate the window shape estimation method with the developed robot through an experiment on a window placed on the ground.
The developed window scanning robot is shown in Figure 3; its size is 189.7 × 138.7 × 191.9 mm, and its weight is 1.26 kg. The robot is equipped with a 2D laser range scanner installed perpendicularly to the ground because a window frame with little or no rise from the window surface is difficult to observe with a 2D scanner installed horizontally. The robot can thus measure the relative distance between itself and a window frame while moving on a window surface.
The system architecture of the window scanning robot is shown in Figure 4; it is developed on the basis of the architecture of TurtleBot3, the standard robot platform for the robot operating system (ROS). The system employs a Raspberry Pi 3 Model B+ to run the ROS, to which OpenCR, the open-source control module for the ROS, and the RPLIDAR A2M8 2D laser range scanner are connected via USB. OpenCR controls the two Dynamixel XM430-W210 motors connected via a TTL communication interface, receiving velocity commands, and acquires data, such as inertial measurement unit (IMU) and odometry data, from the sensors installed on the board. The RPLIDAR A2M8 is controlled by the Raspberry Pi 3 to obtain distance data from the robot to a window frame.
As shown in Figure 5, the 11.1 V 1800 mAh Li-Po battery is used for the power supply to OpenCR. From OpenCR, 5 V 4 A power is supplied to Raspberry Pi 3.

5. Window Shape Estimation

The window scanning robot cannot capture the overall shape of a window at one time because it scans the frame of the window from the window surface. Thus, the window shape estimation is accomplished by measuring the relative distance between the robot and the window frame with the 2D laser range scanner along the frame and arranging the points obtained by the scanner according to the robot's pose.
The flowchart of the window shape estimation is shown in Figure 6. In the estimation, the robot’s pose is estimated by the EKF [40,41] with the odometry data of the robot and is used to construct the pose graph of the robot. Based on the pose graph, the positions of the points obtained by the 2D scanner are recorded. In the loop closure [42], once it is detected that the robot reaches the end point of the loop of the robot’s trajectory, the pose adjustment is carried out to increase the accuracy of the window shape estimation.
The variables and parameters are summarized in Table 1 and Table 2.

5.1. Pose Estimation

The pose estimation of the window scanning robot is carried out based on a model of the window scanning robot. Owing to the simplicity of the robot model, the EKF [40,41] is employed to obtain the estimate and covariance of the robot's pose and to construct the pose graph of the robot.

5.1.1. Model of the Window Scanning Robot

The model of the window scanning robot used by the EKF is shown in Figure 7. In this model, $O$-$XY$ is the world coordinate frame, and $O_c$-$X_cY_c$ is the robot-fixed frame, where $O_c$ is located at the center of the wheel shaft and $Y_c$ is along the shaft.
In the model, the motion of the window scanning robot is represented as

$$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v \cos\theta \\ v \sin\theta \\ \omega \end{bmatrix}, \tag{1}$$

where $(x, y)$ and $\theta$ are the position and heading angle of the robot, and $v$ and $\omega$ are the translational and rotational velocities, which are the inputs of the robot. Thus, letting $\mathbf{x}_t = [x_t, y_t, \theta_t]^T$ be the robot's pose at timestep $t$ and $\mathbf{u}_t = [v_t, \omega_t]^T$ the input, the robot's pose after the travel time $\Delta t$ is given as follows:

$$\begin{bmatrix} x_{t+1} \\ y_{t+1} \\ \theta_{t+1} \end{bmatrix} = \begin{bmatrix} x_t + v_t \Delta t \cos\theta_t \\ y_t + v_t \Delta t \sin\theta_t \\ \theta_t + \omega_t \Delta t \end{bmatrix}. \tag{2}$$

Based on (2), the following state and observation equations are established for the EKF:

$$\mathbf{x}_{t+1} = f(\mathbf{x}_t, \mathbf{u}_t) = \begin{bmatrix} x_t + v_t \Delta t \cos\theta_t \\ y_t + v_t \Delta t \sin\theta_t \\ \theta_t + \omega_t \Delta t \end{bmatrix}, \tag{3}$$

$$\bar{\mathbf{u}}_t = \mathbf{u}_t + \mathbf{v}_t, \tag{4}$$

$$\mathbf{y}_t = h(\mathbf{x}_t) + \mathbf{w}_t = \begin{bmatrix} x_t \\ y_t \\ \theta_t \end{bmatrix} + \mathbf{w}_t, \tag{5}$$

where $\mathbf{v}_t \in \mathbb{R}^2$ and $\mathbf{w}_t \in \mathbb{R}^3$ are the noise vectors for the input and observation, respectively, with $\mathbf{v}_t \sim \mathcal{N}(\mathbf{0}, Q_t)$ and $\mathbf{w}_t \sim \mathcal{N}(\mathbf{0}, R_t)$, where $Q_t \in \mathbb{R}^{2 \times 2}$ and $R_t \in \mathbb{R}^{3 \times 3}$ are the noise covariances. In these equations, $\bar{\mathbf{u}}_t$ denotes the input with noise, and $\mathbf{y}_t$ is the observation, which is the odometry data of the window scanning robot.
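For concreteness, the following sketch implements the motion model (3) and observation model (5) in Python with NumPy. It is an illustrative reading of the equations above, not the authors' code; the function names are ours.

```python
import numpy as np

def f(x, u, dt):
    """Discrete-time motion model (3): propagate the pose x = [x, y, theta]
    by the input u = [v, omega] over the travel time dt."""
    px, py, th = x
    v, om = u
    return np.array([px + v * dt * np.cos(th),
                     py + v * dt * np.sin(th),
                     th + om * dt])

def h(x):
    """Observation model (5): odometry directly observes the pose."""
    return x.copy()
```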

5.1.2. Extended Kalman Filter

With the state and observation equations, we can obtain the estimate $\hat{\mathbf{x}}$ and covariance $P$ of the robot's pose according to the following EKF algorithm [40,41]:
Prediction step: The prior estimate $\hat{\mathbf{x}}_t^-$ and covariance $P_t^-$ are calculated by applying the estimate $\hat{\mathbf{x}}_{t-1}$, input $\mathbf{u}_{t-1}$, and covariance $P_{t-1}$ of the previous timestep as follows:

$$\hat{\mathbf{x}}_t^- = f(\hat{\mathbf{x}}_{t-1}, \mathbf{u}_{t-1}), \tag{6}$$

$$P_t^- = A_{t-1} P_{t-1} A_{t-1}^T + B_{t-1} Q_{t-1} B_{t-1}^T, \tag{7}$$

where the matrices $A_{t-1}$ and $B_{t-1}$ are given by linearizing $f(\mathbf{x}, \mathbf{u})$:

$$A_{t-1} = \left. \frac{\partial f(\mathbf{x}, \mathbf{u})}{\partial \mathbf{x}} \right|_{\mathbf{x} = \hat{\mathbf{x}}_{t-1},\, \mathbf{u} = \mathbf{u}_{t-1}} = \begin{bmatrix} 1 & 0 & -v_{t-1} \Delta t \sin\theta_{t-1} \\ 0 & 1 & v_{t-1} \Delta t \cos\theta_{t-1} \\ 0 & 0 & 1 \end{bmatrix}, \tag{8}$$

$$B_{t-1} = \left. \frac{\partial f(\mathbf{x}, \mathbf{u})}{\partial \mathbf{u}} \right|_{\mathbf{x} = \hat{\mathbf{x}}_{t-1},\, \mathbf{u} = \mathbf{u}_{t-1}} = \begin{bmatrix} \Delta t \cos\theta_{t-1} & 0 \\ \Delta t \sin\theta_{t-1} & 0 \\ 0 & \Delta t \end{bmatrix}. \tag{9}$$

Update step: The posterior estimate $\hat{\mathbf{x}}_t$ and covariance $P_t$ are obtained by updating $\hat{\mathbf{x}}_t^-$ and $P_t^-$ calculated in the prediction step with the observation $\mathbf{y}_t$ as follows:

$$\hat{\mathbf{x}}_t = \hat{\mathbf{x}}_t^- + G_t \left( \mathbf{y}_t - h(\hat{\mathbf{x}}_t^-) \right), \tag{10}$$

$$P_t = (I - G_t C_t) P_t^-, \tag{11}$$

where $G_t$ is the Kalman gain calculated as

$$G_t = P_t^- C_t^T \left( C_t P_t^- C_t^T + R_t \right)^{-1}. \tag{12}$$

In (12), the matrix $C_t$ is given by linearizing $h(\mathbf{x})$:

$$C_t = \left. \frac{\partial h(\mathbf{x})}{\partial \mathbf{x}} \right|_{\mathbf{x} = \hat{\mathbf{x}}_t^-} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{13}$$
By repeating the prediction and update steps in every timestep, the estimate and covariance of the robot’s pose are obtained to construct the pose graph of the robot.
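A single prediction-plus-update iteration following (6)-(13) can be sketched as below. This is a minimal NumPy sketch under the equations above, not the paper's implementation; the function and variable names are ours.

```python
import numpy as np

def ekf_step(x_hat, P, u, y, dt, Q, R):
    """One EKF iteration for the pose model, following (6)-(13).
    x_hat, P: estimate and covariance at the previous timestep;
    u = [v, omega]: input; y: odometry observation [x, y, theta]."""
    v, om = u
    th = x_hat[2]
    # Jacobians (8) and (9) of the motion model
    A = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    B = np.array([[dt * np.cos(th), 0.0],
                  [dt * np.sin(th), 0.0],
                  [0.0,             dt]])
    # Prediction step (6), (7): propagate the pose by the motion model (3)
    x_prior = np.array([x_hat[0] + v * dt * np.cos(th),
                        x_hat[1] + v * dt * np.sin(th),
                        th + om * dt])
    P_prior = A @ P @ A.T + B @ Q @ B.T
    # Update step (10)-(13); C_t is the identity because h(x) = x
    C = np.eye(3)
    G = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + R)
    x_post = x_prior + G @ (y - x_prior)
    P_post = (np.eye(3) - G @ C) @ P_prior
    return x_post, P_post
```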

5.2. Loop Closure

The loop closure [42] is carried out to reduce the error accumulated by the EKF and obtain an accurate shape of the window frame. With the setup of the window scanning robot, it is difficult to perform a loop closure based on scan matching because its 2D laser range scanner is installed perpendicularly to the ground. In this paper, the loop closure therefore exploits the result of the pose estimation.
To carry out the loop closure, the pose graph $G_T = (X_T, D_T)$ of the window scanning robot moving up to time $T$, as shown in Figure 8, is used. In this pose graph, the vertices $X_T$ and the edges $D_T$ represent the robot's poses $\mathbf{x}_t$ in the world frame and the relative poses $\mathbf{d}_t = [\Delta x_t, \Delta y_t, \Delta\theta_t]^T$ between $\mathbf{x}_{t-1}$ and $\mathbf{x}_t$ in the robot-fixed frame, respectively; that is, $X_T = \{\mathbf{x}_t \mid t = 0, 1, \dots, T\}$ and $D_T = \{\mathbf{d}_t \mid t = 0, 1, \dots, T\}$. The pose graph allows us to execute the loop closure through loop detection and pose adjustment.
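As an illustration, a pose graph of this form can be stored as two parallel lists of vertices and edges; the following dataclass is a minimal sketch under that reading, not the authors' data structure.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class PoseGraph:
    """Pose graph G_T = (X_T, D_T): world-frame poses as vertices and
    robot-fixed-frame relative poses as edges."""
    X: list = field(default_factory=list)  # poses x_t = [x, y, theta]
    D: list = field(default_factory=list)  # relative poses d_t

    def add(self, x_t, d_t):
        """Append a vertex and the edge connecting it to its predecessor."""
        self.X.append(np.asarray(x_t, dtype=float))
        self.D.append(np.asarray(d_t, dtype=float))
```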

5.2.1. Loop Detection

The loop detection, whose process is summarized in Algorithm 1, is carried out to obtain the set $L$ of pairs of time stamps $(s, e)$ of the start and end points $\mathbf{x}_s$ and $\mathbf{x}_e$ of loops, based on the pose estimation with the EKF. On the assumption that the window scanning robot moves around the same trajectory along the window frame, i.e., that at the end of a loop the robot comes back to its start point, the loop detection is executed with the following algorithm:
Step 1: Let $n = 1$ and set the time stamp $s_n$ of the start point $\mathbf{x}_{s_n}$ of a loop.
Step 2: If the traveling distance $d_{\mathrm{total}} = \sum_{i \in [s_n, t]} \sqrt{\Delta x_i^2 + \Delta y_i^2}$ is larger than a distance threshold $d_{\mathrm{thr}}$, the evaluation value $V_t$ of $\mathbf{x}_t$ is calculated as follows:

$$V_t = (\mathbf{x}_t - \mathbf{x}_{s_n})^T W (\mathbf{x}_t - \mathbf{x}_{s_n}), \tag{14}$$

where $W$ is a weight matrix.
Step 3: If $V_{t-1} < V_{t-2}$, $V_{t-1} < V_t$, and $V_{t-1} < V_{\mathrm{thr}}$, the timestep $t-1$ of $\mathbf{x}_{t-1}$ is set as the time stamp $e_n$ of the end point $\mathbf{x}_{e_n}$ of the loop, where $V_{\mathrm{thr}}$ is a value threshold.
Step 4: Once the pair of time stamps $(s_n, e_n)$ of one loop is obtained, $n$ is updated as $n = n + 1$, and the current timestep $t$ is set as the time stamp $s_n$ of the start point of a new loop.
Steps 2, 3, and 4 are repeated until all the loops are detected.

5.2.2. Pose Adjustment

In the pose adjustment, an accurate pose graph is generated by reducing the accumulated error, as shown in Figure 8. The pose adjustment is executed by minimizing the following objective function with respect to the poses $\mathbf{x}$:

$$J = (\mathbf{x}_0 - \hat{\mathbf{x}}_0)^T K^{-1} (\mathbf{x}_0 - \hat{\mathbf{x}}_0) + \sum_{t \in [1, T]} \left( g(\mathbf{x}_{t-1}, \mathbf{x}_t) - \mathbf{d}_t \right)^T \Sigma_t^{-1} \left( g(\mathbf{x}_{t-1}, \mathbf{x}_t) - \mathbf{d}_t \right) + \sum_{(s, e) \in L} \left( g(\mathbf{x}_s, \mathbf{x}_e) - \mathbf{r}_{s,e} \right)^T S_{s,e}^{-1} \left( g(\mathbf{x}_s, \mathbf{x}_e) - \mathbf{r}_{s,e} \right), \tag{15}$$

where $\mathbf{r}_{s,e}$ is the relative pose between the start and end points, and $g(\mathbf{x}_{t-1}, \mathbf{x}_t)$ and $g(\mathbf{x}_s, \mathbf{x}_e)$ are the functions that calculate a relative pose from $\mathbf{x}$, given as

$$g(\mathbf{x}_{t-1}, \mathbf{x}_t) = \begin{bmatrix} \cos\theta_{t-1} & \sin\theta_{t-1} & 0 \\ -\sin\theta_{t-1} & \cos\theta_{t-1} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_t - x_{t-1} \\ y_t - y_{t-1} \\ \theta_t - \theta_{t-1} \end{bmatrix}, \tag{16}$$

$$g(\mathbf{x}_s, \mathbf{x}_e) = \begin{bmatrix} x_e - x_s \\ y_e - y_s \\ \theta_e - \theta_s \end{bmatrix}, \tag{17}$$

$K$ is the covariance that settles the initial pose $\mathbf{x}_0$ to the initial estimate $\hat{\mathbf{x}}_0$, and $\Sigma_t$ and $S_{s,e}$ are the covariances of the relative poses $\mathbf{d}_t$ and $\mathbf{r}_{s,e}$, respectively. In this paper, $\mathbf{d}_t$ and $\Sigma_t$ are obtained from the pose estimate $\hat{\mathbf{x}}$ and the covariance $P$ through the EKF as follows:

$$\mathbf{d}_t = g(\hat{\mathbf{x}}_{t-1}, \hat{\mathbf{x}}_t), \tag{18}$$

$$\Sigma_t = J_d^{-1} \left( P_t - J_x P_{t-1} J_x^T \right) J_d^{-T}, \quad J_x = \begin{bmatrix} 1 & 0 & -\Delta x_t \sin\theta_{t-1} - \Delta y_t \cos\theta_{t-1} \\ 0 & 1 & \Delta x_t \cos\theta_{t-1} - \Delta y_t \sin\theta_{t-1} \\ 0 & 0 & 1 \end{bmatrix}, \quad J_d = \begin{bmatrix} \cos\theta_{t-1} & -\sin\theta_{t-1} & 0 \\ \sin\theta_{t-1} & \cos\theta_{t-1} & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{19}$$

where $\Delta x_t$ and $\Delta y_t$ are the components of $\mathbf{d}_t$, and $\mathbf{r}_{s,e}$, $S_{s,e}$, and $K$ are adjustable parameters.
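A minimization of (15) can be sketched as follows. This sketch assumes SciPy's general-purpose least-squares solver (the paper does not specify how the objective is minimized), ignores angle wrapping for brevity, and uses variable names of our own choosing.

```python
import numpy as np
from scipy.optimize import least_squares

def rel_pose(xa, xb):
    """g(x_a, x_b) in (16): relative pose of x_b seen from x_a."""
    c, s = np.cos(xa[2]), np.sin(xa[2])
    Rt = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    return Rt @ (xb - xa)

def adjust_poses(X0, D, Sigma, loops, r, S, K):
    """Minimize the objective J in (15) over all poses.
    X0: initial poses, shape (T+1, 3); D, Sigma: edge relative poses and
    covariances (length T); loops: list of (s, e) index pairs;
    r, S: loop-edge relative pose and covariance; K: initial-pose anchor."""
    # Whitening matrices: r^T Sigma^-1 r = ||L^-1 r||^2 with Sigma = L L^T
    Wk = np.linalg.inv(np.linalg.cholesky(K))
    Ws = [np.linalg.inv(np.linalg.cholesky(Si)) for Si in Sigma]
    Wl = np.linalg.inv(np.linalg.cholesky(S))

    def residuals(flat):
        X = flat.reshape(-1, 3)
        res = [Wk @ (X[0] - X0[0])]                      # first term of (15)
        for t in range(1, len(X)):                       # odometry edges
            res.append(Ws[t - 1] @ (rel_pose(X[t - 1], X[t]) - D[t - 1]))
        for (s, e) in loops:                             # loop edges, via (17)
            res.append(Wl @ ((X[e] - X[s]) - r))
        return np.concatenate(res)

    sol = least_squares(residuals, X0.ravel())
    return sol.x.reshape(-1, 3)
```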
Algorithm 1 Loop detection.
Require: $G_T$, $W$, $d_{\mathrm{thr}}$, $V_{\mathrm{thr}}$
Ensure: $L$
 $L = \{\}$
 $n = 1$
 $s_n = 0$
 for $t = 0, \dots, T$ do
   $d_{\mathrm{total}} = \sum_{i \in [s_n, t]} \sqrt{\Delta x_i^2 + \Delta y_i^2}$
   if $t \geq 2$ and $d_{\mathrm{total}} > d_{\mathrm{thr}}$ then
     $V_{t-2} = (\mathbf{x}_{t-2} - \mathbf{x}_{s_n})^T W (\mathbf{x}_{t-2} - \mathbf{x}_{s_n})$
     $V_{t-1} = (\mathbf{x}_{t-1} - \mathbf{x}_{s_n})^T W (\mathbf{x}_{t-1} - \mathbf{x}_{s_n})$
     $V_t = (\mathbf{x}_t - \mathbf{x}_{s_n})^T W (\mathbf{x}_t - \mathbf{x}_{s_n})$
     if $V_{t-1} < V_{t-2}$ and $V_{t-1} < V_t$ and $V_{t-1} < V_{\mathrm{thr}}$ then
       $e_n = t - 1$
       $L = L \cup \{(s_n, e_n)\}$
       $n = n + 1$
       $s_n = t$
     end if
   end if
 end for
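For reference, a direct Python translation of Algorithm 1 might look as follows; the variable names and the incremental distance accumulation are our choices, not the paper's code.

```python
import numpy as np

def detect_loops(X, D, W, d_thr, V_thr):
    """Sketch of Algorithm 1. X: list of poses x_t = [x, y, theta];
    D: list of relative poses d_t = [dx, dy, dtheta]; returns the set L
    of (start, end) time stamp pairs."""
    def V(k, s):
        e = X[k] - X[s]
        return float(e @ W @ e)   # evaluation value (14)

    L = []
    s = 0
    d_total = 0.0
    for t in range(len(X)):
        d_total += np.hypot(D[t][0], D[t][1])  # running travel distance
        if t >= 2 and d_total > d_thr:
            if V(t - 1, s) < V(t - 2, s) and V(t - 1, s) < V(t, s) \
                    and V(t - 1, s) < V_thr:
                L.append((s, t - 1))  # end point found one step back
                s = t                 # start a new loop from here
                d_total = 0.0         # restart the distance accumulation
    return L
```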

6. Experiment

We demonstrate the effectiveness of the proposed approach through an experiment. In the experiment, the window scanning robot moves around the same trajectory along the frame of a rectangular window, whose size is 900 × 1800 mm, placed on the floor (Figure 9), measuring the distance between the robot and the frame with the 2D laser range scanner installed perpendicularly to the window surface. Using these experimental data, we carry out the pose estimation of the robot with the EKF offline and generate the robot's pose graph, which stores the pose estimates and covariances, thinned out to reduce the computational load. On the pose graph, all the pairs of start and end points of loops are acquired by the loop detection, and then the pose adjustment is executed for the loop closure.

6.1. ROS-Based Experimental System

To carry out the experiment, the ROS-based system shown in Figure 10 was implemented. In this implementation, the keyop_sensing_robot node for the manual control of the window scanning robot runs on a laptop PC and sends /velocity_command. This command is processed by sensing_robot_node on the Raspberry Pi 3 installed on the robot to generate the motor input. sensing_robot_node concurrently provides imu_data and odometry_data. On the Raspberry Pi 3, rplidar_node is also activated to control the 2D laser range scanner and provides scan_data. These data are stored by server_node on the laptop PC and later used for the evaluation of the proposed approach.
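As an illustration, a data-logging node in the spirit of server_node could be written with rospy as below. The topic names follow Figure 10, while the message types (nav_msgs/Odometry and sensor_msgs/LaserScan) are assumptions based on the TurtleBot3 platform, not confirmed by the paper.

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import LaserScan

class ServerNode:
    """Stores odometry and scan data for offline processing."""
    def __init__(self):
        self.odom_log, self.scan_log = [], []
        rospy.Subscriber('odometry_data', Odometry, self.on_odom)
        rospy.Subscriber('scan_data', LaserScan, self.on_scan)

    def on_odom(self, msg):
        self.odom_log.append(msg)

    def on_scan(self, msg):
        self.scan_log.append(msg)

if __name__ == '__main__':
    rospy.init_node('server_node')
    ServerNode()
    rospy.spin()
```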

6.2. Variable and Parameter Settings

For the pose estimation with the EKF, we set the initial estimate and covariance of the robot's pose as

$$\hat{\mathbf{x}}_0 = \mathbf{x}_0, \quad P_0 = \mathrm{diag}(1.00 \times 10^{-3}, 1.00 \times 10^{-3}, 1.00 \times 10^{-3}),$$

and the noise covariances as

$$Q = \mathrm{diag}(1.00 \times 10^{-3}, 1.00 \times 10^{-3}), \quad R = \mathrm{diag}(1.00 \times 10^{-3}, 1.00 \times 10^{-3}, 1.00 \times 10^{-3}).$$

For the loop detection and the pose adjustment, we set the following parameters:

$$d_{\mathrm{thr}} = 1.00, \quad V_{\mathrm{thr}} = 5.00 \times 10^{-2}, \quad W = \mathrm{diag}(3.00, 3.00, 5.00 \times 10^{-1}),$$
$$\mathbf{r}_{s,e} = [0.00, 0.00, 2.62 \times 10^{-1}]^T, \quad S_{s,e} = \mathrm{diag}(10.0, 10.0, 10.0), \quad K = \mathrm{diag}(1.00 \times 10^{-6}, 1.00 \times 10^{-6}, 1.00 \times 10^{-6}).$$
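For reference, these settings map onto the earlier code sketches as the following hypothetical configuration snippet; the negative exponents follow the reconstruction above and should be treated as an assumption.

```python
import numpy as np

P0 = np.diag([1.00e-3, 1.00e-3, 1.00e-3])   # initial pose covariance
Q = np.diag([1.00e-3, 1.00e-3])             # input noise covariance
R = np.diag([1.00e-3, 1.00e-3, 1.00e-3])    # observation noise covariance
d_thr, V_thr = 1.00, 5.00e-2                # loop detection thresholds
W = np.diag([3.00, 3.00, 5.00e-1])          # evaluation weight matrix
r = np.array([0.00, 0.00, 2.62e-1])         # loop-edge relative pose r_{s,e}
S = np.diag([10.0, 10.0, 10.0])             # loop-edge covariance S_{s,e}
K = np.diag([1.00e-6, 1.00e-6, 1.00e-6])    # initial-pose anchor covariance
```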

6.3. Experimental Results

The results of the pose estimation and the loop closure are shown in Figure 11, Figure 12, Figure 13 and Figure 14. Figure 11 shows the robot’s trajectory and the scanned shape of the window frame obtained by the (a) measurement, (b) pose estimation, and (c) loop closure, respectively. This figure indicates the robot’s trajectories with lines and the positions of objects scanned by the 2D laser range scanner with dots, where we removed the dots indicating the positions of objects that are not located at the level of the window frame, such as a floor, walls, and ceiling. Figure 12, Figure 13 and Figure 14 show the time-series data of the robot’s position and heading angle.
Figure 11a indicates that the window scanning robot was able to obtain the shape of the rectangular window frame, which is traced outside the robot's trajectory by the scan dots gathered along the trajectory, and that the size of the scanned window is broadly correct. The dots forming the window shape can be distinguished from the dots indicating other objects, although the window shape is slightly twisted. This result shows that the shape of a window frame can be measured by scanning the frame perpendicularly to the window surface with a 2D laser range scanner and that this task can be performed by a robot on the scanned window.
Figure 11b shows that the pose estimation with the EKF reduces the error of the robot's trajectory and thereby improves the accuracy of the window shape estimation. The window shape based on the estimates of the robot's pose is less twisted than that based on the measurement data. This result indicates that the pose estimation with the proposed model and the EKF increases the feasibility of the proposed window shape estimation with a 2D laser range scanner.
Figure 11c shows that the loop closure based on the pose graph utilizing the estimates influences the estimated shape of the window frame. The estimated window shape differs little from that in Figure 11b; however, the number of isolated dots decreases in Figure 11c. This result indicates that the loop closure is effective for the window shape estimation but also implies the need to improve the proposed method.
In the time series, Figure 12 and Figure 13 show that the loop closure adjusts the robot's positions in the horizontal directions during translational movement. By contrast, it hardly affects the heading angle of the robot, as shown in Figure 14.

7. Conclusions

In this paper, we have presented an approach to estimating the shape of a window so that a façade-cleaning robot can acquire information about the window it works on. To this end, we developed a robot equipped with a 2D laser range scanner installed perpendicularly to the ground to observe a window frame and proposed a window shape estimation method for the developed robot, consisting of a pose estimation and a loop closure. For this method, the pose estimation with the EKF was employed, and a loop closure algorithm based on the estimate of the robot's pose was devised. The loop closure was accomplished by identifying the start and end points of a loop in the loop detection and modifying the pose graph of the robot in the pose adjustment. To demonstrate the effectiveness of the proposed approach, an experiment with the developed window scanning robot was carried out on a window placed on the floor. The experimental results have shown that the robot can acquire the window shape by scanning the window frame with the 2D laser range scanner installed perpendicularly to the window and that the proposed approach is effective for estimating the shape of the window the robot works on. In future work, we will improve the proposed approach to increase the accuracy of the window shape estimation. To this end, we will apply a different filter to the pose estimation, such as a fuzzy-based Kalman filter [49], and refine the loop closure method to increase its applicability to windows of different shapes. Moreover, we will carry out experiments using windows with rough glass surfaces to verify the influence of the geometry of window surfaces on the proposed approach.

Author Contributions

Conceptualization, T.N. and S.N.; methodology, T.N., S.N. and S.I.; software, T.N.; validation, T.N. and S.N.; formal analysis, T.N.; investigation, T.N. and S.N.; resources, S.N., M.I. and H.I.; data curation, T.N. and S.N.; writing—original draft preparation, T.N.; writing—review and editing, T.N., S.N., S.I., M.I. and H.I.; visualization, T.N.; supervision, S.N., M.I. and H.I.; project administration, S.N., M.I. and H.I.; funding acquisition, S.N. and H.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Research Institute for Science and Technology of Tokyo Denki University, grant numbers Q18T-06 and 19T-08/Japan. The APC was funded by Q18T-06 and 19T-08/Japan.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank S. Sasaki and M. Sasahira for assistance with the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EKF: Extended Kalman filter
ROS: Robot operating system
IMU: Inertial measurement unit

References

1. BBC. Shanghai window cleaning cradle swings out of control. BBC News, 3 April 2015.
2. BBC. Window washers rescued from high up World Trade Center. BBC News, 12 November 2014.
3. Elkmann, N.; Hortig, J.; Fritzsche, M. Cleaning automation. In Springer Handbook of Automation; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1253–1264.
4. Seo, T.; Jeon, Y.; Park, C.; Kim, J. Survey on glass and façade-cleaning robots: Climbing mechanisms, cleaning methods, and applications. Int. J. Precis. Eng.-Manuf.-Green Technol. 2019, 6, 367–376.
5. Elkmann, N.; Felsch, T.; Sack, M.; Saenz, J.; Hortig, J. Innovative service robot systems for facade cleaning of difficult-to-access areas. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; Volume 1, pp. 756–762.
6. Elkmann, N.; Kunst, D.; Krueger, T.; Lucke, M.; Böhme, T.; Felsch, T.; Stürze, T. SIRIUSc—Façade cleaning robot for a high-rise building in Munich, Germany. In Climbing and Walking Robots; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1033–1040.
7. Elkmann, N.; Lucke, M.; Krüger, T.; Kunst, D.; Stürze, T.; Hortig, J. Kinematics, sensors and control of the fully automated façade-cleaning robot SIRIUSc for the Fraunhofer headquarters building, Munich. Ind. Robot Int. J. 2008.
8. Lee, S.; Kang, M.S.; Han, C.S. Sensor based motion planning and estimation of highrise building façade maintenance robot. Int. J. Precis. Eng. Manuf. 2012, 13, 2127–2134.
9. Moon, S.M.; Hong, D.; Kim, S.W.; Park, S. Building wall maintenance robot based on built-in guide rail. In Proceedings of the 2012 IEEE International Conference on Industrial Technology, Athens, Greece, 19–21 March 2012; pp. 498–503.
10. Shin, C.; Moon, S.; Kwon, J.; Huh, J.; Hong, D. Force control of cleaning tool system for building wall maintenance robot on built-in guide rail. In Proceedings of the International Symposium on Automation and Robotics in Construction, Sydney, Australia, 9–11 July 2014; Volume 31, p. 1.
11. Moon, S.M.; Shin, C.Y.; Huh, J.; Oh, K.W.; Hong, D. Window cleaning system with water circulation for building façade maintenance robot and its efficiency analysis. Int. J. Precis. Eng.-Manuf.-Green Technol. 2015, 2, 65–72.
12. Lee, Y.S.; Kim, S.H.; Gil, M.S.; Lee, S.H.; Kang, M.S.; Jang, S.H.; Yu, B.H.; Ryu, B.G.; Hong, D.; Han, C.S. The study on the integrated control system for curtain wall building façade cleaning robot. Autom. Constr. 2018, 94, 39–46.
13. Lee, C.; Chu, B. Three-modular obstacle-climbing robot for cleaning windows on building exterior walls. Int. J. Precis. Eng. Manuf. 2019, 20, 1371–1380.
14. Yoo, S.; Joo, I.; Hong, J.; Park, C.; Kim, J.; Kim, H.S.; Seo, T. Unmanned high-rise façade cleaning robot implemented on a gondola: Field test on 000-building in Korea. IEEE Access 2019, 7, 30174–30184.
15. Hong, J.; Park, G.; Lee, J.; Kim, J.; Kim, H.S.; Seo, T. Performance comparison of adaptive mechanisms of cleaning module to overcome step-shaped obstacles on façades. IEEE Access 2019, 7, 159879–159887.
16. Park, G.; Hong, J.; Yoo, S.; Kim, H.S.; Seo, T. Design of a 3-DOF parallel manipulator to compensate for disturbances in facade cleaning. IEEE Access 2020, 8, 9015–9022.
17. Chae, H.; Park, G.; Lee, J.; Kim, K.; Kim, T.; Kim, H.S.; Seo, T. Façade cleaning robot with manipulating and sensing devices equipped on a gondola. IEEE/ASME Trans. Mechatron. 2021, 26, 1719–1727.
18. Zhu, J.; Sun, D.; Tso, S.K. Application of a service climbing robot with motion planning and visual sensing. J. Robot. Syst. 2003, 20, 189–199.
19. Sun, D.; Zhu, J.; Lai, C.; Tso, S. A visual sensing application to a climbing cleaning robot on the glass surface. Mechatronics 2004, 14, 1089–1104.
20. Sun, D.; Zhu, J.; Tso, S.K. A Climbing Robot for Cleaning Glass Surface with Motion Planning and Visual Sensing. In Climbing and Walking Robots: Towards New Applications; IntechOpen: London, UK, 2007.
21. Zhang, H.; Zhang, J.; Zong, G. Requirements of glass cleaning and development of climbing robot systems. In Proceedings of the 2004 International Conference on Intelligent Mechatronics and Automation, Chengdu, China, 26–31 August 2004; pp. 101–106.
22. Zhang, H.; Zhang, J.; Zong, G. Realization of a service climbing robot for glass-wall cleaning. In Proceedings of the 2004 IEEE International Conference on Robotics and Biomimetics, Shenyang, China, 22–26 August 2004; pp. 395–400.
23. Zhang, H.; Zhang, J.; Liu, R.; Zong, G. A novel approach to pneumatic position servo control of a glass wall cleaning robot. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 1, pp. 467–472.
24. Zhang, H.; Zhang, J.; Zong, G. Effective pneumatic scheme and control strategy of a climbing robot for class wall cleaning on high-rise buildings. Int. J. Adv. Robot. Syst. 2006, 3, 28.
25. Zhang, H.; Zhang, J.; Zong, G.; Wang, W.; Liu, R. Sky cleaner 3: A real pneumatic climbing robot for glass-wall cleaning. IEEE Robot. Autom. Mag. 2006, 13, 32–41.
26. Zhang, H.; Zhang, J.; Zong, G. Effective nonlinear control algorithms for a series of pneumatic climbing robots. In Proceedings of the 2006 IEEE International Conference on Robotics and Biomimetics, Kunming, China, 17–20 December 2006; pp. 994–999.
27. Zhang, H.; Zhang, J.; Liu, R.; Zong, G. Realization of a service robot for cleaning spherical surfaces. Int. J. Adv. Robot. Syst. 2005, 2, 7.
28. Seo, K.; Cho, S.; Kim, T.; Kim, H.S.; Kim, J. Design and stability analysis of a novel wall-climbing robotic platform (ROPE RIDE). Mech. Mach. Theory 2013, 70, 189–208.
29. Kim, T.Y.; Kim, J.H.; Seo, K.C.; Kim, H.M.; Lee, G.U.; Kim, J.W.; Kim, H.S. Design and control of a cleaning unit for a novel wall-climbing robot. Appl. Mech. Mater. 2014, 541, 1092–1096.
30. Kim, T.; Seo, K.; Kim, J.; Kim, H.S. Adaptive impedance control of a cleaning unit for a novel wall-climbing mobile robotic platform (ROPE RIDE). In Proceedings of the 2014 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Besançon, France, 8–11 July 2014; pp. 994–999.
31. Tun, T.T.; Elara, M.R.; Kalimuthu, M.; Vengadesh, A. Glass facade cleaning robot with passive suction cups and self-locking trapezoidal lead screw drive. Autom. Constr. 2018, 96, 180–188.
32. Vega-Heredia, M.; Mohan, R.E.; Wen, T.Y.; Siti’Aisyah, J.; Vengadesh, A.; Ghanta, S.; Vinu, S. Design and modelling of a modular window cleaning robot. Autom. Constr. 2019, 103, 268–278.
33. Vega-Heredia, M.; Muhammad, I.; Ghanta, S.; Ayyalusami, V.; Aisyah, S.; Elara, M.R. Multi-sensor orientation tracking for a façade-cleaning robot. Sensors 2020, 20, 1483.
34. Chae, H.; Moon, Y.; Lee, K.; Park, S.; Kim, H.S.; Seo, T. A Tethered Façade Cleaning Robot Based on a Dual Rope Windlass Climbing Mechanism: Design and Experiments. IEEE/ASME Trans. Mechatron. 2022.
35. Nansai, S.; Elara, M.R.; Tun, T.T.; Veerajagadheswar, P.; Pathmakumar, T. A novel nested reconfigurable approach for a glass façade cleaning robot. Inventions 2017, 2, 18.
36. Nansai, S.; Onodera, K.; Veerajagadheswar, P.; Rajesh Elara, M.; Iwase, M. Design and experiment of a novel façade cleaning robot with a biped mechanism. Appl. Sci. 2018, 8, 2398.
37. Nansai, S.; Itoh, H. Foot location algorithm considering geometric constraints of façade cleaning. J. Adv. Simul. Sci. Eng. 2019, 6, 177–188.
38. Singandhupe, A.; La, H.M. A review of SLAM techniques and security in autonomous driving. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019; pp. 602–607.
39. Cheng, J.; Zhang, L.; Chen, Q.; Hu, X.; Cai, J. A review of visual SLAM methods for autonomous driving vehicles. Eng. Appl. Artif. Intell. 2022, 114, 104992.
40. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005; ISBN 0-262-20162-3.
41. Adachi, S.; Maruta, I. Fundamentals of Kalman Filter; Tokyo Denki University Press: Tokyo, Japan, 2012. (In Japanese)
42. Tomono, M. Simultaneous Localization and Mapping; Ohmsha: Tokyo, Japan, 2018. (In Japanese)
43. Kim, D.Y.; Yoon, J.; Sun, H.; Park, C.W. Window detection for gondola robot using a visual camera. In Proceedings of the 2012 IEEE International Conference on Automation Science and Engineering (CASE), Seoul, Republic of Korea, 20–24 August 2012; pp. 998–1003.
44. Kim, D.Y.; Yoon, J.; Cha, D.H.; Park, C.W. Tilted window detection for gondola-typed facade robot. Int. J. Control Theory Comput. Model. (IJCTCM) 2013, 3, 1–10.
45. Pu, S.; Vosselman, G. Extracting windows from terrestrial laser scanning. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 36, 12–14.
46. Pu, S.; Vosselman, G. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS J. Photogramm. Remote Sens. 2009, 64, 575–584.
47. Wang, R.; Bach, J.; Ferrie, F.P. Window detection from mobile LiDAR data. In Proceedings of the 2011 IEEE Workshop on Applications of Computer Vision (WACV), Kona, HI, USA, 5–7 January 2011; pp. 58–65.
48. Zolanvari, S.I.; Laefer, D.F. Slicing method for curved façade and window extraction from point clouds. ISPRS J. Photogramm. Remote Sens. 2016, 119, 334–346.
49. Qasem, S.N.; Ahmadian, A.; Mohammadzadeh, A.; Rathinasamy, S.; Pahlevanzadeh, B. A type-3 logic fuzzy system: Optimized by a correntropy based Kalman filter with adaptive fuzzy kernel size. Inf. Sci. 2021, 572, 424–443.
Figure 1. Concept of nested reconfigurable robots for façade cleaning. The concept employs multiple modular multilegged robots capable of reconfiguring their morphology based on window shapes. The modular multilegged reconfigurable robots can transform their own modules and connect with each other.
Figure 2. Exploration for the window shape estimation. (a) The scanning robot searches for a part of the window frame with an external sensor, turning at an initial point on the window the robot came in. (b) The robot gets close to the window frame and starts moving along the frame. (c) The robot goes around the same trajectory along the frame.
Figure 3. Window scanning robot. The robot's size is 189.7 × 138.7 × 191.9 mm, and its weight is 1.26 kg. The robot can obtain location information of window frames using a 2D laser range scanner installed perpendicularly to the ground.
Figure 4. System architecture of the window scanning robot. The system is developed on the basis of the architecture of TurtleBot3 and consists of a Raspberry Pi 3 Model B+, OpenCR, the RPLIDAR A2M8 2D laser range scanner, and two Dynamixel XM430-W210 motors.
Figure 5. Power supply of the window scanning robot. The robot has an 11.1 V 1800 mAh Li-Po battery for the power supply to OpenCR. From OpenCR, 5 V 4 A power is supplied to the Raspberry Pi 3.
Figure 6. Flowchart of the window shape estimation. In the estimation, the robot's pose is estimated by the EKF with odometry data, and the estimate and the covariance of the robot's pose are registered on the pose graph of the robot. Based on the pose graph, the positions of the points obtained by the 2D scanner are recorded. In the loop closure, the end point of the loop of the robot's trajectory is detected, and the pose adjustment is carried out.
Figure 7. Model of the window scanning robot. $O$-$XY$ is the world coordinate frame, and $O_c$-$X_cY_c$ is the robot-fixed frame, where $O_c$ is located at the center of the wheel shaft and $Y_c$ is along the shaft.
Figure 8. Pose graph. (a) The pose graph has the robot's pose $\mathbf{x}_t$ in the world frame as a vertex and the relative pose $\mathbf{d}_t = [\Delta x_t, \Delta y_t, \Delta\theta_t]^T$ between $\mathbf{x}_{t-1}$ and $\mathbf{x}_t$ in the robot-fixed frame as an edge. (b) On the pose graph, the loop closure is carried out by matching the end point $\mathbf{x}_e$ of the loop to the start point $\mathbf{x}_s$.
Figure 9. Experimental setup. In the experiment, the window scanning robot moves around the same trajectory along the frame of the rectangular window placed on the floor. The window size is 900 × 1800 mm.
Figure 10. ROS-based implementation. The implementation consists of four ROS nodes: keyop_sensing_robot, server_node, sensing_robot_node, and rplidar_node.
Figure 11. Robot's trajectory and the scanned shape of the window frame. They are obtained by (a) measurement, (b) pose estimation, and (c) loop closure, respectively. They indicate the robot's trajectories with lines and the positions of objects scanned by the 2D laser range scanner with dots. The shape of the rectangular window frame is described outside the robot's trajectory by the gathered scan dots along the trajectory.
Figure 12. X-direction position of the robot. The robot's positions in the horizontal direction during translational movement are adjusted by the loop closure.
Figure 13. Y-direction position of the robot. The robot's positions in the horizontal direction during translational movement are adjusted by the loop closure.
Figure 14. Heading angle of the robot. The heading angle of the robot is hardly affected by the loop closure.
Table 1. Variable descriptions.
$x$: Robot position in the X direction
$y$: Robot position in the Y direction
$\theta$: Robot heading angle
$v$: Translational velocity input
$\omega$: Rotational velocity input
$t$: Timestep
$\mathbf{x}$: Robot's pose, $\mathbf{x} = [x, y, \theta]^T$
$\mathbf{y}$: Robot's observation, $\mathbf{y} = [x, y, \theta]^T$
$\mathbf{u}$: Robot's input, $\mathbf{u} = [v, \omega]^T$
$\mathbf{d}$: Relative robot pose, $\mathbf{d} = [\Delta x, \Delta y, \Delta\theta]^T$
$\mathbf{v}$: Input noise, $\mathbf{v} \in \mathbb{R}^2$
$\mathbf{w}$: Observation noise, $\mathbf{w} \in \mathbb{R}^3$
$Q$: Covariance of input noise, $Q \in \mathbb{R}^{2 \times 2}$
$R$: Covariance of observation noise, $R \in \mathbb{R}^{3 \times 3}$
$P$: Covariance of robot's pose, $P \in \mathbb{R}^{3 \times 3}$
$\Sigma$: Covariance of relative robot pose, $\Sigma \in \mathbb{R}^{3 \times 3}$
$X$: Set of robot's poses, $\mathbf{x} \in X$
$D$: Set of relative robot poses, $\mathbf{d} \in D$
$G$: Set representing the pose graph, $G = (X, D)$
Table 2. Parameter descriptions.
$\Delta t$: Travel time
$T$: Time of the end of robot movement
$s$: Time stamp at the start point of a loop
$e$: Time stamp at the end point of a loop
$d_{\mathrm{thr}}$: Traveling distance threshold
$V_{\mathrm{thr}}$: Evaluation value threshold
$\mathbf{r}$: Relative pose between start and end points, $\mathbf{r} \in \mathbb{R}^3$
$S$: Covariance of the relative pose between start and end points, $S \in \mathbb{R}^{3 \times 3}$
$K$: Covariance for the initial pose settlement, $K \in \mathbb{R}^{3 \times 3}$
$W$: Weight matrix, $W \in \mathbb{R}^{3 \times 3}$
$L$: Set of pairs of the time stamps $s$ and $e$, $(s, e) \in L$


Back to TopTop