Proceeding Paper

An Efficient Algorithm for Cleaning Robots Using Vision Sensors †

Abhijeet Ravankar 1,*, Ankit A. Ravankar 2, Michiko Watanabe 1 and Yohei Hoshino 1
1 Faculty of Engineering, Kitami Institute of Technology, School of Regional Innovation and Social Design Engineering, Kitami, Hokkaido 090-8507, Japan
2 Division of Human Mechanical Systems and Design, Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido 060-8628, Japan
* Author to whom correspondence should be addressed.
Presented at the 6th International Electronic Conference on Sensors and Applications, 15–30 November 2019; Available online: https://ecsa-6.sciforum.net/.
Proceedings 2020, 42(1), 45; https://doi.org/10.3390/ecsa-6-06578
Published: 14 November 2019

Abstract: In recent years, cleaning robots like Roomba have gained popularity. These cleaning robots have limited battery power, and therefore, efficient cleaning is important. Efforts are being undertaken to improve the efficiency of cleaning robots. Most previous works have used on-robot cameras, developed dirt detection sensors mounted on the cleaning robot, or built a map of the environment to clean periodically. However, a critical limitation of all the previous works is that a robot cannot know whether the floor is clean unless it actually visits that place. Hence, timely information on whether the room needs to be cleaned is not available. To overcome such limitations, we propose a novel approach that uses external cameras, which can communicate with the robots. The external cameras are fixed in the room and detect through image processing whether the floor is untidy, along with the exact areas and coordinates of the portions of the floor that must be cleaned. This information is communicated to the cleaning robot through a wireless network. Thus, cleaning robots have access to a ‘bird’s-eye view’ of the environment for efficient cleaning. In this paper, we demonstrate dirt detection using an external camera and communication with the robot in actual scenarios.

1. Introduction

Public places like hospitals and industries are required to maintain standards of hygiene and cleanliness. Traditionally, the cleaning task has been performed by people. However, due to various factors such as a shortage of workers, the unavailability of 24-hour service, and health concerns related to the toxic chemicals used for cleaning, autonomous robots have been seen as alternatives. In recent years, cleaning robots like Roomba [1] have gained popularity. These cleaning robots have limited battery power, and therefore, efficient cleaning is important. Efforts are being undertaken to improve the efficiency of cleaning robots.
The most rudimentary type of cleaning robot has only bump sensors and encoders, and simply keeps cleaning the room while the battery has charge. Other approaches use dirt sensors attached to the robot to clean only the untidy portions of the floor. Researchers have also proposed attaching cameras on the robot to detect dirt and clean. However, a critical limitation of all the previous works is that a robot cannot know whether the floor is clean unless it actually visits that place. Hence, timely information on whether the room needs to be cleaned is not available, which is a major limitation in achieving efficiency.
To overcome such limitations, we propose a novel approach that uses external cameras, which can communicate with the robots. The external cameras are fixed in the room and detect through image processing whether the floor is untidy, along with the exact areas and coordinates of the portions of the floor that must be cleaned. This information is communicated to the cleaning robot through a wireless network. Unlike previous works that use on-board robot sensors, the novel contribution of the proposed work lies in using external cameras and intelligent communication between the camera node and the robot. In the proposed work, we demonstrate results of dirt detection using an external camera and communication with the robot. Our future work will comprise navigating the robot to the coordinates received from the external camera.
The proposed method enables cleaning robots to have access to a ‘bird’s-eye view’ of the environment for efficient cleaning. We demonstrate how ordinary web-cameras can be used for dirt detection. The proposed cleaning algorithm targets homes, factories, hospitals, airports, universities, and other public places. The scope of our current work is limited to indoor environments; however, an extension to outdoor environments is straightforward. In this paper, we demonstrate the proposed algorithm with actual sensors in real-world scenarios.

2. Related Works

Dirt detection and cleaning using robots is an active area of research. Dirt detection using visual sensors is proposed in [2,3]. Similarly, mud and dirt separation has been proposed in [4]. Some researchers have focused on the particular environments in which cleaning robots will be used, including dirt detection and cleaning in office environments [2], a wall-cleaning robot [5], a cleaning robot for greenhouse roofs [6], and a cleaning robot for swimming pools [7]. In terms of algorithms for dirt detection, histogram- and saliency-based detection methods have been proposed [8,9]. Other researchers have focused on algorithms for cleaning large areas with multiple robots [10,11], the system design and obstacle avoidance of a vacuum-cleaning robot [12], complete coverage path planning for cleaning tasks using multiple robots [13], and the cognitive abilities of cleaning robots [14]. Commercial cleaning robots are also available and widely used [1,15,16]. Many researchers have focused on outdoor mud and dirt detection on roads for unmanned ground robots and vehicles [17].
A map is a prerequisite for path planning. A map is generated using any of the SLAM (Simultaneous Localization and Mapping) algorithms available in the literature (see [18,19]). Given a map and a goal location, a robot can plan a path from its current position. The literature contains many global and local path-planning algorithms for mobile robots; the most widely used algorithms for global planning include A* [20] and Dijkstra’s algorithm [21].
Networking and communication between robots and sensors is common in many research projects, including inter-robot communication [22,23] and path sharing [24]. Most relevant to the proposed research, communication between an external camera and a robot has been proposed in [25] to guide the robot’s path.

3. Dirt Detection and Robot Notification Algorithm

Figure 1 shows the flowchart of the dirt detection and robot notification algorithm. It is assumed that a camera is set up on the ceiling of the room to monitor the dirt on the floor.
The algorithm starts by reading the current frame ($I_t$) from the camera. If the background image ($BG$) is unset, the current frame is stored as the background. This frame contains no dirt and represents the ideal state of the room’s floor. The background could also be set manually by the user.
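The paper does not name an implementation; as a minimal sketch, assuming Python with OpenCV (cv2) and a webcam at device index 0, the background frame could be captured as follows:

```python
import cv2

cap = cv2.VideoCapture(0)   # ceiling-mounted external camera (index assumed)
background = None           # BG starts unset

ret, frame = cap.read()     # read the current frame I_t
if ret and background is None:
    # Store the dirt-free frame as the ideal state of the floor.
    # The user could also trigger this step manually.
    background = frame.copy()
    cv2.imwrite("background.png", background)
cap.release()
```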
Once the background image is set, the subsequent frames are first blurred, or smoothed, to reduce noise. Smoothing is done by applying a filter in which an output pixel’s value $g(i,j)$ is determined as a weighted sum of input pixel values $I(i+k, j+l)$:

$$g(i,j) = \sum_{k} \sum_{l} I(i+k,\, j+l)\, h(k,l),$$

where $h(k,l)$ is called the kernel, which holds the coefficients of the filter.
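To illustrate the weighted-sum filter above, the following sketch (OpenCV assumed; the 5 × 5 kernel size is an arbitrary choice, not a value from the paper) applies a normalized box kernel $h(k,l)$ with cv2.filter2D, which computes exactly this sum; a Gaussian kernel would serve equally well:

```python
import cv2
import numpy as np

frame = cv2.imread("current_frame.png")   # the current frame I_t

# h(k, l): 5x5 normalized box kernel; every coefficient is 1/25, so each
# output pixel g(i, j) is the mean of its 5x5 neighborhood.
kernel = np.ones((5, 5), np.float32) / 25.0
smoothed = cv2.filter2D(frame, -1, kernel)

# Equivalent smoothing with a Gaussian kernel instead of a box kernel:
# smoothed = cv2.GaussianBlur(frame, (5, 5), 0)
```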
The next step is to take the absolute difference ($I_{\text{diff}}$) of the current frame and the background image, $I_{\text{diff}} = |I_t - BG|$. A threshold operation is then applied to the difference image to generate the threshold image ($I_{\text{thresh}}$):

$$I_{\text{thresh}}(i,j) = \begin{cases} 255, & \text{if } I_{\text{diff}}(i,j) \geq \text{Thresh}, \\ 0, & \text{otherwise}. \end{cases}$$
We then find contours in the threshold image and check for blobs, which represent the dirt on the floor. If blobs are found, we calculate their total area. If the total area is greater than the threshold area ($\delta_{\text{thresh}}$), the algorithm calculates the coordinates of the blobs. These coordinates are then transferred to the robot for cleaning.
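A minimal sketch of these remaining steps, again assuming OpenCV (the values Thresh = 30 and $\delta_{\text{thresh}}$ = 500 px² are illustrative placeholders, not values reported in the paper):

```python
import cv2

THRESH = 30        # Thresh: pixel-intensity threshold (assumed value)
AREA_THRESH = 500  # delta_thresh: minimum total blob area in pixels (assumed)

background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)    # BG
current = cv2.imread("smoothed_frame.png", cv2.IMREAD_GRAYSCALE)   # smoothed I_t

diff = cv2.absdiff(current, background)                            # I_diff = |I_t - BG|
_, binary = cv2.threshold(diff, THRESH, 255, cv2.THRESH_BINARY)    # I_thresh

# Contours of the white blobs are candidate dirt regions.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
total_area = sum(cv2.contourArea(c) for c in contours)

if contours and total_area > AREA_THRESH:
    # One bounding box enclosing all blobs: the cleaning area for the robot.
    boxes = [cv2.boundingRect(c) for c in contours]
    x0 = min(x for x, y, w, h in boxes)
    y0 = min(y for x, y, w, h in boxes)
    x1 = max(x + w for x, y, w, h in boxes)
    y1 = max(y + h for x, y, w, h in boxes)
    print(f"clean region: x={x0}, y={y0}, w={x1 - x0}, h={y1 - y0}")
```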

4. Experiment and Results

The experimental setup is shown in Figure 2. Figure 2a shows the Logicool HD 1080 camera set up in the corner of the experiment room at a height of 255 cm on a pole. The camera was connected to a laptop computer for image processing. Figure 2b shows the dimensions (395 × 190 cm) of the experiment area.
Figure 3 shows the results of the experiments. Figure 3a shows the background image, which is the image without dirt. This image is manually set by the user. Since this image contains parts of the room with furniture and boxes, which could be moved, we set the region of interest by masking the image, as shown in Figure 3b; a sketch of this masking step is given below. Figure 3c shows the image with dirt. For dirt, we used several pieces of paper, each 3 × 3 cm in size. Figure 3d shows the difference between the background image (Figure 3b) and the current frame (Figure 3c). The threshold operation is then applied to this image, and the blobs are detected as shown in Figure 3e. The algorithm calculates the total area of the blobs and the cleaning area, which is shown in Figure 3f. The coordinates of the bounding box in Figure 3f are transferred to the robot with an instruction to clean.
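The paper does not give the mask geometry; the sketch below shows region-of-interest masking with OpenCV, using an illustrative rectangular polygon (the vertex coordinates are assumptions, not values from the paper):

```python
import cv2
import numpy as np

frame = cv2.imread("current_frame.png")

# Black mask everywhere, white inside the region of interest.
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
roi = np.array([[50, 100], [600, 100], [600, 420], [50, 420]], dtype=np.int32)
cv2.fillPoly(mask, [roi], 255)

# Pixels outside the ROI (furniture, boxes) are zeroed out before differencing.
masked = cv2.bitwise_and(frame, frame, mask=mask)
```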
The transfer of coordinates was tested between the camera computer and the robot computer, which were on the same network. The camera computer was set to IP address 192.168.0.11 and the robot computer to 192.168.0.15. The transferred data was ⟨x: 135, y: 171, w: 379, h: 273⟩, where x, y, w, and h represent the x-coordinate, y-coordinate, width, and height of the dirt area, respectively. In the proposed work, we confirmed receiving the data on the cleaning robot’s computer. Actual navigation to the dirty area is the next phase of the project and will be developed in the future.
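The paper does not state the transport protocol; the sketch below assumes a plain TCP socket with a JSON payload and an arbitrary port (5000), using the IP addresses reported above:

```python
import json
import socket

ROBOT_ADDR = ("192.168.0.15", 5000)   # robot computer; port 5000 is assumed

def send_cleaning_request(x, y, w, h):
    """Send the bounding box of the dirt area from the camera computer
    (192.168.0.11) to the robot computer."""
    payload = json.dumps({"x": x, "y": y, "w": w, "h": h}).encode("utf-8")
    with socket.create_connection(ROBOT_ADDR, timeout=5.0) as sock:
        sock.sendall(payload)

# The values reported in the experiment:
send_cleaning_request(135, 171, 379, 273)

# On the robot computer, a minimal receiver might look like:
# srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# srv.bind(("0.0.0.0", 5000)); srv.listen(1)
# conn, _ = srv.accept()
# print(json.loads(conn.recv(1024).decode("utf-8")))
```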

5. Conclusions

In this paper, we proposed an algorithm to improve the efficiency of cleaning robots by using external cameras. Unlike previous research, which uses an on-robot camera for dirt detection, the external camera mounted on the ceiling provides a bird’s-eye view of the environment and detects dirt. We proposed an algorithm to detect dirt and calculate its total area and coordinates. This information is transferred to the cleaning robot. The advantage of the proposed algorithm is that the cleaning robot can remotely know the coordinates of the dirty areas to clean.
In the proposed work, we developed and experimented with dirt detection using an external camera and notification to the robot. In the next phase of the project, we will develop a shortest-path algorithm for the robot and navigate the cleaning robot to the coordinates of the dirty areas received from the external camera.

Author Contributions

A.R. and A.A.R. conceived the idea, performed the experiments, and developed the algorithm for dirt detection and robot notification. M.W. and Y.H. provided suggestions to analyze data and improve the manuscript. The article was written by A.R.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Fujisawa Kai, an undergraduate student at Kitami Institute of Technology, Japan, for helping with the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. iRobot Roomba. Available online: https://www.irobot.com/roomba (accessed on 10 October 2019).
2. Bormann, R.; Weisshardt, F.; Arbeiter, G.; Fischer, J. Autonomous dirt detection for cleaning in office environments. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1260–1267.
3. Bormann, R.; Fischer, J.; Arbeiter, G.; Weisshardt, F.; Verl, A. A Visual Dirt Detection System for Mobile Service Robots. In Proceedings of the ROBOTIK 2012—7th German Conference on Robotics, Munich, Germany, 21–22 May 2012; pp. 1–6.
4. Milinda, H.G.T.; Madhusanka, B.G.D.A. Mud and dirt separation method for floor cleaning robot. In Proceedings of the 2017 Moratuwa Engineering Research Conference (MERCon), Moratuwa, Sri Lanka, 29–31 May 2017; pp. 316–320.
5. Gao, X.; Kikuchi, K. Study on a Kind of Wall Cleaning Robot. In Proceedings of the 2004 IEEE International Conference on Robotics and Biomimetics, Shenyang, China, 22–26 August 2004; pp. 391–394.
6. Seemuang, N. A cleaning robot for greenhouse roofs. In Proceedings of the 2017 2nd International Conference on Control and Robotics Engineering (ICCRE), Bangkok, Thailand, 1–3 April 2017; pp. 49–52.
7. Yuan, F.-C.; Hu, S.-J.; Sun, H.-L.; Wang, L.-Z. Design of cleaning robot for swimming pools. In Proceedings of the MSIE 2011, Harbin, China, 8–9 January 2011; pp. 1175–1178.
8. Hou, X.; Zhang, L. Saliency Detection: A Spectral Residual Approach. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
9. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
10. Jeon, S.; Jang, M.; Lee, D.; Lee, C.; Cho, Y. Strategy for cleaning large area with multiple robots. In Proceedings of the 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, Korea, 30 October–2 November 2013; pp. 652–654.
11. Jeon, S.; Jang, M.; Lee, D.; Cho, Y.; Lee, J. Multiple robots task allocation for cleaning a large public space. In Proceedings of the 2015 SAI Intelligent Systems Conference (IntelliSys), London, UK, 10–11 November 2015; pp. 315–319.
12. Guangling, L.; Yonghui, P. System Design and Obstacle Avoidance Algorithm Research of Vacuum Cleaning Robot. In Proceedings of the 2015 14th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES), Guiyang, China, 18–24 August 2015; pp. 171–175.
13. Lee, J.H.; Choi, J.S.; Lee, B.H.; Lee, K.W. Complete coverage path planning for cleaning task using multiple robots. In Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 11–14 October 2009; pp. 3618–3622.
14. Liu, S.; Zheng, L.; Wang, S.; Li, R.; Zhao, Y. Cognitive abilities of indoor cleaning robots. In Proceedings of the 2016 12th World Congress on Intelligent Control and Automation (WCICA), Guilin, China, 12–15 June 2016; pp. 1508–1513.
15. Choi, Y.-H.; Jung, K.-M. Windoro: The world’s first commercialized window cleaning robot for domestic use. In Proceedings of the 2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Incheon, Korea, 23–26 November 2011; pp. 131–136.
16. Neato Robotics. Available online: https://www.neatorobotics.com/jp/ja/ (accessed on 10 October 2019).
17. Rankin, A.L.; Matthies, L.H. Passive sensor evaluation for unmanned ground vehicle mud detection. J. Field Robot. 2010, 27, 473–490.
18. Ravankar, A.; Ravankar, A.A.; Hoshino, Y.; Emaru, T.; Kobayashi, Y. On a Hopping-points SVD and Hough Transform Based Line Detection Algorithm for Robot Localization and Mapping. Int. J. Adv. Robot. Syst. 2016, 13, 98.
19. Ravankar, A.A.; Ravankar, A.; Emaru, T.; Kobayashi, Y. A hybrid topological mapping and navigation method for large area robot mapping. In Proceedings of the 2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Kanazawa, Japan, 19–22 September 2017; pp. 1104–1107.
20. Hart, P.; Nilsson, N.; Raphael, B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107.
21. Dijkstra, E.W. A Note on Two Problems in Connexion with Graphs. Numerische Mathematik 1959, 1, 269–271.
22. Ravankar, A.; Ravankar, A.; Kobayashi, Y.; Emaru, T. Symbiotic Navigation in Multi-Robot Systems with Remote Obstacle Knowledge Sharing. Sensors 2017, 17, 1581.
23. Ravankar, A.; Ravankar, A.A.; Hoshino, Y.; Kobayashi, Y. On Sharing Spatial Data with Uncertainty Integration Amongst Multiple Robots Having Different Maps. Appl. Sci. 2019, 9, 2753.
24. Ravankar, A.; Ravankar, A.A.; Kobayashi, Y.; Emaru, T. Can robots help each other to plan optimal paths in dynamic maps? In Proceedings of the 2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Kanazawa, Japan, 19–22 September 2017; pp. 317–320.
25. Ravankar, A.; Ravankar, A.A.; Kobayashi, Y.; Jixin, L.; Emaru, T.; Hoshino, Y. A novel vision based adaptive transmission power control algorithm for energy efficiency in wireless sensor networks employing mobile robots. In Proceedings of the 2015 Seventh International Conference on Ubiquitous and Future Networks, Sapporo, Japan, 7–10 July 2015; pp. 300–305.
Figure 1. Flowchart of dirt detection and robot notification algorithm.
Figure 2. Experiment setup. (a) Camera and computer. (b) Room dimensions.
Figure 3. Dirt detection results. (a) Clean floor. (b) Masking the region of interest. (c) Image with dirt. (d) Difference image with threshold. (e) Blobs detected. (f) Coordinates and cleaning area detected.