
An Efficient Algorithm for Cleaning Robots Using Vision Sensors

Abhijeet Ravankar, Ankit A. Ravankar, Michiko Watanabe and Yohei Hoshino

Paper Link


Image courtesy: The Verge

Public places like hospitals and industrial facilities are required to maintain standards of hygiene and cleanliness. Traditionally, the cleaning task has been performed by people. However, due to various factors like a shortage of workers, the unavailability of 24-hour service, and health concerns related to working with toxic cleaning chemicals, autonomous robots have emerged as an alternative. In recent years, cleaning robots like the Roomba have gained popularity. These robots have limited battery power, and therefore efficient cleaning is important; efforts are being undertaken to improve the efficiency of cleaning robots.

The most rudimentary type of cleaning robot is one with bump sensors and encoders, which simply keeps cleaning the room while the battery has charge. Other approaches use dirt sensors attached to the robot to clean only the untidy portions of the floor. Researchers have also proposed attaching cameras to the robot to detect dirt and clean it. However, a critical limitation of all the previous works is that a robot cannot know whether the floor is clean unless it actually visits that place. Hence, timely information about whether the room needs to be cleaned is not available, which is a major limitation in achieving efficiency.




Abstract

In recent years, cleaning robots like the Roomba have gained popularity. These cleaning robots have limited battery power, and therefore, efficient cleaning is important. Efforts are being undertaken to improve the efficiency of cleaning robots. Most of the previous works have used on-robot cameras, developed dirt detection sensors mounted on the cleaning robot, or built a map of the environment to clean periodically. However, a critical limitation of all the previous works is that robots cannot know whether the floor is clean unless they actually visit that place. Hence, timely information on whether the room needs to be cleaned is not available. To overcome such limitations, we propose a novel approach that uses external cameras which can communicate with the robots. The external cameras are fixed in the room and detect through image processing whether the floor is untidy. The external camera detects not only whether the floor is untidy, but also the exact areas and coordinates of the portions of the floor that must be cleaned. This information is communicated to the cleaning robot through a wireless network. Thus, cleaning robots have access to a ‘bird’s-eye view’ of the environment for efficient cleaning. In this paper, we demonstrate dirt detection using an external camera and communication with the robot in actual scenarios.


The proposed method enables cleaning robots to have access to a ‘bird’s-eye view’ of the environment for efficient cleaning. We demonstrate how ordinary web cameras can be used for dirt detection. The proposed cleaning algorithm is targeted at homes, factories, hospitals, airports, universities, and other public places. The scope of our current work is limited to indoor environments; however, an extension to outdoor environments is straightforward. In this paper, we demonstrate the algorithm with actual sensors in real-world scenarios.

Dirt Detection and Robot Notification Algorithm




Figure 1 shows the flowchart of the dirt detection and robot notification algorithm. It is assumed that a camera is set up on the ceiling of the room to monitor dirt on the floor.

Experiment and Results


Figure 3 shows the results of the experiments. Figure 3a shows the background image, i.e., the image without dirt, which is set manually by the user. Since this image contains parts of the room with furniture and boxes, which could be moved, we set the region of interest by masking the image, as shown in Figure 3b. Figure 3c shows the image with dirt; for dirt, we used many pieces of paper, each 3 × 3 cm in size. Figure 3d shows the difference between the background image (Figure 3b) and the current frame (Figure 3c). A threshold operation is then applied to this image, and the blobs are detected, as shown in Figure 3e. The algorithm calculates the total area of the blobs and the cleaning area, which is shown in Figure 3f. The coordinates of the bounding box in Figure 3f are transferred to the robot with an instruction to clean. The transfer of coordinates was tested between the camera computer and the robot computer, which were on the same network: the camera computer was set to IP address 192.168.0.11 and the robot computer to 192.168.0.15. The transferred data was < x : 135, y : 171, w : 379, h : 273 >, where x, y, w, and h represent the x-coordinate, y-coordinate, width, and height of the dirt area, respectively. In the proposed work, we confirmed receiving the data on the cleaning robot’s computer. Actual navigation to the dirty area is the next phase of the project and will be developed in the future.



Conclusions 

In this paper, we proposed an algorithm to improve the efficiency of cleaning robots by using external cameras. Unlike previous research, which uses an on-robot camera for dirt detection, the external camera mounted on the ceiling provides a bird’s-eye view of the environment and detects dirt. We proposed an algorithm to detect dirt and calculate its total area and coordinates, and this information is transferred to the cleaning robot. The advantage of the proposed algorithm is that the cleaning robot can remotely know the coordinates of the dirty areas to clean. In the proposed work, we developed and experimented with dirt detection using an external camera and notification to the robot. In the next phase of the project, we will develop a shortest-path algorithm and navigate the cleaning robot to the coordinates of the dirty areas received from the external camera.

 
