
An Efficient Algorithm for Cleaning Robots Using Vision Sensors

Abhijeet Ravankar, Ankit A. Ravankar, Michiko Watanabe and Yohei Hoshino

Paper Link


Image courtesy: The Verge

Public places like hospitals and factories are required to maintain standards of hygiene and cleanliness. Traditionally, cleaning has been performed by people. However, due to factors like shortage of workers, unavailability of 24-hour service, and health concerns about working with the toxic chemicals used for cleaning, autonomous robots are seen as an alternative. In recent years, cleaning robots like Roomba have gained popularity. These robots have limited battery power, so efficient cleaning is important, and efforts are being undertaken to improve their efficiency.

The most rudimentary type of cleaning robot has only bump sensors and encoders, and simply keeps cleaning the room while the battery has charge. Other approaches attach dirt sensors to the robot so that it cleans only the untidy portions of the floor. Researchers have also proposed mounting cameras on the robot to detect dirt and clean. However, a critical limitation of all previous works is that the robot cannot know whether the floor is clean unless it actually visits that place. Hence, timely information on whether the room needs cleaning is not available, which is a major obstacle to efficiency.




Abstract

In recent years, cleaning robots like Roomba have gained popularity. These cleaning robots have limited battery power, and therefore, efficient cleaning is important. Efforts are being undertaken to improve the efficiency of cleaning robots. Most previous works have used on-robot cameras, developed dirt detection sensors mounted on the cleaning robot, or built a map of the environment to clean periodically. However, a critical limitation of all the previous works is that robots cannot know if the floor is clean or not unless they actually visit that place. Hence, timely information is not available on whether the room needs to be cleaned. To overcome such limitations, we propose a novel approach that uses external cameras which can communicate with the robots. The external cameras are fixed in the room and, through image processing, detect whether the floor is untidy, along with the exact areas and coordinates of the portions of the floor that must be cleaned. This information is communicated to the cleaning robot through a wireless network. Thus, cleaning robots have access to a ‘bird’s-eye view’ of the environment for efficient cleaning. In this paper, we demonstrate dirt detection using an external camera and communication with the robot in actual scenarios.


The proposed method gives cleaning robots access to a ‘bird’s-eye view’ of the environment for efficient cleaning. We demonstrate how ordinary web cameras can be used for dirt detection. The proposed cleaning algorithm is targeted at homes, factories, hospitals, airports, universities, and other public places. The scope of our current work is limited to indoor environments; however, an extension to outdoor environments is straightforward. In this paper, we demonstrate the algorithm with actual sensors in real-world scenarios.

Dirt Detection and Robot Notification Algorithm




Figure 1 shows the flowchart of the dirt detection and robot notification algorithm. It is assumed that a camera is set up on the ceiling of the room to monitor dirt on the floor.
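The paper does not provide code, but the core of the pipeline described above and in the experiments (background subtraction against a clean reference image, masking a region of interest, thresholding, and computing the bounding box and area of the dirty region) can be sketched as follows. This is a minimal illustration using NumPy on grayscale images; the function name `detect_dirt` and the threshold value are assumptions, not from the paper.

```python
import numpy as np

def detect_dirt(background, frame, mask, thresh=30):
    """Flag pixels that differ from the clean reference image by
    more than `thresh` (within the masked region of interest),
    then return the bounding box and total area of the dirt."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    dirt = (diff > thresh) & (mask > 0)
    ys, xs = np.nonzero(dirt)
    if len(xs) == 0:
        return None  # floor is clean
    x, y = int(xs.min()), int(ys.min())
    w = int(xs.max()) - x + 1
    h = int(ys.max()) - y + 1
    area = int(dirt.sum())  # total dirt area in pixels
    return {'x': x, 'y': y, 'w': w, 'h': h, 'area': area}

# Toy example: an 8x8 grayscale "floor" with a 2x2 dark dirt patch
bg = np.full((8, 8), 200, dtype=np.uint8)   # clean background image
cur = bg.copy()
cur[3:5, 2:4] = 50                          # dirt darkens the floor
roi = np.ones_like(bg)                      # mask: whole image is floor
print(detect_dirt(bg, cur, roi))            # -> {'x': 2, 'y': 3, 'w': 2, 'h': 2, 'area': 4}
```

A real deployment would add noise filtering (e.g. morphological opening) before the bounding box is computed, since single-pixel sensor noise would otherwise inflate the cleaning area.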

Experiment and Results


Figure 3 shows the results of the experiments. Figure 3a shows the background image, i.e., the image without dirt, which is set manually by the user. Since this image contains parts of the room with furniture and boxes, which could be moved, we set the region of interest by masking the image, as shown in Figure 3b. Figure 3c shows the image with dirt. For dirt, we used many pieces of paper, each 3 × 3 cm in size. Figure 3d shows the difference between the background image (Figure 3b) and the current frame (Figure 3c). A threshold operation is then applied to this image, and the blobs are detected as shown in Figure 3e. The algorithm calculates the total area of the blobs and the cleaning area, which is shown in Figure 3f. The coordinates of the bounding box in Figure 3f are transferred to the robot with an instruction to clean. The transfer of coordinates was tested between the camera computer and the robot computer, which were on the same network. The camera computer was set to IP address 192.168.0.11 and the robot computer to 192.168.0.15. The transferred data was < x : 135, y : 171, w : 379, h : 273 >, where x, y, w, and h represent the x-coordinate, y-coordinate, width, and height of the dirt area, respectively. In the proposed work, we confirmed receiving the data on the cleaning robot’s computer. Actual navigation to the dirty area is the next phase of the project and will be developed in the future.
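The coordinate transfer described above can be sketched with a plain TCP socket. The paper only states that the two hosts (camera at 192.168.0.11, robot at 192.168.0.15) shared a network and exchanged the bounding-box values; the JSON encoding, port choice, and localhost demo below are assumptions for illustration.

```python
import json
import socket
import threading

def robot_server(sock):
    """Robot side (hypothetical): accept one connection and decode
    the dirt bounding box sent by the camera computer."""
    conn, _ = sock.accept()
    data = json.loads(conn.recv(1024).decode())
    conn.close()
    return data

# Demo on localhost; in the experiment the server would run on the
# robot computer (192.168.0.15) and the client on the camera
# computer (192.168.0.11).
srv = socket.socket()
srv.bind(('127.0.0.1', 0))   # ephemeral port for the demo
srv.listen(1)
received = {}
t = threading.Thread(target=lambda: received.update(robot_server(srv)))
t.start()

# Camera side: send the detected dirt bounding box from Figure 3f.
box = {'x': 135, 'y': 171, 'w': 379, 'h': 273}
cli = socket.socket()
cli.connect(srv.getsockname())
cli.sendall(json.dumps(box).encode())
cli.close()
t.join()
srv.close()
print(received)  # robot now knows which region to clean
```

For a message this small a single `recv` suffices; a production version would frame messages (length prefix or delimiter) so partial reads cannot corrupt the coordinates.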



Conclusions 

In this paper, we proposed an algorithm to improve the efficiency of cleaning robots by using external cameras. Unlike previous research, which uses an on-robot camera for dirt detection, an external camera mounted on the ceiling provides a bird’s-eye view of the environment and detects dirt. We proposed an algorithm to detect dirt and calculate its total area and coordinates, and this information is transferred to the cleaning robot. The advantage of the proposed algorithm is that the cleaning robot can remotely know the coordinates of the dirty areas to clean. In the present work, we developed and experimentally verified dirt detection using an external camera and notification to the robot. In the next phase of the project, we will develop a shortest-path algorithm and navigate the cleaning robot to the coordinates of the dirty areas received from the external camera.

 
