The motivation for this research area stems from a discussion started by the National Institute of Standards and Technology (NIST) in their effort to develop standards for evaluating rescue robots for Urban Search and Rescue (USAR). The discussion addresses the need to characterize rubble, which is common in structural collapses. Although there is general agreement on the importance of characterizing rubble, a method for doing so has not yet been developed. Characterizing rubble first requires a method of collecting rubble data, and we propose to start by collecting this data in the form of 3D scene models.
Our research comprises several smaller components that contribute to the overall goal:
- 3D Point cloud model generation system
- Modeling rubble
- Converting point cloud to meshed surface model
- Visualizing 3D models
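These components form a capture-register-mesh-visualize pipeline. The sketch below shows how the stages hand data to one another; every function here is an illustrative stub (synthetic points, concatenation instead of real alignment, a placeholder for surface reconstruction), not our actual implementation.

```python
import numpy as np

def capture_depth_frame():
    """Stand-in for one Kinect depth frame: a small synthetic 3D point set."""
    rng = np.random.default_rng(0)
    return rng.uniform(0.0, 1.0, size=(100, 3))

def register(frames):
    """Merge per-frame point clouds into one model. Here this is simple
    concatenation; the real system aligns frames to each other first."""
    return np.vstack(frames)

def to_mesh(points):
    """Placeholder for surface reconstruction: group points into triangles."""
    n = len(points) - (len(points) % 3)
    return points[:n].reshape(-1, 3, 3)

frames = [capture_depth_frame() for _ in range(3)]
cloud = register(frames)       # 300 points from 3 frames of 100
mesh = to_mesh(cloud)          # 100 triangles, each 3 vertices of 3 coords
print(cloud.shape, mesh.shape)  # (300, 3) (100, 3, 3)
```

The point of the sketch is the data flow: frames of 3D points go in, a single registered cloud comes out, and the meshing stage consumes that cloud.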
3D Point cloud model generation system
We employ the Kinect camera system by Microsoft as a 3D imaging sensor. The Kinect is the first commercial off-the-shelf 3D sensor system to retail for less than $200. Despite the low cost, the Kinect provides robust 3D visual data, which makes it a compelling choice as the sensor for our system. The algorithm we developed to perform point cloud registration is described in a paper recently published at the workshop on Safety, Security and Rescue Robotics (SSRR2011) in Kyoto, Japan, Nov 1-6, 2011. The paper is titled “Low-Cost 3D Scene Reconstruction for Response Robots in Real-time” and an electronic copy can be found in the Publication section of our website.
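At the core of any point cloud registration scheme is estimating the rigid transform that aligns one frame with another. As a generic illustration (the Kabsch/SVD least-squares method, not the specific algorithm from our paper), the alignment step for points already in correspondence can be sketched as:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    assuming the two point sets are already in correspondence."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Verify by recovering a known rotation/translation from synthetic data.
rng = np.random.default_rng(1)
src = rng.uniform(-1.0, 1.0, size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true

R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Full registration pipelines such as ICP wrap a step like this in a loop that re-estimates point correspondences (e.g. by nearest neighbor) between solves.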
Currently, we are working on implementing loop-closing techniques to improve the accuracy of our models.
In order to prepare for disasters, emergency responders train on purpose-built rubble piles. Through our close partnership with the Ontario Provincial Police (OPP) USAR CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive) Response Team (UCRT), we had access to their facility, which includes a rubble pile located in Bolton, Ontario. We conducted experiments on modeling the rubble pile at Bolton, using a six-rotor Unmanned Aerial Vehicle (UAV) equipped with an embedded computer and a Kinect sensor to fly over the pile. 3D point cloud data was recorded and input into our model generation system, and we succeeded in building several models of parts of the rubble pile. The video below shows footage of our experiments and their results. An in-depth description of our methodology can be found in the paper “3D Modeling of Complex Disaster Environment using Unmanned Aerial Vehicle”, also published at SSRR2011; a copy can be found in the Publication section of our website.
We were invited to attend the Response Robot Evaluation Exercise (RREE2011) organized by NIST and hosted by the Texas Engineering Extension Service (TEEX) in November 2011. As usual, the event was held at “Disaster City” in College Station, Texas. At the event we were able to run more experiments and build models of a rubble pile (rubble pile 1) at Disaster City. Building on lessons from our previous experiments at Bolton, we were able to generate better models, as can be seen in the videos below.