Shape Recognition

by Gabe Butterick and Audrey Lewis

We made our own implementation of a simple shape recognizer.

  • Squares are blue
  • Triangles are red
  • Fast (enough)
  • Uses ROS
  • Written by real live college students
  • Check out our repo

Implementation

Edge Recognition

The edge_detector() function is a basic implementation of edge detection: it iterates over every pixel in a provided image and marks any pixel with a substantial intensity change within one pixel of it as an 'edge'. The edges can then be passed to a contour detection algorithm to get contours, or to the create_image() function to get back an edge-detected image.
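The approach can be sketched roughly as follows; the threshold value and the four-neighbor comparison are assumptions for illustration, not the exact implementation:

```python
import numpy as np

def edge_detector(image, threshold=30):
    """Mark pixels whose intensity differs substantially from a neighbor.

    A minimal sketch of the per-pixel scan described above, assuming a
    grayscale image and a fixed intensity threshold.
    """
    h, w = image.shape
    edges = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = int(image[y, x])
            # Compare against the four immediate neighbors.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                if abs(center - int(image[y + dy, x + dx])) > threshold:
                    edges[y, x] = True
                    break
    return edges
```

The pure-Python double loop is slow on full camera frames, which matches the performance limits discussed below.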

Contour Finding

The find_contours() algorithm takes in a list of key points from an image, which it assumes are edges, and creates a list of lists, where each inner list is one complete contour containing all the points that belong to it. It makes use of one helper function for finding the distance between two points. Currently its efficiency leaves something to be desired; with more time we would have made it run fast enough to be used in the main program.
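A sketch of one way this grouping could work, under the assumption that points closer than some gap belong to the same contour (the original's exact grouping rule isn't specified here):

```python
import math

def distance(p1, p2):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def find_contours(edge_points, max_gap=1.5):
    """Group edge points into contours by chaining nearby points.

    A sketch, not the authors' exact code: points within max_gap of
    each other are assumed to lie on the same contour.
    """
    remaining = set(edge_points)
    contours = []
    while remaining:
        seed = remaining.pop()
        contour = [seed]
        frontier = [seed]
        while frontier:
            point = frontier.pop()
            # Scan all unvisited points for ones close enough to this one.
            neighbors = [p for p in remaining if distance(point, p) <= max_gap]
            for p in neighbors:
                remaining.remove(p)
                contour.append(p)
                frontier.append(p)
        contours.append(contour)
    return contours
```

Note the inner scan over every remaining point makes this roughly quadratic in the number of edge points, which is consistent with the efficiency problem described above.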

Shape Trainer

shape_trainer() is probably the most interesting function that isn't directly run on the robot. Given an image, it reads the image with OpenCV, detects edges using Canny edge detection, feeds the edges into OpenCV's findContours(), and passes the resulting contours through the Ramer-Douglas-Peucker algorithm we implemented. That algorithm simplifies each contour down to the key, defining coordinates of the shape. This is extremely important because it means we will always get the same number of key points for a square or triangle, regardless of size or rotation. We then examine the resulting list of lists of key points, pull out the length of each contour of key points (which we define as a shape), and compare it to the shapes we know. For instance, running this program on a picture of a square returns the number 5, telling us that squares will have 5 key points. We use this information to guide the shape_recognizer() code.

Shape Recognizer

shape_recognizer() is the node that runs and interfaces with an actual Neato robot. It subscribes to the robot's video feed, runs those images through OpenCV's edge and contour detection algorithms, and then through our Ramer-Douglas-Peucker implementation. (Unfortunately, our own edge and contour detection algorithms aren't efficient enough to be used with the actual robot.) The node displays the image with its contours, and when it detects that a contour has the right number of key points for a known shape, it draws that shape on the image: blue for a square, red for a triangle.
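The classification step inside the node boils down to mapping key-point counts to colors. A minimal sketch: the square's count of 5 comes from the trainer described above, while the triangle's count of 4 is an assumption made here for illustration (the write-up only states the square's count).

```python
# Highlight colors in BGR order, as used by OpenCV drawing functions.
SQUARE_COLOR = (255, 0, 0)    # blue
TRIANGLE_COLOR = (0, 0, 255)  # red

# Key-point counts per shape. The square's count (5) is from the
# trainer; the triangle's count (4) is an assumed value.
SHAPE_COLORS = {5: SQUARE_COLOR, 4: TRIANGLE_COLOR}

def classify_contour(key_points):
    """Return the highlight color for a contour's key points, or None
    if the count doesn't match any known shape."""
    return SHAPE_COLORS.get(len(key_points))
```

The node would then call something like cv2.drawContours with the returned color for each matching contour.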

Shape Recognizer running

Ramer Douglas Peucker

The Ramer-Douglas-Peucker algorithm is what we use for finding the number of key points along a contour. The algorithm takes two parameters: an ordered list of points, which we found using OpenCV's contours, and "epsilon", which controls how closely the simplified shape must follow the contour. The algorithm starts by forming the line segment between the two endpoints. It uses the "distance" function to find the point on the contour that is furthest from this hypothetical line segment. It then splits the segment into two new segments bounded by the endpoints and that furthest point, and calls itself recursively on each. The recursion terminates when the furthest point is too close to matter, defined as being closer than the distance epsilon. The endpoints at which the recursion terminated are then passed back up through all the layers, and we use those key points to describe the contour.
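The steps above can be sketched as follows; this is a standard formulation of the algorithm, not our exact code:

```python
import math

def perpendicular_distance(point, start, end):
    """Distance from point to the line through start and end."""
    if start == end:
        return math.hypot(point[0] - start[0], point[1] - start[1])
    dx, dy = end[0] - start[0], end[1] - start[1]
    # Parallelogram area divided by the base length gives the height.
    num = abs(dy * point[0] - dx * point[1]
              + end[0] * start[1] - end[1] * start[0])
    return num / math.hypot(dx, dy)

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: simplify an ordered list of points down
    to its key points, keeping only deviations larger than epsilon."""
    # Find the point farthest from the segment joining the endpoints.
    max_dist, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > max_dist:
            max_dist, index = d, i
    if max_dist > epsilon:
        # Split at the farthest point, recurse on both halves, and
        # join them, dropping the duplicated split point.
        left = rdp(points[: index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    # Every interior point is close enough: keep only the endpoints.
    return [points[0], points[-1]]
```

For example, a noisy straight run collapses to its two endpoints, while a right-angle bend keeps the corner as a third key point.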

Project Updates