This project evaluates the use of optical flow for motion segmentation. Optical flow is represented by a vector field that describes the displacement of pixels between subsequent frames.
The motion vectors encode the magnitude and direction of pixel movement. They can be computed either for every pixel in the image or only for selected feature points, such as corners.
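To make the idea concrete, the following is a minimal, hypothetical sketch of how one motion vector can be estimated for a small window, in the style of the classic Lucas-Kanade method (this is an illustration, not the implementation described in this work). It linearizes the brightness-constancy constraint Ix·u + Iy·v + It = 0 and solves the resulting 2x2 least-squares system over all pixels in the window.

```python
# Minimal Lucas-Kanade-style optical flow sketch (illustrative only).
# Estimates a single displacement (u, v) for a window by least squares
# over the brightness-constancy constraint  Ix*u + Iy*v + It = 0.

def lucas_kanade(frame1, frame2):
    """Estimate one (u, v) displacement for the whole window."""
    h, w = len(frame1), len(frame1[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (frame1[y][x + 1] - frame1[y][x - 1]) / 2.0  # spatial gradient x
            iy = (frame1[y + 1][x] - frame1[y - 1][x]) / 2.0  # spatial gradient y
            it = frame2[y][x] - frame1[y][x]                  # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-9:              # aperture problem: gradients degenerate
        return None
    u = (-syy * sxt + sxy * syt) / det   # Cramer's rule for the 2x2 system
    v = (sxy * sxt - sxx * syt) / det
    return u, v

# Synthetic pattern: quadratic intensity shifted by the true flow (0.1, 0.1).
u_true = v_true = 0.1
f1 = [[x * x + y * y for x in range(10)] for y in range(10)]
f2 = [[(x - u_true) ** 2 + (y - v_true) ** 2 for x in range(10)]
      for y in range(10)]
u, v = lucas_kanade(f1, f2)
```

On this synthetic pattern the estimate comes out close to the true (0.1, 0.1). In practice, applying such an estimator only at corner points (where the 2x2 system is well-conditioned) yields exactly the sparse optical flow discussed here.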
The segmentation process clusters pixels with similar motion vectors into regions that can be used for further processing, such as object tracking or collision avoidance in home robotics.
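One simple way to realize such a clustering (a hypothetical sketch, not the algorithm implemented in this work) is region growing on the flow field: grid positions are merged into a region whenever their flow vectors differ by less than a threshold, assuming a dense flow field given as (u, v) pairs.

```python
# Hypothetical clustering sketch: group grid positions whose flow
# vectors differ by less than a threshold into connected regions
# (flood fill on the vector field).

def segment_flow(flow, thresh=0.5):
    """Label connected regions of similar (u, v) vectors; returns a label grid."""
    h, w = len(flow), len(flow[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            stack = [(sy, sx)]          # flood fill from an unlabeled seed
            labels[sy][sx] = next_label
            while stack:
                y, x = stack.pop()
                u0, v0 = flow[y][x]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1:
                        u1, v1 = flow[ny][nx]
                        if abs(u1 - u0) + abs(v1 - v0) < thresh:  # similar motion
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
            next_label += 1
    return labels

# Toy field: left half moves right with (1, 0), right half is static (0, 0);
# the two halves end up in two distinct regions.
flow = [[(1.0, 0.0) if x < 3 else (0.0, 0.0) for x in range(6)]
        for _ in range(4)]
labels = segment_flow(flow)
```

The threshold and the 4-neighborhood are arbitrary choices here; a real segmentation would also have to handle noisy vectors and, for sparse flow, interpolate or cluster the scattered feature points directly.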
Computing the optical flow for every pixel in the image (dense optical flow) is computationally expensive. Although modern graphics hardware makes real-time computation possible, an attractive resource-saving alternative is to compute motion vectors only for selected points in the image (sparse optical flow).
This work covers the theoretical foundations as well as a detailed description of the implemented segmentation algorithm. Image sequences recorded in the lab and datasets from an evaluation database were used to demonstrate the segmentation of dense and sparse optical flow, respectively, and the two methods are compared. The results indicate that the computationally cheaper sparse optical flow offers comparable performance and is therefore preferable to dense optical flow.