This thesis presents a method for converting a point cloud into a combination of polyhedral primitives. The process is supported by digital imagery, which also forms the basis for the 3D dataset. Modern computer-vision techniques allow the reconstruction of 3D objects or entire scenes from a series of 2D images: the differing coordinates of corresponding features in two or more images determine both the position of the respective point in 3D space and the camera parameters of the source images. The result is a 3D point cloud whose appearance closely resembles the real scene. Post-processing of such data may be necessary, but its large number of samples and lack of structure make this challenging for the user. This thesis describes an approach for converting such point clouds into a combination of parameterized polyhedral primitives whose appearance approximates the real object. Part of the work on this thesis was the implementation of a tool for reducing point clouds to the aforementioned combination of primitives. The fitting of the primitives is automated, while the user selects the parts of the dataset to be reconstructed. The selection and display of the dataset's source images plays an important role in navigating the point cloud and accelerates the reconstruction task. The resulting set of primitives greatly simplifies subsequent manipulation of the dataset. The main part of this thesis concentrates on the research, evaluation, and implementation of an optimization technique for fitting parameterized polyhedral primitives to the point cloud. During this work, both image-based and 3D-based optimization approaches were studied, which form an important part of current research on this topic. The principle is presented and evaluated with sample datasets. The results show that the combination of automated optimization and manual user interaction leads to a simplified dataset of high accuracy.
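To give a flavor of the kind of least-squares fitting involved, the following is a minimal sketch (not the thesis tool's actual optimizer) of fitting a single planar face, the simplest building block of a polyhedral primitive, to a set of 3D samples using the SVD:

```python
# Illustrative sketch: least-squares plane fit to 3D points via SVD.
# This is an assumed, simplified stand-in for primitive fitting, not
# the optimization technique implemented in the thesis.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane.

    points: (N, 3) array of 3D samples. The normal is the right
    singular vector belonging to the smallest singular value of the
    centered data, i.e. the direction of least variance.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]  # direction of least variance
    return centroid, normal

# Example: noisy samples scattered near the plane z = 0.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (100, 2)),
                       rng.normal(0, 0.01, 100)])
c, n = fit_plane(pts)
# n should be close to (0, 0, ±1)
```

Fitting a full polyhedral primitive generalizes this idea: the residual between the sampled points and the primitive's surface is minimized over the primitive's parameters.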