Recent developments in robotics show that making robots more autonomous is an important but difficult task for the robotics industry. Imagine a simple pick-and-place scenario from daily life, such as taking a cup of coffee from the kitchen table and putting it in the sink. This seems like a fairly simple task, at least for a human being. If we want a robot to handle this situation autonomously, several challenging problems need to be solved. First of all, the robot needs to find the table and recognize the cup of coffee. Furthermore, it needs to bring its arm and fingers into the right position to grasp the object properly.

The goal of this thesis is to integrate an object recognition system into the framework of an existing robotic arm that already has smart grasping algorithms implemented. Combining the vision system with these grasping algorithms enables the robotic arm to recognize and grasp objects, such as a cup of coffee, autonomously. Until now, the user of the robot had to provide the location and orientation of objects manually; typing in this data takes time and makes the robot less autonomous.

In this thesis, we tested and evaluated existing general-purpose object recognition algorithms and optimized the results with respect to speed and accuracy for basic tabletop scenes. The recognition process is divided into several steps, from modeling real objects to extracting the location and pose of recognized objects in a given scene. Finally, we created simple interfaces for the modeling and recognition system to make it easily operable for users who are new to computer vision.
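To give a flavor of one step in such a tabletop pipeline, the following is a minimal sketch, not the implementation used in this thesis: it separates a supporting plane from the object resting on it using a plain-Python RANSAC plane fit on a synthetic point cloud, then takes the centroid of the remaining points as a rough object location. All function names, thresholds, and the synthetic scene are assumptions made for illustration.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane (a, b, c, d) with ax + by + cz + d = 0 through three points."""
    ux, uy, uz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    vx, vy, vz = p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]
    a, b, c = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    norm = (a * a + b * b + c * c) ** 0.5
    if norm == 0:  # degenerate (collinear) sample
        return None
    a, b, c = a / norm, b / norm, c / norm
    return a, b, c, -(a * p1[0] + b * p1[1] + c * p1[2])

def ransac_plane(points, iterations=200, threshold=0.01):
    """Find the plane supported by the most points (here: the table top)."""
    best_plane, best_inliers = None, []
    for _ in range(iterations):
        plane = plane_from_points(*random.sample(points, 3))
        if plane is None:
            continue
        a, b, c, d = plane
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers

# Synthetic scene (an assumption for this sketch): 200 table points on the
# plane z = 0 and 40 "cup" points clustered roughly 10 cm above it.
random.seed(0)
table = [(random.uniform(0, 1), random.uniform(0, 1), 0.0) for _ in range(200)]
cup = [(0.5 + random.uniform(-0.03, 0.03),
        0.5 + random.uniform(-0.03, 0.03),
        0.10 + random.uniform(0.0, 0.08)) for _ in range(40)]
points = table + cup

plane, inliers = ransac_plane(points)
inlier_set = set(inliers)
objects = [p for p in points if p not in inlier_set]
# Rough object location: centroid of the points left after plane removal.
centroid = tuple(sum(p[i] for p in objects) / len(objects) for i in range(3))
```

In a real system this role is typically played by a point-cloud library's plane segmentation and clustering routines, and pose estimation goes beyond a centroid, but the sketch shows the basic idea of the plane-removal step.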