Modern production systems have to meet the challenges of changing market trends, high flexibility in product types and variants, and short innovation cycles in order to stay competitive. In this context, Human-Robot Cooperation is considered a key technology for improving efficiency and reducing operating costs. To realize a cooperative and effective relationship between a human and a robot, e.g. when performing an assembly task together, concepts for communication and coordination among the actors, as well as for the representation of domain knowledge, are required. Such a robotic assistant must do the right thing to the right object in the right way and reason about the task it has to perform in cooperation with the human. The robot's ability to recognize and learn the current state of an assembly task is crucial, because this state is the basis for further reasoning, e.g. determining the next actions to be taken by the robot or by the human and deriving assembly task variants. The overall aim of this master thesis is to develop a reasoning system which hypothesizes the current state of the assembly task by integrating and combining a knowledge-based task model with the input from sensing and actuating components. A general task model which represents the task states has to be built, where each task state consists of a human state, a robot state, and a set of object relations. Having such a model, whose construction conforms to the information received from the sensing and actuating components, makes it possible to predict the current state of the assembly task with the help of supervised machine learning algorithms.
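The task-state representation and state prediction described above can be sketched in code. The following is a minimal, illustrative sketch only: the class names, feature vectors, and the 1-nearest-neighbour classifier are assumptions standing in for the thesis's actual task model and supervised learning algorithm.

```python
from dataclasses import dataclass, field
from math import dist

@dataclass(frozen=True)
class TaskState:
    """Hypothetical task state: human state, robot state, object relations."""
    human_state: str                       # e.g. "reaching", "idle"
    robot_state: str                       # e.g. "holding_part", "waiting"
    object_relations: frozenset = field(default_factory=frozenset)

# Toy labelled data: sensor feature vectors paired with known task states.
training = [
    ((0.9, 0.1), TaskState("reaching", "waiting",
                           frozenset({("part_a", "on", "table")}))),
    ((0.1, 0.8), TaskState("idle", "holding_part",
                           frozenset({("part_a", "in", "gripper")}))),
]

def predict_state(features):
    """1-nearest-neighbour stand-in for the supervised classifier:
    return the task state whose training features are closest."""
    return min(training, key=lambda item: dist(features, item[0]))[1]

# A new sensor reading close to the first sample yields its state.
print(predict_state((0.85, 0.2)).human_state)  # -> reaching
```

In a real system the feature vectors would come from the sensing and actuating components, and the nearest-neighbour rule would be replaced by whichever supervised learner the thesis evaluates.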