Catalogue record

Title: Object modelling for cognitive robotics / by Thomas Mörwald
Author: Mörwald, Thomas
Assessors: Vincze, Markus ; Leonardis, Aleš
Published: 2013
Extent: IV, 100 pages : illustrations, diagrams
Thesis: Vienna, Vienna University of Technology, doctoral dissertation, 2013
Language: English
Bibliographic reference: OeBB
Document type: Dissertation
Keywords (DE): Computer Vision / Objektrekonstruierung / Objektverfolgung / Bewegungsprediktion / Oberflächenanpassung / Kurvenanpassung / B-splines / Monte Carlo Partikel Filter / Objekt Segmentierung / Maschinelles Sehen in der Robotik
Keywords (EN): computer vision / object reconstruction / visual tracking / motion prediction / surface fitting / curve fitting / B-splines / Monte Carlo particle filtering / object segmentation / robot vision
Keywords (GND): Roboter / Maschinelles Sehen / Bildrekonstruktion / Objektverfolgung / B-Spline / Monte-Carlo-Simulation / Filter <Informatik>
URN: urn:nbn:at:at-ubtuw:1-57712 (persistent identifier)
Access restriction: The work is freely available
Files: Object modelling for cognitive robotics [3.26 MB]
Abstract (English)

The development of robots has received great attention in recent decades. Starting from the hard-coded pick-and-place operations common in industrial applications, the need for more intelligent solutions emerged and the field of cognitive robotics evolved, in which tasks are no longer hard-coded processes executed monotonously. The concept of intelligent robots, known from science fiction, found its way into science, where methods soon appeared for reasoning, representing knowledge, interacting with humans and so forth. This thesis focuses on cognitive perception, which allows a robot to learn and reason about objects as it perceives them. It demonstrates the importance of never-ending learning methods, of the ability to handle partial information, and of fusing knowledge from different cues for a better understanding of the appearance and properties of objects.

The appearance of an object is given by its shape and colour. Geometric models, such as B-spline curves and surfaces, are used to segment range images and simultaneously reconstruct the shape of smooth, continuous surface patches. These patches are grouped into objects according to relations inspired by Gestalt principles. Colour information is mapped onto the shape, resulting in a model of the object's appearance for a single view.
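To make the surface-fitting step concrete, the following is a minimal sketch, not the thesis implementation: it fits a smooth cubic B-spline surface to a synthetic depth-image patch with SciPy's RectBivariateSpline and uses the fit residuals as a crude cue for deciding which pixels belong to the same smooth patch. The patch size, noise level and smoothing factor are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    rng = np.random.default_rng(0)

    # Synthetic depth-image patch: a gently curved surface plus sensor noise.
    rows = np.arange(48.0)            # pixel rows of the patch
    cols = np.arange(64.0)            # pixel columns of the patch
    R, C = np.meshgrid(rows, cols, indexing="ij")
    depth = 0.8 + 1e-5 * (R - 24.0) ** 2 + 5e-4 * C + 2e-3 * rng.normal(size=R.shape)

    # Cubic B-spline surface fit; the smoothing factor s trades data fidelity
    # against smoothness (set here to roughly the total squared noise).
    surface = RectBivariateSpline(rows, cols, depth, kx=3, ky=3,
                                  s=depth.size * (2e-3) ** 2)

    # Residuals between the raw depth and the fitted smooth surface serve as a
    # simple segmentation cue: pixels that deviate strongly are unlikely to
    # belong to the same smooth, continuous patch.
    residual = depth - surface(rows, cols)
    outliers = np.abs(residual) > 3 * 2e-3
    print("RMS residual [m]: %.4f, outlier pixels: %d"
          % (np.sqrt((residual ** 2).mean()), outliers.sum()))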

The key to learning and reasoning is to identify objects that have already been learned and to assign new information to them. In the context of perception this means tracking an object visually and self-evaluating the observations in order to distinguish good from bad sensor data (e.g. sensor noise, occlusions, reflections and so forth). For visual tracking, the previously reconstructed appearance of the object is used. Starting from a prior pose, a Monte Carlo Particle Filter (MCPF) evaluates various pose hypotheses, efficiently following the object's movement, including rigid 3D translations and rotations. A novel algorithm for evaluating the tracking state, called tracking-state-detection (TSD), is proposed; it reasons about the tracking quality and detects whether tracking is valid, lost, or whether the object is occluded.
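As an illustration of such a tracking loop, here is a minimal sketch of a Monte Carlo particle filter over 6-DoF pose hypotheses with a simple confidence check in the spirit of TSD. The function appearance_score is a hypothetical placeholder for matching the rendered appearance model against the camera image (the real tracker uses edge and colour cues, which are not reproduced here), and the toy ground-truth pose and thresholds are assumptions for demonstration only.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200                           # number of pose hypotheses (particles)

    # Each particle: 3 translation + 3 rotation (axis-angle) parameters.
    particles = np.zeros((N, 6))

    def appearance_score(pose, image):
        # Hypothetical match score in (0, 1] between the appearance model
        # rendered at `pose` and the observed image (stands in for the real
        # edge/colour matching).
        toy_true_pose = np.array([0.02, 0.0, 0.01, 0.0, 0.0, 0.05])
        return float(np.exp(-20.0 * np.sum((pose - toy_true_pose) ** 2)))

    def track_frame(particles, image, prior_motion=None):
        # 1. Predict: diffuse the hypotheses, optionally around a motion prior.
        if prior_motion is not None:
            particles = particles + prior_motion
        particles = particles + rng.normal(0.0, [0.01] * 3 + [0.02] * 3, particles.shape)
        # 2. Weight: evaluate every pose hypothesis against the current image.
        scores = np.array([appearance_score(p, image) for p in particles])
        weights = (scores + 1e-12) / (scores + 1e-12).sum()
        # 3. Resample proportionally to the weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], particles[scores.argmax()], float(scores.max())

    # Simple confidence check in the spirit of tracking-state-detection: if even
    # the best hypothesis matches the image poorly, the track is reported as
    # lost or occluded instead of silently drifting.
    particles, best_pose, best_score = track_frame(particles, image=None)
    state = "valid" if best_score > 0.5 else "lost or occluded"
    print("tracking %s (best match score %.2f)" % (state, best_score))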

TSD makes it possible to identify good new views, from which new information can be used to extend the existing appearance model, but also to learn about the physical behaviour of objects. The trajectory of an object under robotic manipulation is observed while a robotic finger pushes it, which allows a probabilistic motion model to be learned or extended. That is, probabilities are assigned between coordinate frames attached to the object, the robotic finger and the environment. The advantage of a probabilistic physical model over Newtonian mechanics is that it generalises to new object shapes and pushing configurations, which makes it well suited to cognitive robotics.
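The following is a minimal sketch of that idea under strong simplifications: from a toy pushing trajectory it estimates, per time step, how probable it is that the object's displacement is explained by the finger frame versus the environment frame. The two candidate frames, the Gaussian error model and the synthetic data are illustrative assumptions, not the formulation used in the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy trajectory: per time step, the 2D displacement of the finger and of
    # the object, both expressed in the environment (world) frame.
    finger_steps = rng.normal([0.01, 0.0], 0.001, size=(100, 2))
    object_steps = 0.7 * finger_steps + rng.normal(0.0, 0.002, size=(100, 2))

    # Candidate predictors of the object displacement, one per coordinate frame:
    #   "finger":      the object moves rigidly with the finger,
    #   "environment": the object stays put in the world frame.
    predictions = {
        "finger": finger_steps,
        "environment": np.zeros_like(finger_steps),
    }

    # Score each frame with a Gaussian likelihood of its prediction error and
    # normalise into per-step probabilities (a simple soft assignment).
    sigma = 0.003
    log_lik = {name: -np.sum((object_steps - pred) ** 2, axis=1) / (2 * sigma ** 2)
               for name, pred in predictions.items()}
    lik = np.exp(np.stack([log_lik["finger"], log_lik["environment"]]))
    probs = lik / lik.sum(axis=0)
    print("P(object follows the finger frame)   ~ %.2f" % probs[0].mean())
    print("P(object fixed in environment frame) ~ %.2f" % probs[1].mean())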

Furthermore, the physical predictions can be used as prior poses for tracking, increasing accuracy and robustness, especially in difficult situations (e.g. motion blur during fast movement, partial or full occlusion).
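As a sketch of how such a prediction could feed back into tracking (names and numbers are hypothetical): the expected displacement under the motion model shifts the pose hypotheses before they are re-weighted against the new image, so only a small diffusion is needed to stay locked on during fast motion or brief occlusion.

    import numpy as np

    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 0.01, size=(200, 6))   # 6-DoF pose hypotheses

    # Hypothetical output of the motion model: the probability that the object
    # follows the finger, and the finger's displacement in the current frame.
    p_follow_finger = 0.85
    finger_motion = np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.0])

    # The expected object motion becomes the prior for the prediction step:
    # shift the hypotheses accordingly, then add only a small diffusion.
    prior_motion = p_follow_finger * finger_motion
    particles = particles + prior_motion + rng.normal(0.0, 0.002, particles.shape)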