This thesis introduces statistical classification algorithms for positron emission tomography (PET) images to support computational treatment planning in radiotherapy. Current clinical practice relies on manual delineation and on thresholding methods. These methods suffer from shortcomings connected to two properties of PET images: (1) the images are very noisy; (2) the low resolution of the images leads to partial volume effects (PVE), which hard, discrete clusterings cannot represent. To improve on the clinical state of the art, we consider probabilistic models that are capable of describing partial membership of the image entities (voxels). Since no training data sets are available for supervised learning, our approaches must estimate the labeling together with the parameters describing the models. We first study classical methods such as the expectation-maximization procedure for fitting Gaussian mixture models to the data (EMGMM). Because small clusters provide poor statistical ensembles, we study whether a Bayesian treatment of the parameters is beneficial for our task. To counteract the image distortions that arise from point-spread effects, graphical models are considered next. With graphical models we can easily capture dependencies among voxels; the price to pay is a more involved optimization procedure for the parameters as well as for the labelings. Parameter estimation becomes a convex optimization problem, which is solved by adapting the probability distributions to the corresponding empirical statistics of an intermediate labeled image. To solve the labeling problem, we either sample label configurations according to local probability distributions or apply marginalization procedures such as belief propagation. The proposed algorithms are assessed numerically using PET images of a modified NEMA sphere phantom, acquired at the General Hospital of Vienna under routine clinical settings on a Siemens Biograph TruePoint 64 PET/CT scanner.
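The EMGMM step described above can be illustrated with a minimal sketch. This is not the thesis implementation: it treats the voxel intensities as a flat 1-D array, and the function name `em_gmm` as well as the initialization (linearly spaced means, a common overall standard deviation) are our own illustrative assumptions.

```python
import numpy as np

def em_gmm(x, K, n_iter=50):
    """Fit a 1-D Gaussian mixture with K components to intensities x via EM.
    Returns responsibilities (soft labels), means, stds, and mixing weights."""
    n = x.size
    # Initialization (an assumption, not the thesis scheme):
    # means spread over the data range, one shared std, equal weights.
    mu = np.linspace(x.min(), x.max(), K)
    sigma = np.full(K, x.std() + 1e-6)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] ∝ pi_k * N(x_i | mu_k, sigma_k),
        # computed in log space for numerical stability.
        log_p = (np.log(pi)
                 - 0.5 * np.log(2 * np.pi * sigma ** 2)
                 - 0.5 * ((x[:, None] - mu) / sigma) ** 2)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the soft counts.
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return r, mu, sigma, pi
```

The responsibilities `r` are the soft (partial) memberships that motivate the probabilistic approach: a voxel affected by PVE receives fractional weight in several clusters instead of a single hard label.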
Multiple measurements have been performed using different signal-to-background ratios (SBR). To test the algorithms under demanding conditions, a small sphere of 8 mm diameter, not used in previous investigations, has been added to the NEMA phantom; likewise, a measurement with an SBR of 2.06 had not been used in previous works. The small statistical ensembles of the NEMA spheres lead to unreliable parameter estimates for the Gaussian distributions modeling the sphere voxels. As a result, the sphere volumes are overestimated and many outliers located in background regions are included. A Bayesian treatment of the parameters could not resolve this problem. Hence the mean and standard deviation of the sphere clusters are assigned ad hoc using information gained from the image, e.g. assigning the maximum intensity value to the mean of the sphere voxels. In this way, the EMGMM outperforms the clinical state-of-the-art methods both in the accuracy of its volume estimates and in its capability of detecting spheres embedded in the noisy background reservoir of the NEMA phantom. Bayesian models nevertheless yield powerful detection algorithms that improve on the detection statistics of the EMGMM. Using Markov random fields (MRF), the overestimation of small clusters can be reduced more selectively, with the drawback of also influencing the volume estimates of larger ensembles; the results remain competitive with those of the EMGMM algorithm. Moreover, with graphical models, reasonable results are obtained by employing the parameter updates derived from the model. Finally, by applying MRFs only during dedicated correction steps, results that are more stable with respect to the size of the analyzed image region are obtained.
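The MRF labeling step, in which label configurations are sampled according to local probability distributions, can be sketched as a Gibbs-sampling sweep over a 2-D image. This is a hedged illustration under our own assumptions, not the thesis algorithm: the Potts-style smoothness prior with coupling `beta`, the 4-neighbourhood, and the name `gibbs_labels` are all illustrative choices.

```python
import numpy as np

def gibbs_labels(img, mu, sigma, beta=1.0, n_sweeps=5, seed=0):
    """Sample a label field for image `img` from a simple MRF posterior:
    a per-class Gaussian likelihood (mu[k], sigma[k]) combined with a
    Potts-style prior that rewards agreement with the 4-neighbourhood,
    weighted by the coupling constant beta."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    H, W = img.shape
    K = mu.size
    # Initialize each voxel with the label of the nearest class mean.
    labels = np.abs(img[..., None] - mu).argmin(-1)
    # Precompute per-voxel log-likelihoods for every class: shape (H, W, K).
    ll = (-0.5 * np.log(2 * np.pi * sigma ** 2)
          - 0.5 * ((img[..., None] - mu) / sigma) ** 2)
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                # Count neighbour agreement for each candidate label.
                nb = [labels[i2, j2]
                      for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= i2 < H and 0 <= j2 < W]
                agree = np.bincount(nb, minlength=K)
                # Local conditional: likelihood plus smoothness prior.
                logp = ll[i, j] + beta * agree
                p = np.exp(logp - logp.max())
                p /= p.sum()
                labels[i, j] = rng.choice(K, p=p)
    return labels
```

The neighbourhood term is what counteracts the point-spread distortions mentioned above: an isolated noisy voxel is pulled toward the label of its surroundings, at the cost of the heavier optimization the text describes.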