In high-dimensional classification tasks, e.g., case-control experiments for biomarker detection, variable filters constitute a simple and scalable method to discard uninformative variables. When screening for interesting variables, however, neglecting the operating conditions of the classification task can lead to false conclusions. This thesis therefore proposes filtering statistics based on the expected prediction error of a univariate classifier. The choice of classifier largely determines the characteristics of the filtering method. An obvious parametric approach is the Bayesian classifier for a mixture of Gaussian class conditionals. This approach, however, relies heavily on its parametric assumptions, and any deviation from them dramatically reduces the performance of the filter. Opting for a non-parametric classifier seems reasonable when screening thousands of different variables. Thus, instead of assuming a parametric family for the class conditionals, we assume that the optimal classifier is a member of the family of threshold or interval classifiers. This assumption holds for a wide range of distributions. Furthermore, these methods exhibit another attractive property: the exact finite-sample distribution of the test statistic under the null hypothesis of equal class-conditional distributions can be obtained by means of a fast recursive algorithm. In this thesis, I study the proposed screening methods analytically and evaluate their screening characteristics on simulated as well as real data.
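To make the idea concrete, a threshold-classifier filter statistic can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the function name and the linear scan over sorted values are assumptions. The score of one variable is the minimal empirical misclassification rate achievable by any rule of the form "predict class 1 iff x > t" (or its reversal); small scores flag potentially informative variables.

```python
# Hypothetical sketch of a univariate threshold-classifier filter statistic
# (illustrative only; not the algorithm from the thesis).
def threshold_filter_error(values, labels):
    """Minimal misclassification rate over all threshold classifiers
    for a single variable, with binary labels in {0, 1}."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    n1 = sum(labels)      # class-1 sample count
    n0 = n - n1           # class-0 sample count
    # Rule "predict 1 iff x > t" with t below all points predicts 1
    # everywhere, so it misclassifies every class-0 sample.
    wrong = n0
    best = min(n0, n1)    # trivial rules at either extreme
    for i in order:       # slide the threshold past each sorted point
        # Sample i now falls on the "predict 0" side of the rule.
        wrong += 1 if labels[i] == 1 else -1
        best = min(best, wrong, n - wrong)  # n - wrong: reversed rule
    return best / n
```

Screening then amounts to computing this score for each variable and keeping those below a cutoff; for tied values the scan above may slightly underestimate the error, which a careful implementation would handle explicitly.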