The development of statistical methods for high-dimensional data has become an important focus of recent research. Classical regression and classification approaches require a full-rank data matrix, with more observations than variables. In many areas of application (e.g. bioinformatics and chemometrics), this assumption is not met. Sparse methods are a class of approaches in which a penalty is imposed on the coefficient estimates to favour exact zero values, so that variable selection is performed intrinsically. Outliers in the data pose another challenge in many applications: these are observations that do not follow the structure of the majority of the data and therefore violate the distributional assumptions necessary for classical model estimation. Robust methods give stable estimates in the presence of outliers and model the relationship within the majority of the data.

The focus of this thesis is the development of regression and classification methods that are suitable for high-dimensional data and for data containing outliers. Sparse partial robust M regression is a robust and sparse regression method: a robust subspace involving only a subset of the original variables is identified, and a robust regression model is estimated within it. This approach is then extended to binary classification problems. With the help of the optimal scoring approach, regression methods can be applied to classification problems; robust sparse optimal scoring is a classification method based on least trimmed squares regression. Finally, sparse and robust linear regression and logistic regression methods are introduced based on least trimmed squares with an elastic net penalty, which induces sparsity and at the same time favours similar coefficient estimates for highly correlated variables.
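A minimal sketch may illustrate how an L1-type penalty produces exact zero coefficients. Under an orthonormal design, the penalised least squares solution reduces to soft-thresholding of the ordinary least squares coefficients; the function name and data below are illustrative assumptions, not part of the methods developed in this thesis.

```python
# Minimal sketch: under an orthonormal design, the L1-penalised estimate
# is the soft-thresholded least squares coefficient,
#   sign(b) * max(|b| - lam, 0).
# Coefficients below the penalty level lam are set exactly to zero, which
# is how the penalty performs variable selection intrinsically.
# Names and data are illustrative only.

def soft_threshold(b, lam):
    """Penalised solution for one coefficient under an orthonormal design."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

# Least squares coefficients for five hypothetical variables.
b_ls = [2.5, -0.3, 0.1, -1.8, 0.05]
lam = 0.5
b_sparse = [soft_threshold(b, lam) for b in b_ls]
print(b_sparse)  # the three small coefficients become exactly zero
```

Large coefficients are merely shrunk towards zero, while small ones are removed from the model entirely, so the fitted model uses only a subset of the variables.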
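The least trimmed squares idea can likewise be sketched in its simplest form, a univariate location estimate: the estimate minimises the sum of the h smallest squared residuals, so the largest residuals (the potential outliers) do not influence the fit. The toy implementation below scans windows of consecutive order statistics (for the univariate location case the optimal h-subset is of this form); it is a sketch under these assumptions, not the algorithm used in the thesis.

```python
# Toy sketch of the least trimmed squares (LTS) idea for a location
# estimate: minimise the sum of the h smallest squared residuals.
# For univariate location the optimal h-subset consists of consecutive
# order statistics, so a scan over sorted windows suffices.
# Data and h are illustrative only.

def lts_location(x, h):
    """Mean of the h consecutive order statistics with the smallest
    within-subset sum of squared deviations."""
    xs = sorted(x)
    best_mean, best_ssq = None, float("inf")
    for i in range(len(xs) - h + 1):
        window = xs[i:i + h]
        m = sum(window) / h
        ssq = sum((v - m) ** 2 for v in window)
        if ssq < best_ssq:
            best_mean, best_ssq = m, ssq
    return best_mean

# The majority of the data lies near 10; two gross outliers are present.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 50.0, 60.0]
print(sum(data) / len(data))     # classical mean, pulled towards the outliers
print(lts_location(data, h=5))   # close to 10, unaffected by the outliers
```

The classical mean is dragged far from the majority of the data by the two outliers, while the trimmed estimate describes the majority structure, which is the behaviour the robust methods in this thesis aim for.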