In finance, volatility plays an increasingly important role. When pricing options in the Black-Scholes model, volatility is the only parameter that cannot be observed directly, so volatility estimators are needed. The obvious estimator for a given time series of high-frequency financial data is the Realized Variance, the sum of squared returns between trades. Unfortunately, this estimator diverges in the presence of Market Microstructure Noise, an effect caused by frictions such as bid-ask bounces or measurement errors. The problem can be mitigated by reducing the sampling frequency, but sampling sparsely at the optimal frequency throws away a large amount of data, up to 99 per cent. Because sparse sampling is therefore not optimal, new classes of volatility estimators were needed. First, estimators that include higher-order autocovariances were developed. Second, methods such as the Fourier method, the class of kernel estimators, and linear combinations over multiple time scales emerged. Choosing the most appropriate volatility estimator is not easy: every estimator has its own advantages and disadvantages, and convergence rate, computational effort, and asymptotic behaviour have to be balanced against each other. Correct implementation and the right choice of the free parameters are extremely important; obtaining reasonable volatility estimates requires careful data analysis and a great deal of expertise.
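The noise-induced bias of the Realized Variance, and the sparse-sampling remedy, can be illustrated with a small simulation. The sketch below is not from the text: the efficient price is modelled as a Gaussian random walk, the microstructure noise as i.i.d. additive errors, and all numerical values (number of ticks, volatility, noise level, 5-minute subsampling step) are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Simulate an efficient log-price as a random walk with known daily
# volatility, then add i.i.d. microstructure noise to each observation.
n = 23400          # assumed: one trade per second over a 6.5-hour session
sigma = 0.2        # assumed daily volatility, so integrated variance = 0.04
noise_sd = 0.001   # assumed standard deviation of the microstructure noise

efficient = [0.0]
for _ in range(n):
    efficient.append(efficient[-1] + random.gauss(0.0, sigma / math.sqrt(n)))
observed = [p + random.gauss(0.0, noise_sd) for p in efficient]

def realized_variance(prices, step=1):
    """Sum of squared returns, sampling every `step`-th observation."""
    sub = prices[::step]
    return sum((b - a) ** 2 for a, b in zip(sub, sub[1:]))

# Tick-by-tick sampling: the noise contributes roughly 2 * n * noise_sd**2,
# which here exceeds the true integrated variance sigma**2 = 0.04.
rv_full = realized_variance(observed)

# Sparse 5-minute sampling uses only 78 of the 23400 returns, discarding
# over 99 per cent of the data, but largely removes the noise bias.
rv_sparse = realized_variance(observed, step=300)

print(f"true integrated variance: {sigma**2:.4f}")
print(f"RV at full frequency:     {rv_full:.4f}")
print(f"RV at 5-minute sampling:  {rv_sparse:.4f}")
```

Running the sketch shows the full-frequency estimate inflated well above the true integrated variance, while the sparse estimate lands near it at the cost of a much noisier (high-variance) estimate, which is exactly the trade-off that motivates the more refined estimators discussed above.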