The concept of crowdsourced measurement allows service providers to outsource the expensive and time-consuming task of performance measurement to end-user devices. This approach is clearly appealing to the industry, although it comes at the cost of giving up control over the experiment. In this thesis we took a first step towards the analysis of crowdsourced data sets in order to build up a view of the current status of the network. This work aims to answer the question of whether crowdsourced data can be used to meaningfully characterize properties of the network. The thesis i) introduces theoretical concepts for the systematic description and processing of data rate series based on data volume samples; ii) describes the implementation of a generic framework that allows for repeated measurements and the benchmarking of various measurement tools; iii) analyzes the results of controlled measurements produced by the same tools that are used for crowdsourced measurements; and iv) analyzes the results of crowdsourced measurements. The controlled measurements reveal the great potential of crowdsourced open data, yielding a positive answer to the stated question. The use of a real data set, collected by RTR, the regulatory body in Austria, proved to be challenging for several reasons. The first is the pollution of the data set by systematic events, e.g. operator tests at special locations, operator optimizations in user profiling, and many more. The second is the dynamics of the data set in the temporal and spatial dimensions. The analysis and the reference measurements reveal clear time-of-day effects, e.g. pronounced diurnal load cycles in the cells. Finally, the RTR data is currently recorded in a lossy fashion; the analysis shows that the lost information cannot be recovered, and therefore limitations due to user tariffs and network dynamics cannot be precisely removed.
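The derivation of a data rate series from data volume samples, as in concept i), can be illustrated with a minimal sketch. This is an assumption-laden example, not the thesis's actual processing pipeline: the function name and the input format (monotonically increasing timestamps with cumulative transferred volumes) are hypothetical.

```python
# Hypothetical sketch of concept i): deriving a data rate series from
# cumulative data volume samples. Function name and input format are
# illustrative assumptions, not taken from the thesis implementation.

def rate_series(times_s, volumes_bytes):
    """Compute per-interval data rates in bit/s from cumulative volume samples.

    times_s       -- sample timestamps in seconds, strictly increasing
    volumes_bytes -- cumulative transferred data volume at each timestamp
    """
    rates = []
    for (t0, v0), (t1, v1) in zip(zip(times_s, volumes_bytes),
                                  zip(times_s[1:], volumes_bytes[1:])):
        dt = t1 - t0
        if dt <= 0:
            raise ValueError("sample times must be strictly increasing")
        rates.append(8 * (v1 - v0) / dt)  # bytes per interval -> bit/s
    return rates

# Example: 1 MB transferred in each of two 1-second intervals
print(rate_series([0.0, 1.0, 2.0], [0, 1_000_000, 2_000_000]))
```

Note that the resulting rate series has one fewer element than the sample series, since each rate describes the interval between two adjacent volume samples; any aggregation or smoothing of such series would build on top of this basic step.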
However, this work shows that even considering all these effects it is possible to already use the data set to gather network performance benchmark figures.