Steady-state visual evoked potentials (SSVEPs) are brain signals used to operate brain-computer interface (BCI) systems, and SSVEP-based BCIs offer excellent information transfer rates (ITRs). Reliably generating SSVEP stimuli in a virtual reality environment allows for a much easier implementation of motivating, more realistic simulations of real-world applications. The aim of this thesis is to integrate a highly configurable and flexible virtual reality application that can generate SSVEP stimuli and display feedback in the form of the movements of an avatar. Based on the Studierstube mixed reality framework, three Open Inventor (Coin3D) compatible components, (a) StbAvatar (extended), (b) StbBCICommApp (extended) and (c) StbBCIToolbox (written from scratch), were integrated and combined to create scenarios for both accuracy measurements with a phototransistor circuit and online EEG experiments with five subjects. The experiments were successful and showed that both moving and static, triangular or rectangular non-VSync software SSVEP stimuli at frequencies between 5 and 29 Hz on a standard 60 Hz TFT monitor are suitable to elicit steady-state visual evoked potentials. The results were reasonably good in direct comparison: two out of five subjects performed worse by less than 7%, and only one out of five performed significantly worse, by 25%. The amplitudes of the peaks at the target frequencies and harmonics in the FFT spectra of the software SSVEP measurements were smaller for all but one subject, which suggests that the quality of the software SSVEP stimulation is to a certain degree inferior to that of the LED stimulation. This research direction could lead to vastly improved immersive virtual environments that allow both disabled and healthy users to seamlessly navigate, communicate or interact through an intuitive, natural and friendly interface.
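To illustrate the kind of non-VSync software stimulation described above, the sketch below computes the instantaneous on/off state of a rectangular flicker and the intensity of a triangular flicker from elapsed wall-clock time rather than from the monitor's frame counter. This is only a minimal illustration of the general technique, not code from the thesis; the function names and the time-based phase formulation are assumptions introduced here.

```python
def stimulus_state(freq_hz: float, t: float) -> bool:
    """Rectangular flicker: True if the stimulus is 'on' at time t (seconds).

    The phase is derived from wall-clock time, so the waveform is not locked
    to the display's vertical sync (non-VSync stimulation).
    """
    phase = (t * freq_hz) % 1.0  # position within the current cycle, 0..1
    return phase < 0.5           # on for the first half of each cycle


def triangular_intensity(freq_hz: float, t: float) -> float:
    """Triangular flicker: intensity in [0, 1] ramping up then down each cycle."""
    phase = (t * freq_hz) % 1.0
    return 2.0 * phase if phase < 0.5 else 2.0 * (1.0 - phase)
```

In a rendering loop, each frame would query the current time and draw the stimulus according to the returned state or intensity; on a 60 Hz monitor the waveform is then quantized to frame boundaries, which is one plausible source of the smaller FFT peaks observed for software stimuli compared to LEDs.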