The human visual system can process only a fraction of the details in our environment at a time. Since the human eye resolves only about 2° of visual angle with high acuity, visual attention guides eye movements to collect details for the most important features or objects. Attention thus plays a major role in visual perception: it determines what we see and what we do not see. This thesis is concerned with visual attention in virtual environments that are generated with real-time computer-graphics technology. Within this scope, research has been carried out in two directions: on the one hand, it was investigated how three-dimensional virtual environments can be used to better study visual attention; on the other hand, approaches were explored that try to improve three-dimensional graphics by taking a user's visual attention into account. Since eye movements are the most obvious external manifestation of visual attention, eye tracking is the technology of choice for observing a user's visual attention. In a setup with a display that shows a rendered image of a three-dimensional environment, an eye tracker outputs a 2D screen-space coordinate that corresponds to the direction of the user's gaze. Although eye-tracking methodology has recently advanced, previous research focused mostly on analyzing eye movements in screen space. This approach is less appropriate for analyzing a user's attention in dynamic, three-dimensional applications, where the viewpoint and scene objects change often, sometimes even rapidly, so the stimulus at any given point on the display must be assumed to vary over time. It is therefore more interesting to know what a user is attending to than which pixel the gaze is directed at. A main idea of this thesis is hence to go beyond screen space and to correlate gaze with 3D scene objects instead of 2D pixels.
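Correlating a 2D gaze sample with a 3D scene object can be pictured as a ray cast from the camera through the gaze point into the scene. The sketch below illustrates this idea under simplifying assumptions of my own (a perspective camera at the origin looking down the negative z-axis, and a scene consisting only of spheres); the function name `gaze_to_object` and all parameters are hypothetical and not the thesis's actual implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class Sphere:
    name: str
    center: tuple   # (x, y, z) in camera space
    radius: float

def gaze_to_object(gaze_px, screen_w, screen_h, fov_y_deg, spheres):
    """Map a 2D gaze sample (in pixels) to the first scene object hit
    by a ray cast from the camera through that screen position."""
    # Convert the pixel position to a view-space ray direction.
    aspect = screen_w / screen_h
    tan_half = math.tan(math.radians(fov_y_deg) / 2.0)
    vx = (2.0 * gaze_px[0] / screen_w - 1.0) * aspect * tan_half
    vy = (1.0 - 2.0 * gaze_px[1] / screen_h) * tan_half
    d = (vx, vy, -1.0)                     # camera looks down -z
    norm = math.sqrt(sum(c * c for c in d))
    d = tuple(c / norm for c in d)

    hit, t_min = None, float("inf")
    for s in spheres:
        # Ray-sphere intersection with ray origin at (0, 0, 0):
        # solve |t*d - center|^2 = radius^2 for the nearest t > 0.
        oc = tuple(-c for c in s.center)
        b = 2.0 * sum(oc[i] * d[i] for i in range(3))
        c0 = sum(x * x for x in oc) - s.radius ** 2
        disc = b * b - 4.0 * c0
        if disc >= 0.0:
            t = (-b - math.sqrt(disc)) / 2.0
            if 0.0 < t < t_min:
                hit, t_min = s.name, t
    return hit                              # object name, or None on a miss
```

A real renderer would instead cast the ray against the full scene geometry (or read back an object-ID buffer), but the principle of resolving gaze to an object rather than a pixel is the same.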
Analyzing gaze data in object space allows linking visual attention to object properties, i.e., semantics, which may have the strongest influence on gaze behavior. Moreover, since attention is usually directed at objects rather than at locations, inferring what a user is attending to is the more appropriate approach for algorithms that perceptually optimize graphics. To approach this ambitious goal of linking visual attention to semantics, two challenges have been addressed: first, inferring the object of attention at a certain point in time from the current output of an eye tracker, a technique we denote as gaze-to-object mapping; and second, deriving a statistical model of visual attention, a data structure we denote as an importance map, from sequences of gaze samples recorded from many users. While addressing these challenges is a crucial step towards advancing gaze analysis and research on visual attention that employs modern computer graphics, the results may also be used in applications that attempt to perceptually optimize rendering. This defines the third challenge addressed in this thesis: to explore an example application for attention-aware rendering techniques, in which gaze-to-object mapping or importance maps are employed to determine or predict the object of attention at run time. The thesis therefore concludes with a pilot study on an application that dynamically adjusts the configuration of a stereoscopic 3D display so that the object the user is attending to can be viewed most comfortably.
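In its simplest form, an importance map derived from many users' gaze recordings is a per-object statistic such as normalized total dwell time. The sketch below illustrates this kind of aggregation; the record layout (per-user lists of object/dwell-time pairs, with `None` marking samples that hit no object) and the function name `build_importance_map` are illustrative assumptions, not the thesis's actual data structure.

```python
from collections import defaultdict

def build_importance_map(gaze_records):
    """Aggregate per-object dwell time across users into a normalized
    importance score (a simple relative-frequency model of attention)."""
    totals = defaultdict(float)
    for user_samples in gaze_records:        # one record list per participant
        for obj, duration in user_samples:   # (object id, dwell time in seconds)
            if obj is not None:              # None = gaze hit no scene object
                totals[obj] += duration
    grand_total = sum(totals.values())
    if grand_total == 0.0:
        return {}
    # Normalize so scores over all attended objects sum to 1.
    return {obj: t / grand_total for obj, t in totals.items()}
```

Such a map can then serve as a prior at run time, e.g., to disambiguate gaze-to-object mapping or to predict which object a new user is likely to attend to.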