While outdoor navigation has by now received much attention and converged on fairly standardized technologies and techniques for automatic localization and navigation (mainly based on GPS tracking), the problem of navigating people indoors has received relatively less attention. The very different nature of outdoor and indoor spaces also makes the problem of navigating people through indoor environments very different from the outdoor navigation problem. Most of the indoor navigation techniques developed so far try to replicate the satellite triangulation applicable outdoors by replacing the GPS signal with other signals such as Wi-Fi, RFID, and Bluetooth. Another type of approach tries, as soon as GPS coverage is lost, to build movement trajectories from sensors available in modern mobile devices, such as compasses, magnetometers, and accelerometers. This work explores and analyses a different solution to navigating people in indoor environments, one that does not require the installation or use of any extra devices in the environment. The idea is to navigate people based on qualitative coordinates, which can be described as what one can see at a certain position. The main requirement to exploit this technique is a detailed database of the environment, so that visibility between objects can be computed. With that information we divide space into regions, i.e. we extract the qualitative coordinates. Using these regions, it is possible to create landmark-based instructions, which, according to the literature, are cognitively more appropriate than metric instructions, in the sense that they impose a lower cognitive load on the user. Since visibility is a spatial property directly related to 3D space, we present the idea of visibility-based navigation using the data in 3D.
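As a minimal illustration of the underlying idea, the following sketch computes a qualitative coordinate in 2D as the set of landmarks visible from a position. The scene (a "door" and "stairs" landmark, one axis-aligned rectangular obstacle) and all names are hypothetical, not taken from the system described here; visibility is reduced to a simple segment-crossing test.

```python
# Hypothetical scene: point landmarks and axis-aligned rectangular obstacles.
LANDMARKS = {"door": (0.0, 0.0), "stairs": (10.0, 0.0)}
OBSTACLES = [((4.0, -1.0), (6.0, 1.0))]  # ((xmin, ymin), (xmax, ymax))

def segments_intersect(p, q, a, b):
    """True if segment p-q properly crosses segment a-b (orientation test)."""
    def orient(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2 = orient(a, b, p), orient(a, b, q)
    d3, d4 = orient(p, q, a), orient(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def blocked(p, q, rect):
    """True if the sight line p-q crosses any edge of the rectangle."""
    (x0, y0), (x1, y1) = rect
    corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
    edges = zip(corners, corners[1:] + corners[:1])
    return any(segments_intersect(p, q, a, b) for a, b in edges)

def qualitative_coordinate(pos):
    """The set of landmarks visible from pos acts as its qualitative coordinate."""
    return frozenset(
        name for name, lm in LANDMARKS.items()
        if not any(blocked(pos, lm, rect) for rect in OBSTACLES)
    )
```

Positions yielding the same set of visible landmarks fall into the same region, so such regions partition the free space into the qualitative coordinates the text refers to.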
More specifically, we analyse the extension to 3D space of an existing 2D qualitative localization and navigation system based on visibility information, present the main challenges that need to be tackled to obtain a functional implementation of such a landmark-based navigation system based on 3D visibility information, and address a subset of them. The analysis shows that we do not need to compute visibility between all objects in the database, but only between landmarks and obstacles. We therefore propose criteria to minimize the number of visibility regions produced: objects with a specific role in the indoor environment should be used to simplify some regions or to reduce the computational load. Furthermore, we suggest avoiding, in the route instructions, landmarks that are not fixed in time or place. We also found that the agent's point of view must be taken into account to navigate them successfully, and we recommend merging the metric and topological graphs into a single weighted graph. However, some challenges remain for future work, such as the implementation of the cutting function, the agent's panoramic view, the use of transparent objects in 3D space, and visibility-based simplification of the instructions.
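One way to picture the recommended merge of the metric and topological graphs is a single weighted graph whose nodes are regions, whose edges come from topological adjacency, and whose weights come from metric distance. The region names and coordinates below are hypothetical placeholders, not data from the system described here.

```python
import heapq
import math

# Hypothetical regions with representative metric coordinates.
COORDS = {"lobby": (0.0, 0.0), "corridor": (5.0, 0.0), "room_12": (5.0, 4.0)}
# Topological adjacency: which regions are directly reachable from which.
ADJACENT = [("lobby", "corridor"), ("corridor", "room_12")]

def build_weighted_graph():
    """Merge topology (edges) and metric data (weights) into one graph."""
    graph = {name: [] for name in COORDS}
    for u, v in ADJACENT:
        w = math.dist(COORDS[u], COORDS[v])  # metric edge weight
        graph[u].append((v, w))
        graph[v].append((u, w))
    return graph

def shortest_path(graph, start, goal):
    """Dijkstra over the merged graph; returns the region sequence and cost."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None, math.inf
```

The resulting region sequence is exactly the kind of structure from which landmark-based instructions ("walk towards the corridor, then towards room 12") can be generated, while the metric weights still allow the shortest route to be chosen.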