The essential objective of physiological studies is the measurement and evaluation of various biosignals, which are often acquired continuously and simultaneously. The physiological characteristics under investigation are usually concealed within the recorded signals or their interrelations and have to be extracted using various data processing methods. Revealing the internal information of a signal (e.g. the heart rate from an electrocardiogram) alone, however, is not sufficient to obtain the desired results. External information describing the specific context (e.g. events, study group, etc.) has to be assigned to the data as well to allow meaningful comparison and evaluation. Unfortunately, this task is hindered by the lack of comprehensive solutions and unified procedures. To address this issue, a strategic approach was developed to improve efficiency and deepen the understanding of data processing and organisation management in physiological studies and experiments. Based on the specific requirements of internal information management, a layer model was designed that defines four distinct stages of data processing in order to standardise the common procedures of this task. These layers are (1) data acquisition (e.g. recording of signals), (2) validation (removal of artefacts), (3) preprocessing (calculation of physiological parameters) and (4) statistics (extraction of tendencies). In addition, a concept for a server-based software architecture capable of handling the identified challenges of internal and external information management on a large scale was created. A standalone prototype focusing on external information management was implemented and applied to data obtained from five previously published experimental studies. It was demonstrated that the structure and overview of physiological studies can be greatly improved by this systematic approach.
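To make the four layers concrete, the following is a minimal illustrative sketch, not the authors' implementation: all function names and the synthetic R-R interval data are hypothetical, chosen only to show how a signal might pass through the four stages in sequence.

```python
# Hypothetical sketch of the four-layer model: acquisition -> validation
# -> preprocessing -> statistics. Data and thresholds are illustrative only.

def acquire() -> list[float]:
    # Layer 1: stand-in for signal recording; synthetic R-R intervals in ms.
    return [800.0, 810.0, 2500.0, 790.0, 805.0]  # 2500 ms is an artefact

def validate(rr: list[float]) -> list[float]:
    # Layer 2: artefact removal, here by rejecting implausible intervals.
    return [x for x in rr if 300.0 <= x <= 2000.0]

def preprocess(rr: list[float]) -> list[float]:
    # Layer 3: calculate a physiological parameter, e.g. heart rate in bpm.
    return [60000.0 / x for x in rr]

def statistics(hr: list[float]) -> float:
    # Layer 4: extract a tendency, here the mean heart rate.
    return sum(hr) / len(hr)

mean_hr = statistics(preprocess(validate(acquire())))
```

Each layer consumes the output of the previous one, so the stages can be developed, validated and replaced independently.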
Furthermore, data sharing similar external information could be retrieved quickly and efficiently for further processing. The full potential of this novel concept, however, has yet to be explored in future implementations focusing on the aspects of centralisation and data processing.
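The retrieval of data by shared external information can be sketched as a simple metadata query; the following is an illustrative assumption about how such a lookup might work, with hypothetical class and field names, not the prototype's actual design.

```python
# Hypothetical sketch: external information stored as key/value metadata
# alongside each recording, queried by matching all given criteria.
from dataclasses import dataclass, field

@dataclass
class Recording:
    signal: list[float]
    meta: dict[str, str] = field(default_factory=dict)  # external information

def query(recordings: list[Recording], **criteria: str) -> list[Recording]:
    # Return every recording whose metadata matches all key/value pairs.
    return [r for r in recordings
            if all(r.meta.get(k) == v for k, v in criteria.items())]

db = [
    Recording([1.0, 2.0], {"study": "A", "group": "control"}),
    Recording([3.0, 4.0], {"study": "A", "group": "treatment"}),
    Recording([5.0, 6.0], {"study": "B", "group": "control"}),
]
controls = query(db, group="control")  # control recordings from both studies
```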