Modern building automation systems have to deal with increasingly complex tasks. From an abstract point of view, the control units of these systems have to solve goal-oriented problems in a given environment while meeting security constraints and other conditions.
State-of-the-art rule-based algorithms are already reaching their limits, and new, more "intelligent" methods for decision making are the focus of worldwide research.
The ARS project (Artificial Recognition System) is developing a concept that combines models from the fields of psychoanalysis and neurology with methods of artificial intelligence. To evaluate the possibilities of this approach, a new artificial-life simulation is being developed. In the course of this work, an interface between the agent's body and its decision unit is designed and implemented to allow the agent to perform actions in this virtual world.
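As a rough illustration of what such a boundary between decision unit and body could look like, consider the following sketch. Java and all identifiers (ActionCommand, ActionSink, AgentBody) are assumptions made here for illustration, not the project's actual API.

interface ActionCommand {
    /** Energy the body has to spend to carry out this action. */
    double energyDemand();

    /** True if this command must not run in the same step as the other one. */
    boolean excludes(ActionCommand other);

    /** Apply the action's effect to the agent's body. */
    void perform(AgentBody body);
}

/** Minimal view of the body as seen by the action infrastructure. */
interface AgentBody {
    double availableEnergy();
    void consumeEnergy(double amount);
}

/** Entry point the body offers to the decision unit for issuing commands. */
interface ActionSink {
    void call(ActionCommand command);
}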
Due to the agile development process of the project and other constraints, the new components are required to be particularly extensible, robust, and testable. Proven design patterns are chosen to ensure this, and a general infrastructure for calling and executing actions is designed. The respective classes allow commands to be issued and collected until the execution phase of the simulation. There they are validated and checked for energy demand, mutual exclusions, and other restrictions before they are actually dispatched. The first 20 actions are also implemented, allowing the agent to move, eat, attack, and more.
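The described collect/validate/dispatch cycle resembles the classic Command pattern. The following sketch shows how an executor built on the interfaces above might work; again, the class and method names are hypothetical, not the project's actual implementation.

import java.util.ArrayList;
import java.util.List;

/** Collects commands during the decision phase and runs them in the execution phase. */
final class ActionExecutor implements ActionSink {
    private final List<ActionCommand> pending = new ArrayList<>();

    /** Decision phase: commands are only collected, never executed directly. */
    @Override
    public void call(ActionCommand command) {
        pending.add(command);
    }

    /** Execution phase: check energy demand and mutual exclusions, then dispatch. */
    public void execute(AgentBody body) {
        List<ActionCommand> accepted = new ArrayList<>();
        for (ActionCommand command : pending) {
            boolean affordable = command.energyDemand() <= body.availableEnergy();
            boolean compatible = accepted.stream().noneMatch(command::excludes);
            if (affordable && compatible) {
                body.consumeEnergy(command.energyDemand());
                command.perform(body); // the action actually takes effect here
                accepted.add(command);
            }
        }
        pending.clear(); // rejected commands are simply dropped for this step
    }
}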
Using these commands, the first real test simulations can already be carried out, putting the new components to actual use. Since other developers will use and extend the software later, a documentation template is developed to create a uniform, compact, yet exhaustive catalogue of actions.
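To give an impression of what one entry in such a catalogue might look like, here is a hypothetical example for the eat action, written as a Javadoc-style header over the interfaces sketched above; the project's actual template is not specified in this abstract.

/**
 * Action: EAT
 *
 * Purpose:     consume an energy source within reach of the agent
 * Parameters:  the target object to be eaten
 * Energy:      demand reported via energyDemand()
 * Exclusions:  conflicting commands are rejected via excludes()
 * Dispatch:    performed by the executor during the execution phase
 */
final class EatCommand implements ActionCommand {
    @Override public double energyDemand() { return 1.0; } // value assumed
    @Override public boolean excludes(ActionCommand other) {
        return false; // a real rule would check for conflicting commands
    }
    @Override public void perform(AgentBody body) {
        // the body-specific eating behaviour would go here
    }
}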