In this work, a robust and simple voice command system is developed, implemented and tested. The algorithms are particularly suitable for speaker-independent and speaker-dependent isolated-word recognition, for example for environmental control.
i) State of the art. In voice command systems, command words are usually represented by command word models. Each command word model consists of a sequence of "sub-word unit models", for example "phoneme models". The Viterbi dynamic programming algorithm is currently the de facto standard for determining the naturally varying durations of the "sub-word units", for example linguistic "phones", that are assumed to be contained in an observed speech signal while a command word model is evaluated. The points in time at which a command word model switches from one sub-word unit model to the next are thereby determined optimally, in a well-defined sense. The Viterbi dynamic programming algorithm is an integral part of hidden Markov model speech recognizers.
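The standard Viterbi recursion referred to above can be sketched as follows. This is a generic log-space illustration, not the implementation used in this work; all names (`log_init`, `log_trans`, `log_emit`) are assumptions of the sketch.

```python
import math

def viterbi(obs, n_states, log_init, log_trans, log_emit):
    """Most likely state sequence for `obs` under a hidden Markov model;
    all quantities are log probabilities to avoid numeric underflow."""
    T = len(obs)
    # delta[t][s]: best log score of any state path ending in state s at time t
    delta = [[-math.inf] * n_states for _ in range(T)]
    back = [[0] * n_states for _ in range(T)]
    for s in range(n_states):
        delta[0][s] = log_init[s] + log_emit(s, obs[0])
    for t in range(1, T):
        for s in range(n_states):
            prev = max(range(n_states),
                       key=lambda p: delta[t - 1][p] + log_trans[p][s])
            delta[t][s] = (delta[t - 1][prev] + log_trans[prev][s]
                           + log_emit(s, obs[t]))
            back[t][s] = prev
    # backtrack from the best final state
    path = [max(range(n_states), key=lambda s: delta[T - 1][s])]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    path.reverse()
    return path
```

The backpointers recovered here are exactly what yields the switching times between sub-word unit models: each change of state along the decoded path marks one such point in time.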
ii) Main thesis. It is evaluated whether, according to the main thesis, a particular modification of the Viterbi dynamic programming algorithm, called the "run-length limited dynamic programming algorithm", causes a statistically significant reduction of the confusion error rate. With this modification, the points in time at which the sub-word unit models are switched are searched within a restricted set of solutions, containing only those solutions for which additional constraints on the run-lengths of the sub-word unit models hold.
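One way to illustrate such a restricted search is a segment-based dynamic program over a left-to-right sequence of sub-word unit models, in which every run-length is constrained to an interval [d_min, d_max]. This is a minimal sketch under that assumption, not the algorithm of the thesis; the function and parameter names are hypothetical.

```python
import math

def run_length_limited_dp(log_emit, T, d_min, d_max):
    """Segment T frames into len(log_emit) consecutive runs, one per
    sub-word unit model, each run-length in [d_min, d_max].
    log_emit[u][t] is the log emission score of unit u for frame t.
    Returns the switching times and the best total score."""
    U = len(log_emit)
    # prefix sums so a segment score is a difference of two entries
    pre = [[0.0] * (T + 1) for _ in range(U)]
    for u in range(U):
        for t in range(T):
            pre[u][t + 1] = pre[u][t] + log_emit[u][t]
    # best[u][t]: best score covering frames 0..t-1 with units 0..u
    best = [[-math.inf] * (T + 1) for _ in range(U)]
    back = [[0] * (T + 1) for _ in range(U)]
    for d in range(d_min, min(d_max, T) + 1):
        best[0][d] = pre[0][d]
    for u in range(1, U):
        for t in range(1, T + 1):
            for d in range(d_min, min(d_max, t) + 1):
                cand = best[u - 1][t - d] + pre[u][t] - pre[u][t - d]
                if cand > best[u][t]:
                    best[u][t] = cand
                    back[u][t] = t - d  # unit u starts at frame t - d
    # recover the switching times t_1 < ... < t_{U-1}
    bounds, t = [], T
    for u in range(U - 1, 0, -1):
        t = back[u][t]
        bounds.append(t)
    bounds.reverse()
    return bounds, best[U - 1][T]
```

Setting d_min = 1 and d_max = T removes the constraints and recovers the unconstrained segmentation of the standard approach.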
iii) Side theses. It is further evaluated whether, according to the side theses, the following measures cause a statistically significant reduction of the confusion error rate: linear transformation of the feature vectors according to a linear discriminant analysis; use of alternative window functions for the short-time Fourier transform; improved modeling of the command words by additional prediction models consisting of "hidden control neural networks".

iv) Methods. The measures to be evaluated are activated in all possible combinations and tested on two speech databases (German command words, English command words). Each measure is tested for statistical significance by means of a Wilcoxon signed-rank test with Bonferroni correction.

v) Results. The measure of the main thesis is the only one that leads to a statistically significant reduction of the confusion error rate. In that case, the improved modeling of the command words using "hidden control neural networks" is not necessary, and simple "centroid sub-word unit models" can be used instead.
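The significance-testing procedure of iv) can be sketched as follows: a two-sided Wilcoxon signed-rank test (here with the common normal approximation and average ranks for ties, which is one of several conventions) applied to paired error rates, with the Bonferroni-corrected threshold alpha / m for m simultaneous tests. The function names and the choice of approximation are assumptions of this sketch.

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test on paired samples x, y
    (zero differences dropped, tied ranks averaged, normal approximation)."""
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    # rank the absolute differences, averaging the ranks of ties
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p

def bonferroni_significant(p_values, alpha=0.05):
    """Bonferroni correction: each p-value is compared to alpha / m."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```

For the small sample sizes typical of per-speaker comparisons, exact tables for the Wilcoxon statistic would normally replace the normal approximation used here.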
vi) Application. The voice command system developed in this work enables the AUTONOMY system [Zagler1997] [Panek2002] [Loidolt1995] to be controlled by voice commands. The AUTONOMY system combines an environmental control system with an alternative and augmentative communication system. Based on the needs of people with disabilities, requirements for the system design of the voice command system are stated. In the course of this work the voice command system is implemented and prepared for practical operation. Notably, the voice command system is designed to be usable by people with speech disorders: the command words can be chosen freely and individually, as long as the implemented "automatic quality check module" judges the command word recordings sufficient to provide good classification quality. In preparation for future research, special logging dialogs are configured, which occasionally ask the user whether the system understood the command correctly.