Holt, W. (2018). Sparse superposition codes for the Gaussian channel [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2018.44021
Sparse regression codes (also called sparse superposition codes) have recently been proposed, and it has been shown that these codes asymptotically, i.e., for large block size, achieve the performance limits of information theory. The codes are formed from Gaussian random sequences arranged as the columns of a large matrix; groups of data bits select which column vectors are superimposed for transmission over a Gaussian channel. The encoding process can therefore be written as a matrix-vector multiplication, where the matrix contains the Gaussian code sequences and the data vector has only a few non-zero components (determined by the data bits) that identify which column vectors form the codeword. Since this vector is sparse, decoding can be viewed as a sparse recovery problem, a topic that has recently been studied intensively in the field of compressed sensing. Some of the best sparse recovery algorithms are based on approximate message passing (AMP), which is also a good candidate decoder for sparse regression codes (SPARCs). A particularity of SPARCs is that good AMP decoding requires more than constant-amplitude values to mark the chosen column vectors: a sophisticated power allocation scheme must determine the scaling factors, and only then can the theoretical limits be achieved. The first goal of the thesis is to understand the construction, encoding, power allocation, and decoding of SPARCs for the unconstrained Gaussian channel from the recent literature. In a second step, the codes shall be implemented (mostly in Matlab) and simulations shall be conducted to verify the performance known from the literature. In terms of novelty, several interesting topics arise: one is to investigate SPARCs at various block sizes, to understand how these codes would work in practical applications with strong delay constraints, which naturally limit the block size.
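The encoding step described above can be illustrated with a short sketch. The thesis implementation is in Matlab; the following is a hypothetical NumPy version with toy parameters (block length, section sizes, and SNR chosen only for illustration), using the exponentially decaying power allocation P_l ∝ 2^(−2Cl/L) known from the SPARC literature:

```python
import numpy as np

def sparc_encode(bits, A, L, M, powers):
    """Map data bits to a SPARC codeword x = A @ beta.

    A has n rows and L*M columns, viewed as L sections of M columns each;
    log2(M) bits select one column per section, and the selected entry of
    beta is set to sqrt(n * P_l), where P_l is the section's allocated power.
    """
    n = A.shape[0]
    logM = int(np.log2(M))
    weights = 2 ** np.arange(logM - 1, -1, -1)   # binary word -> integer index
    beta = np.zeros(L * M)
    for l in range(L):
        idx = int(np.dot(bits[l * logM:(l + 1) * logM], weights))
        beta[l * M + idx] = np.sqrt(n * powers[l])
    return A @ beta, beta

# Hypothetical toy parameters (for illustration only, not from the thesis):
rng = np.random.default_rng(0)
n, L, M, snr = 64, 4, 8, 15.0        # block length, sections, section size, SNR
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))
# Exponentially decaying allocation P_l ~ 2^(-2*C*l/L), with C the channel
# capacity, normalized so the total power equals snr (noise variance 1).
C = 0.5 * np.log2(1.0 + snr)
powers = 2.0 ** (-2.0 * C * np.arange(L) / L)
powers *= snr / powers.sum()
bits = rng.integers(0, 2, L * int(np.log2(M)))
x, beta = sparc_encode(bits, A, L, M, powers)
```

Each section contributes exactly one scaled column, so the message vector beta has L non-zero entries and the codeword's average per-symbol power is approximately the total allocated power.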
Another interesting question is whether it makes sense to concatenate sparse regression codes with classical binary channel codes and, if possible, to apply iterative decoding between the component codes.
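The AMP decoder mentioned in the abstract can also be sketched. This is a hedged, self-contained toy version (again in NumPy rather than the thesis's Matlab, with all parameters assumed for illustration): each iteration forms an effective observation, applies a section-wise softmax denoiser weighted by the power allocation, and updates the residual with the Onsager correction term:

```python
import numpy as np

def amp_decode(y, A, L, M, powers, n_iter=25):
    """Sketch of an AMP decoder for a SPARC.

    Each iteration: estimate the effective-noise variance from the residual,
    denoise section by section with a power-weighted softmax, then recompute
    the residual including the Onsager correction term.
    """
    n = A.shape[0]
    P = powers.sum()
    c = np.sqrt(n * powers)               # per-section coefficient magnitudes
    beta = np.zeros(L * M)
    z = y.copy()
    for _ in range(n_iter):
        tau2 = (z @ z) / n                # empirical effective-noise variance
        s = beta + A.T @ z                # effective observation: beta + noise
        for l in range(L):
            arg = s[l * M:(l + 1) * M] * c[l] / tau2
            w = np.exp(arg - arg.max())   # numerically stable softmax
            beta[l * M:(l + 1) * M] = c[l] * w / w.sum()
        # Residual with the Onsager correction term
        z = y - A @ beta + (z / tau2) * (P - (beta ** 2).sum() / n)
    return beta

# Hypothetical toy run (assumed parameters, not the thesis setup):
rng = np.random.default_rng(1)
n, L, M, snr = 64, 4, 8, 15.0
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))
C = 0.5 * np.log2(1.0 + snr)
powers = 2.0 ** (-2.0 * C * np.arange(L) / L)
powers *= snr / powers.sum()
beta_true = np.zeros(L * M)
for l in range(L):                        # pick one random column per section
    beta_true[l * M + rng.integers(M)] = np.sqrt(n * powers[l])
y = A @ beta_true + rng.normal(size=n)    # unit-variance Gaussian noise
beta_hat = amp_decode(y, A, L, M, powers)
```

The softmax denoiser returns, within each section, the posterior weights of the M candidate columns scaled by that section's coefficient, so a hard decision is obtained by taking the arg-max of each section of beta_hat.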