This thesis addresses one of the most fundamental challenges for modern science: how can the brain, as a network of neurons, process information, how can it create and store internal models of our world, and how can it infer conclusions from ambiguous data? The author addresses these questions with the rigorous language of mathematics and theoretical physics, an approach that requires a high degree of abstraction to transfer the results of wet-lab biology to formal models.
The thesis starts with an in-depth description of the state of the art in theoretical neuroscience, which it subsequently uses as a basis for developing several new and original ideas.
Throughout the text, the author connects the form and function of neuronal networks. This is done with the aim of reproducing the functional performance of biological brains by transferring their form to synthetic electronic substrates, an approach referred to as neuromorphic computing. The fact that this transfer can never be perfect, but necessarily leads to performance differences, is substantiated and explored in detail.
The author also introduces a novel interpretation of the firing activity of neurons. He proposes a probabilistic interpretation of this activity and shows, by means of formal derivations, that stochastic neurons can sample from internally stored probability distributions. This is corroborated by the author's recent findings, which confirm that biological features such as the high-conductance state of networks enable this mechanism. The author goes on to show that neural sampling can be implemented on synthetic neuromorphic circuits, paving the way for future applications in machine learning and cognitive computing, for example as energy-efficient implementations of deep learning networks.
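To give a flavor of what "sampling from an internally stored probability distribution" means in practice, the following is a minimal sketch, assuming a Boltzmann-machine-style network in which each binary neuron fires with a logistic probability of its momentary membrane potential. The network size, weights, and biases here are illustrative assumptions, not taken from the thesis, and the update scheme is plain Gibbs sampling rather than the author's specific spiking model.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(seed=42)

# Hypothetical 3-neuron network (values chosen for illustration only).
# Symmetric weights W and biases b define the internally stored
# Boltzmann distribution p(z) ∝ exp(b·z + ½ zᵀWz) over binary states z.
W = np.array([[ 0.0, 1.2, -0.8],
              [ 1.2, 0.0,  0.5],
              [-0.8, 0.5,  0.0]])
b = np.array([-0.4, 0.2, 0.1])

def neural_sampling(n_steps=100_000):
    """Gibbs sampling with stochastic neurons: each neuron fires with a
    logistic probability given its momentary membrane potential."""
    z = rng.integers(0, 2, size=3).astype(float)
    samples = np.empty((n_steps, 3))
    for t in range(n_steps):
        for k in range(3):
            u_k = b[k] + W[k] @ z                    # membrane potential
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u_k)))
        samples[t] = z
    return samples

samples = neural_sampling()

# Compare empirical state frequencies with the target distribution.
states = np.array(list(product([0.0, 1.0], repeat=3)))
log_p = states @ b + 0.5 * np.einsum('ij,jk,ik->i', states, W, states)
p_target = np.exp(log_p) / np.exp(log_p).sum()
for s, p in zip(states, p_target):
    empirical = np.mean(np.all(samples == s, axis=1))
    print(s, f"empirical={empirical:.3f}  target={p:.3f}")
```

After sufficiently many update steps, the histogram of visited network states converges to the Boltzmann distribution defined by W and b, which is the sense in which a network of stochastic neurons samples from a distribution stored in its connectivity.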
The thesis offers an essential resource for newcomers to the field, and an inspiration for scientists working in theoretical neuroscience and on the future of computing.