This open access book provides a comprehensive overview of the state of the art in research and
applications of Foundation Models and is intended for readers familiar with basic Natural
Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has
been developed for training models for NLP. These models are first pre-trained on large
collections of text documents to acquire general syntactic knowledge and semantic information.
Then they are fine-tuned for specific tasks, which they can often solve with superhuman
accuracy.
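To make the pre-train/fine-tune paradigm concrete, the sketch below fine-tunes a pre-trained
BERT model for sentiment classification with the Hugging Face transformers and datasets
libraries; the checkpoint, the IMDB data, and all hyperparameters are illustrative
assumptions rather than examples taken from the book.

    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    # Start from a model pre-trained on large text collections (assumption:
    # bert-base-uncased), then adapt it to one specific downstream task.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length")

    train_data = (load_dataset("imdb", split="train")
                  .shuffle(seed=42).select(range(2000))   # small demo subset
                  .map(tokenize, batched=True))

    trainer = Trainer(model=model,
                      args=TrainingArguments(output_dir="bert-imdb",
                                             num_train_epochs=1),
                      train_dataset=train_data)
    trainer.train()   # fine-tuning: gradient updates for the specific task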
When the models are large enough, they can be instructed by prompts to solve new tasks
without any fine-tuning. Moreover, they can be applied to a wide range of different media
and problem domains, ranging from image and video processing to robot control learning.
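For instance, prompting a generative language model can be as simple as the following
sketch; the GPT-2 checkpoint is an illustrative assumption, and a model this small only
demonstrates the mechanics, not the few-shot accuracy of much larger models.

    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # The new task (English-to-French translation) is specified entirely in
    # the prompt; no gradient updates or fine-tuning take place.
    prompt = ("Translate English to French:\n"
              "sea otter => loutre de mer\n"
              "cheese =>")
    print(generator(prompt, max_new_tokens=5)[0]["generated_text"])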
Because they provide a blueprint for solving many tasks in artificial intelligence, they
have been called Foundation Models. After a brief introduction to basic NLP models, the
main pre-trained language models (BERT, GPT, and the sequence-to-sequence Transformer) are
described, together with the concepts of self-attention and context-sensitive embeddings.
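As background for readers who want to see the mechanism, the following NumPy sketch
implements scaled dot-product self-attention, the operation that turns static token
embeddings into context-sensitive ones; the matrix shapes and random weights are
illustrative assumptions.

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # Each row of X is one token embedding; each output row mixes
        # information from all tokens, making it context-sensitive.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])        # token-token affinities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True) # softmax over tokens
        return weights @ V                             # weighted mix of values

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))                        # 4 tokens, dimension 8
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)         # -> (4, 8)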
Then different approaches to improving these models are discussed, such as expanding the
pre-training criteria, increasing the length of input texts, or including extra knowledge.
An overview of the best-performing models for about twenty application areas is then
presented, e.g., question answering, translation, story generation, dialog systems,
generating images from text, etc.
For each application area, the strengths and weaknesses of current models are discussed,
and an outlook on further developments is given. In addition, links are provided to freely
available program code. A concluding chapter summarizes the economic opportunities, the
mitigation of risks, and potential developments of AI.