The linguistic form of decisions of the German Federal Constitutional Court remains largely
unexplored. In this dissertation the author undertakes a systematic empirical analysis that
combines close readings of individual texts with large-scale computational methods. The
foundation is a corpus of around 3,400 rulings from the Official Digest and 6,900 additional
decisions of the German Federal Constitutional Court. Special attention is given to text
segments such as the rubric, tenor, reasoning, statement of facts, admissibility, merits, and the
so-called "standards section". These are annotated either automatically or manually and
analysed with respect to their frequency, length, and linguistic properties. The findings show
that the degree of conventionalisation varies according to type of decision, outcome, and
judicial body, with official rulings being more standardised than chamber decisions. Norm
control procedures, for example, produce longer and more complex structures. The study also
identifies diachronic developments: texts become longer, more uniform, and more consistently
organised, with introductory summary sentences becoming increasingly established. Initial
peculiarities, such as missing text segments or unusual subdivisions, disappear over time. By
applying digital methods to constitutional jurisprudence, the work uncovers patterns invisible
in single-case studies, confirming some scholarly intuitions while challenging others. It
thereby opens new avenues for both linguistic and legal research and exemplifies the effective
intersection of law and the digital humanities.