How do artificial neural networks and other forms of artificial intelligence interfere with
methods and practices in the sciences? Which interdisciplinary epistemological challenges arise
when we think about the use of AI beyond its dependency on big data? Not only the natural
sciences but also the social sciences and the humanities seem to be increasingly affected by
current approaches to subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?