Light field is one of the most representative image-based rendering (IBR) techniques, which generate novel virtual views from images instead of 3D models. The light field capture and rendering process can be considered as a procedure of sampling the light rays in space and interpolating them at novel views. As a result, the light field can be studied as a high-dimensional signal sampling problem, which has attracted a lot of research interest and has become a convergence point between computer graphics and signal processing, and even computer vision.
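To make this sampling-and-interpolation view concrete, the sketch below renders one novel ray from a regularly sampled two-plane light field by quadrilinear interpolation of its sixteen nearest samples. It is only a minimal illustration of the general idea; the array layout lf[u, v, s, t], the unit grid spacing, and the function name are assumptions made for this example rather than anything specified in the book.

```python
import numpy as np

def render_ray(lf, u, v, s, t):
    """Estimate the radiance of the ray crossing the camera plane at (u, v)
    and the image plane at (s, t), by quadrilinear interpolation over a
    regularly sampled 4D light field lf[u, v, s, t] (hypothetical layout,
    unit grid spacing)."""
    coords = np.array([u, v, s, t], dtype=float)
    lo = np.floor(coords).astype(int)
    lo = np.clip(lo, 0, np.array(lf.shape) - 2)   # keep the 16-sample cell inside the grid
    frac = coords - lo                            # fractional offsets within the cell
    value = 0.0
    # Accumulate the 2^4 = 16 surrounding samples, each weighted by how
    # close the continuous coordinates are to that grid corner.
    for corner in range(16):
        offset = [(corner >> d) & 1 for d in range(4)]
        weight = np.prod([f if o else 1.0 - f for f, o in zip(frac, offset)])
        value += weight * lf[tuple(lo + offset)]
    return float(value)

# Toy usage: a random 8x8x16x16 light field, one interpolated novel ray.
lf = np.random.rand(8, 8, 16, 16)
print(render_ray(lf, 3.4, 2.7, 9.1, 5.6))
```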
This lecture focuses on answering two questions regarding light field sampling: how many images are needed for a light field, and, if that number is limited, where we should capture them. The book can be divided into three parts. First, we give a complete analysis of uniform sampling of IBR data. By introducing the surface plenoptic function, we are able to analyze the Fourier spectrum of non-Lambertian and occluded scenes. Given the spectrum, we also apply the generalized sampling theorem to the IBR data, which results in better rendering quality than rectangular sampling for complex scenes. Such uniform sampling analysis provides general guidelines on how the images in IBR should be taken. For instance, it shows that non-Lambertian and occluded scenes often require a higher sampling rate.
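As a point of reference for that guideline, the sketch below recalls the standard constant-depth Lambertian baseline in a simple two-plane notation (camera coordinate t, image coordinate v, focal length f, depth z_0); the notation is chosen for this illustration and need not match the book's surface plenoptic formulation exactly.

```latex
% Baseline case: a Lambertian scene at a single constant depth z_0, seen
% along a camera line t with image coordinate v and focal length f.
% Every scene point keeps the same radiance in all views, so the light
% field is a pure shear of a 1D texture g, and its spectrum collapses
% onto one line:
\[
  l(t, v) \;=\; g\!\left(v + \frac{f}{z_0}\, t\right)
  \quad\Longrightarrow\quad
  L(\Omega_t, \Omega_v) \;=\; 2\pi\, G(\Omega_v)\,
    \delta\!\left(\Omega_t - \frac{f}{z_0}\,\Omega_v\right).
\]
% With depths spread over [z_min, z_max] the support fans out between the
% slopes f/z_min and f/z_max, and non-Lambertian reflectance or occlusion
% boundaries broaden it further -- hence the higher sampling rate noted above.
```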
Second, we describe a very general sampling framework named freeform sampling. Freeform sampling handles three kinds of problems: sample reduction, determination of the minimum sampling rate that meets an error requirement, and minimization of the reconstruction error given a fixed number of samples. When the to-be-reconstructed function values are unknown, freeform sampling becomes active sampling. Active sampling algorithms are developed for the light field and show better results than the traditional uniform sampling approach.
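As a purely illustrative sketch of the active sampling idea, and not the book's actual algorithm, the snippet below greedily places a fixed budget of samples of an unknown 1D function: at each step it looks at the samples taken so far, estimates where the reconstruction is least trustworthy, and captures the next sample there. The local-variation error proxy and the function names are assumptions made for this example.

```python
import numpy as np

def active_sample_1d(capture, lo, hi, budget, seed_count=4):
    """Greedily place `budget` samples of an unknown 1D function on [lo, hi].

    capture(x) takes a real sample at position x (e.g. grabs an image);
    the error proxy below is a stand-in chosen for this illustration.
    """
    # Start from a few uniformly spread seed samples.
    xs = list(np.linspace(lo, hi, seed_count))
    ys = [capture(x) for x in xs]

    while len(xs) < budget:
        order = np.argsort(xs)
        sx, sy = np.array(xs)[order], np.array(ys)[order]
        # Reconstruction is least trusted inside the interval whose endpoint
        # values disagree the most over the widest gap.
        gaps = np.diff(sx)
        score = np.abs(np.diff(sy)) * gaps
        k = int(np.argmax(score))
        # Capture the next sample in the middle of that worst interval.
        x_new = 0.5 * (sx[k] + sx[k + 1])
        xs.append(x_new)
        ys.append(capture(x_new))

    return np.array(xs), np.array(ys)

# Toy usage: 20 adaptively placed samples of a bumpy function.
f = lambda x: np.sin(3 * x) + 0.3 * np.sin(17 * x)
xs, ys = active_sample_1d(f, 0.0, 2 * np.pi, budget=20)
```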
Third, we present a self-reconfigurable camera array that we developed, which features a very efficient algorithm for real-time rendering and the ability to automatically reconfigure the camera positions to improve the rendering quality. Both are based on active sampling. Our camera array is able to render dynamic scenes interactively at high quality. To the best of our knowledge, it is the first camera array that can reconfigure the camera positions automatically.