Current vision systems are designed to perform in normal weather conditions; however, no system can escape severe weather. Bad weather reduces scene contrast and visibility, which degrades the performance of computer vision algorithms such as object tracking, segmentation, and recognition. Vision systems therefore need mechanisms that allow them to perform well in bad weather such as rain and fog. Rain causes spatial and temporal intensity variations in images and video frames; these variations arise from the random distribution and high velocities of raindrops. Fog lowers contrast, whitens the image, and shifts its colors. This book studies rain and fog from the perspective of computer vision. The book has two main goals: 1) removal of rain from videos captured by static and moving cameras, and 2) removal of fog from images and videos captured by a single moving, uncalibrated camera.
The book begins with a literature survey: the pros and cons of selected prior-art algorithms are described, and a general framework for developing an efficient rain removal algorithm is explored. Temporal and spatiotemporal properties of rain pixels are analyzed, and these properties are used to develop two rain removal algorithms for videos captured by a static camera.
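As a rough illustration of the temporal property these algorithms build on, the sketch below flags candidate rain pixels by the brief intensity spike a passing raindrop produces: a pixel brightens in one frame relative to the frames before and after it. The threshold test and the simple temporal replacement are illustrative assumptions, not the book's exact formulation.

```python
import numpy as np

def candidate_rain_mask(prev, curr, nxt, threshold=3.0):
    """Flag pixels whose intensity spikes in the current frame.

    A raindrop crossing a pixel typically brightens it for a single
    frame, so a positive jump relative to both neighbouring frames is
    used as a (very rough) rain-candidate test.
    prev, curr, nxt: grayscale frames as float arrays of equal shape.
    """
    rise = curr - prev      # brighter than the previous frame
    fall = curr - nxt       # brighter than the next frame
    return (rise > threshold) & (fall > threshold)

# Usage with three consecutive grayscale frames f0, f1, f2:
# mask = candidate_rain_mask(f0.astype(float), f1.astype(float), f2.astype(float))
# f1_derained = np.where(mask, (f0 + f2) / 2.0, f1)  # simple temporal replacement
```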
For rain removal, the temporal and spatiotemporal algorithms require fewer consecutive frames, which reduces buffer size and delay. These algorithms do not assume the shape, size, or velocity of raindrops, which makes them robust to different rain conditions (i.e., heavy, moderate, and light rain). In practice, no ground truth is available for rain video, so a no-reference quality metric is needed to measure the efficacy of rain removal algorithms. Temporal variance and spatiotemporal variance are presented in this book as such no-reference quality metrics.
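As a minimal sketch of such a metric (the buffer length and the final averaging step are illustrative assumptions, not the book's definition), the per-pixel variance over a short run of frames can be summarized into a single score; for a static scene, a lower score after rain removal suggests less residual rain.

```python
import numpy as np

def temporal_variance_score(frames):
    """No-reference quality score from per-pixel temporal variance.

    frames: a short buffer of consecutive grayscale frames (H x W),
    e.g. de-rained output. Lower scores indicate less residual temporal
    fluctuation (i.e. less remaining rain) in a static scene.
    """
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    per_pixel_var = stack.var(axis=0)      # variance across time at each pixel
    return float(per_pixel_var.mean())     # scene-level summary

# Usage: compare the same clip before and after rain removal.
# score_raw = temporal_variance_score(raw_frames)
# score_derained = temporal_variance_score(derained_frames)
```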
An efficient rain removal algorithm based on meteorological properties of rain is then developed. A relation among raindrop orientation, wind velocity, and terminal velocity is established, and this relation is used to estimate shape-based features of the raindrops. These meteorology-based features help discriminate rain pixels from non-rain pixels.
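One plausible form of such a relation, stated here only as an assumption for illustration: if a drop falls at its terminal velocity while being carried horizontally by the wind, its streak tilts from the vertical by an angle whose tangent is the ratio of wind speed to terminal velocity.

```python
import math

def streak_orientation_deg(wind_speed, terminal_velocity):
    """Approximate rain-streak tilt from the vertical, in degrees.

    Assumes the drop falls at its terminal velocity while being advected
    horizontally by the wind, so tan(theta) = wind_speed / terminal_velocity.
    Both speeds must be in the same units (e.g. m/s).
    """
    return math.degrees(math.atan2(wind_speed, terminal_velocity))

# Example: a 3 m/s crosswind and an 8 m/s terminal fall speed give a
# streak tilted roughly 21 degrees from the vertical.
# print(streak_orientation_deg(3.0, 8.0))
```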
Most prior-art algorithms are designed for videos captured by a static camera. Applying global motion compensation before any of these static-camera rain removal algorithms yields better accuracy on videos captured by a moving camera.
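A minimal sketch of this idea, assuming an OpenCV-based pipeline (the ORB features, brute-force matching, and homography model are illustrative choices, not the book's method): align the previous frame to the current one so that a static-camera rain detector can be applied to the compensated pair.

```python
import cv2
import numpy as np

def align_to_current(prev_gray, curr_gray):
    """Warp prev_gray onto curr_gray using a global homography.

    ORB keypoints are matched between the two frames and a homography is
    fitted with RANSAC; the previous frame is then warped so that a
    static-camera rain detector can compare the two frames directly.
    """
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    return cv2.warpPerspective(prev_gray, H, (w, h))

# Usage: warped_prev = align_to_current(prev_gray, curr_gray)
# then apply a static-camera rain test, e.g. on curr_gray - warped_prev.
```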
Qualitative and quantitative results confirm that the probabilistic, temporal, spatiotemporal, and meteorological algorithms outperform other prior-art algorithms in terms of perceptual quality, buffer size, execution delay, and system cost. The work presented in this book can find wide application in the entertainment industry, transportation, tracking, and consumer electronics.
Table of Contents: Acknowledgments / Introduction / Analysis of Rain / Dataset and Performance Metrics / Important Rain Detection Algorithms / Probabilistic Approach for Detection and Removal of Rain / Impact of Camera Motion on Detection of Rain / Meteorological Approach for Detection and Removal of Rain from Videos / Conclusion and Scope of Future Work / Bibliography / Authors' Biographies